Compressive Sensing for AI self driving cars

 

Another reason compressive sensing matters for AI self-driving cars is speed: the torrent of sensor data has to be analyzed quickly, and compressive sensing can help. If the sensors and sensor fusion do not inform the AI about the state of the car quickly enough, the AI may not be able to make the decisions it needs to make on time. Suppose the car's camera captures an image of a dog in its path, but the image takes a minute to analyze while the AI lets the car speed forward. By the time sensor fusion has completed, it may be too late for the AI to keep the car from hitting the dog.


The AI community seems to be finally recognizing the immense benefits of compressive sensing for handling and featurizing very large measurement sets. This is especially important for AI applications that involve closing the loop in real time. Driverless cars are an obvious example, but there are many others: voice, expression, sentiment, and face recognition at the edge (as opposed to in the cloud), control of mission-critical real-world machinery such as rockets, optimization of power grids, network traffic routing, remote guided surgery, and so on. All of these share the basic need to score, make decisions, and close control loops with minimum delay (in other words, high-bandwidth control).

But perhaps more subtle are the cases where compressive sensing helps solve the training problem itself, whether or not scoring has to happen in real time. Many AI applications stumble on the simple fact that the data is so "wide" that it produces too many parameters in the models, which leads to over-fitting. Today data scientists handle this with careful "feature engineering": choosing a small but informative set of functions of the raw data and feeding those into the models. Compressive sensing is an extremely elegant alternative to this manual process. Essentially, it assures us that random-like functions ("random projections") are as good as carefully chosen features in terms of ultimate performance. The power of random-like projections has been known for quite a while, but the rigorous theory of perfect reconstruction from random projections was fleshed out only in the 2000-2010 decade by Candès and Tao.
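To make the idea concrete, here is a minimal sketch, using synthetic data and scikit-learn (my own illustrative setup, not anything from the original post), of swapping manual feature engineering for random projections: a wide dataset is compressed onto a few hundred random directions before fitting a simple classifier.

```python
# Sketch: random projections as a stand-in for manual feature engineering.
# Assumptions: synthetic data via make_classification; scikit-learn installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection

# "Wide" data: 5,000 raw features, only 20 of which carry signal.
X, y = make_classification(n_samples=2000, n_features=5000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Project onto 200 random directions instead of hand-picking features.
proj = GaussianRandomProjection(n_components=200, random_state=0)
Z_tr = proj.fit_transform(X_tr)
Z_te = proj.transform(X_te)

# A simple model on the projected data retains most of the discriminative power.
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("accuracy on 200 random projections:", clf.score(Z_te, y_te))
```

The point of the sketch is only that the projection matrix is chosen at random, with no knowledge of the task, yet the downstream model still has enough to work with.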

Actually, perfect reconstruction is an unnecessarily strong requirement from an AI perspective. All we really need is adequate retention of discriminative power (preservation of the mutual information between data and labels), which is much easier to satisfy. But I suppose the proof that we can reconstruct the original data from compressive-sensing measurements has an undeniable convincing power that helps seal the deal!

Compressive sensing has long been known and used in communications engineering, for example in the guise of "fountain codes" for data compression and channel coding, and in spectral signal processing. The basic idea is powerful and versatile: if the random object you want to describe "adequately" has "sparsity", which really means low entropy, then a small number of random-like functions can do the job, where the number of functions needed is on the order of the sparsity. The simplest example is the rather counter-intuitive fact that, in linear signal detection, projecting onto a randomly chosen subspace is as good as projecting onto the eigenspace (which is computationally hard to find in high-dimensional spaces)!
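As a small illustration of "number of measurements on the order of the sparsity", here is a sketch (my own synthetic example; scikit-learn's orthogonal matching pursuit stands in for any sparse-recovery solver) that reconstructs a 10-sparse, 1000-dimensional signal from only 120 random measurements.

```python
# Sketch: recovering a k-sparse signal from m << n random measurements.
# Assumptions: Gaussian measurement matrix; OMP as the recovery algorithm
# (basis pursuit / L1 minimization would serve equally well).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k = 1000, 10              # ambient dimension and sparsity
m = 120                      # measurements: on the order of k*log(n), far below n

# Ground-truth k-sparse signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random measurement matrix and compressed measurements y = A @ x
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Greedy sparse recovery from the m measurements
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
x_hat = omp.coef_

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Nothing about the measurement matrix is tuned to the signal; it is the sparsity alone that makes recovery from so few random projections possible.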

Sparsity means that, of all the values a high-dimensional signal could possibly take on, only a few are actually probable. This is the way nature typically works, and it is what allows us to "learn and adapt" to nature. You cannot adapt to a phenomenon unless there is some kind of redundancy or repeatability in it. A completely random phenomenon is like white noise: you cannot learn from it and you cannot adapt to it. Fortunately we do not live in that kind of hyper-random world. We live in a world of modest randomness!

 

WHO warns of soaring rates of measles in Europe


Every new person affected by measles in Europe reminds us that unvaccinated children and adults, regardless of where they live, remain at risk of catching the disease and spreading it to others who may not be able to get vaccinated.

 

Getting properly vaccinated is not simply a personal choice; it is also a social and moral choice. By not getting vaccinated we not only increase our own chance of infection but also the chance of infection for many others who cannot get vaccinated for various medical, economic, or logistical reasons. Herd immunity develops only when a sufficiently large fraction of the population is vaccinated. Below that threshold everyone is dramatically more at risk.

Finally, there is also the morality of choosing not to vaccinate children who cannot decide for themselves. What a shame to put a child at risk of death by measles or paralysis by polio, both entirely avoidable diseases today, just because the adult in charge lacks a scientific understanding of the risks versus the benefits of vaccination.

Personal choice is important and critical in a free society, but so is education about how every choice has consequences, and understanding that not doing something is as much a choice as doing something!

Vaccination, gun control, climate change … the statistics are screaming at us unambiguously. The human toll piles up. Our humanity implores us to act!