Opinion | A.I. Is Harder Than You Think – The New York Times

“Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities. That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible. Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.”

This small article is a good read, but only because it is an example of faulty reasoning about AI. Yes, it is true that AI today is nowhere near allowing a free-ranging discussion with machines on philosophy or the arts. But it is also true that the technology is pretty good at many important, if limited, tasks, such as asking a voice assistant to play your favorite songs in your studio when your hands are covered with paint, or using a simple voice command to get directions when you are stuck in traffic in a strange city with rambunctious kids in the back of the car. These are things I really need, and I deeply appreciate that they work! I am not particularly distressed that I am not able to discuss Plato or Kafka with Cortana.

As the authors themselves admit, the “knowledge engineering” project of the 70s, for all its grand aims, failed spectacularly, while the statistical learning approach, with its modest aims, actually delivered quite concrete results. And the phrase “Rather than merely imitating the results of our thinking …” almost sounds elitist and snobbish! If we have learnt one thing about learning, whether for gaining artificial or natural intelligence, it is that imitation is one of its biggest components. And I am not just talking about spoken discourse; I am talking about the arts too – painting, music, sculpture. These are all aspects of intelligence as well, and the ability to create and appreciate fine art is deeply contingent on imitation and training – of the artist as well as the art connoisseur.

So the statement “Today’s dominant approach to A.I. has not worked out” is just factually incorrect. By every measure we are making great and rapid progress. If cognitive psychologists want to join the party, by all means come in. But know that the dominant art in the field is statistical learning, and it is far from finished delivering. The field is open to new ideas, but there is no reason to throw out what is working well. I can guarantee you that AI will be doing more than making restaurant reservations. In fact, it already is, if you take the trouble to find out.

A purist approach to intelligence is exactly what got AI into trouble in the 70s. Let’s not make that mistake again!