Pages

Tuesday, December 7, 2010

What’s the “I” in “AI”?


When people ask me what I do for a living, I simply say: I do "AI", as in Artificial Intelligence. In reality I always tried to stay away from the term AI, until I realized it's the easiest way to describe what I do, at least in a general sense and to the layperson. I don't like terms like AI, just as we don't like the now out-of-fashion term "electronic brain", because words like "intelligence" easily create false expectations and confusion. First of all, how can we talk about Artificial Intelligence when we do not have the faintest idea what the natural one is? That vaguely reminds me of Wittgenstein's remark to a friend of his who told him she was so sick she felt "like a dog that has been run over": "You don't know what a dog that has been run over feels like!"
But, philosophical characterizations, imprecise generalizations, metaphors, and analogies aside, after its boom in the 1970s AI came to refer mainly to an approach to building "intelligent" machines that, in some way, mimicked what we believed the human intelligent process to be. In other words, AI methods for a long time were based on a well-defined inference process which, starting from some facts and observations, elegantly led to conclusions based on a more or less large set of rules. The rules were typically derived, and painfully coded into a computer, by "expert" humans. Unfortunately, that process never really worked for building machines that simulate basic human activities, like speaking, understanding language, and making sense of images.
A different approach, developed by people who humbly called themselves "engineers" (Fred Jelinek, who sadly passed away last September, was one of them), did not have the pretense of "replicating human brain capabilities", but simply approached building human capabilities into machines, like the recognition and understanding of speech, from a statistical point of view. In other words: no rules compiled by experts, but machines that autonomously digest millions and millions of data samples, then match them, in a statistical sense, to observations of reality and draw conclusions from that. I belong to this school, and for a long time, like all the others in this school, I did not want to be associated with AI.
But today it probably does not make much sense to make that distinction anymore. The AI discipline has assumed so many angles, so many variations, that it no longer characterizes a "way of doing things", but the final result. The term AI, which had disappeared for a decade or more during the so-called AI winter, came back, probably resuscitated by Spielberg's movie, and more and more laypeople associate AI with building machines that (or should we say "who"?) try to do what humans do...more or less. That is: speak, understand, translate, and draw conclusions from data. Unfortunately there are still some who want to make that distinction when they pitch their technology and say...we use AI...which I believe is nonsense. What's the "I" in AI? So, yes...I work in AI...if you like that.

2 comments:

Susan Boyce said...

Interesting post, Roberto. I can't bring myself to refer to my work as AI, simply because of the way it is portrayed in the media; however, I do admit that I work day after day trying to make machines a bit smarter (and occasionally more human-like). If I think too hard about why that is, I could talk myself into finding a new profession. Did you happen to catch this in the op-ed section of the Times several months back? It explores some of what I find troublesome with the term AI.

http://www.nytimes.com/2010/08/09/opinion/09lanier.html?pagewanted=2&_r=1&sq=artificial intelligence&st=cse&scp=10

Roberto Pieraccini said...

Hi Susan -- thanks for the comment, and for the link to the interesting article by Jaron Lanier. Interestingly I cited him in one of my earlier posts http://robertopieraccini.blogspot.com/search?updated-max=2010-05-18T03%3A09%3A00-04%3A00&max-results=7.

Yes, I always disliked terms like Artificial Intelligence, Neural Networks, Electronic Brain, etc. for the same reason you do. The way it has been portrayed by the media leads to some of the issues in Lanier's article.

Ciao!