AI as Search

Artificial Intelligence has always been concerned with search. One of the earliest AI programs, Newell and Simon’s so-called General Problem Solver from around 1960, can now be seen as a straightforward hill-climbing search program. We might now say that most of the intelligence shown by the program was in how the problem was set up and described. But then, it is a truism of AI that as computers learn to solve problems, we learn that those problems do not capture what we mean by intelligence.
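
To give a flavor of what hill-climbing search means, here is a minimal sketch in Python. It is only a toy illustration of the general technique, not a reconstruction of the General Problem Solver; the function names and the numeric example are my own.

```python
# Minimal hill-climbing sketch: repeatedly move to the best-scoring
# neighboring state until no neighbor scores better than the current one.
def hill_climb(start, neighbors, score):
    current = start
    while True:
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current
        current = best

# Toy example: maximize f(x) = -(x - 7)**2 over the integers.
result = hill_climb(
    0,
    neighbors=lambda x: [x - 1, x + 1],
    score=lambda x: -(x - 7) ** 2,
)
print(result)  # 7
```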

It is interesting to observe that computers have been used to solve a wide range of problems which people consider difficult, from playing chess to proving theorems to managing complex supply chains. The reason we do not feel comfortable saying that computers are intelligent is that computers cannot do many things which we find easy: they can’t catch a ball, they can’t engage in conversation, and in general they often misunderstand us in ways that are completely inhuman and that would, coming from a human, seem profoundly stupid.

I think it may be possible to characterize the things which computers still do poorly in two categories. The first is reacting to events in the physical world. In us, these are skills honed by millennia of evolution. When fast reaction time is required, our conscious intelligence actually gets in the way. A skilled baseball player does not think about hitting the ball thrown by the pitcher; there isn’t time. He has trained his subconscious mind, presumably by endless repetition, to handle the task. It seems to me that for many reaction-oriented tasks, animals with a natural inclination for the task can generally do better than humans, arguably because they simply think about it less. Humans can reduce or eliminate the difference through training, but animals often do better the first time out. As T.H. White said in The Sword in the Stone, humans are the unspecialized creatures. Anyhow, as far as AI is concerned, these tasks are an issue for robots, and they are very, very far from being solved.

I think the second category of problems where computers do poorly involves context. Our conversations are full of contextual references, and even our perception is driven by our recognition of which objects can be expected in a particular context. This problem is often seen as a problem of describing the world. For example, when I referred to baseball above, you could only understand me if you knew something about baseball. So for a computer to understand conversation, it needs to understand baseball.

However, I think that the real problem is not so much knowledge about the world as the ability to identify which knowledge is relevant. We all know a remarkably vast number of facts. But we can operate intelligently because we are able to call appropriate facts to our conscious mind very, very fast. I have no detailed vision of how we do that, but from an AI perspective it is a problem of search: given a situation, and given a large number of facts most of which are irrelevant, how can we quickly identify the relevant ones? This is related to the frame problem, but I think my perspective is slightly different (or my understanding of the frame problem is incomplete): I want to move the issue a little earlier, to consider how we identify the relevant information, not just the relevant results of proposed actions.
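
As a toy sketch of the shape of that problem, and nothing more: treat each stored fact as a bag of words and rank the facts against a description of the current situation by crude word overlap. Real relevance retrieval is obviously far more subtle; the function names and the scoring scheme here are illustrative assumptions.

```python
# Toy relevance search: given a description of the current situation,
# rank a collection of stored facts by simple word overlap.
import re

def tokenize(text):
    # Lowercase and keep only alphabetic "words".
    return set(re.findall(r"[a-z]+", text.lower()))

def rank_facts(situation, facts, top_n=3):
    situation_words = tokenize(situation)
    scored = []
    for fact in facts:
        overlap = len(situation_words & tokenize(fact))
        if overlap:
            scored.append((overlap, fact))
    scored.sort(reverse=True)
    return [fact for _, fact in scored[:top_n]]

facts = [
    "A baseball pitcher throws the ball toward the batter.",
    "Chess programs search a tree of possible moves.",
    "Supply chains move goods from factories to stores.",
]
print(rank_facts("the pitcher threw a fastball and the batter swung", facts))
```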

The vast amount of data on the Internet has forced us to learn a great deal about search, and search technology has clearly improved steadily. It is much easier to find relevant results than it used to be. I think it would be interesting to tie this technology back into AI. Can a conversational program search Wikipedia to get useful context? Can we use Google’s computational needs to get some idea of how much processing must be done in the human brain? (Don’t let people tell you they know approximately how much processing the brain does. We haven’t even identified all the neurotransmitters, much less all the different ways they are used. And you can’t apply engineering principles to evolution: evolution may do things which no human engineer would ever do.)
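
As a rough sketch of what “search Wikipedia for useful context” might look like, here is a query against the public MediaWiki search API. The endpoint and query parameters are the standard ones; the helper name, the User-Agent string, and the idea of treating a few page titles as “context” are just assumptions for illustration.

```python
# Rough sketch: fetch a few search results from Wikipedia that a
# conversational program might use as background context.
import json
import urllib.parse
import urllib.request

def wikipedia_context(topic, limit=3):
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "search",
        "srsearch": topic,
        "srlimit": limit,
        "format": "json",
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    # Wikimedia asks clients to identify themselves with a User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "ai-as-search-sketch/0.1"})
    with urllib.request.urlopen(req) as response:
        data = json.load(response)
    return [hit["title"] for hit in data["query"]["search"]]

print(wikipedia_context("baseball pitcher"))
```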

(Note that I am not talking about the kind of searching which a chess-playing program does: searching within a limited domain. I am talking about searching across all domains of knowledge, to rapidly find anything which may be relevant, sorted by relevance.)

Even some forms of creativity might be seen as applying a different criterion to the search: looking for results that are interestingly different.

It would be rather simplistic to say that intelligence is simply search. And it would be extremely optimistic to say that current Internet search programs are intelligent. But I think there is real power in viewing intelligence as search.
