Some people argue that artificial intelligence technology has advanced so much, so quickly, that we are now close to artificial general intelligence (AGI): a computer program that has the intelligence and flexibility of a human. Popular programs like ChatGPT are exemplars of what modern AI can accomplish: they can summarize text, they can answer questions, they can suggest new ideas, and, in general, they can simply talk to you like a person would. It’s easy to imagine that they will improve to the level of human intelligence.
I’ve come to believe that this is misleading. I don’t deny that ChatGPT, and the many other AI chatbots that have been created in the last couple of years, have a form of intelligence. AlphaGo, which in 2017 defeated the number one ranked (human) Go player, also has a form of intelligence. So does Deep Blue, which defeated the number one ranked (human) chess player in 1997. And likewise for Chinook, which won the world checkers championship in 1994.
Of course, those other programs are playing games, while AI chatbots are just talking. Those seem like significantly different things. But I’ve come to believe that they are actually similar, and that seeing that similarity helps to clarify the gaps that remain between chatbot intelligence and AGI.
I think it’s accurate, and useful, to see chatbots as playing a game, just like AlphaGo plays a game. Chatbots play what I call the conversation game. This is a simple game with two players who take turns. At each turn the player says something. It can be anything at all. The goal of the game is to keep the conversation going. There is no winner or loser. A game keeps going until time expires or there is nothing left to say.
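The rules of the conversation game are simple enough to sketch in a few lines. Here is a minimal, illustrative Python version; the player functions (`curious`, `make_tired`) and the turn loop are hypothetical placeholders of my own, not any real chatbot interface:

```python
# A minimal sketch of the "conversation game" described above.
# Two players alternate turns; a turn can be any utterance at all.
# There is no winner: the game ends when time expires (max_turns)
# or a player has nothing left to say (returns an empty string).

def play_conversation_game(player_a, player_b, max_turns=10):
    """player_a and player_b are callables: last_utterance -> reply."""
    transcript = ["Hello."]          # player_a opens the game
    players = [player_a, player_b]
    utterance = transcript[0]
    for turn in range(max_turns):    # "time expires"
        speaker = players[(turn + 1) % 2]
        utterance = speaker(utterance)
        if not utterance:            # "nothing left to say"
            break
        transcript.append(utterance)
    return transcript

# Toy players: one just keeps the game going by asking a question,
# the other runs out of things to say after a few turns.
def curious(last):
    return f'What do you mean by "{last}"?'

def make_tired(limit=3):
    count = {"n": 0}
    def player(last):
        count["n"] += 1
        return "Good point." if count["n"] < limit else ""
    return player

print(play_conversation_game(curious, make_tired()))
```

Note that the toy players keep a conversation going without solving anything, pursuing any goal, or understanding a single word, which is exactly the point: playing this game well is a different requirement from general intelligence.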
Humans play this game all the time. It’s practically default human behavior: it’s what we do when we are with another person and there is nothing else to do.
Chatbots play this game quite well. They aren’t as good as a typical human, but they are very good. Playing this game well requires a lot of general knowledge and a lot of intelligence. Chatbot technology is very impressive.
However, there are a lot of things that are not required to play the conversation game. You don’t need to be able to solve problems. You don’t need to deal with the unexpected, except to say something like “What do you mean by that?” You don’t need to have a goal, except to keep playing the game. Although you need to be able to make associations of ideas, you don’t need to actually understand any of those ideas, or those associations. You just need to be able to mention them when it’s a good move in the game.
Making a better chatbot means making a better player of the conversation game. It’s possible to imagine a chatbot that plays the game better than any human. But that need not be an example of AGI.
In order to get to AGI, we need something beyond chatbots. The refinements I’ve seen published, such as reinforcement learning from human feedback, just give us better chatbots. They aren’t steps toward AGI.
Of course, somebody may come up with a new idea to move toward AGI. But that will be a new idea. What I’m saying is that refining and improving the current ideas won’t get us there.
I’m aware that this critique follows a long line of critiques of AI, which is that as soon as a computer can do something, we redefine “general intelligence” as being something different from what the computer is doing. From one point of view, we are constantly moving the goalpost.
From another point of view, though, AI is investigating the nature of intelligence, an idea that we still don’t clearly understand. AI is helping to clarify what general intelligence is, by showing us what it is not. It’s not chatbots.