Conscious Computers

The New York Times Magazine last Sunday had an article on social robots: work, mostly at M.I.T., on robots which interact with humans. They interact in very, very simple ways. But since we naturally impute agency to almost anything (the weather, for example), it doesn’t take much for us humans to be convinced that there is really something going on inside the robot. Of course, AI researchers have known that at least since Eliza.

Can such a robot eventually be conscious? To a materialist like me, the answer is: of course it can. I think a more interesting question today is whether a computer can ever be conscious. The difference I’m driving at is that a robot has, by definition, a position in space, a body of some sort, and some way of interacting with the world. Can a computer program which has none of those characteristics, except for, say, text-only interaction, be conscious?

It seems at least possible that it could not. I believe (today) that consciousness is the result of the way we construct a narrative about our actions. Our consciousness then in turn informs those actions and helps us lay out future plans. I think it doesn’t take much introspection to see that many of our actions are unconscious or preconscious. I don’t mean by that that our actions are uncaused or are somehow not done by us, or “us.” But I am saying that the particular part of us that is conscious, the “I” when we say “I think,” does not directly cause those unconscious or preconscious actions, although it does create conditions which make them more or less likely to occur.

Anyhow, a computer program with extremely limited interaction with the world has very little scope for unconscious or preconscious actions. And it similarly has a very limited ability to develop reflexes or automatic ways of handling things like walking or picking up a glass, subroutines if you will. And without that ability, it’s not obvious to me that it will develop anything like consciousness. Or at least not anything like our consciousness.

This is all pure speculation, of course, in the absence of a coherent definition of consciousness. To speculate further, what are the consequences of this for the science fiction dream of uploading personalities into computers? I think it means that for any such upload to be even remotely feasible, the upload would have to exist in a simulated world of a complexity similar to our own. And our world, being so complex, would be extraordinarily difficult to simulate. John Varley’s novel Steel Beach tries to finesse the issue by simulating only the aspects of the world which his protagonist pays attention to. But, although Varley didn’t really spell it out, that would require the computer to understand the protagonist’s mind and consciousness in considerable detail.

Therefore, it seems to me that uploading personalities into a computer is not going to happen in the foreseeable future. It’s not enough to just map neurons, even if we had any idea how to do that. We have to also know what the neurons mean to the person. And we don’t even know how to start understanding it.

So if you want immortality, don’t pin your hopes on Ray Kurzweil. Biotech, perhaps some sort of genetic repair, seems to me to be a much better bet.



6 responses to “Conscious Computers”

  1. fche

    > Anyhow, a computer program with extremely limited
    > interaction with the world has very little scope for
    > unconscious or preconscious actions.

    You realize that this sort of thinking might apply also to handicapped humans?

  2. Ian Lance Taylor

    No, any human being, no matter how handicapped, has a complex interaction with the world. We have a physical body. We feel ourselves move, and we feel movements inside our body. We eat, which gives us taste and the sensation of digestion and repleteness, and of course, before we eat, we are hungry. Even people who appear completely nonresponsive still show reactions to various stimuli.

    A computer program has nothing like that.

  3. tromey

    I’ve never felt comfortable with the idea of uploading one’s mind into a computer as a form of immortality. To me it misses the biggest point: I still end up dying.

    “The Embodied Mind” is pretty good on consciousness. Perhaps I should re-read it.

  4. Ian Lance Taylor

    You’re right, uploading is not a form of immortality as we generally understand it. Still, if it were possible, for many people it might be better than the alternative.

    My favorite book on consciousness is “Consciousness Explained” by Daniel Dennett. He doesn’t really explain consciousness. But he does explain how most people’s notion of consciousness is incorrect.

  5. bhorev

    > “Anyhow, a computer program with extremely limited interaction with the world has very little scope for unconscious or preconscious actions.”

    Would you not say that pretty much everything a computer does is unconscious or preconscious, thereby giving it plenty of substance from which to construct a narrative about those actions (if indeed that’s the way consciousness is made)?

    Love your blog!

  6. Ian Lance Taylor

    Thanks for the note.

    Are you suggesting that a computer has significant internal complexity, and that that may be sufficient to create consciousness? That is a possibility that I hadn’t really considered. It could be so. It might turn out to be a form of consciousness very alien to our own, since it would not be tied at all to the world we live in.
