Syntax vs. Semantics

Is consciousness a purely syntactic process, or does it require semantics? That is another way of asking whether, if you took a snapshot of all the neurons in a human brain and simulated them on a computer, the resulting program would be conscious. Computers are purely syntactic engines: they simply manipulate symbols. So if consciousness requires semantics, then a program running on a computer cannot be conscious.
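
To make "simply manipulate symbols" concrete, here is a minimal sketch of what such a simulation loop might look like. I'm using a leaky integrate-and-fire neuron model purely as an illustration (nothing above commits to any particular model); the point is that every step is arithmetic on numbers and nothing else:

```python
# A toy "snapshot and simulate" sketch: a leaky integrate-and-fire
# neuron model, chosen purely for illustration. Every step is
# arithmetic on symbols -- nothing more.

import numpy as np

def step(v, spikes, weights, leak=0.95, threshold=1.0):
    """Advance every neuron's membrane potential by one time step."""
    v = leak * v + weights @ spikes   # decay, plus input from spiking neighbors
    fired = v >= threshold            # which neurons cross threshold
    v = np.where(fired, 0.0, v)       # reset the ones that fired
    return v, fired.astype(float)

rng = np.random.default_rng(0)
n = 100                               # 100 neurons; a brain has ~86 billion
weights = rng.normal(0, 0.1, (n, n))  # the "snapshot": connection strengths
v = rng.uniform(0, 1, n)              # initial membrane potentials
spikes = np.zeros(n)

for _ in range(1000):                 # run the simulation forward
    v, spikes = step(v, spikes, weights)
```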

Of course, if you asked the program whether or not it was conscious, it would simulate the answer that the human would give, and would say that it was conscious. In fact, by definition, it would give all the answers that the human would give. So to argue that the program is not conscious requires believing in something very much like zombies: people who act perfectly normally but are not conscious. I think the whole idea of zombies is incoherent, and it baffles me that some philosophers appear to take it seriously. Consciousness is not something extra that gets bolted onto our brains, something that could be added to some brains but not others. Anything that acts exactly like a conscious human must be conscious.

So that would seem to settle the question. Actually, though, it doesn't. I've assumed that it is possible to simulate a human on a computer, and I can see two reasons why that might not be possible.

The first is that human intelligence may require quantum operations. If the human brain is a way of turning quantum effects into macroscopic effects, then it need not be possible to simulate those effects on any non-quantum computer. And while we don't fully understand what a quantum computer is at this point, it is at least possible that it is not a purely syntactic engine. A quantum computer can, supposedly, come up with the answer to a problem without walking through all the intermediate steps, and in that sense it is not purely syntactic. This is similar to the way one can argue that the spaghetti sorter (in which you represent the inputs as strands of spaghetti cut to matching lengths, tap them on the table, and simply pick out the longest one) is not a purely syntactic engine. The spaghetti sorter uses physics to jump immediately to the right answer, and a quantum computer may do the same thing. Admittedly, arguing that this is what we want “semantics” to mean is going to be a bit of a slog. But I think it is fairly clear that it is not what we mean by “syntax”.
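
For contrast, here is what the same job looks like on a purely syntactic engine: finding the longest strand means examining the symbols one at a time, n − 1 comparisons where the tabletop does a single tap. (A minimal sketch; the lengths are made up for illustration.)

```python
# The syntactic route to the spaghetti sorter's answer: finding the
# longest "strand" means examining every value in turn, one symbol
# at a time. The tabletop does the same work in one physical tap.

def longest_strand(lengths):
    best = lengths[0]
    for length in lengths[1:]:   # n - 1 comparisons, step by step
        if length > best:
            best = length
    return best

print(longest_strand([18.0, 25.5, 7.2, 21.1]))  # prints 25.5
```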

The second reason it may not be possible to simulate a human on a computer is that it may not be possible for human consciousness to exist separated from the world. The simulated human brain may simply be unresponsive. Arguably one could proceed to simulate the whole world around the human, or at least the perceptible part of it. But at some point I think it is reasonable to ask whether this is possible even in principle. Simulating all the neurons in a brain already sounds pretty darn hard, though one can imagine simplifying the problem to just the neurons and the neurotransmitters. Simulating the whole world sounds far harder. Can we possibly do it without requiring a computer which is as complicated as the world? And since the world's behavior is ultimately quantum mechanical, that implies, again, a quantum computer.
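
One way to make the “computer as complicated as the world” worry concrete: on a classical machine, the standard state-vector representation of a quantum system needs 2^n complex amplitudes for n qubits, so the memory cost explodes. (The qubit framing is my illustration, not something the argument above depends on.)

```python
# Back-of-the-envelope: bytes needed to store the state vector of an
# n-qubit quantum system on a classical computer. There are 2**n
# complex amplitudes, each taken here as two 64-bit floats (16 bytes).

def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 50, 300):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.3e} bytes")

# 10 qubits fit in ~16 KB, 50 already need ~18 petabytes, and 300
# exceed the number of atoms in the observable universe (~10**80).
```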

My bias is to believe that computers can be conscious, and that the brain is a purely syntactic engine. I think the brain dampens quantum effects rather than magnifying them. But I have to admit that the alternative argument is coherent and may be the truth.

