Yesterday’s post was on the incoherent side, even for me. Let’s take it again, a little more slowly.
Our brains are inherently parallel processors. Our selves are composed of many impulses, actions, and proto-thoughts. However, our logical thought, the inner voice we are forced to use when writing programs (or, for that matter, when writing blog entries), is a sequential series of thoughts. The ideas that inject themselves into our thoughts come from all parts of our brain, but the thoughts themselves proceed linearly. At least, that is how it is for me, and it seems to be much the same for everybody.
When our speech and our thoughts refer to an object which changes, our future references to that object refer to the changed object, not the original, unchanged, object. Referring to the original object requires awkward turns of phrase. Consider a rose. The rose is red. All of a sudden, it turns blue. What do we have here? A blue rose. Notice how easy it is for me to refer to the blue rose. Compare that to thinking about the rose before it turned blue. Is that the red rose? Well, no; there is no red rose. It is the rose which was red before it turned blue.
Now, watch the rhetorical trick: I’m going to use an ad hoc just-so story with no actual evidence. It seems to me that these facts about our thought condition our programming languages. Parallel languages don’t catch on because people find it difficult to think about many things happening at once. Functional languages don’t catch on because people naturally think in terms of objects which change over time (side effects), and find it hard to express ideas not in terms of objects which change, but in terms of layers of modifications around existing objects which do not change.
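To make the contrast concrete, here is a small sketch of the two styles using the rose from above. The names (`Rose`, `Paint`, `Painted`) are mine, invented for illustration, not any particular language's API:

```go
package main

import "fmt"

// Rose is an illustrative value type for the example.
type Rose struct {
	Color string
}

// Imperative style: mutate the object in place.
// Afterwards, "the rose" means the blue rose; the red rose no longer exists.
func (r *Rose) Paint(color string) {
	r.Color = color
}

// Functional style: return a new value, a layer of modification
// around the original, which itself does not change.
func Painted(r Rose, color string) Rose {
	return Rose{Color: color}
}

func main() {
	// Imperative: the red rose is gone once we paint it.
	mutable := Rose{Color: "red"}
	mutable.Paint("blue")
	fmt.Println(mutable.Color)

	// Functional: both the rose-that-was-red and the blue rose
	// remain easy to refer to, because neither ever changed.
	red := Rose{Color: "red"}
	blue := Painted(red, "blue")
	fmt.Println(red.Color, blue.Color)
}
```

Notice that in the functional version, "the rose which was red before it turned blue" is just `red`: no awkward turn of phrase needed, but also no single name that tracks the object through time, which is arguably what trips people up.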
We have imperative languages because the way we have to think to program in those languages matches the way we think anyhow.
Unfortunately the technical evolution of our processors is such that, if we want to continue to take advantage of the steadily increasing performance to which we have become accustomed, we need to learn how to program them in a different way.
It seems likely to me that this different way is going to be whatever most naturally adapts imperative languages to a parallel programming model. For example, perhaps we should just start adapting the client-server model to operate within a single machine. A connection between a client and a server is easy to conceptualize as a function call. On the server side, many function calls can be handled in parallel, but this is not confusing, as they have no significant interaction. Perhaps the client and server could even be different threads within the same process, but they would use a rigid API to transfer data, rather than casually using the same address space as threads do today (a sharing which in practice is error-prone). A DMA transfer of data between different threads in the same process could be extremely efficient.