Is there such a thing as irreducible complexity? Our current society is orders of magnitude more complex than a hunter-gatherer society. What I mean is that the basic requirements of life (food, water, shelter) are provided by organized systems which require the labor of thousands of people and which no single person fully understands. These systems have evolved over millennia, though their complexity started increasing rapidly in the last two hundred years.

Although I said that no single person understands these systems, I think we all tend to believe that it would be possible if one tried. The systems are mostly designed to be decomposable, so that one part can change without affecting other parts, and they are mostly designed to fail safe, so that the failure of one component may cause degraded service but does not cause the system to stop working. Obviously in saying this I am using the language of programming, which is appropriate in that some computer programs are probably the most complex objects which humans have designed and built.

The significant difference, however, is that computer programs were in fact designed, and they were designed with an awareness of complexity. The systems which form our society were not designed; they evolved. Evolution can and does produce systems which are more complex than ones that humans would design. My question for today is whether evolution produces systems which are more complex than people can understand.

Our tools for understanding complexity amount to 1) building an abstract mathematical model of the system and learning how to manipulate that model; and 2) breaking the system into smaller components, understanding the components individually, and understanding how they fit together. Nobody has any idea how to apply a mathematical model to a biological system, but there has been considerable progress in breaking biological systems into components. It is still theoretically possible that some parts of a biological system are too complex to understand; perhaps the neurotransmitter system that presumably underlies consciousness cannot be broken into components in any reasonable way. However, while that is a theoretical possibility, I’m not sure that many people truly believe it. Scientists continue to make steady progress in understanding biological systems.

Biological systems are limited in their complexity by their size, by the constraints of their evolutionary history, and by the rigorous constraints of survival in an uncaring world. None of these limitations apply to systems created by humans. Human systems are limited in that any individual part must be comprehensible by a human, but the interconnection of those systems has no inherent limit in complexity.

So while it is clearly impossible for us to design a system which is too complex to understand, is it possible for one to evolve out of our society? Does the fact that economics is a failure as a predictive science suggest that that has already happened? Is it possible for us to develop more powerful tools of comprehension? If we did develop those tools, would we somehow use them to make systems still more complex?

What are the consequences of living in a system which is too complex to understand, if such a system is indeed possible? One of the lessons of state socialism was that planned economies don’t work. Was that only due to pervasive cynicism and opportunism among the bureaucrats, or was the problem really that no single person could grasp enough to plan the economy reasonably, or was the problem simply that they did not have the tools to do it? The market fundamentalists would say that a free market never fails, but anybody reading the newspaper today can see that that is not true, at least not in the short run which is what really matters for humans with a limited lifespan. I think that shows that while free market theory is a way to understand the complexity of our society, it simplifies too much, and fails to represent some significant aspects.

How do we benefit from the complexity of our society? Would it be better to make it simpler, or would we lose too much by doing so? Is it even possible to make it simpler without resorting to draconian coercion? Is there any natural limit to the complexity of society, other than the number of people involved?


  1. fche said,

    September 27, 2008 @ 7:56 am

    It seems to me a false ideal to make things “not too complex to understand”. One of the mild miracles of a free society is that there need be no single person, nor a group, who is in charge of “understanding” (or what people really tend to mean: “controlling” or “planning”). By and large, things take care of themselves when free people tend to their own matters.

  2. ncm said,

    September 28, 2008 @ 6:40 pm

    A system might be so complex nobody recognizes that it is a system at all. We’ve found lots of them recently, such as the network of microfiber fungus tendrils in the forest floor, or the immune system that includes the tonsils we used to nip out at first opportunity.

    A system might be so complex nobody can recognize that it is a system at all. I can’t offer recent examples of those, but the ancient Egyptians and the not so ancient Europeans both thought the brain didn’t do much. I wonder about the Amazon jungle, and about coral reefs.

    Frank, “free people tending to their own matters” got us climatic Armageddon, with upcoming mass extinction (a projected 30% of extant species) and starvation of billions of us precious humans, with incineration or irradiation of a good fraction of the rest, not sorted by preparedness.

  3. ncm said,

    September 28, 2008 @ 11:32 pm

    p.s. the mass starvation and incineration might not happen, if we’re very, very careful.

  4. etbe said,

    September 29, 2008 @ 4:12 am

    You might want to read the above blog.

  5. ncm said,

    September 29, 2008 @ 6:59 am

    etbe: Anyone who lists nuclear war as low-probability may be safely ignored.

  6. Ian Lance Taylor said,

    September 29, 2008 @ 8:20 pm

    fche: Understanding is not the same as controlling or planning. If the financial firms which destroyed themselves had understood what they were doing, they would not have done it. It’s easy to conclude that they were stupid, but I think the real problem was that they did not understand the risks they were creating for themselves.

    While I certainly agree that in general things take care of themselves when people tend to their own matters, I think that the financial meltdown is just one example which shows that there are times when that is not the case. History shows us that it is foolish to assume that things always take care of themselves.

    ncm: I’m not sure I can grok a system so complex that it can not even be recognized. Your examples don’t seem to show complexity, they seem to show unexpected interactions. That’s not quite the same thing in my mind.

    etbe: Thanks for the link. I remain very skeptical about the singularity or about widespread AI or nanotechnology. I think those ideas misunderstand the nature of our society, of human intellect, and of the interrelated biological systems in which we live. On the other hand, while I’m not sure I agree with ncm that nuclear war is likely, in the sense of an exchange of nuclear attacks, I think it would be astounding if we got through the 21st century without a single nuclear attack.

  7. etbe said,

    September 30, 2008 @ 1:49 am

    Ian: Why do you assume that the financial firms did not know what they were doing, and why do you assume that if they did know then they wouldn’t have done it?

    If you have a business environment where you can run a company into the ground, get paid a lot of money to do so, and then have the government take over the bad debts then the smart (although not nice) thing to do is to run a company into the ground.

    With the way the Bush regime has been running the US with the complicity of the Democratic party (who have not even tried to impeach him) it seems reasonable to expect that the $700,000,000,000 bailout plan or something similar would happen.

    Knowing how to avoid these problems is actually quite simple; it was all figured out after the Great Depression, when a set of laws was enacted to prevent such problems. Anyone who ran a bank could have said “the other banks are doing wild stuff and making some extra profits doing risky things that used to be illegal, but let’s be cautious and avoid all that”.

    Regarding nuclear war, the risk of an all-out war between countries such as the US and the former USSR seems low. But the risk of small attacks seems reasonably high (given that insane people like the leader of North Korea have nukes). It would surprise me if Washington lasts another 100 years without being nuked, but I expect that the vast majority of deaths from wars over the next 100 years will be from “conventional” weapons.

    As for the singularity etc., it seems that betting against improvements in technology is almost always going to be a losing bet. It seems to me that the issue of AI is whether you believe that there is something magical about a brain or whether you believe that any self-modifying system of comparable complexity can have emergent intelligence. Even without AI, the possibilities for enhancing human abilities (including mental abilities) seem significant. A country that had a significant minority of the population with abilities comparable to Leonardo da Vinci would have a massive advantage over countries with conventional populations. Michael seems to be considering all the possibilities.

  8. Ian Lance Taylor said,

    September 30, 2008 @ 6:44 am

    etbe: I agree that the incentives were strongly in favor of risky behaviour which let people pocket the money today. However, I don’t think anybody expected a government bailout, certainly not to this degree. The people who had been at these financial companies their whole lives did expect them to continue to exist. Most of them are not so cynical as to destroy the firms and walk away. The leaders of companies like Bear Stearns and Lehman had been there a long time and they were baffled and angry at what happened. They weren’t acting.

    The ways to avoid these problems are indeed simple, but people didn’t understand that they had these problems. They didn’t fully understand what they were betting on. At least, that is what I think. When things started to turn sour on Wall Street last year, the mood was not “well, it was bound to catch up to us someday;” it was “what the heck is going on?”

    I agree that conventional weapons will continue to be more dangerous than nuclear weapons, but nuclear weapons have a special stigma that is all their own. The firebombing of Dresden killed a lot more people than the bombing of Hiroshima.

    If I were to pick the city most likely to be destroyed by a nuclear weapon, it would be Tel Aviv.

    I certainly believe that AI is possible and that it will come to exist at some point, though I don’t know if it will be in this century. What I don’t believe in is the singularity. Or, rather, I believe that if there is such a thing as a singularity, that it happened 100 years ago.

    I can understand how technology can give us access to a great deal more information, and I agree that that is a qualitative change. I don’t understand how that makes anybody smarter or more creative. Leonardo da Vinci had several unusual qualities, and I think is best described as an artist. Better technology doesn’t make me into an artist; knowledge is not the same as insight.

  9. ncm said,

    September 30, 2008 @ 9:25 pm

    etbe: Oh yeah? Where’s my flying car? In fairness, the moonbat might have meant that the probability of nuclear war causing H. sap. to go extinct is negligibly low.

    Ian: What is complexity except interactions? Seriously at issue is “system”: some sort of long-term, more or less stable, active arrangement (“solar system”, “visual system”, “financial system”). It’s one thing to be able to recognize the presence of a system when it’s pointed out, and another thing to notice it under the leaf litter (especially when the leaf litter is the system).

    Anyway, the collapse of the banking system wasn’t because it wasn’t foreseen. It was that an individual player had to follow the rules of the game or be ejected, and any number could be ejected without averting the event. As Doug Rushkoff pointed out, they not only foresaw the collapse, they arranged to profit from it.

  10. Ian Lance Taylor said,

    October 1, 2008 @ 6:33 am

    ncm: You’re certainly right that some systems are nonobvious. I’m just struggling to conceive of a system which is too complicated to notice. I can understand a system which is too subtle to notice, but I’m not sure I understand what it means to be too complicated to notice. To me those don’t seem like quite the same thing.

    Everybody sensible knew there was a housing bubble, but most people in finance thought it would be a soft landing. I don’t think anybody foresaw the freezing of the short-term credit market. At least, I don’t recall reading that prediction anywhere. People thought the housing bubble would deflate the way the Internet stock bubble did.

    Although actually, when I say that everybody knew there was a housing bubble, I tell a lie. Some finance companies construct models based on past behaviour, and in the past housing prices never declined across the U.S. They used that fact to model future performance, and thus to support their purchase of complex derivatives which were based on housing prices never going down. This only happened for complex derivatives, because in simple cases people said “say, that’s not true.” In complex ones they let it slide. Hence back to the original topic of this post.
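    The modeling flaw described above can be made concrete with a toy sketch (all numbers here are invented for illustration): a model that estimates the probability of a nationwide price decline purely from a history containing no declines will assign that event a probability of exactly zero.

    ```python
    # Toy illustration of the modeling flaw above: estimating the probability
    # of a nationwide house-price decline from a history that contains none.
    # All numbers are invented for illustration.

    # Hypothetical year-over-year national price changes, in percent, all positive.
    historical_changes = [2.1, 3.4, 5.0, 4.2, 1.8, 6.3, 7.1, 4.9]

    # Naive empirical model: the probability of an event is its observed frequency.
    declines = sum(1 for change in historical_changes if change < 0)
    p_decline = declines / len(historical_changes)

    print(p_decline)  # 0.0 -- the model cannot even represent a decline
    ```

    A derivative priced off such a model effectively treats a nationwide decline as impossible, which is exactly the simplification that failed.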

    The article you cite seems to have some errors. It is not necessarily the case that when market money vanishes, somebody profits. When a stock price drops precipitously, nobody is selling all the way down. The price starts at one point, and the next point at which anybody buys may be far lower. Though of course I agree that the housing bubble would show that Alan Greenspan really didn’t know what he was doing, if we hadn’t already learned that when he supported the Bush tax cuts back in 2001.
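    The point about vanishing market money is simple arithmetic; here is a made-up example of a price gapping down, where the lost paper value has no counterparty who pocketed it.

    ```python
    # Invented numbers illustrating market value vanishing without an
    # offsetting profit anywhere.
    shares_outstanding = 1_000_000
    last_trade = 100.0   # price before the drop
    next_trade = 60.0    # first price at which anyone actually buys again

    # Paper value that disappeared between the two trades: no shares changed
    # hands at any intermediate price, so nobody captured this amount.
    value_lost = shares_outstanding * (last_trade - next_trade)
    print(value_lost)  # 40000000.0
    ```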

  11. ncm said,

    October 1, 2008 @ 1:12 pm

    What I’m getting at is that a system does something. Maybe we perceive some, or even all, the pieces, and maybe notice what some of the parts are doing, but we miss what it does overall because it’s too complex or subtle to recognize as a coherent activity.

    Imagine, for instance, the computer-trading system as a whole achieving sentience, and then using details of transactions and valuations to manipulate the market and, thereby, human society. It might act to bring a monotonically increasing part of human activity under the control of futures markets. Now, omit the sentience, and have it result from subtleties in the construction of the trading apparatus. We might observe life becoming increasingly commoditized, and suspect it has a cause, but not be able to trace it to any details of the trading apparatus.

  12. Ian Lance Taylor said,

    October 1, 2008 @ 4:31 pm

    OK, that does seem conceivable. Thanks.

  13. etbe said,

    October 2, 2008 @ 2:00 pm

    Ian: You seem to think that there can only be one singularity and that the issue is when it happens or happened. It seems to me that there have been several already. I’ve written my thoughts about this at the above URL.

  14. Ian Lance Taylor said,

    October 3, 2008 @ 9:14 pm

    No, I don’t think there can be only one singularity. I just find it useful when discussing the singularity with Kurzweil fans to suggest that it has already happened. This doesn’t mean there won’t be another singularity, but it does help control people’s expectations of what it will mean.

    Thanks for the link to your essay; I mostly agree with it. I think the name “singularity” originally came from Vernor Vinge’s novel “Marooned in Realtime”; at least, that is where I first encountered it, and I don’t know of an older reference.

  15. etbe said,

    October 4, 2008 @ 4:49 am

    Ian: Trolling Kurzweil fans? 😉

  16. Ian Lance Taylor said,

    October 6, 2008 @ 9:29 pm

    One does run into them here in Silicon Valley. “We could all live forever!!” Let me get back to you on that one.
