Archive for June, 2010


I don’t have a well thought out view of Afghanistan. But General McChrystal’s counter-insurgency plan never made much sense to me. The plan by definition requires a government which the people can trust. But all reports are that Hamid Karzai is not trusted by the people in Afghanistan. The election last year was a total fiasco. That kind of seems like a big gaping hole in the middle of the counter-insurgency plan. You can’t build trust in a government whose leader stays in power by fraud. McChrystal was reportedly trying to build trust in Karzai by travelling with him and boosting his position, but since frankly I can’t see why the Afghan people would trust McChrystal or the U.S. either, that seems like a flawed plan.

So now McChrystal is out amid reports of bickering and infighting. But we’re still going to follow the same plan under General Petraeus. The basic dynamic of the situation is unchanged. How is this not going to be a disaster?

The U.S. made progress in Iraq, against my expectations, by showing that people had more to gain by participating in politics than they did by staying out. In particular, the Iraqis showed themselves what a civil war would look like, and many of them backed away. Iraq remains a long way from normal, and the former middle class remains largely outside the country, but it’s hugely better than it was four years ago.

Afghanistan is a much bigger country than Iraq with a much smaller population. The political dynamics are by necessity quite different. The political class is much smaller. I don’t see why one would expect the same process to work.

It’s also worth questioning what the U.S. has to gain from Afghanistan. Al Qaeda has relocated to Pakistan. No reasonable person would want to let the Taliban regain control, but there is no U.S. national interest in Afghanistan. There is no oil. The recently trumpeted mineral wealth holds little interest for the U.S., which is no slouch in mineral wealth itself. What is going to keep us there for the time it takes to turn Afghanistan into a modern society?

At this point I think the military approach is entirely wrong. I think an economic approach would be much more effective. Maybe we should try to make Kabul as secure as we can and as rich as we can, and open its gates to anybody who will enter without weapons. Hand out radios and food. Let the Taliban fight for the rest of the country, but show most of the people that a better way is available. I don’t know if this would work at all, but it would be cheaper in lives and money than the current approach.

Since we’re not going to do that, I just hope that I’m wrong again, and that something useful comes out of this, even if I can’t see what.

Comments (7)


I’ve come across a few articles recently about how modern medicine is on the road to conquering death in the next thirty years or so. I find this very unlikely, and I feel that people aren’t thinking about the real issues. I’ve seen two general themes. One is that the singularity will come and change everything, which is essentially unanswerable except by rolling your eyes and backing away. The other is that death is essentially a type of disease, and we will learn to cure it.

Unfortunately, death is not a disease to be cured. It’s a fundamental aspect of life. In the competition for food and other resources necessary for life, the most significant competitors of any individual organism are the other members of its own species. They are the ones who seek to occupy exactly the same niche. Complex organisms which do not die will have more size and experience than their descendants, and will therefore tend to outcompete them. It follows that species whose organisms do not die will tend to not evolve. They will over time be outcompeted by other species which do evolve. Thus death is a key evolutionary strategy for any successful species. The fact that individuals may prefer not to die is irrelevant to long term evolutionary history.

What this means is that death is a finely tuned aspect of ourselves, just as finely tuned as our rather remarkable ability to reproduce ourselves. And it’s not just an aspect of ourselves, it’s an aspect of our evolutionary forebears for eons.

It may seem superficially that humans pass through a period of childhood, then enter a phase of stasis, and then decline and die. However, in fact humans change slowly throughout their lives. Arresting the aging process would be just as complex as arresting the growth process during the teenage years. All our bodily systems are shaped by evolution to head in a particular direction. Stopping that means changing all aspects of our bodies. It would mean a person aged 20 who does not turn into a person aged 30. That means changing a hundred different aspects of how the body grows.

The fundamental argument of the people seeking to conquer death is that the body is a machine, and that we can figure out how to fix the machine so that it does not fail. However, the bodily machine was created by an evolutionary process, not by human design. Think of the ugliest, least comprehensible computer program you’ve ever seen, code which is uncommented and full of cross dependencies. Think of the hacker who wrote that code: code that works but is unmaintainable. Imagine letting that hacker work on a computer program for a million years, continually micro-optimizing and never doing a comprehensive overhaul or redesign. Now you have to reverse engineer it. That’s what figuring out the human body is like. Every system in the body has deep layers of complexity and is related to other systems in strange and surprising ways. Despite all the near-miraculous advances of modern medicine, we are still only scratching the surface of understanding how the body works. Increasing computer power will help, of course, but we don’t even know the questions to ask. This is going to be a task of many generations, and even as we start to understand, it will take far more work before we have any idea how to actually change anything.

Of course I could be entirely wrong, and I do think that research on aging should continue. I just don’t see any reason for optimism. A human who does not age would really be an entirely different species. What reason do we have to think that we can create such a species any time in the foreseeable future? If we could create it, what reason do we have to think that we can somehow convert ourselves?

Comments (6)

Martin Beck

Apparently the great popularity of Stieg Larsson’s novels has triggered a new interest in Swedish mystery authors. I’d like to plug the Martin Beck series by Maj Sjöwall and Per Wahlöö. It’s ten books written in the 60s and 70s.

Actually, other than being Swedish, they are entirely different from Larsson’s novels. Larsson reads like an intelligent Dan Brown with real characterization. The Beck novels are police procedurals, telling the story of solving a crime from the perspective of a policeman, Martin Beck. The novels were also intended to be an examination of Swedish society, which sounds daunting but is quite effective in practice.

The Beck novels have some extremely funny scenes, scenes which are made all the funnier by the fact that nobody in the story considers them amusing at all, and indeed they would not be funny if you were involved in them in real life. For example, in The Terrorists (Terroristerna), the police break into what turns out to be a completely empty room, which, through a series of completely plausible mishaps, results in several shootings and near fatalities.

Henning Mankell, a popular current Swedish mystery writer, is clearly strongly influenced by Sjöwall and Wahlöö. Many of Mankell’s novels are quite good, but I prefer the earlier ones.

Comments (1)

gccgo panic/recover

Back in March Go picked up a dynamic exception mechanism. Previously, when code called the panic function, it simply aborted your program. Now, it walks up the stack to the top of the currently running goroutine, running all deferred functions. More interestingly, if a deferred function calls the new recover function, the panic is interrupted, and stops walking the stack at that point. Execution then continues normally from the point where recover was called. The recover function returns the argument passed to panic, or nil if there is no panic in effect.
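The semantics described above can be sketched in a few lines of ordinary Go. This is a hypothetical illustration of the language mechanism, not anything from the gccgo implementation; the safeDiv name and its error message are my own invention:

```go
package main

import "fmt"

// safeDiv shows the new mechanism at work: panic walks up the stack
// running deferred functions, and a deferred function that calls
// recover stops the panic, after which the enclosing function
// returns normally.
func safeDiv(a, b int) (q int, err error) {
	defer func() {
		// recover returns the value passed to panic, or nil if
		// there is no panic in effect.
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	if b == 0 {
		panic("division by zero")
	}
	return a / b, nil
}

func main() {
	q, err := safeDiv(10, 2)
	fmt.Println(q, err) // 5 <nil>
	_, err = safeDiv(1, 0)
	fmt.Println(err) // recovered: division by zero
}
```

Note that when recover stops the panic, safeDiv still returns; only the named result err reveals that anything went wrong.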

I just completed the implementation of this in gccgo. It turned out to be fairly complex, so I’m writing some notes here on how it works.

The language requires that panic runs the deferred functions before unwinding the stack. This means that if the deferred function calls runtime.Callers (which doesn’t work in gccgo, but never mind, it will eventually) it gets a full backtrace of where the call to panic occurred. If the language did not work that way, it would be difficult to use recover as a general error handling mechanism, because there would be no good way to dump a stack trace. Building up a stack trace through each deferred function call would be inefficient.

The language also requires that recover only return a value when it is called directly from a function run by a defer statement. Otherwise it would be difficult for a deferred function to call a function which uses panic and recover for error handling; the recover might pick up the panic for its caller, which would be confusing.
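A small hypothetical sketch makes this rule concrete: recover called through a helper function returns nil, while recover called directly from the deferred function picks up the panic. The tryRecover and demo names are mine:

```go
package main

import "fmt"

// tryRecover is not itself run directly by a defer statement, so its
// call to recover returns nil even while a panic is in progress.
func tryRecover() interface{} {
	return recover()
}

func demo() (direct, viaHelper interface{}) {
	defer func() {
		viaHelper = tryRecover() // nil: one call too deep
		direct = recover()       // "boom": called directly from defer
	}()
	panic("boom")
}

func main() {
	d, v := demo()
	fmt.Println(d, v) // boom <nil>
}
```

This is the behavior the gccgo implementation has to enforce: only the function invoked directly by defer may intercept the panic.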

As a general gccgo principle I wanted to avoid requiring new gcc backend features. That raised some difficulty in implementing these Go language requirements. How can the recover function know whether it is being invoked directly by a function started by defer? In 6g, walking up the stack is efficient. The panic function can record its stack position, and the recover function can verify that it is at the correct distance below. In gccgo, there is no mechanism for reliably walking up the stack other than exception stack unwinding, which does not provide a helpful API. Even if it did, gccgo’s split stack code can introduce random additional stack frames which are painful to account for. And there is no good way for panic to mark the stack in gccgo.

What I did instead was have the defer statement check whether the function it is deferring might call recover (e.g., it definitely calls recover, or it is a function pointer so we don’t know). In that case, the defer statement arranges to have the deferred thunk record the return address of the deferred function at the top of the defer stack. This value is obtained via gcc’s address-of-label extension, so no new feature was required. This gives us a value which a function which calls recover can check, because a function can always reliably determine its own return address via gcc’s __builtin_return_address function.

However, if the stack is split, then __builtin_return_address will return the address of the stack splitting cleanup code rather than the real caller. To avoid that problem, a function which calls recover is split into two parts. The first part is a small thunk which is marked to not permit its stack to be split. This thunk gets its return address and checks whether it is being invoked directly from defer. It passes this as a new boolean parameter to the real function, which does permit a split stack. The real function checks the new parameter before calling recover; if it is false, it just produces a nil rather than calling recover. The real function is marked uninlinable, to ensure that it is not inlined into its only call site, which could blow out the stack.

That is sufficient to let us know whether recover should return a panic value if there is one, at the cost of having an extra thunk for every function which calls recover. Now we can look at the panic function. It walks up the defer stack, calling functions as it goes. When a function successfully calls recover, the panic stack is marked. This stops the calls to the deferred functions, and starts a stack unwind phase. The stack unwinding is done exactly the way that g++ handles exceptions. The g++ exception mechanism is general and cross-language, so this part was relatively easy. This means that every function that calls recover has an exception handler. The exception handlers are all the same: if this is the function in which recover returned a value, then simply return from the current function, effectively stopping the stack unwind. If this is not the function in which recover returned a value, then resume the stack unwinding, just as though the exception were rethrown in C++.

This system is somewhat baroque but it appears to be working. Everything is reasonably efficient except for a call to recover which does not return nil; that is as expensive as a C++ exception. Perhaps I will think of ways to simplify it over time.


Proposition 16

California’s proposition 16, which will be voted on next Tuesday, is an interesting use of California’s bizarre ballot initiative process. The proposition says that if a local government wants to start a municipal electrical utility, it must get a 2/3 majority of votes. The proposition was initiated and almost entirely funded by PG&E, a California electric company, which has reportedly spent over $35 million on advertising in support of the proposition. I rarely watch television, but I’ve received quite a few flyers in the mail about it. Some even list a long set of candidates and positions to endorse, along with proposition 16. However, since these flyers are required to list the groups which explicitly endorsed the flyer, it’s easy to see that the flyers are being put out for proposition 16, and they are trying to slide in support for it along with other candidates I might be inclined to vote for anyhow.

PG&E is the only company from which I can buy my electricity. Electricity distribution is a classic natural monopoly; why would two different companies put up wires to everybody’s house? Since modern life requires electricity, and since I can only get it from PG&E, I am a PG&E customer. Proposition 16 appears to be designed solely to preserve PG&E’s monopoly position, by making it much harder for communities to create their own municipal power companies. It would be hard enough to get a majority vote in favor of government run power; I think we can assume that a 2/3 majority would be effectively impossible. It’s pretty darn annoying that PG&E is spending $35 million, including money they collect from me, on this. Is that a good use of my money?

Municipal power is not impossible. For example, Palo Alto, California, uses a municipal power utility. It was notable during the rolling blackouts that affected most of California in the early 2000s that Palo Alto was immune. So it’s not as though PG&E deserves to be protected because they are doing a particularly good job. When a real crunch time came, they did a very poor job indeed.

It’s difficult for me to imagine why anybody would vote in favor of such a blatant power grab by a private company. But then, of course, there’s the $35 million. The opponents of proposition 16 have reportedly raised less than $100,000. I assume that if PG&E succeeds we will see more and more cases where private companies spend lots of money on ballot initiatives in their favor. I hope that it fails, and if I have any readers in California I encourage you to vote against proposition 16 this Tuesday.

Comments (3)

« Previous entries Next Page »