Archive for May, 2008

Disastrous Government

The recent natural disasters in Myanmar and China have led to terrible suffering. I know that many people are doing everything they can to help the people who have been hurt. My comment is that it is clear that there were significant governmental failures in both cases. In Myanmar the government appears to be actively preventing people outside Myanmar from providing aid. In China the government appears to have failed to ensure that schools for poorer children were safe.

We had a similar governmental failure in the U.S., of course, in our response to Hurricane Katrina in New Orleans. In that case the U.S. government failed to support an orderly evacuation, failed to defend against predictable storm surges, and failed to rescue people after significant parts of the city flooded.

Natural disaster prevention and response is one of the key roles of government. We can’t protect ourselves individually against extreme situations, so we form a government to do it collectively. When the government fails in that task, it has failed in one of its most vital areas.

Viewed in that light, it is shocking that there has been no accountability in the Bush administration for the Katrina disaster. I think there is one clear lesson there: if you elect somebody who does not believe in government, you will get bad government when it matters. Of course a certain degree of delayed accountability was imposed in the 2006 elections, and more may come in the 2008 elections.

I think we can safely predict that there will also be no accountability for what happened in Myanmar. Dictators do not have to answer to the people.

It will be quite interesting to see what happens in China. The Chinese government is authoritarian, but Chinese history shows us that Chinese governments must be at least somewhat responsive to the people. Will they retrench, will they offer some minor sacrifices, or will they provide some real accountability? We’ll see over the next few months.


Oil Speculation

I’ve seen some arguments that the current spike in the price of oil is being driven by speculators. The general argument seems to be that people are betting that the price of oil will rise in the future, and are locking in prices now via futures contracts. They plan to sell those contracts in the future for a higher price.

This argument doesn’t make a lot of sense to me because it would be a very risky move. People are clearly starting to cut back on oil purchases. We can expect the price to fall again. That would mean that the speculators would lose their bet. I’m sure that some people would bet that way, but enough to make the price spike the way it has? I doubt it.

Another way that speculators might affect the price would be if people were buying oil, or rather taking delivery of oil they have already paid for, and holding on to it until prices rise. That would restrict the supply and thus drive up the price. This strategy would be more sensible. However, it would also be visible. Any interested government (and there are plenty of interested governments) should be able to track where most of the oil is going. The oil market is very visible, and oil tankers are easy to spot. If somebody were stockpiling significant amounts of oil, enough to affect the price, I think somebody would notice.

I expect that the oil price is spiking for the traditional reason: demand is growing faster than supply. Since demand is starting to drop, we can expect the price to start to drop too. In fact, oil companies expect that to happen; that’s why they are returning their windfall profits to their shareholders and executives rather than investing them in developing new sources.

For the sake of our sea levels, I hope the price doesn’t drop too far. If it plateaus at a reasonably high level, that should be an even bigger spur to investment in alternate energy sources, both from investors and from governments. We need that now.


Why Blog?

An article in the New York Times Magazine this week discusses why people blog. It is by Emily Gould, who used to work for Gawker. She describes herself as an “over-sharer”, and attributes her blogging, and the problems that resulted from it, to that.

There are blogs in which people mainly discuss their personal lives, although I don’t read any of them. The blogs I read are the ones which are mainly about ideas, and that is what I aspire to do with this blog. I don’t think I’ve ever been an “over-sharer.” I do believe that everybody thinks all the time–in some ways, I think that television, pulp novels, etc., are for most people a way to slow the mind down. For me this blog is a way to get that thinking down on virtual paper and out of my head. Getting ideas out of my head frees up more head space and keeps them from spinning around inside.

It helps that I enjoy writing, or at least that I want to think of myself as the sort of person who enjoys writing. That may be the only significant issue that divides people who write blogs from people who don’t: whether or not they like to write. Or, in the case of video blogs, take pictures.


Indiana Jones

I’ve always mentally grouped Raiders of the Lost Ark with another early-’80s film: Buckaroo Banzai. Both films represented a new approach to action and SF films, and both were influential. Both had precursors, of course, but they still stand out in my memory. Raiders was less ironic and self-aware than the other; in fact, Spielberg and Lucas are almost never ironic, and almost never work on any level other than the obvious one. (The people who complained about Spielberg’s film Munich before they even saw it didn’t understand this, and indeed the controversy died down quickly when the movie actually appeared.)

Raiders was also the only one to have a sequel, which was only possible because it wasn’t ironic. Buckaroo Banzai ended with what appeared to be an ad for a sequel, but it was just a joke, part of the underlying meta reference of the whole movie. That is, Raiders was a recreation of an old movie serial, but Buckaroo Banzai was both a recreation and a commentary.

Unfortunately, there is a reason we stopped watching movie serials; they don’t change. Making Raiders the first time makes a lot of sense: it’s a look back at an old form, updated for today (or 1981, anyhow). Making a sequel today only makes sense if something changes. But in the Raiders’ sequels, nothing changes. Indiana Jones picks up a family and he gets older, but he himself stays the same throughout. What’s the point? It’s fun, but it’s also repetitive. Compare to Star Wars, for example: whatever you may think of the sequels, it is undeniable that each one was different, and that the characters changed.

So it seemed to me that Indiana Jones and the Crystal Skull was more or less an exercise. There should have been more of a reason to create a new sequel twenty years later. I kept hoping it would move in some new direction, but it never did.

Another thing I disliked about the new movie is how far it strays from plausibility. In Raiders, Indiana Jones was always at least on the edge of plausibility: no real human being could escape from that pit, win the fight by the airplane, jump on a horse, and get dragged by a truck, all in the space of 20 minutes. But each single event was almost plausible, and the story pulled you through them all seamlessly. In Crystal Skull there are a couple of scenes which I found to be simply impossible, breaking the suspension of disbelief.

Obviously my thoughts are not going to affect anyone’s movie-going decisions. That said, my advice on the summer movies so far is to see Iron Man.


Multithreaded Garbage Collection

Garbage collection is the traditional solution to the problem of managing memory. Multithreaded programming is the current wave of the future. I’ve written about the difficulties of multithreaded programming before, but people are going to do it regardless. In which case: how do we do garbage collection in a multithreaded program?

Let’s assume that we don’t want to halt the whole program during garbage collection, because that is expensive. In that case, it’s not too hard to see how collection can be done if 1) you can halt the whole program (other than the garbage collection thread) for a brief period; 2) any change to a heap object puts the object on a list of changed objects; and 3) all pointer loads and stores are atomic with respect to each other. Then the garbage collection thread can halt the program while it scans the roots, let the program run while it does a mark pass, halt the program again to scan the changed objects, and let the program run while it does a sweep. (This has to be an in-place garbage collector, not one that moves the valid objects.)
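
Concretely, here is a minimal sketch of that cycle for an in-place mark-and-sweep collector. Everything in it is a hypothetical stand-in (the Heap layout, stop_the_world, and the changed set that a write barrier, not shown here, would have to maintain); it only illustrates the two short pauses with the tracing and sweeping done while the program runs.

```cpp
#include <mutex>
#include <unordered_set>
#include <vector>

struct Object {
  bool marked = false;
  std::vector<Object*> children;   // outgoing pointers to other heap objects
};

struct Heap {
  std::vector<Object*> objects;         // every allocated object
  std::vector<Object*> roots;           // globals, thread stacks, etc.
  std::unordered_set<Object*> changed;  // requirement 2: filled in by a
                                        // write barrier (not shown)
  std::mutex changed_lock;              // guards 'changed' between mutators

  void stop_the_world()   { /* halt every thread but the collector (hypothetical) */ }
  void resume_the_world() { /* let them run again (hypothetical) */ }
};

// Mark everything reachable from obj.
static void mark_from(Object* obj) {
  if (obj == nullptr || obj->marked) return;
  obj->marked = true;
  for (Object* child : obj->children) mark_from(child);
}

// One collection cycle: two brief pauses, with the expensive tracing and
// sweeping done while the program keeps running.
void collect(Heap& heap) {
  // Pause 1: scan the roots while the mutator threads are stopped.
  heap.stop_the_world();
  std::vector<Object*> gray(heap.roots.begin(), heap.roots.end());
  heap.changed.clear();
  heap.resume_the_world();

  // Concurrent mark pass; any object the program modifies in the
  // meantime ends up in heap.changed.
  for (Object* obj : gray) mark_from(obj);

  // Pause 2: re-scan the objects that changed during the mark pass.
  heap.stop_the_world();
  for (Object* obj : heap.changed) {
    obj->marked = true;
    for (Object* child : obj->children) mark_from(child);
  }
  heap.resume_the_world();

  // Concurrent sweep: anything still unmarked is garbage.  (A real sweep
  // would free those objects and coordinate with the allocator; here we
  // just reset the marks for the next cycle.)
  for (Object* obj : heap.objects) obj->marked = false;
}
```

The pauses stay short because the expensive work happens concurrently; the cost is moved into the bookkeeping that has to happen on every pointer store.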

It’s possible to implement those requirements for an interpreted language, which is the traditional setting for Java. You can still JIT the code that uses the heap, and it helps to do some escape analysis to see whether a heap pointer can possibly escape the function.

I don’t really see how to implement those requirements for a native code language like C++. In particular, tracking the changed objects seems somewhat painful. There was a garbage collection proposal for the next C++ standard, though I believe that it may have been withdrawn. But I don’t see how to implement garbage collection efficiently in a multithreaded program. I did some web searches, but the most helpful-sounding ideas I could find were all in academic papers which weren’t online. I wonder if there are any actual implementations which try to implement my suggested requirements.
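
To make “tracking the changed objects” concrete: requirement 2 amounts to a write barrier on every pointer store into the heap, roughly the shape sketched below. The names and types are made up for illustration; the painful part for C++ is that every store in the program, including in existing libraries, would have to be funneled through something like this, which realistically needs compiler support.

```cpp
#include <atomic>
#include <mutex>
#include <unordered_set>

struct Object {
  std::atomic<Object*> field{nullptr};  // one example pointer field
};

// Requirement 2: the set of objects modified since the last root scan,
// shared by all mutator threads and drained by the collector.
struct ChangedSet {
  std::mutex lock;
  std::unordered_set<Object*> objects;
};

static ChangedSet changed_objects;

// Mutator threads store heap pointers through this barrier instead of
// assigning to the field directly.
inline void write_pointer(Object* obj, std::atomic<Object*>& field,
                          Object* value) {
  {
    std::lock_guard<std::mutex> guard(changed_objects.lock);
    changed_objects.objects.insert(obj);  // record that obj was changed
  }
  // Requirement 3: the store itself is atomic with respect to the
  // collector's loads.
  field.store(value, std::memory_order_release);
}

// Usage: write_pointer(a, a->field, b) rather than a->field = b.
```

In a JIT setting the compiler can emit that barrier automatically; for ahead-of-time C++ you would need either compiler changes or smart-pointer-style wrappers around every heap pointer.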

