Archive for May, 2008

Make More Oil

Oil takes a long time to make, so we should start making more now. When we strip mine for coal, we will eventually want to fill those holes back in. We should fill them in with plant waste and cover them with rocks. Besides starting the oil-making process, this will help sequester carbon. Later generations will thank us. (Admittedly I have no idea whether this would actually start making oil, but there may be something along these lines which would.)

Comments (7)

Suburban Oil

There are replacements for the oil we use in cars, such as biofuels, or electricity generated by solar, wind, or nuclear power. However, these alternatives are currently more expensive. It seems to me that the only one which is likely to become cheaper is solar power, but the average home can’t collect enough solar power to drive a car a significant distance. Oil is millions of years of solar power compressed into a fluid. Nothing we know of today can equal that in terms of massive energy output for low energy input, because nothing we know of today has that initial energy investment already built in.

U.S. suburbs and exurbs were designed for cheap personal transportation. What is going to happen to them, assuming I’m right that transportation will become inexorably more expensive? Few people are going to want to move to a place where transportation becomes a big part of their budget. Inevitably housing prices in the suburbs will fall. People will move to the cities, or at least to places with high-efficiency rail connections to the cities. In other words, the U.S. will start to look more like Europe.

Housing prices are falling right now in the U.S., just as gasoline prices are rising. It is possible that housing prices in the suburbs will never recover. We’ve seen that before in some cities, like Detroit. I expect that we’ll start to see blighted suburban neighborhoods–blighted in the sense of houses being abandoned as the owners cannot find a buyer. It’s not all bad–at least it should increase forest cover over time, which will help somewhat with our carbon dioxide problem.

This is long-term. In the short term people will move to more fuel-efficient cars. In the long term, though, gasoline prices will go steadily upward, and while I see replacements on the horizon I don’t see them at the same price.

Comments (14)

Monotheism

When I was in grade school we were taught that monotheism was a historical advance, comparable to agriculture or other notable inventions. For example, we learned that Akhenaten was a significant figure because he was the first historical figure to advocate monotheism, although it was later repudiated by his successor Tutankhaten aka Tutankhamun aka King Tut. (Akhenaten lived about two centuries before the first historical evidence of Judaism; Freud suggested that Moses was actually a monotheistic priest during the reign of Akhenaten.) Even in grade school this argument seemed vaguely suspect to me. The advantages of agriculture seem clear, the advantages of monotheism less so.

These days I do see monotheism as something of an advance. The earliest cultures we know of believed in gods who were much like people, albeit people who were powerful and sometimes unpredictable. In some cases the gods were simply ancestors. I think this is a natural consequence of our tendency to attribute events to causes. When we want to understand the weather, our impulse is to give it a personality and motivations. It’s only a small step to think that there is a powerful person–a god–who controls the weather.

This then becomes an obstacle to actually understanding what is happening. If you already have an explanation for the weather, and your explanation inherently incorporates unpredictability, there is little purpose to looking for a deeper explanation. Since I do think that scientific thought is an advance in human culture, it follows that these early religions prevented advances.

Monotheism reduces the mass of gods to just one. This god still controls the weather, but now there is just one entity that you have to understand. It becomes possible to seriously think about god’s will and hope to reach some conclusions about it. As thinking progresses, the god becomes more abstract—created the whole world, pays attention to everything—and it becomes easier to think in terms of fixed laws rather than whims. It’s still a big step to get to science, but it’s more feasible, and monotheism may be a necessary stopping point.

I was reminded of this line of thought while reading about the Gospel of Judas. Today I don’t see how it’s possible to see Judas as anything but a patsy—hence his lyric from Jesus Christ Superstar: “I only did what you wanted me to.” The Gospel of Judas doesn’t really present him that way, but it does suggest that Judas was himself a human sacrifice to Christ. This was, after all, a time when animal sacrifices to the gods were routine, though not a practice of the Christians. The Gospel of Judas was an alternate view of the Christ story, one that was suppressed by the early church as it coalesced on a single view of the religion. Ditching the Gospel of Judas was a good move, since it seems pretty complicated. Anyhow, reading about it reminded me that there is a lot of contingency in the religions that we have today. Monotheism may have been an advance in retrospect, but, unlike agriculture, it wasn’t an advance at the time. I don’t see any reason to think that things could not have gone otherwise.

Comments (3)

Layered Programming

Many programs today are written at a very high level. They run in an interpreted environment rather than being compiled. Often many different components running in different interpreted environments are hooked together. HTML and XML, for example, started out as markup languages, but now they are often also used as components of programs that hook together the output of different servers.

Computer programming has always been based on layering and abstraction. The processor abstracts the transistor, the traditional programming language abstracts the processor, the kernel abstracts the hardware. What seems fairly new to me is the speed at which these layers change and their complexity. New ideas are implemented in the form of extensive libraries. Each library can be learned in isolation, but there is no unifying principle across libraries.

It is becoming increasingly difficult to be a systems expert. When I learned to program, it was possible to understand your entire program from the source code, in whatever language, down to the machine code. When writing a modern Ajax application, that is simply impossible. There are too many different interpreters. There is too much code involved. Even fixing on a new base level above the processor–perhaps the browser–doesn’t help. This all leads to decreased performance, which is sometimes important, and decreased security, which is often important.
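To make that concrete, here is a minimal sketch of an Ajax-style request in TypeScript. The URL, element id, and response shape are hypothetical placeholders of mine, not taken from any real application; the point is how many layers even this trivial snippet rests on.

```typescript
// A minimal Ajax-style request. Even this trivial snippet depends on:
//  - the TypeScript toolchain and the browser's script engine,
//  - the browser's DOM, networking, and security layers,
//  - the HTTP stacks and operating system kernels on both ends,
//  - whatever interpreted environment produces the server's reply.
// The URL, element id, and response shape are hypothetical placeholders.
async function showGreeting(): Promise<void> {
  const response = await fetch("https://example.com/api/greeting");
  const data: { message: string } = await response.json();
  // The reply itself is typically generated by yet another interpreter
  // (PHP, Python, Java, ...) running behind the web server.
  document.getElementById("greeting")!.textContent = data.message;
}
```

No one person can read all of the code that executes when this function runs, which is the sense in which such a program can no longer be understood from source code down to machine code.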

We can’t go back. What I wonder is whether we will again cohere to a programming model which can be understood at all relevant layers, or whether things are just going to get increasingly complicated.

Comments (4)

Peer Review

Peer review can be a useful technique when programming. It ensures that at least one other person has read the code. It can catch dumb bugs and help ensure that the code is not unnecessarily obscure. Several popular programming methodologies use it. (Pair programming has the same benefits.)

Peer review has one obvious disadvantage: it slows down coding. In order for peer review to be meaningful, you have to present digestible chunks for review. And that means waiting for the review, or using some sort of patch management to permit continued coding until the review is complete and to incorporate changes suggested by the review.

I generally have not worked on projects that require peer review. The gcc project requires maintainer approval of all changes, but maintainers are permitted to commit their own changes without review. I can see the advantages of a peer review system, provided there is some mechanism to ensure that reviews happen quickly. If reviews can linger, then projects can stall very quickly.

gcc has a difficult enough time getting patches reviewed as it is. It’s hard to recommend anything which would make it slower. One approach that might make it more acceptable would be to say that if a maintainer writes a patch, the peer review can be done by anybody–it would not have to be another maintainer. That is, require a reviewer for every patch, but only require that either the author or the reviewer be a maintainer.
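As a sketch, that rule is simple enough to write down as a predicate. The record and names below are purely my own illustration, assuming a hypothetical Patch type and a set of maintainer names; this is not anything from gcc’s actual process.

```typescript
// Hypothetical encoding of the proposed rule: every patch needs a
// reviewer other than its author, and at least one of the two must be
// a maintainer. Names and types here are illustrative, not gcc policy.
interface Patch {
  author: string;
  reviewer?: string; // undefined until someone has reviewed the patch
}

function mayCommit(patch: Patch, maintainers: Set<string>): boolean {
  // Every patch must be reviewed, and self-review does not count.
  if (patch.reviewer === undefined || patch.reviewer === patch.author) {
    return false;
  }
  // Either the author or the reviewer must be a maintainer.
  return maintainers.has(patch.author) || maintainers.has(patch.reviewer);
}
```

Under this predicate a maintainer’s patch may go in once anyone has reviewed it, while a non-maintainer’s patch still needs a maintainer as its reviewer, which is exactly the relaxation suggested above.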

I’m not sure whether this would be a good idea or not. It would be good to improve the quality of the gcc code base, but the quality is not so bad that drastic measures are required. Only a small additional cost would be acceptable.

Comments (2)
