Posted 2009-04-28 22:00:00 GMT
The fashion in computing nowadays is to assume the programmer is an idiot (this makes sense to computer scientists, an uncharacteristic demonstration of self-awareness). One example is the trend towards automatic storage management. The system is supposed to magically decide when an object is no longer needed; this is called garbage collection. The inefficiencies it introduces are large and increase with the level of hardware parallelism.
Anybody who has written a large program in an environment with an automated garbage collector has unfortunately learnt rather a lot about it. Generally the best thing to do (especially for badly tuned collectors, e.g. Ruby 1.8 or Allegro CL 8.1) is to increase the amount of space that can be allocated before garbage collection is triggered. When the program generates garbage slowly this is a good compromise on the way to the extreme of never de-allocating memory.
Before garbage collection became fashionable, programmers used the stack. This is an excellent idea. It is extremely efficient. It scales perfectly with more CPUs, since each thread has its own stack, and it works for all objects that have a dynamic extent (that don't persist after the function creating them has exited). Handing objects like these to a garbage collector is a huge waste of time. It can easily mean a 100-fold slowdown. Environments which don't allow stack allocation cannot compete.
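As a minimal C++ sketch of the dynamic-extent case (the function name and buffer size are invented for the example): the scratch buffer below lives and dies with the call frame, so releasing it is a stack-pointer adjustment, with no allocator and no collector involved.

```cpp
#include <array>
#include <numeric>

// The scratch buffer has dynamic extent: it exists only while this call
// frame does.  Allocation and release are stack-pointer adjustments, and
// each thread's stack is private, so nothing here contends across CPUs.
int sum_of_squares(int n) {
    std::array<int, 64> scratch{};          // stack-allocated, no heap
    for (int i = 0; i < n && i < 64; ++i)
        scratch[i] = i * i;
    return std::accumulate(scratch.begin(), scratch.begin() + n, 0);
}
```

The same object allocated on the heap would instead become work for the collector, for no benefit at all.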
There are however some cases where one wishes to allocate an object to be used by the caller. This is what garbage collection is designed to deal with. It frees the programmer from having to consider the lifetime of objects. This is all very well until your program runs out of memory and you have to consider the lifetime of objects after all, but by then you can't control it.
Designers of garbage collectors generally fail to allow the programmer to give hints about the lifetime of objects. Generally, object lifetime is quite simple: for example, a common pattern is that if an instance of a class is accessible from a particular hash-table then it is alive, otherwise not. There is no way to express to any garbage collector I have ever used that it need not worry about these objects. Of course, it is easy in C++.
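The hash-table pattern above can indeed be written down directly in C++ (the `Session`/`Registry` names here are invented for illustration): the table owns its values, so membership in the table *is* the lifetime, and erasing an entry destroys the object immediately with no tracing required.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical example of "alive iff reachable from this hash table".
struct Session {
    std::string user;
    explicit Session(std::string u) : user(std::move(u)) {}
};

// The table owns its values through unique_ptr: no other owner exists,
// so the programmer's lifetime knowledge is encoded in the type.
using Registry = std::unordered_map<int, std::unique_ptr<Session>>;

// Erasing the entry destroys the Session deterministically, right here.
bool drop_session(Registry& r, int id) {
    return r.erase(id) == 1;    // unique_ptr's destructor frees the object
}
```

A collector, by contrast, must rediscover this fact by tracing on every cycle, because it has no way to be told.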
Garbage collection might seem convenient. If you have a lot of memory then it is perfectly reasonable to allow plenty of space to be allocated before doing any collection, so one might think there is little overhead to garbage collection. This was true in the old days, but in these days of increased hardware parallelism and true multi-threading it is not.
The real problem with garbage collection is that it is inherently unscalable. Pausing all threads while a global garbage collection occurs is generally seen as unacceptable for obvious reasons. The alternative is to run the garbage collector in one thread while other threads muck about with memory. This unfortunately forces a lot of complexity into the compiler or a lot of inefficiency into the program.
Suppose I wish to insert a new item into the middle of a linked list. I might store the old value of the next pointer of the middle node in a register, then set the middle node's next pointer to the new item. Now the tail of the list is unreachable from the head — I am planning to set the next pointer of the new item to the old next pointer, but I haven't done it yet. If the garbage collector runs then it must see that I still have a reference to the old value in a register. This is impossible if it is running on another CPU. Therefore, I have to publish that reference somewhere the collector can see it, such as the stack. That means I have to worry about memory write barriers. Instead of a simple, fast procedure I will have one that spends time, bus cycles and memory bandwidth on book-keeping.
In this case, if the new item is reachable in another way, it would be possible to re-order the operations. However, it is difficult in general for a compiler to establish that automatically.
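The two orderings can be sketched in C++ (a single mutator thread racing a hypothetical concurrent collector; real concurrent code would also need atomic stores and barriers, which is exactly the book-keeping cost at issue):

```cpp
// Minimal singly linked list node, as in the scenario above.
struct Node {
    int value;
    Node* next;
};

// Hazardous order under a concurrent collector: between the two stores,
// the only reference to the old tail lives in a local variable (likely a
// register), which a collector scanning from another CPU cannot see.
void insert_after_unsafe(Node* mid, Node* item) {
    Node* old_next = mid->next;  // tail now reachable only via this local...
    mid->next = item;            // ...once this store lands
    item->next = old_next;       // window closes here
}

// Re-ordered version: the new item adopts the tail first, so the list is
// never unreachable from the head at any point.
void insert_after(Node* mid, Node* item) {
    item->next = mid->next;      // tail reachable via item before publishing
    mid->next = item;            // single publishing store
}
```

Both produce the same final list; only the second keeps every node reachable throughout, and seeing that the re-ordering is legal requires knowing that nothing else references `item` yet — knowledge the programmer has and the compiler generally does not.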
It is time to think again about automatic garbage collection. Perhaps it shouldn't be the default in a multi-threaded world.