Posted 2010-03-16 00:00:00 GMT
Every two minutes, some ignorant fanboy of a particularly opinionated programming language will claim that it can be as fast as C, or even faster. Sadly, Common Lispers are not immune to this idiocy. And idiocy it is: languages inherently don't have speeds.
Firstly, when comparing two languages with the same program, the results depend on how you structure the program and on the compilers or language environments you choose. You have to compare the best possible program in each language, which immediately invalidates most of these arguments, because the C program is usually very badly written.
Secondly, performance in real programs is more about communication than computation. The claim that a language can be as fast as C because it can do floating-point operations on data in registers (the typical argument) completely misses that modern performance is dominated by data-locality considerations: caches, talking to other cores, and so on.
Thirdly, C compilers are generally much better at optimizing than compilers for novelty languages; producing code as good as modern GCC's is pretty tricky. Many novelty languages solve this by compiling down to C. If I remember correctly, there was a point when BitC scored better than C on the ever-changing programming language benchmark game, despite compiling down to C. Again, this does not make logical sense.
The essential reason why these arguments are ridiculous is that computers have speeds, not languages. If you have a problem and you know that it requires reading through 100TB of data, then you know that you can't do it faster than the time you need to read through 100TB. All your program can do is slow that down. Unfortunately, it is true that many programming environments do slow you down, or make it very difficult to avoid being slowed down.
Once you have decided on an algorithm, your computer determines how fast it can possibly run, and your programming environment may restrict that further. For example, in many cases the most efficient data structure for a problem is a bit-array. A common operation is finding the first set bit, and x86 computers include the bsf/bsr instructions to do exactly that. How much trouble is it to use that instruction in your novelty programming language? In GCC, the semi-standard asm construct makes it a cinch. In Common Lisp under SBCL, by contrast, you would have to mess about with compiler internals and defknown.
Simply being unable to utilise dedicated hardware (even something as important as vector instructions) is a common problem in less mainstream programming environments. But this is a distraction from the main point: poor data layout in memory and unnecessary inter-core communication are the typical causes of inefficiency.
How much trouble is it to control how structures are laid out in memory in your chosen programming environment? The language specification probably doesn't say much about it and is therefore irrelevant. It's about the compiler/environment implementation, not the language.
Talking about the inefficiencies a compiler implementation introduces into programs is much more interesting than talking about the (very few) cases where it does well, because those inefficiencies further constrain how you can structure your program. Finding the best algorithm is fantastic fun, but the fun is diminished by people trying to whitewash the inefficiencies introduced by their chosen programming environment.
Languages don't have speeds, computers do.