Posted 2009-05-24 07:19:00 GMT
I dislike automatic garbage collection. Though initially convenient, in a serious program, the automatic part tends to become rather irritating.
One issue is that calling a function n times to do exactly the same thing (or things involving the same calculations) takes more than n times as long. Repeating a program that runs in O(1) time n times does not take O(n) time, but something worse than that. How much worse is difficult to calculate and depends on how the garbage collector is tuned. When the garbage collector frequently and mistakenly tenures objects (moves them out of the local scratch memory) that ought to be collected, and therefore has to do full collections, performance is shot to hell and there is no hope, in my dire experience.
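As a minimal illustration of the setting (churn is a hypothetical name, not from any library), a loop doing a fixed amount of consing per iteration is O(1) work each time, yet the total cost grows faster than linearly once the collector starts running; SBCL's time macro reports GC run time separately from the total, which makes this visible:

```lisp
;; Hypothetical sketch: each iteration does O(1) work, but all of it is
;; allocation, so the garbage collector's share of the total time grows
;; with the number of repetitions.
(defun churn (n)
  (loop repeat n
        do (make-list 100)))  ; allocates a fresh 100-element list, immediately garbage

(time (churn 1000000))        ; TIME prints GC run time as a separate line
```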
For example, consider this Common Lisp fragment (my-some is defined because SBCL's built-in some is very slow for some reason, to be explained later).
(defun my-some (test list)
  (loop for x in list thereis (funcall test x)))

(defun depth-same (x)
  (labels ((f (x n)
             (typecase x
               (sequence (my-some (lambda (x) (f x (1+ n))) x))
               (t (eql x n)))))
    (f x 1)))
For depth 10000 lists, half the time is spent in the garbage collector (SBCL 1.0.27 on amd64).
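For reference, a sketch of a driver that builds such a list and times the traversal (nest is my name, not from the post; the GC share shows up in SBCL's time output):

```lisp
;; Hypothetical helper: wrap an atom in DEPTH nested lists.  F in
;; DEPTH-SAME reaches the innermost atom with n = depth + 1, so storing
;; that value makes the traversal succeed.
(defun nest (depth)
  (let ((x (1+ depth)))
    (dotimes (i depth x)
      (setf x (list x)))))

(time (depth-same (nest 10000)))
```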
The garbage is generated by the closures. If they are declared dynamic-extent, and so stack-allocated instead of going into GC-managed memory, as below
(defun depth-same (x)
  (labels ((f (x n)
             (labels ((n (x) (f x (1+ n))))
               (declare (dynamic-extent #'n))
               (typecase x
                 (sequence (my-some #'n x))
                 (t (eql x n))))))
    (declare (dynamic-extent #'f))
    (f x 1)))
This version runs at the same speed as the function above for small inputs, but is much faster for larger ones, since it does not allocate garbage-collected memory on SBCL.
Do you know of a better example? The programs I have run into that exhibit this behaviour tend to be much more complicated (and not open source).