John Fremlin's blog: A lisp study in mathematical bugs

Posted 2013-01-21 21:30:00 GMT

On the way to yesterday's Bay Area Lisp meet-up (which was fascinating, with great talks by many speakers and a very generous giveaway of memorabilia by Paul McJones), I made a little game of balls bouncing around: when they collide, the ball with the greatest mojo wins. Thanks to a couple of suggestions from Ron, it turned into something quite fun.

The collisions of the balls were calculated by stepping the motion for one frame and then checking for overlap; this is manifestly unfair in a plethora of circumstances. On the way back to San Francisco, I set about improving the game by solving the quadratic equation for the moment of collision of two balls moving with constant velocities (it is in general quadratic, as it can be formulated as a polynomial in time, and there are two solutions: the first intersection, when the balls start overlapping, and the final intersection, when they eventually pass through each other).
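Written out by hand, the rearrangement amounts to something like the following sketch (collision-coefficients is a hypothetical helper reusing the ball-x, ball-vx style accessors from the expression below; the expression below is r² - d², so these signs are flipped relative to it, which leaves the roots unchanged):

(defun collision-coefficients (a b)
  ;; Expand |Δp + t·Δv|² = (ra + rb)² into a·t² + b·t + c = 0.
  (let ((dx (- (ball-x a) (ball-x b)))
        (dy (- (ball-y a) (ball-y b)))
        (dvx (- (ball-vx a) (ball-vx b)))
        (dvy (- (ball-vy a) (ball-vy b)))
        (r (+ (ball-r a) (ball-r b))))
    (values (+ (* dvx dvx) (* dvy dvy))            ; a: |Δv|²
            (* 2 (+ (* dx dvx) (* dy dvy)))        ; b: 2·Δp·Δv
            (- (+ (* dx dx) (* dy dy)) (* r r))))) ; c: |Δp|² - r²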

I wished to return the solutions from the following expression

(solve-for time
  (- (^2 (+ (ball-r a) (ball-r b)))
     (+ (^2 (+ (- (ball-x a) (ball-x b)) (* time (- (ball-vx a) (ball-vx b)))))
        (^2 (+ (- (ball-y a) (ball-y b)) (* time (- (ball-vy a) (ball-vy b))))))))

Ric gave a talk at the meetup about the importance of choosing the correct point of abstraction for a software project. In this case I decided not to rearrange the equation by hand; I simply wrote out an equation for the distance between the edges of the balls, employing an abstraction in the form of the unwritten solve-for macro. To solve the quadratic one could write a numeric method like Newton-Raphson, and that would be one potential implementation of the solve-for macro, but there is an elementary analytic technique, completing the square, which is preferable in the case of a quadratic. I wished to implement the rearrangement of terms automatically, as doing it by hand is error prone and the result is difficult to interpret or modify.

To that end I implemented the analytic solve-for on the train home, and most of the time was spent debugging the essentially correct approach. My initial assessment was that the main task would be the rearrangement of forms to collect the coefficients of the powers of time in the expanded equation. This in the end went well; it is easy to test step by step, after all. Where I stumbled again and again was in the reliable discovery of solutions once the coefficients had been determined.
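The collection step can be pictured like this: once the equation has been expanded into monomials, each a coefficient times a power of time, the coefficients of each power are summed. A minimal sketch, assuming a hypothetical representation of monomials as (coefficient . power) pairs — not the macro's actual intermediate form:

(defun collect-coefficients (monomials degree)
  ;; MONOMIALS is a list of (coefficient . power-of-time) pairs;
  ;; sum the coefficients contributed to each power up to DEGREE.
  (let ((coeffs (make-array (1+ degree) :initial-element 0)))
    (loop for (coefficient . power) in monomials
          do (incf (aref coeffs power) coefficient))
    coeffs))

;; (collect-coefficients '((4 . 0) (-3 . 1) (2 . 1) (1 . 2)) 2)
;; => #(4 -1 1), that is t² - t + 4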

How can this be? The formula is just (-b ± √(b² - 4ac))/2a for the solutions to the quadratic ax² + bx + c = 0. The case where a is zero must be handled separately, and I did handle it separately, but only when the coefficient a could be statically determined to be always 0. When a became zero at runtime, because the balls were moving at the same velocity, the solver would crash.
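A runtime coefficient solver that degrades gracefully to the linear case might look like this sketch (solve-quadratic is a hypothetical name, not the code from the game):

(defun solve-quadratic (a b c)
  ;; Real roots of a·t² + b·t + c = 0, checking a = 0 at runtime
  ;; rather than only when it is statically known to vanish.
  (cond ((zerop a)
         ;; degenerate: b·t + c = 0
         (if (zerop b)
             '()                       ; constant equation: no isolated roots
             (list (/ (- c) b))))
        (t
         (let ((discriminant (- (* b b) (* 4 a c))))
           (cond ((minusp discriminant) '())
                 ((zerop discriminant) (list (/ (- b) (* 2 a))))
                 (t (let ((root (sqrt discriminant)))
                      (list (/ (- (- b) root) (* 2 a))
                            (/ (+ (- b) root) (* 2 a))))))))))

;; (solve-quadratic 1 -3 2) => (1 2)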

The bug which confused me the most was that this code

(case (signum ,b2-4ac)
  (-1 ...)
  (0 ...)
  (1 ...))

dealing with the cases of the balls never touching, touching but never overlapping, and overlapping then parting, did not function as I expected, despite my test cases. It transpires, to my surprise, that signum is defined to return not a member of the set of fixnums -1, 0, 1, but these numbers in the same type as the original input, which causes this case statement to fall through when passed a float. As most CPUs provide enough information on comparison to distinguish these three eventualities in a single instruction, it is rather sad not to be able to exploit it idiomatically. To discover this bug I wrote an alternative to solve-for that used a simple bisection searching for a change in sign, to triangulate the source of my confusion.
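One fix that keeps the three-way dispatch is to round the result of signum back to an integer, restoring the fixnum keys. A sketch, with a hypothetical classify function:

(defun classify (discriminant)
  ;; (signum -2.5) => -1.0, which is not EQL to the fixnum -1, so a
  ;; CASE with integer keys falls through for floats; ROUND repairs it.
  (case (round (signum discriminant))
    (-1 :no-real-roots)
    (0  :one-root)
    (1  :two-roots)))

;; (classify -2.5) => :no-real-roots
;; (classify 0.0)  => :one-root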

Finally, having discovered all these cases, I consider that I should have abstracted out the coefficient solver: a function that takes coefficients and returns the solutions, rather than implementing it inline in the solve-for macro, whose expansion should have stayed focused on the task of doing the rearrangement, which it achieved very successfully.

The mistake I made was in not understanding deeply enough the correct prototype, or function signature, for the coefficient solver. After reflection and discussion with my friend Richard Smith, I believe it should take as input the symbolic representation of the equation, with all the constants resolved to the velocities of the balls in question, and return an object which can respond to queries for the next solution at a time greater than or equal to a given time. This would allow one to handle tougher functions that may oscillate very rapidly and cross zero an infinite number of times in a finite interval.
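For the quadratic case, such an object can simply be a closure over the sorted real roots. A minimal sketch, with hypothetical names:

(defun make-root-stream (roots)
  ;; Returns a function answering "next solution at a time >= FROM",
  ;; or NIL when no further roots remain.
  (let ((sorted (sort (copy-list roots) #'<)))
    (lambda (from)
      (find-if (lambda (r) (>= r from)) sorted))))

;; (funcall (make-root-stream '(3 1)) 2) => 3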
