John Fremlin's blog: Judging innovative software

Posted 2015-06-15 22:00:00 GMT

There's an old aphorism: execution matters more than ideas. In software I think that's very wrong. I'll elaborate, but the question here is how you can evaluate an idea for a piece of software before it's implemented and tested in production. It's definitely possible to a certain extent, and it's an important skill.

Firstly, let me define what I mean by an idea. I want to differentiate between ideas and desired outcomes. An inexpensive autonomous flying car or a wonderful app that can transcribe your thoughts are both exercises in wishful thinking. They're science fiction: indubitably of immense value if they could be created, but with no clear path to an implementation. An idea in software is a method of implementation, something like trace compilation or the Bitcoin blockchain.

A software idea rarely enables some new capability. Generally the software's function can be replicated in some other way, for example by paying people to do it manually, or by constructing specialised physical machines. A software idea is about changing the balance of resources needed to achieve a capability. For example, with trace compilation, you can get the benefits of explicitly typed machine code without having to do costly static analysis. A software idea is generally about performance, albeit potentially about a huge shift in performance characteristics (e.g. enabling large-scale de-centralised trustworthy but anonymised financial transactions).
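
To make the trace compilation example concrete, here is a minimal sketch in Python of the shape of the idea (not of its performance): a toy summation loop that runs generically until it is hot, then switches to a path specialised to the observed type, guarded so that a type change side-exits back to the generic path. The threshold and names are invented for illustration; a real tracing JIT records and compiles actual machine code.

    # Toy sketch of the trace compilation idea: run generically, and once the
    # loop is hot and the observed types are stable, switch to a specialised
    # path guarded by a type check that side-exits back to the generic path.
    # (In CPython this is not actually faster; it only shows the shape.)

    HOT_THRESHOLD = 10  # assumed threshold; real tracing JITs tune this

    def sum_traced(xs):
        total = 0
        i = 0
        # Generic "interpreted" path: every + goes through dynamic dispatch.
        while i < len(xs) and i < HOT_THRESHOLD:
            total = total + xs[i]
            i += 1
        # Hot, and every value seen so far was an int: run the recorded trace.
        if i == HOT_THRESHOLD and all(type(v) is int for v in xs[:i]):
            while i < len(xs):
                v = xs[i]
                if type(v) is not int:  # guard recorded from the trace
                    break               # side-exit
                total += v              # specialised integer arithmetic
                i += 1
        # Generic fallback for anything the trace could not handle.
        while i < len(xs):
            total = total + xs[i]
            i += 1
        return total

    print(sum_traced(list(range(100))))         # 4950
    print(sum_traced(list(range(50)) + [0.5]))  # 1225.5, via the side-exit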

Is it worth investing the development effort in a new software idea? When you come up with a new idea, people will inevitably attack it. As Ben Horowitz says, "Big companies have plenty of great ideas, but they do not innovate because they need a whole hierarchy of people to agree that a new idea is good in order to pursue it. If one smart person figures out something wrong with an idea–often to show off or to consolidate power–that’s usually enough to kill it." How can you, the inventor, decide for yourself whether your idea is worth more of your time, when you and others can find issues with your new scheme?

There are classes of attacks on any new idea that are essentially about its newness rather than the idea itself. For example:

— it's not been done before [there's an inexhaustible supply of inertia, entropy and lethargy in the world]

— it will be hard to manage operationally [only if you deliberately choose not to develop the necessary production monitoring tools]

— it will not work in production at a specific scale, without any actual issue being identified [quite insidious, because to counter it you'd have to develop the project far enough that it could be put into production]

As most new software ideas are experimented with or thought about in people's free time, and a huge amount of effort is needed to bring them out of the whiteboard stage, these attacks can stifle a project immediately. I believe they should be disregarded as far as feasible, and the discussion should center on the idea itself rather than on its novelty.

A very valid reason to dismiss a project is the existence of an alternative method with better performance. Quantitative estimates are essential here. [One way to strangle a project at birth is to require such detailed projections that it must already exist before its creation can be justified.] Beyond this first-order inspection, Butler Lampson's Hints for Computer System Design illustrates a series of practical considerations.
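
To illustrate the level of quantitative estimate that is reasonable to ask for, here is a hypothetical back-of-envelope comparison; every figure in it is invented for the sake of the example.

    # Hypothetical back-of-envelope comparison of an existing approach and a
    # proposed one; all figures are invented for illustration.
    REQUESTS_PER_DAY = 50_000_000

    def daily_cost(cpu_seconds_per_request, dollars_per_cpu_hour):
        cpu_hours = REQUESTS_PER_DAY * cpu_seconds_per_request / 3600
        return cpu_hours * dollars_per_cpu_hour

    existing = daily_cost(cpu_seconds_per_request=0.20, dollars_per_cpu_hour=0.05)
    proposed = daily_cost(cpu_seconds_per_request=0.05, dollars_per_cpu_hour=0.08)

    print(f"existing: ${existing:,.0f}/day, proposed: ${proposed:,.0f}/day")
    # If the projected saving does not clearly dominate the development cost,
    # the existing alternative wins.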

The cost of developing a system does (and should) factor heavily into the decision about whether to pursue it. Unfortunately this cost depends entirely on the particular people who will create it. One trick is to force very short timelines for prototypes (hackathons, etc.), but that severely constrains the scope, and there is a strong natural tendency for the offspring of prototypes to be coerced into production, casting doubt on the original implementor and on the idea itself. Some people can give realistic estimates of development time and others cannot; take the best guess at the distribution of development resources required to achieve a specific level of benefit.
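
One way to take that best guess at the distribution, without pretending to false precision, is to give each task a low/likely/high estimate and sample the total. The task names and numbers below are assumptions purely for illustration.

    # Sketch: turn low/likely/high guesses per task into a distribution of
    # total effort. Task names and numbers are assumptions for illustration.
    import random

    TASKS = {                    # (low, likely, high) in engineer-weeks
        "prototype":     (1, 2, 4),
        "productionise": (3, 6, 12),
        "monitoring":    (1, 2, 5),
    }

    def sample_total_weeks():
        return sum(random.triangular(low, high, likely)
                   for low, likely, high in TASKS.values())

    samples = sorted(sample_total_weeks() for _ in range(10_000))
    print(f"median ~{samples[len(samples) // 2]:.1f} weeks, "
          f"p90 ~{samples[int(len(samples) * 0.9)]:.1f} weeks")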

Once you've thought and fought through the above, the actual implementation might be relatively straightforward. Note that a new idea generally uses a particular resource much more heavily than it was used before. For example, a new image processing scheme might rely on CUDA GPU computations or the SSSE3 PSHUFB instruction, where before only the scalar CPU instruction set was used. This will inevitably cause unexpected interactions when deployed at scale, by changing the system's characteristics (in this case, for example, by drawing more electrical power). How hard these issues are to handle reflects the degree of technical stagnation the wider system already faces (e.g. aging compilers, fixed JVM versions, etc.), and generally the necessary fixes will benefit even the old system, which sometimes makes the arguments about them easier to overcome.

Programming the actual implementation is relatively trivial once the broader picture has been set. The quality of the implementation should be easy to measure, given the discussions around quantifying the benefit of the new approach, and once measured, things naturally improve. Lighting the path is harder than following it, and ideas themselves definitely have a social value beyond their first implementation.
