Posted 2009-04-03 03:10:38 GMT
Yesterday, I went to a talk at Ninjava by Thomas Heyman on his project to formally model security issues in program architectures. His approach was to develop a model of the program with Alloy and then relax a few implicit assumptions, for example, to allow for the fact that one component might be arbitrarily replaced by an attacker at runtime.
However, these implicit assumptions are identified ad hoc, and the resulting list is inevitably incomplete. A formal definition of the threat would allow a rigorous approach to be used instead.
One possible formalism is to consider that different parts of the program run in different security contexts (for example, on different physical computers, or with different privileges on the same computer). The question is then what effect a compromise of some subset of security contexts (putting those parts of the program completely under the attacker's control) has on the overall operation of the program.
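As a rough sketch of what this formalism might look like in Alloy (all signature, predicate, and field names here are my own invention, not from Heyman's model):

```alloy
-- Hypothetical sketch: names are invented for illustration.
sig Context {}                         -- a security context (machine, privilege level, ...)
sig Component { runsIn: one Context }  -- each program part runs in exactly one context
sig Attacker { controls: set Context } -- the subset of contexts the attacker has seized

-- A component is under attacker control iff its context is compromised.
pred compromised[c: Component, a: Attacker] {
  c.runsIn in a.controls
}

-- An example property to check: some critical component survives any compromise.
one sig System { critical: set Component }
assert SomeCriticalSurvives {
  all a: Attacker | some c: System.critical | not compromised[c, a]
}
check SomeCriticalSurvives for 4
```

The Alloy Analyzer would then search for counterexamples: assignments of `controls` under which the property fails, which is exactly the "what does a compromised subset break" question.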
Ideally, the program should be able to detect the compromise and continue operation without any ill effects, but at least detecting the compromise and halting operation is probably acceptable in many cases.
This would require two implementations of the Alloy model for each security context: one for the real system and one for the compromised version. (Note that for some contexts, for example the superuser account on a networked authentication system, a compromise automatically means that other contexts are also compromised.)
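The propagation noted in the parenthetical, where compromising one context automatically compromises others, could itself be captured with a relation and its transitive closure. A separate self-contained sketch, again with invented names:

```alloy
-- Hypothetical sketch of compromise propagation between contexts.
sig Context {
  dominates: set Context  -- e.g. the auth server's superuser dominates client contexts
}

-- The contexts effectively compromised once the attacker holds `seized`:
-- everything reachable via zero or more dominates steps.
fun effective[seized: set Context]: set Context {
  seized.*dominates
}
```

Analyses would then quantify over `effective[seized]` rather than `seized` itself, so the superuser case falls out automatically instead of needing to be enumerated by hand.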
This would allow all the implicit assumptions to be checked at once, without having to enumerate them, and the resulting guarantees should hold up against practical attacks.