Designing Self-Certifying Software Systems
Large software systems are hard to understand. The size and complexity of the implementation, possibly written in a mix of programming languages; the large number of potential configurations; and concurrency and distribution all contribute to the difficulty of precisely analyzing system behavior. How can one have confidence in the correct working of such a complex system?
In this talk, I will explore an unusual approach to this challenge. Suppose that the software system is designed so that it produces a mathematical certificate to justify the correctness of its result. The behavior of such a “self-certifying” system can then be formally verified at run time, by checking the validity of each certificate as it is generated, without having to examine or reason directly about the system implementation. Self-certification thus shrinks the size of the trusted computing base, often by orders of magnitude, as only the certificate checker must be trusted.
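To make the idea concrete, here is a minimal sketch (my own illustration, not an example from the talk) of a self-certifying computation in Python: an extended-Euclid gcd that emits Bézout coefficients as its certificate. The checker trusts nothing about how the result was computed; it only verifies the arithmetic identity, so it alone forms the trusted base.

```python
def certified_gcd(a, b):
    """Compute g = gcd(a, b) together with a certificate (x, y)
    such that a*x + b*y == g (extended Euclidean algorithm)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, (old_x, old_y)

def check_certificate(a, b, g, cert):
    """The run-time checker: g divides both inputs, and g is an
    integer combination of them, hence g is their gcd.
    This small function is the entire trusted computing base."""
    x, y = cert
    return a % g == 0 and b % g == 0 and a * x + b * y == g

g, cert = certified_gcd(12, 18)
assert check_certificate(12, 18, g, cert)
```

However the gcd routine is implemented (or mis-implemented), any result that passes `check_certificate` is correct; a wrong answer cannot be certified.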
The central research question is to design a certificate format that is comprehensive, easy to generate, and straightforward to check. I will sketch how this may be done for a variety of software system types: model checkers and static analyzers, network operating systems, and optimizing compilers. I will also discuss intriguing open questions and some unexpected benefits of certification.
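As one small illustration of these criteria (again my own sketch, not drawn from the talk), consider a SAT-based tool reporting "satisfiable": the satisfying assignment itself is a certificate that is trivial for the solver to produce and for a few lines of code to check, regardless of how sophisticated the solver's search was.

```python
def check_sat_certificate(cnf, assignment):
    """Check a claimed satisfying assignment for a CNF formula.
    cnf: list of clauses, each a list of nonzero DIMACS-style
    literals (positive = variable true, negative = variable false).
    assignment: dict mapping variable number -> bool.
    A clause is satisfied if at least one of its literals is true."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
assert check_sat_certificate(cnf, {1: True, 2: True, 3: False})
```

The "unsatisfiable" answer is the harder direction, needing a proof object such as a resolution refutation, which hints at why designing comprehensive certificate formats is a genuine research problem.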