Thoughts on Systems

Emil Sit

Apr 12, 2006 - 3 minute read - Research cryptography security tools

Automatically verifying security properties

Today a few of us had lunch with Yoshi Kohno, who is visiting MIT and gave a talk about his research on Monday. An important aspect of Yoshi’s research is the problem of translating theoretical security results into secure implementations. He gave an example of how WinZip’s use of the theoretically secure encrypt-then-MAC paradigm for authenticated encryption nonetheless resulted in an insecure system. At lunch, we discussed how this problem might have been avoided: should WinZip’s consultant have looked beyond the encryption component to see how it was used? Should it have been the product manager’s responsibility?
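To make the paradigm concrete, here is a minimal encrypt-then-MAC sketch in Python. This is an illustration only (it uses the third-party cryptography package and made-up key sizes, not WinZip’s actual scheme); the point is that the composition is only as good as what the MAC covers, and anything left outside the tag, such as filenames or other metadata, remains unauthenticated.

    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
        # Encrypt first (AES-CTR with a random IV), then MAC the ciphertext.
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
        ciphertext = iv + enc.update(plaintext) + enc.finalize()
        # The tag covers only the ciphertext; any surrounding metadata is unprotected.
        tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        return ciphertext + tag

    def verify_then_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
        # Check the MAC before touching the ciphertext.
        ciphertext, tag = blob[:-32], blob[-32:]
        expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("authentication failed")
        iv, body = ciphertext[:16], ciphertext[16:]
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
        return dec.update(body) + dec.finalize()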

Wouldn’t it be powerful if we could statically check that a piece of software met a particular system-wide security goal? You’d run a “compiler” on your software, along with some sort of annotation describing the security requirements; the tool would combine static analysis with knowledge of relevant theoretical results to validate the security of the program.

This sounds a lot like Coverity, a Stanford spin-off that provides tools for statically checking software for a variety of bugs. Coverity and its products have garnered a fair amount of press coverage as they have been auditing open source projects for various security and concurrency problems. However, while Coverity’s approach can find a large class of problems, its tools focus on low-level defects like buffer overflows or SQL injection. They would not catch a problem like the one Yoshi found in WinZip, which requires an understanding of the whole system.
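For contrast, here is the kind of local defect such checkers are designed to flag. This is a hypothetical example using Python’s standard sqlite3 module, not code from any audited project; both the bug and its fix are visible within a single function, with no whole-system reasoning required.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        # Untrusted input is interpolated directly into the query string:
        # a local, pattern-matchable SQL injection a static checker can flag.
        return conn.execute("SELECT * FROM users WHERE name = '%s'" % name).fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        # Parameterized query: the fix is equally local.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()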

To build this tool, more work would need to go into understanding how to precisely specify the desired security properties and how to parse real implementations into a format amenable to automated analysis. Are all security properties easy to verify? We know that it is possible to check confidentiality and to detect or prevent information leaks. For example, there is ongoing work at MIT building on prior information-flow work; a toy sketch of the underlying idea follows the list:

  • Stephen McCamant gave a talk at our group meeting today about dynamically tracking information leakage; his prototype is based on the memory checker in Valgrind and is able to estimate the number of bits of information leaked, given appropriate annotations.
  • A group of my advisor’s current and former students are working on Asbestos, an operating system that enforces information flow rules.
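As a toy sketch of the underlying idea (my own illustration, far cruder than either project above): wrap secret values so that a taint flag propagates through computation, and refuse to emit tainted data on a public channel.

    class Tainted:
        """A value carrying a taint flag that propagates through arithmetic."""
        def __init__(self, value, tainted=True):
            self.value, self.tainted = value, tainted

        def __add__(self, other):
            if isinstance(other, Tainted):
                return Tainted(self.value + other.value, self.tainted or other.tainted)
            return Tainted(self.value + other, self.tainted)

    def emit_public(data):
        # Releasing secret-derived data on a public channel is a leak.
        if isinstance(data, Tainted) and data.tainted:
            raise RuntimeError("information leak: tainted data on public channel")
        print(data)

    secret = Tainted(42)          # e.g. a value derived from a key
    derived = secret + 1          # the taint follows the computation
    try:
        emit_public(derived)      # the attempted leak is caught here
    except RuntimeError as leak:
        print("blocked:", leak)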

But what about properties like replay resistance? Or perfect forward secrecy? Or resistance to data injection? How can we test for these? Daniel Jackson’s group has developed a tool called Alloy that can help model protocols at an abstract level and find problems; he and his students have also done work on applying lightweight formal methods to find correctness and security problems in real implementations.
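Operationally, some of these properties do have a simple concrete shape. Here is a toy sketch of replay resistance (again my own illustration, unrelated to Alloy’s relational modeling): the receiver rejects any message whose nonce it has already accepted, so a captured message cannot be re-submitted. Whether a real implementation enforces this everywhere is exactly what an automated checker would have to establish.

    import hmac, hashlib, os

    KEY = os.urandom(32)
    seen_nonces = set()

    def make_message(payload: bytes) -> bytes:
        # Sender attaches a fresh nonce and a MAC over nonce + payload.
        nonce = os.urandom(16)
        tag = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
        return nonce + payload + tag

    def accept(message: bytes) -> bytes:
        nonce, payload, tag = message[:16], message[16:-32], message[-32:]
        expected = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("bad MAC")
        if nonce in seen_nonces:
            raise ValueError("replay detected")
        seen_nonces.add(nonce)
        return payload

    msg = make_message(b"transfer $100")
    accept(msg)                   # accepted the first time
    try:
        accept(msg)               # the replayed copy is rejected here
    except ValueError as err:
        print("blocked:", err)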

Coverity has shown that static analysis can be applied, in an automated fashion, to detect security defects in real systems. Work like Yoshi’s and the projects I’ve highlighted will hopefully lead to further tools that reduce the reliance on human oversight for correctness and security. I wouldn’t mind working towards that goal one day.