I’m currently at the OWASP AppSec 2006 conference in Seattle, and I had the opportunity to hear Michael Howard from Microsoft talk about how they have implemented the SDL for Windows Vista. This raised an interesting line of discussion – from a software security standpoint, how can open source compete with proprietary vendors?
The standard argument is that open source software has its source available to all. This allows developers to review the code for security flaws at their leisure. The problem (and this has been discussed in the past) is that just because people can doesn’t mean they do. In addition, this raises the question of how many open source developers are trained code security analysts. But let’s take a step back…
Microsoft’s SDL contains a number of controls that are used to reduce the frequency and severity of bugs in the code that Microsoft creates. At the outset of the process you have common security standards for designing and coding software, as well as developer training in software security. Michael Howard thinks these measures give great bang for the buck, and I tend to agree. From that point, security issues are detected and removed from the code using a variety of processes and tools. This starts with automated checks that are run before code can be checked in. Then there are automated checks that are run against all of the code in the source repository. Further down the line there are line-by-line code reviews and, finally, penetration tests.
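To make the first stage of that pipeline concrete, here is a toy sketch of what a pre-check-in automated check might look like: a banned-API scan that rejects code still calling unsafe C string functions (the SDL is well known for banning functions like strcpy and gets). This is purely illustrative – it is not Microsoft’s actual tooling, and the function names and regex are my own assumptions for the example.

```python
import re

# Illustrative banned list, in the spirit of the SDL's ban on
# unsafe C string functions; not Microsoft's actual list.
BANNED = {"strcpy", "strcat", "sprintf", "gets"}

# Match an identifier immediately followed by an opening parenthesis,
# i.e. something that looks like a function call.
CALL_RE = re.compile(r"\b([A-Za-z_]\w*)\s*\(")

def find_banned_calls(source: str):
    """Return (line_number, function_name) for each banned call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name in CALL_RE.findall(line):
            if name in BANNED:
                hits.append((lineno, name))
    return hits

def check_in_allowed(source: str) -> bool:
    """The check-in gate: refuse code that still uses banned functions."""
    return not find_banned_calls(source)
```

In practice a script like this would be wired into a commit hook or build step so that offending code never reaches the repository – which is exactly the “catch it before check-in” property that makes this class of control cheap relative to finding the same bug in a penetration test later.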
Looking at the process in this way, open source software projects are operating at a huge disadvantage. Proprietary vendors have the ability to define and enforce common standards, and they can require that developers working on their products have a certain level of training. Open source projects can’t practically enforce these sorts of constraints – or at least, if they did, they would lose many of the benefits of being open source. The idea behind open source software is that everyone can contribute: projects try to create a worldwide user and developer base so that more work gets done in less time. If a project were to turn away help when offered, or require training or certification before a user were allowed to contribute, the developer base would dry up and the project founder would be stuck doing all the work. Unfortunately, that means the code that is submitted is going to range widely in quality and in security.
The standard argument for the security of open source software is based on reactive measures: having the code available for all to see allows bugs to be identified and removed. Unfortunately, not injecting bugs at all is better than injecting them and trying to remove them later through inspection, and this is where commercial software potentially has the advantage. In theory, open source projects could implement checks similar to what Microsoft and other proprietary vendors do: static code analysis, security pushes before releases, and periodic penetration tests. Almost no projects I know of actually do this, but they could. The problem is that by the time these measures can start to take effect, security bugs have likely already been introduced by the heterogeneous developer base.
If you agree with me that tools can only go so far to provide software security, and that a solution to the problem requires security-knowledgeable developers, then open source projects are going to be at a disadvantage until the average developer has the security training to do more good than harm. This is one area where the command-and-control environments available to proprietary software vendors work in their favor.
dan _at_ denimgroup.com