
The Department of Homeland Security has been working to perform source code reviews of a number of popular open source projects. I have seen two articles on this so far: one in Information Week that is kind of negative toward the open source projects, and one on ZDNet that is a little more positive.
Some of the comments on the stories have highlighted that it is impossible to compare the security of open source software without also having data about software from closed source providers. This is true. Another thing that needs to be mentioned is that automated source code analysis is one step of many required to create and maintain secure software. Passing through a scanner with flying colors doesn't mean that your application has proper authorization checks, is free of business-logic issues with security implications, or avoids problems in specific deployments. Relying on scanning alone ignores the importance of manual code reviews, threat modeling, training in secure design and coding techniques, fuzzing, and pen testing of specific deployments – all components of a well-rounded software security program.
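To make the scanner-limitation point concrete, here is a minimal sketch (all names and data are hypothetical, not from any scanned project) of the kind of broken-authorization bug that automated scanning typically misses: the code is clean by a scanner's standards – no injection, no unsafe calls – yet any authenticated user can read any record.

```python
# Hypothetical invoice store and web-handler-style functions.
# A scanner sees validated input and safe operations, so it flags
# nothing -- but the first function never checks ownership.

INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 990},
}

def get_invoice(current_user: str, invoice_id: int) -> dict:
    # Looks fine to a scanner: the id is validated against the store.
    # But nothing ties the requester to the record -- any logged-in
    # user can fetch any invoice (a broken access control flaw).
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise KeyError("no such invoice")
    return invoice

def get_invoice_checked(current_user: str, invoice_id: int) -> dict:
    # The fix a manual code review or threat model would demand:
    # verify that the requester owns the record before returning it.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise KeyError("no such invoice")
    if invoice["owner"] != current_user:
        raise PermissionError("not your invoice")
    return invoice
```

The two functions are byte-for-byte identical to a pattern-matching tool except for the ownership check, which is exactly why authorization flaws tend to surface in manual review and threat modeling rather than in scan reports.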
It is fantastic that DHS is running this program to increase the security of open source software, but you have to be careful about reading too much into statistics like vulnerabilities per line of code found by scanning tools. That can lead people to jump to unwarranted conclusions.
–Dan
dan _at_ denimgroup.com
PS – I took the photo of the cabana in Costa Rica last fall
After the line “Open source code, much like its commercial counterpart, tends to contain one security exposure for every 1,000 lines of code…”, all of the well-known projects given as examples came in considerably lower than that stat, and most closed the holes as soon as they were identified.