Many thanks to Jeremiah Grossman for ruining my afternoon yesterday – his blog post about secure code being cheaper to develop got me thinking. And since I can’t think without a whiteboard I got to drawing. Below are the half-baked ideas that came out of that session.
So the myth we’re trying to bust (love the RSA Mythbusters reference, by the way) is: “Secure code is less expensive to develop.”
The first scenario I thought through was two applications – A and B. Application A is developed with “secure” code from the start; application B is developed with no special attention paid to security, and its security defects then have to be remediated. The end state is two applications with identical security profiles. So in the case of A you write the application and then don’t have to do anything else. In the case of B you write the application and then have to make changes. Put that way it is obvious – you should just write the code properly the first time. Duh! But that isn’t really what we need to look at, because it assumes it costs the same to write application A and application B, and that assumption is what needs further examination.
So I expanded my scenario to look at two teams – A & B.
Team A develops software using a software development lifecycle that involves security. So they have up-front costs like licensing testing tools and training developers. Then during development they undertake tasks geared toward not introducing security vulnerabilities, as well as finding the ones that slip in before the application is released – threat modeling, abuse cases, architectural risk assessments, security code reviews, and so on. SDL or Touchpoints or whatever. The point here is that their costs are front-loaded and fully expended by the time the software is released.
Team B develops software using a software development lifecycle that has no concept of security. (I know this is completely unrealistic and would never happen in the real world, but bear with me for the sake of argument…) So they spend no money or other resources on security before or during development. Then time passes, and finally someone comes along and assesses the security state of their application. Unsurprisingly, they find a bunch of vulnerabilities. So the question is: what is the cost of those fixes? And the next question: is the cost of fixing those vulnerabilities greater than the cost of all the secure development activities undertaken by team A?
But that really isn’t the point either, because the tool and training costs get spread across multiple applications and multiple releases. And how do we know which vulnerabilities need to be fixed? And what do we do about the inevitable vulnerabilities that manage to sneak their way into the application built by team A? And why haven’t we talked about business risk yet? Seems like this gets pretty complicated pretty quickly.
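To make the comparison concrete, here is a toy break-even model of the two teams. Every dollar figure, vulnerability count, and function name below is a made-up placeholder of mine, not real data – the only point is the shape of the math: Team A pays per-app overhead plus an amortized share of the up-front investment, while Team B pays per-vulnerability remediation after the fact.

```python
def team_a_cost(base_dev, sdlc_overhead, upfront_investment, apps_sharing_upfront):
    """Front-loaded cost: secure-SDLC activities on this app, plus the
    tool/training investment amortized across several applications."""
    return base_dev + sdlc_overhead + upfront_investment / apps_sharing_upfront


def team_b_cost(base_dev, vulns_found, cost_per_fix, fraction_fixed=1.0):
    """Back-loaded cost: plain development now, remediation later.
    fraction_fixed < 1.0 models fixing only the vulnerabilities
    judged worth fixing (the 'which ones?' question above)."""
    return base_dev + vulns_found * fraction_fixed * cost_per_fix


if __name__ == "__main__":
    base = 100_000  # hypothetical cost to build the app either way
    a = team_a_cost(base, sdlc_overhead=15_000,
                    upfront_investment=50_000, apps_sharing_upfront=10)
    b = team_b_cost(base, vulns_found=40, cost_per_fix=1_000)
    print(f"Team A: ${a:,.0f}  Team B: ${b:,.0f}")
```

Under these invented numbers Team A comes out ahead, but nudge `apps_sharing_upfront` down to 1 or cut `fraction_fixed` and the comparison flips – which is exactly why the myth is hard to bust cleanly.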
So this has turned into a giant mess way ahead of schedule. What I think everyone will agree on is:
- Developers should write code without vulnerabilities
What is still up for debate is:
- How much to invest in order to get more secure applications
- How to address the risk associated with vulnerabilities in deployed code
Can’t wait to get a link to the @Stake paper so I can just plagiarize their results…
dan _at_ denimgroup.com