On Saturday, March 28, 2015, at the OWASP SAMM Summit in Dublin, a group of application security leaders announced a project they had been working on since the summer of 2014: the industry’s first public benchmarking data set for improving software security. The leaders’ vision is to offer companies comparative data, allowing them to see what similar organizations are doing to maximize data security at the software development level.
From the outset, the OpenSAMM consortium recognized that application security is still viewed by many organizations as akin to good hygiene. But just as many people struggle to floss, get a flu shot every year, or change their car’s oil on schedule, application security is something most organizations know they need to do, yet they typically accomplish only a subset of what is advised. To this day, organizations continue to release software on the web or in app stores that has not been fully vetted for security flaws or vulnerabilities. With no publicly available comparison data on software security practices, organizations were left to wonder whether they were undertaking the right activities to make their software resilient to even the most basic attacks. For some who struggled with this issue, it seemed that nothing short of external audits or mandates could compel companies to invest in application security. If the consortium could pull off what it intended, a public set of benchmarking data could change the way companies evaluated their own practices against those of their peers, and perhaps more companies would fully grasp the case for software application security within the enterprise.
The consortium, a group of senior and C-level AppSec leaders, quickly recognized a shared interest in moving the needle on application security and began taking stock of available options. The team decided to focus its energies on the existing OpenSAMM framework as a starting point, contributing comparison data so that any organization interested in strengthening its security posture could participate and benchmark against its peers.
While OpenSAMM had not been updated for some time and contained no comparative data, the benefit of using the framework is that granular data can be contributed by different teams within an organization and rolled up to the company level. Most agreed that OpenSAMM had great name recognition, even if it was in serious need of a facelift. In addition, as the term “open” denotes, the OpenSAMM data repository is not tied to any vendor, which was attractive to the many application security product and services companies standing by to contribute.
Given the number of current and future contributing parties, the data model the consortium was developing had to be straightforward, easy for everyone to understand, and ultimately able to anonymize the data submitted by each contributing organization. For that reason, random identifiers are assigned to denote the organization the data relates to, the team within that organization, and the organization that performed the assessment. The data can also be classified by the vertical and region it comes from.
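As a rough illustration of how such a record might be structured (the field names, identifier format, and score values below are hypothetical assumptions, not the consortium’s actual schema), a submission could carry only random identifiers and coarse classification labels:

```python
import json
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class BenchmarkRecord:
    """One anonymized assessment submission (hypothetical shape)."""

    # Random identifiers stand in for the real parties, so the data
    # cannot be traced back to any named organization, team, or assessor.
    org_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    team_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    assessor_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    # Coarse classification fields that survive anonymization.
    vertical: str = "financial-services"
    region: str = "EMEA"
    # Illustrative maturity scores per practice area (values invented here).
    scores: dict = field(default_factory=dict)


# A contributor builds a record internally; only random identifiers and
# coarse labels appear in the serialized payload that leaves the company.
record = BenchmarkRecord(scores={"threat-assessment": 2, "code-review": 1})
payload = json.dumps(asdict(record))
```

The design point is that nothing in the payload names the contributor: peers in the same vertical and region can still be compared, but attribution requires information that never leaves the submitting organization.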
By tackling the issue of anonymity first, the consortium created a data schema that gives contributing organizations confidence that their data will not be attributable to them. It reinforced this assurance by leveraging members’ existing confidentiality agreements with contributors, adding a further level of trust that submitted information will remain secure. In addition, improvements in the OpenSAMM data collection process, along with neutral hosting by OWASP, should provide the confidence that encourages a broader set of companies to contribute their internal and client benchmarking data.
Unfortunately, software security guidance has too often been general and difficult for organizations to navigate. The consortium sees OpenSAMM’s benchmarking capability, combined with proven best practices, as the most effective way for organizations to tailor their own application security improvement roadmaps. Providing a way for companies to evaluate their software security relative to others is an important step toward achieving more secure software everywhere.
All of these pieces are falling into place: more than 30 data sets will be available in the coming months. These sets will be used for benchmarking, and the consortium plans to reach 60-100 sets by AppSecUSA, taking place in San Francisco, CA, on September 22, 2015. This initial benchmarking data is crucial for establishing vendor-agnostic data sets and building the comfort level needed for a broader range of organizations to contribute, or at least to review the published results. If you are interested in learning more about the OpenSAMM benchmarking data, or in contributing, please contact John Dickson or Dan Cornell.