
Vulnerability Manager: A Unified Approach to Vulnerability Data

This is another in a series of blog posts going through the internals of Vulnerability Manager as well as our future plans.  The hope is to explain the approach we have taken and to solicit thoughts on improvements or different approaches we may want to look into.  You can submit bugs and feature requests here.

As the name implies, vulnerabilities have an important role to play in Vulnerability Manager.  I posted earlier about our tracking of Applications in the system.  This post describes the Vulnerabilities that get attached to Applications.  Our immediate requirements were:

· Create a schema allowing us to import from all the leading static and dynamic analysis tools

· Be able to identify duplicate vulnerabilities (i.e., if the same results file with 10 results is uploaded twice, there should be only 10 vulnerabilities in the system)

· Be able to support automated and semi-automated merging of vulnerability results from multiple tools

· Normalize imported vulnerabilities into a common classification scheme

· Normalize imported vulnerabilities into a common severity scheme

We started by looking at the sort of information we could extract from the scanning tools in order to determine the maximum amount of data available – either directly or via some interpretation – from every tool.  This gave us a baseline that could be extended with additional metadata when a tool provided more data, but with a fundamental set of constructs we could design the major functionality around.

For dynamic scanning tools we created an AttackSurfaceLocation object with a VulnerabilityType, a relative URL and, depending on the VulnerabilityType, a parameter.  This is heavily web-focused because most of the market-leading dynamic testing tools target web-based applications.  We may look down the road at incorporating fuzzers or other dynamic testing tools, but for now a web-based dynamic attack surface serves our needs well.

For static scanning tools we created a CodeLocation object with a VulnerabilityType, a code filename and a line number.  This supports both web and non-web applications.  Many static analysis tools include source code snippets and column information with their results, so those can be imported and stored, but they are not required for the basic operation of the system.  They are, however, required for some of the cooler merging operations later, so tools that provide more data in their output files tend to have better support in other areas.
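To make the two location objects concrete, here is a minimal sketch of how they might look in Java.  The class and field names follow the post; everything else – constructors, value-based equality (which the duplicate detection described later in the post would rely on) – is an assumption about the implementation, not the actual Vulnerability Manager code.

```java
import java.util.Objects;

// Illustrative sketches of the two location objects described above.
public class Locations {

    // Dynamic results: vulnerability type + relative URL + optional parameter.
    public static final class AttackSurfaceLocation {
        final String vulnerabilityType;
        final String relativeUrl;
        final String parameter; // null for VulnerabilityTypes that have no parameter

        public AttackSurfaceLocation(String vulnerabilityType, String relativeUrl, String parameter) {
            this.vulnerabilityType = vulnerabilityType;
            this.relativeUrl = relativeUrl;
            this.parameter = parameter;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof AttackSurfaceLocation)) return false;
            AttackSurfaceLocation other = (AttackSurfaceLocation) o;
            return vulnerabilityType.equals(other.vulnerabilityType)
                    && relativeUrl.equals(other.relativeUrl)
                    && Objects.equals(parameter, other.parameter);
        }
        @Override public int hashCode() {
            return Objects.hash(vulnerabilityType, relativeUrl, parameter);
        }
    }

    // Static results: vulnerability type + source filename + line number.
    public static final class CodeLocation {
        final String vulnerabilityType;
        final String filename;
        final int lineNumber;

        public CodeLocation(String vulnerabilityType, String filename, int lineNumber) {
            this.vulnerabilityType = vulnerabilityType;
            this.filename = filename;
            this.lineNumber = lineNumber;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof CodeLocation)) return false;
            CodeLocation other = (CodeLocation) o;
            return vulnerabilityType.equals(other.vulnerabilityType)
                    && filename.equals(other.filename)
                    && lineNumber == other.lineNumber;
        }
        @Override public int hashCode() {
            return Objects.hash(vulnerabilityType, filename, lineNumber);
        }
    }
}
```

Two locations compare equal only when every field matches, which is exactly the property an exact-duplicate check needs.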

We created a VulnerabilitySourceInfo object to describe a result pulled from a scanning tool.  Each scanning tool gets a subclass (XYZVulnerabilitySourceInfo) that must provide either an AttackSurfaceLocation object or a CodeLocation object (or, optionally, both, although we do not have any importers that provide both yet).
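A rough sketch of that per-tool subclassing scheme might look like the following.  "XYZ" follows the placeholder naming from the post; the method names, and the simplification of returning string location keys rather than full location objects, are assumptions made to keep the sketch short.

```java
// Hypothetical sketch of the VulnerabilitySourceInfo hierarchy described above.
abstract class VulnerabilitySourceInfo {
    // A subclass provides a dynamic location, a static location, or (in
    // principle) both; the side a scanner does not supply stays null.
    public String getAttackSurfaceLocationKey() { return null; } // e.g. "url|parameter"
    public String getCodeLocationKey() { return null; }          // e.g. "file:line"
}

// Example subclass for one dynamic web scanner ("XYZ" is a placeholder).
class XYZVulnerabilitySourceInfo extends VulnerabilitySourceInfo {
    private final String relativeUrl;
    private final String parameter;

    public XYZVulnerabilitySourceInfo(String relativeUrl, String parameter) {
        this.relativeUrl = relativeUrl;
        this.parameter = parameter;
    }

    @Override public String getAttackSurfaceLocationKey() {
        return relativeUrl + "|" + parameter;
    }
}
```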

However, what the system actually cares about are Vulnerabilities, so we created a Vulnerability object that has one or more VulnerabilitySourceInfo objects attached to it.  When a new results file is imported, a series of VulnerabilitySourceInfo objects are created and compared to all other VulnerabilitySourceInfo objects for all Vulnerabilities in the Application.  Exact matches – that is, VulnerabilitySourceInfo objects of the same subclass with an identical AttackSurfaceLocation or CodeLocation – are discarded because we have already seen a result from the same scanning technology for the same vulnerability.  If there is not an exact match already in the system, then a new Vulnerability object is created and the VulnerabilitySourceInfo object is attached to it.
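The exact-match filter can be sketched as a set keyed on (scanner subclass, location).  This is an illustration of the idea only – the SourceInfo stand-in, string location keys and method names are all assumptions, not the actual data model.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch of the exact-match duplicate check: a new scan result
// is discarded when the same scanner subclass has already reported an
// identical location for this Application.
public class DuplicateFilter {

    // Stand-in for a VulnerabilitySourceInfo: the scanner subclass name plus
    // a location key (URL+parameter for dynamic tools, file+line for static).
    public static final class SourceInfo {
        final String scannerClass;
        final String locationKey;

        public SourceInfo(String scannerClass, String locationKey) {
            this.scannerClass = scannerClass;
            this.locationKey = locationKey;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof SourceInfo)) return false;
            SourceInfo s = (SourceInfo) o;
            return scannerClass.equals(s.scannerClass) && locationKey.equals(s.locationKey);
        }
        @Override public int hashCode() { return Objects.hash(scannerClass, locationKey); }
    }

    private final Set<SourceInfo> seen = new HashSet<>();

    // Returns true if the result is new (a Vulnerability would be created),
    // false if it exactly matched an existing result and should be discarded.
    public boolean addIfNew(SourceInfo info) {
        return seen.add(info);
    }
}
```

Running the same 10-result file through this filter twice leaves 10 entries, which is the duplicate-handling requirement from the list above.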

We also do some automated and manual merging where existing Vulnerability objects are matched up with additional VulnerabilitySourceInfo objects, but we will discuss the merge process in a later post because it is somewhat tricky and we are evolving the way we do this.  This is really cool when it works and we are looking forward to collecting more real-world data on success rates for doing this automagically.

With regard to normalizing the classification of vulnerabilities, we took the approach of creating an internal classification scheme that is basically a superset of all the vulnerability types supported by all the importers created thus far.  When we add a new importer, we either map its vulnerability types to existing entries or add new ones for types not already supported.  All of this is done in an ever-expanding Java enum.
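In outline, that enum-plus-mapping approach might look like this.  The enum constants, tool name and native type strings here are all made up for illustration – the real enum and mappings are specific to the supported scanners.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the enum-based classification described above: an
// internal superset of vulnerability types, plus a per-importer mapping from
// each tool's native type names onto it.
public class TypeNormalization {

    public enum GenericVulnerability {
        XSS, SQL_INJECTION, PATH_TRAVERSAL, UNKNOWN
        // ...grows as new importers introduce types not yet covered
    }

    // One mapping per importer, keyed on the tool's native type string.
    private static final Map<String, GenericVulnerability> XYZ_TOOL_MAPPING = new HashMap<>();
    static {
        XYZ_TOOL_MAPPING.put("Cross-Site Scripting", GenericVulnerability.XSS);
        XYZ_TOOL_MAPPING.put("SQL Injection", GenericVulnerability.SQL_INJECTION);
    }

    public static GenericVulnerability normalize(String nativeType) {
        // Types the mapping does not know yet surface as UNKNOWN,
        // signaling that the enum needs a new entry.
        return XYZ_TOOL_MAPPING.getOrDefault(nativeType, GenericVulnerability.UNKNOWN);
    }
}
```

The brittleness called out below falls out of this structure: every new tool means editing code, which is part of the motivation for the CWE-in-database move.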

Obviously, this is a sub-optimal approach because it is proprietary, messy and more than a little bit brittle.  For future releases we are moving toward the MITRE CWE, with custom annotations where required.  This has the advantage of being a well-known and accepted classification scheme, and down the road it should facilitate industry benchmarking.  We are also moving the mappings into the database so the data lives there rather than in the code.  This work is currently underway.

Normalizing the vulnerabilities to a common severity scheme is also still in progress.  It is not in the "tech preview" release, but we have it running on internal builds.  We went with a scheme that classifies vulnerabilities as High, Medium and Low (there is also an Info level for non-Vulnerability data stored alongside scan results).  Basically, we created a mapping from each supported tool's native severity scheme to our High, Medium and Low levels.  This works, but we are not ecstatic about it.  Does anyone have suggestions for a better scheme?  As with our move to CWE, we would like something industry-recognized that can have a first guess automatically pulled or interpreted from scan results, with the option to re-classify manually later.  Please feel free to suggest ideas here.
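The per-tool severity mapping can be sketched the same way as the type mapping.  The native severity labels and the default-to-Medium behavior for unrecognized labels are assumptions for illustration, not how any particular importer actually behaves.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the severity normalization described above: each
// tool's native severity labels map onto the common High/Medium/Low scale
// (plus Info for non-Vulnerability data).
public class SeverityNormalization {

    public enum GenericSeverity { HIGH, MEDIUM, LOW, INFO }

    // First-guess mapping for one tool; results can be re-classified manually later.
    private static final Map<String, GenericSeverity> XYZ_TOOL_SEVERITIES = new HashMap<>();
    static {
        XYZ_TOOL_SEVERITIES.put("Critical", GenericSeverity.HIGH);
        XYZ_TOOL_SEVERITIES.put("Severe", GenericSeverity.HIGH);
        XYZ_TOOL_SEVERITIES.put("Moderate", GenericSeverity.MEDIUM);
        XYZ_TOOL_SEVERITIES.put("Minor", GenericSeverity.LOW);
        XYZ_TOOL_SEVERITIES.put("Informational", GenericSeverity.INFO);
    }

    public static GenericSeverity normalize(String nativeSeverity) {
        // Unrecognized labels default to MEDIUM pending manual review (an assumption).
        return XYZ_TOOL_SEVERITIES.getOrDefault(nativeSeverity, GenericSeverity.MEDIUM);
    }
}
```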

Future considerations include supporting imports from manual testing and providing better reporting and metric tracking.  Given our internal data structures none of these should be terribly difficult to incorporate – we just have some other things to finish up first.  Keep watching this blog for more posts about Vulnerability Manager internals, and please share any thoughts or suggestions.

Contact us for more information about finding and fixing your application vulnerabilities.




Posted via email from Denim Group’s Posterous

About Dan Cornell


A globally recognized application security expert, Dan Cornell holds over 15 years of experience architecting, developing and securing web-based software systems. As the Chief Technology Officer and a Principal at Denim Group, Ltd., he leads the technology team to help Fortune 500 companies and government organizations integrate security throughout the development process. He is also the original creator of ThreadFix, Denim Group's industry leading application vulnerability management platform.
