[Author’s note: Please excuse the gender-specific term “dude” in the title of this blog post. It is important for everyone to realize that application security testing programs can be implemented just as ineffectively using the “Lady With a Scanner” approach.]
While talking to organizations about how they manage application security risk, I’ve had far too many exchanges like this:
Me: What are you all doing to address application security risk?
Person: We bought [Vendor]’s scanner.
The problem is the exchange usually continues along these lines:
Me: Great. Have you all been running scans?
Person: We ran all kinds of scans the week we got the license and found a bunch of stuff. I emailed the PDF to the development team. I guess I should check back with them…
The broader problem here is too many organizations equate an application security program to scanning, or at a minimum, view application security solely through the lens of the specific testing technology they’ve purchased – most often a dynamic web application scanner (DAST, or Dynamic Application Security Testing, in Gartner-parlance). Folks who are interested in reviewing an example of a comprehensive software security program would be well served to take a look at OWASP’s OpenSAMM project. It lays out the various components of a comprehensive software security program as well as measurements for maturity in each of these areas.
Today, however, let’s focus on how organizations use dynamic scanners. All too often we see organizations take a tactical, ad hoc approach to rolling out DAST technologies. To really get value from a DAST investment, organizations should focus on:
- Good Scan Coverage – How thoroughly is the application being tested?
- Tracking Progress Over Time – How frequently is an application being scanned and is trending tracked over time?
- Scans Across the Application Portfolio – What percentage of the organization’s applications – especially high risk and high value applications – are subject to scanning?
Let’s drill a bit deeper into these areas, look at approaches for getting better results, and see how ThreadFix can help organizations reach these goals and maximize the value of their DAST investment.
Get Good Scan Coverage
Too many “scans” only manage to test the public-facing parts of an application because the scanner wasn’t properly trained to log in to the application and maintain its session state. If you run your first scan of a 500-page web application and the scanner doesn’t find anything, it is far more likely that you got a bad scan than that your development team managed to avoid every security mistake. No matter what kind of “performance improvements” your scan vendor made in their latest version, if your scan takes less than a minute to run you’re going to want to see what the scan actually covered. Make sure your scanner understands an application’s login routine and can properly detect when it is in or out of a valid session.
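One way to approach in/out-of-session detection is to look for markers that only appear on authenticated (or unauthenticated) pages. A minimal sketch – the marker strings and function name here are invented for illustration, and every application needs its own markers:

```python
# Sketch: decide whether a scanner response is still inside a valid session.
# The marker strings below are hypothetical examples -- each application
# needs its own logged-in / logged-out indicators.

LOGGED_IN_MARKERS = ["Log Out", "My Account"]
LOGGED_OUT_MARKERS = ["Please sign in", 'name="password"']

def in_valid_session(html: str) -> bool:
    """Return True if the page looks authenticated, False if the scanner
    appears to have been bounced back to a login screen mid-scan."""
    if any(marker in html for marker in LOGGED_OUT_MARKERS):
        return False
    return any(marker in html for marker in LOGGED_IN_MARKERS)
```

A scan driver could run a check like this on every response and re-execute the login routine (then retry the request) whenever it fails, rather than silently scanning logged-out pages.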
In addition, you need to make sure that your scanner is getting a good crawl of the application so that it properly detects the application’s attack surface. Has it found all the pages? Has it found all the parameters, cookies, and other entry points? Many applications – especially public-facing applications – have landing pages that link back to the main part of the application but are never linked to from the application proper. Without a good crawl or discovery phase, the resulting scan will have gaps, and potentially significant portions of the application will never be tested.
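One practical way to spot crawl gaps is to diff the URLs the scanner actually visited against a list of pages you already know exist – from a sitemap, server access logs, or the source tree. A rough sketch, with made-up URLs:

```python
from urllib.parse import urlparse

# Sketch: compare known pages against the scanner's crawl results to find
# attack surface the scan never reached. All inputs are illustrative.

def coverage_gaps(known_urls, crawled_urls):
    """Return known paths the scanner never visited."""
    norm = lambda url: urlparse(url).path.rstrip("/") or "/"
    crawled = {norm(url) for url in crawled_urls}
    return sorted({norm(url) for url in known_urls} - crawled)

# The unlinked landing page never shows up in the crawl:
known = [
    "https://app.example.com/",
    "https://app.example.com/promo/spring",
    "https://app.example.com/account/settings",
]
crawled = [
    "https://app.example.com/",
    "https://app.example.com/account/settings/",
]
print(coverage_gaps(known, crawled))  # ['/promo/spring']
```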
To help with scanner coverage, we’ve built scanner plugins for OWASP ZAP and Burp that can pull attack surface data from a ThreadFix server and use that to seed the scanning process. More information about these integrations can be found in the documentation for these scanner plugins (OWASP ZAP, Burp), and we have another blog post discussing scanner seeding in more detail.
Track Progress Over Time
To be effective, security scanning can’t be a one-time activity – scanning needs to be performed repeatedly over time to identify when new vulnerabilities are introduced and when existing vulnerabilities have been remediated. How frequently scanning should be performed can depend on a number of factors such as:
- How risky or valuable is the application? Applications considered “high risk” or that represent high value to the organization might be tested more frequently.
- What is the velocity of development for the application? Applications under intense active development may need to be scanned more frequently. Applications in “end of life” status that no longer receive updates still require periodic scanning – scanning technologies get better and new attacks are developed – but it may be reasonable to decrease their scanning tempo.
Also, different types of scans might be run at different times. It might be possible to run a quick scan for common vulnerabilities like SQL injection and cross-site scripting (XSS) after every developer check-in or on a nightly basis, whereas more thorough scans could be run weekly. Manual penetration testing will probably be done even less frequently – perhaps in the run-up to a major release.
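That kind of tiered cadence can be captured as a simple policy table mapping a trigger (check-in, nightly build, weekly job, release) to a scan profile. The profile names and check lists below are assumptions for illustration, not any particular scanner’s options:

```python
# Sketch of a tiered scan policy: quick checks for common issues on every
# check-in or nightly build, thorough scans weekly, manual testing before
# releases. Names and values here are illustrative policy choices.

SCAN_POLICY = {
    "commit":  {"profile": "quick", "checks": ["sqli", "xss"]},
    "nightly": {"profile": "quick", "checks": ["sqli", "xss"]},
    "weekly":  {"profile": "full", "checks": "all"},
    "release": {"profile": "manual-pentest", "checks": "all"},
}

def scan_profile(trigger: str) -> dict:
    # Fall back to a quick, nightly-style scan for unknown triggers.
    return SCAN_POLICY.get(trigger, SCAN_POLICY["nightly"])
```

Encoding the policy explicitly makes it easy to wire into a build pipeline and to review the cadence per application alongside its risk ranking.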
ThreadFix provides a number of reports that are valuable for tracking the results of scanning over time. The Trending report shows when new vulnerabilities are found, when old vulnerabilities are remediated, and when previously-closed vulnerabilities resurface.

The Progress By Vulnerability report shows the percentage of each vulnerability type that has been fixed, the average time it took to fix those vulnerabilities, and the average age of the vulnerabilities that remain. This data shows how well your development teams are doing at remediating identified vulnerabilities. Remember: finding vulnerabilities is easy; fixing vulnerabilities is valuable. The velocity with which an organization can address security vulnerabilities – Mean Time To Fix (MTTF), as some industry analysts call it – is an important indicator of how well a software security program is functioning. This report can be used to benchmark your organization’s performance against data sets such as WhiteHat’s Website Security Statistics Report or Veracode’s State of Software Security report.

One thing to note: these reports can be run for a specific Application, for all the applications managed by a Team, or for all Applications in the organization’s portfolio. And speaking of your organization’s application portfolio…
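As a concrete illustration of the MTTF idea, here is a rough sketch of how the metric can be computed from finding open/close dates. The record layout and the sample data are invented for the example; ThreadFix’s reports compute this for you:

```python
from datetime import date

# Sketch: Mean Time To Fix (MTTF) over closed findings, plus the average
# age of findings that remain open. The sample data below is made up.

def mttf_days(findings):
    closed = [(f["closed"] - f["opened"]).days for f in findings if f.get("closed")]
    return sum(closed) / len(closed) if closed else None

def avg_open_age_days(findings, today):
    still_open = [(today - f["opened"]).days for f in findings if not f.get("closed")]
    return sum(still_open) / len(still_open) if still_open else None

findings = [
    {"opened": date(2014, 1, 1), "closed": date(2014, 1, 31)},  # fixed in 30 days
    {"opened": date(2014, 2, 1), "closed": date(2014, 2, 11)},  # fixed in 10 days
    {"opened": date(2014, 3, 1)},                               # still open
]
print(mttf_days(findings))                            # 20.0
print(avg_open_age_days(findings, date(2014, 4, 1)))  # 31.0
```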
Track Across Your Portfolio
Scanning a single application isn’t going to cut it. Nor is scanning only the applications that are “easy.” Instead, an effective scanning program has to start with a solid understanding of the organization’s application attack surface, risk-rank the applications that make it up, and then subject those applications to scanning – and other testing – based on their perceived risk.
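Once applications are risk-ranked, “which applications are overdue for a scan” becomes a mechanical question. A sketch, where the risk-based scan windows are purely illustrative policy choices:

```python
from datetime import date

# Sketch: flag applications whose last scan is older than a risk-based
# SLA. The rankings, SLA windows, and portfolio data are illustrative.

SCAN_SLA_DAYS = {"high": 30, "medium": 90, "low": 180}

def overdue_apps(portfolio, today):
    """Return names of apps whose last scan exceeds their risk SLA,
    highest risk first. Apps never scanned are always overdue."""
    order = {"high": 0, "medium": 1, "low": 2}
    late = [app for app in portfolio
            if app["last_scan"] is None
            or (today - app["last_scan"]).days > SCAN_SLA_DAYS[app["risk"]]]
    return [app["name"] for app in sorted(late, key=lambda app: order[app["risk"]])]

portfolio = [
    {"name": "payments",   "risk": "high",   "last_scan": date(2014, 1, 1)},
    {"name": "intranet",   "risk": "low",    "last_scan": date(2014, 3, 1)},
    {"name": "legacy-crm", "risk": "medium", "last_scan": None},
]
print(overdue_apps(portfolio, date(2014, 4, 1)))  # ['payments', 'legacy-crm']
```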
ThreadFix lets you organize your application portfolio into Teams responsible for collections of applications and then into individual Applications, each of which can be risk-ranked. This provides a central location for storing information about the portfolio. ThreadFix also offers a handy Portfolio Report that breaks out, by application risk ranking, which applications have been scanned recently and which have not been scanned in some time. This can help security managers identify gaps in coverage and target scanning efforts toward the highest-risk assets.
Conclusions
Far too often we see organizations base their application security posture on the fact that they’ve purchased a DAST scanner. There are a number of problems with this “approach” – specifically a lack of attention to detail about the quality of the scans being performed, a lack of tracking the results of assurance activities over time, and a failure to take a portfolio-based view of these activities.
Contact us for help evolving your “dude with a scanner” into an effective full-fledged scanning program.