Today, we’re seeing many more companies take up application security testing as part of their agile process. Which is a good thing, of course. As applications grow more complex, with more add-ons and bug fixes, application testing has become more akin to risk mitigation than an effort to secure the entire application up front.
Much of this shift can be attributed to how much time security testing takes up in the software development lifecycle, with manual labour being the biggest factor. To reduce the time engineers spend manually triaging vulnerabilities, companies are integrating security automation into their Continuous Integration/Continuous Delivery (CI/CD) pipelines.
Justifiably so, since automation reduces manual labour, increases the security of applications, and enables faster release cycles. In other words, companies understand the value of integrating security into their DevOps pipeline.
Another factor driving the adoption of security automation is the maturity and availability of a variety of security tools (both licensed and open source). There are different tools for each stage of an application’s development cycle, which include SAST tools (white box testing), DAST tools (dynamic testing), IAST tools (interactive testing), and RASP (Runtime Application Self-Protection).
Each of these tools has its own strengths and weaknesses. Therefore, companies implementing DevSecOps most likely have multiple tools integrated into their pipeline to get comprehensive coverage.
Enhancing application security, increasing efficiency, and reducing manual labour through automation are all great. But there’s a problem: how do companies deal with the different results they get from this wide range of testing tools? This probably hits home for a lot of developers out there who have spent late nights wading through multiple reports, trying to make sense of it all. Here are some of the key factors that cause problems with automated security testing results.
Issue #1: Vulnerability Naming Convention
There’s no common vulnerability naming convention in the industry. Different tools, created by different organisations, tend to use their own names for the vulnerabilities they detect. This becomes disastrous when your engineers have to deal with a wide range of reports and manually organise the reported vulnerabilities. Adding to this problem are duplicate findings: engineers might end up creating multiple tickets in their bug tracking system for the same bug.
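To make the naming problem concrete, here is a minimal sketch of how tool-specific names can be mapped onto a common identifier such as a CWE ID. The tool names and finding names are invented for illustration; a real AVC tool would ship a much larger curated mapping.

```python
# Hypothetical mapping from each tool's own finding names to a canonical
# CWE identifier. Tool names ("tool_a", "tool_b") are illustrative.
NAME_TO_CWE = {
    ("tool_a", "SQL Injection"): "CWE-89",
    ("tool_b", "SQLi"): "CWE-89",
    ("tool_a", "Cross-site Scripting"): "CWE-79",
    ("tool_b", "Reflected XSS"): "CWE-79",
}

def canonical_name(tool: str, reported_name: str) -> str:
    """Resolve a tool-specific vulnerability name to a common identifier."""
    return NAME_TO_CWE.get((tool, reported_name), f"UNMAPPED:{reported_name}")

print(canonical_name("tool_b", "SQLi"))  # -> CWE-89
```

Once two findings resolve to the same identifier, they can be recognised as the same underlying flaw rather than filed as separate bugs.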
Issue #2: Report Format
In addition to the different names, each tool yields its results in a different format. Even if you narrow the output down to JSON or XML, those formats are machine-readable, and it takes extra time and effort to make them human-readable.
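As a sketch of that gap, here is what turning one machine-readable report into a human-readable summary can look like. The report schema below is an assumption; every real scanner uses its own.

```python
import json

# A hypothetical SAST report snippet -- real tools each use their own schema.
raw_report = """
{
  "findings": [
    {"rule": "sql-injection", "file": "app/db.py", "line": 42, "severity": "HIGH"},
    {"rule": "xss-reflected", "file": "app/views.py", "line": 17, "severity": "MEDIUM"}
  ]
}
"""

def to_readable(report_json: str) -> str:
    """Turn a machine-readable JSON report into a short human-readable summary."""
    findings = json.loads(report_json)["findings"]
    return "\n".join(
        f"[{f['severity']}] {f['rule']} at {f['file']}:{f['line']}"
        for f in findings
    )

print(to_readable(raw_report))
```

Writing (and maintaining) a converter like this for every tool in the pipeline is exactly the manual effort an AVC tool is meant to absorb.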
Issue #3: False positives & Risk scores
All tools come with a certain margin of error. Depending on the context, each of these tools can report a high or low rate of false positives. In addition, each report carries its own rating of the risks associated with the flaws it lists. One can only go through them manually to figure out which are false positives and which are true positives, and which of them pose a real risk to the application.
An Application Vulnerability Correlation (AVC) tool would present a possible solution to the issues mentioned above. This correlation tool should have the following features:
Solution #1: Correlation and Deduplication
The AVC tool should be able to correlate and consolidate the different results across static, dynamic, and software composition analysis. This gives the engineering team a more in-depth understanding of the flaw, both at runtime and in code. Since tools often detect the same well-known flaws under varied naming conventions, the tool should also be able to de-duplicate these flaws and report a single, unique result. This eliminates the need to manually aggregate scan results from multiple sources.
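A minimal sketch of the deduplication step, assuming findings have already been normalised to CWE IDs and code locations (the tool names and findings below are invented):

```python
# Hypothetical findings from two tools that name the same flaw differently.
findings = [
    {"tool": "sast-a", "name": "SQL Injection", "cwe": 89, "file": "app/db.py", "line": 42},
    {"tool": "dast-b", "name": "SQLi (blind)",  "cwe": 89, "file": "app/db.py", "line": 42},
    {"tool": "sast-a", "name": "Reflected XSS", "cwe": 79, "file": "app/views.py", "line": 17},
]

def deduplicate(findings):
    """Merge findings that share a CWE ID and code location into one record."""
    merged = {}
    for f in findings:
        key = (f["cwe"], f["file"], f["line"])
        if key not in merged:
            merged[key] = {**f, "reported_by": [f["tool"]]}
        else:
            merged[key]["reported_by"].append(f["tool"])
    return list(merged.values())

unique = deduplicate(findings)
print(len(unique))  # 2 unique flaws from 3 raw findings
```

Keeping the `reported_by` list is a deliberate choice: a flaw confirmed by both a static and a dynamic tool is stronger evidence than either report alone.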
Solution #2: Risk Prioritisation
The different testing tools tend to differ in their risk ratings of the vulnerabilities they recognise. The AVC tool should combine these risk scores and rate each flaw against an industry standard, such as an application security index (score) or CWE. The benefit of using something like CWE is that it also helps normalise vulnerability naming conventions. In addition, the AVC tool should use an intelligent tagging mechanism that marks false positives, which over time can help the tool reduce future false positives.
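One simple way to combine per-tool ratings is to map each onto a single numeric scale and keep the worst score. The 0–10 scale and the severity-to-score mapping below are assumptions for illustration, not any particular standard:

```python
# Map tool-specific severity labels onto an assumed 0-10 scale (CVSS-like).
SEVERITY_MAP = {"low": 3.0, "medium": 5.5, "high": 8.0, "critical": 9.5}

def combined_risk(ratings):
    """Normalise per-tool ratings and take the maximum as the combined risk."""
    scores = []
    for r in ratings:
        if isinstance(r, str):
            scores.append(SEVERITY_MAP[r.lower()])
        else:                      # already numeric, e.g. a CVSS base score
            scores.append(float(r))
    return max(scores)

print(combined_risk(["High", 6.1, "medium"]))  # -> 8.0
```

Taking the maximum is a conservative choice; a real AVC tool might instead weight tools by their historical false-positive rates, which is where the tagging feedback loop comes in.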
Solution #3: Defect Tracking
Most of you probably already use a defect tracking tool like JIRA in your pipeline. So a good AVC tool would take all the unique (correlated) results and raise bug tickets in your defect tracking system, reducing manual involvement. Raising flaws in defect trackers also increases the visibility of application security to the engineering team, resulting in faster turnaround times for remediation. These flaws should be categorised by severity, allowing engineering to prioritise which ones to address first.
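As a sketch, turning one correlated finding into a ticket might look like this. The field layout follows the shape of JIRA's REST "create issue" body, but the project key, severity-to-priority mapping, and finding fields are all illustrative assumptions:

```python
def make_bug_ticket(finding, project_key="APPSEC"):
    """Build a JIRA-style issue payload for one correlated finding.

    project_key and the severity-to-priority mapping are illustrative;
    a real integration would POST this to the tracker's REST API.
    """
    priority = {"critical": "Highest", "high": "High",
                "medium": "Medium", "low": "Low"}[finding["severity"]]
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"CWE-{finding['cwe']}: {finding['name']} in {finding['file']}",
            "description": f"Reported by: {', '.join(finding['reported_by'])}",
            "priority": {"name": priority},
        }
    }

ticket = make_bug_ticket({
    "cwe": 89, "name": "SQL Injection", "file": "app/db.py",
    "severity": "high", "reported_by": ["sast-a", "dast-b"],
})
print(ticket["fields"]["summary"])
```

Because the input is a deduplicated finding, this produces one ticket per flaw rather than one per tool report.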
Solution #4: Reporting
Furthermore, the AVC tool’s consolidated report should be user friendly; that is, it should be in a human-readable format. This should not come at the expense of losing vulnerability details: engineers need as much reliable information as they can get to remediate those vulnerabilities. In addition, the report should include a comprehensive remediation advisory.
The headaches of dealing with multiple reports in a CI/CD pipeline can be reduced dramatically by a correlation tool. Even if you’re automating security in your pipeline, you’re doing yourself a disservice by not using one. A correlation tool not only reduces manual labour, it also increases the visibility of vulnerabilities and enables faster closure of security issues throughout your secure SDLC.