How application security can create velocity at enterprise scale

Modern software has transformed the way organizations operate and compete in the market. With the increasing demand for secure, reliable software delivered at scale, the pressure to meet time-to-market deadlines has never been greater. To manage software risk while also increasing development velocity and agility, organizations are deploying more and more security tools that promise to meet these challenges head-on.

But this is having the opposite of its intended effect: security tool proliferation has created complexity that slows down development teams, weakens overall risk posture, and drives up the operational costs of implementing, maintaining, and supporting the software security tech stack. Tool sprawl and the complexity it fosters are not new problems, but the current economic climate has added pressure on organizations to solve them now by consolidating.

The true cost of tool proliferation

In most cases, the burden of resourcing and maintaining duplicative tooling is costing organizations dearly. And the issue is widespread; a recent survey commissioned by Synopsys found that 70% of respondent organizations had more than 10 application security testing (AST) tools within their security programs.

And what exactly does this cost look like? The problem is threefold. First, organizations are forced to contend with overlapping capabilities and overlapping findings, which requires extra time, resources, and effort to wade through the “noise.” Second, organizations are spending unnecessarily on expensive “people resources” to execute and support this surplus of tooling. Third, and perhaps most problematic, it’s taking longer to attain results. The very goal of a security program, eliminating vulnerabilities and weaknesses, is taking too long, and siloed, overlapping data offers only an incomplete view of risk.

A successful security program should readily supply answers to questions like: Where is all our software? How secure is it? Are we improving our security efforts? Are we putting our time and resources into the right areas? A security program that cannot answer these questions begs for further analysis.

Untangling the mess: A programmatic approach to security

So what is the solution to untangling this web of security noise? It lies in measuring what you manage.

Often, we see organizations gathering loads of data and developing policies without the proper context of how they are going to measure success. This results in even more noise. An understanding of how you will measure success should be the foundation of any successful program.

Established success metrics should help drive policies—not the inverse, as is often the case. An organization should identify a small number of meaningful metrics, and then orient its policies around them. These metrics will vary by organization—they could be vulnerability density, time to triage, and time to remediation—but they should ultimately be aligned with what makes sense for the business and its objectives.
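To make this concrete, consider how the metrics named above might be computed. The following is a minimal sketch in Python, assuming findings are exported as records with discovery, triage, and remediation timestamps and that you know each application's size in lines of code; the field names and sample numbers are illustrative, not a reference to any particular tool's export format.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Finding:
    app: str
    discovered: datetime
    triaged: datetime | None     # None if still awaiting triage
    remediated: datetime | None  # None if still open

def vulnerability_density(findings: list[Finding], kloc: float) -> float:
    """Open findings per thousand lines of code."""
    return sum(1 for f in findings if f.remediated is None) / kloc

def mean_days(findings: list[Finding], start: str, end: str) -> float:
    """Average days between two lifecycle timestamps, skipping unfinished items."""
    spans = [(getattr(f, end) - getattr(f, start)).days
             for f in findings if getattr(f, end) is not None]
    return mean(spans) if spans else 0.0

# Hypothetical sample data
findings = [
    Finding("payments", datetime(2023, 3, 1), datetime(2023, 3, 3), datetime(2023, 3, 10)),
    Finding("payments", datetime(2023, 3, 5), datetime(2023, 3, 6), None),
]
print(f"density: {vulnerability_density(findings, kloc=120):.3f} open per KLOC")
print(f"mean time to triage: {mean_days(findings, 'discovered', 'triaged'):.1f} days")
print(f"mean time to remediate: {mean_days(findings, 'discovered', 'remediated'):.1f} days")
```

Once metrics like these are pinned down, policies follow naturally; a triage-time target, for example, can drive a policy about how quickly new findings must be assigned an owner.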

In the market today, we see many organizations lining up a slew of policies, performing excessive scans, and then facing a mountain of non-normalized data from many different sources. Only then do they go in search of meaningful metrics to figure out whether they’re doing any good. It can be nearly impossible to translate this data into a measure of success or a calculation of ROI.

Again, by starting with a KPI or metric view and then aligning all policies and technologies around the prioritized metrics, an organization has a much higher probability of building a security program that is measurable and, most importantly, improvable over time.

A centralized view of risk is critical

Without insight into and alignment with an underlying risk assessment of your software, you have a constantly moving target. Different pockets of an AppSec program will operate on different views of risk, resulting in a dilution of overall risk information. Centralized data is critical, especially at scale.

But how can an organization achieve a centralized view of risk? It begins with a deep understanding of your inventory. Security teams should gather a comprehensive view of existing software assets and applications, and understand which truly matter.

After gathering this inventory, an organization should run it through a meaningful risk ranking, which will yield the foundation for all further security efforts. When assets are ranked, it is easy for an organization to determine how much effort should be applied to each piece of software. This effort of aggregating and normalizing data should take care to consider context; for example, which apps sit behind a firewall and are therefore far less exposed? Which are most vulnerable to attack?
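As a rough illustration of what such a risk ranking can look like, the sketch below scores each asset with a simple weighted formula over a few contextual attributes (internet exposure, data sensitivity, open critical findings). The attributes and weights are hypothetical; a real program would tune both to its own business context.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_facing: bool  # directly reachable from outside the network?
    data_sensitivity: int  # 1 (public data) .. 5 (regulated/PII)
    open_criticals: int    # open critical findings across all tools

def risk_score(a: Asset) -> float:
    """Weighted score; higher means fix first. Weights are illustrative only."""
    exposure = 3.0 if a.internet_facing else 1.0
    return exposure * a.data_sensitivity + 2.0 * a.open_criticals

inventory = [
    Asset("customer-portal", internet_facing=True, data_sensitivity=5, open_criticals=2),
    Asset("internal-wiki", internet_facing=False, data_sensitivity=2, open_criticals=4),
]
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name}: {risk_score(asset):.1f}")
```

Ranked output like this makes the resourcing conversation straightforward: the internet-facing application handling sensitive data rises to the top even though another asset has more raw findings.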

Beyond the more straightforward effort of consolidating to fewer vendors or to a single platform, another powerful way to mitigate the chaos caused by tool sprawl is to align or normalize all security data in the context of your defined success metrics. With a consolidated view of these success metrics, you can gauge how you are truly running your program, and you can gather the context needed to reduce noise and ultimately arrive at a prioritized view of the issues that need to be fixed first. This cohesive and context-driven view allows true management of a program at scale.
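One way to picture that normalization step: map each tool's raw export into a common record, then collapse duplicates so the same weakness reported by two tools counts once against your metrics. The sketch below assumes two hypothetical scanner export formats; the field names and severity scale are made up for illustration.

```python
tool_a_export = [  # hypothetical export format for scanner A
    {"project": "payments", "cwe_id": "CWE-89", "file": "db.py:42", "sev": "High"},
]
tool_b_export = [  # hypothetical export format for scanner B
    {"app_name": "payments", "cwe": "CWE-89", "path": "db.py:42", "risk": "Medium"},
    {"app_name": "payments", "cwe": "CWE-79", "path": "views.py:7", "risk": "High"},
]

def from_tool_a(raw: dict) -> dict:
    """Map a scanner-A record into the common schema."""
    return {"app": raw["project"], "cwe": raw["cwe_id"],
            "location": raw["file"], "severity": raw["sev"].lower()}

def from_tool_b(raw: dict) -> dict:
    """Map a scanner-B record into the common schema."""
    return {"app": raw["app_name"], "cwe": raw["cwe"],
            "location": raw["path"], "severity": raw["risk"].lower()}

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def consolidate(records: list[dict]) -> list[dict]:
    """Deduplicate on (app, CWE, location), keeping the highest severity seen."""
    merged = {}
    for rec in records:
        key = (rec["app"], rec["cwe"], rec["location"])
        kept = merged.get(key)
        if kept is None or SEVERITY_RANK[rec["severity"]] > SEVERITY_RANK[kept["severity"]]:
            merged[key] = rec
    return list(merged.values())

findings = [from_tool_a(r) for r in tool_a_export] + [from_tool_b(r) for r in tool_b_export]
print(consolidate(findings))  # two unique findings instead of three raw records
```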

Put simply, a security program run from a single source of truth is possible when you make business decisions based on metrics that truly matter and consolidate data from disparate tools and sources in one place.

For more information on how Synopsys can help you create velocity at enterprise scale, visit www.synopsys.com/software.