3 ways to fix old, unsafe code that lingers from open-source and legacy programs

Companies that find themselves with old, vulnerable code in their environment are likely to be short of the resources needed to fix it. Most organizations will end up in this situation at some point, whether because they rely on open-source components or on outdated software of their own. But there are ways for companies to get the problem under control, including prioritization, automation, and mitigation.

The problem of old, bad code is ubiquitous in enterprises. Vulnerable code in general is a problem — according to a Veracode report released earlier this year, 74% of scanned applications during the previous year had at least one security flaw, and 19% had a high-severity vulnerability. And the older the application, the more likely it is to have problems, says Veracode’s chief research officer Chris Eng. When new applications are scanned for the first time, 32% of them have security flaws. At the five-year mark, that jumps to 70%. By the time an application is 10 years old, there’s a 90% chance it has at least one security flaw.

One reason for the growth in problems is that new code is added to applications — according to Veracode, applications grow 40%, on average, every year for the first five years. Each new line of code adds potential for mistakes, and more complexity makes it harder to spot and fix problems.

Components are also a big part of the issue. “Most developers, when they download an open-source component and incorporate it into their application, never go back and update it,” Eng tells CSO. To be more exact, 79% of the time, old open-source components are not updated. Because new vulnerabilities keep getting discovered even when not a single new line of code is added, “the security posture of that application is getting worse and worse and worse,” Eng says.

According to Synopsys’ open-source security and risk analysis released in February, 96% of all commercial code bases contained open-source components — and 89% contained open-source code that was more than four years out of date. The problem is so important to application security that vulnerable third-party libraries are on the OWASP top ten list of web application security risks.
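The drift described above can be checked mechanically. The sketch below shows, in miniature, what a software composition analysis (SCA) tool does: compare pinned component versions against the latest known releases and flag how far behind each one is. The package names, versions, and version-comparison logic are illustrative assumptions, not output from any real registry.

```python
# Minimal sketch of an outdated-dependency check. A real SCA tool would pull
# the "latest" data from a package registry or vulnerability database.

def parse_version(v):
    """Turn a version string like '2.14.1' into (2, 14, 1) for comparison."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(pinned, latest):
    """Return (name, pinned, latest, major_versions_behind) for stale components."""
    report = []
    for name, version in pinned.items():
        current = latest.get(name)
        if current and parse_version(version) < parse_version(current):
            behind = parse_version(current)[0] - parse_version(version)[0]
            report.append((name, version, current, behind))
    return report

# Illustrative manifest: one component is several major versions behind.
pinned = {"commons-text": "1.9", "spring-core": "4.3.30", "jackson-databind": "2.17.0"}
latest = {"commons-text": "1.11.0", "spring-core": "6.1.5", "jackson-databind": "2.17.0"}

for name, old, new, behind in find_outdated(pinned, latest):
    print(f"{name}: pinned {old}, latest {new} ({behind} major version(s) behind)")
```

The multi-major-version gap the report flags for the second component is exactly the situation Eng warns about: the longer the gap, the more breaking changes accumulate between the pinned version and current.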

The solution seems simple on the surface: just replace the component with the newest version. That’s easy when the code base is relatively new. “If a patch is released tomorrow, it’s fairly easy to go and do that patch,” Eng says. “But if I linger for years and I get multiple versions behind, then the amount of effort required to get current is tremendous.”

With each new update, especially major releases, there are more possibilities for changes in behaviors. Key functionality might be deprecated. That means that if the component is updated to the latest version, the application as a whole might stop working. “When you get several major versions behind, there’s definitely breakage that will happen,” Eng says.

When organizations find themselves in this situation, the best thing to do is to focus on the most critical problems first.

Identify and prioritize the bad code that poses the highest risk

Different vulnerabilities might have different impacts on an organization. In some cases, there might be a security flaw in a particular function, but if the application never calls that function, the flaw poses little practical risk. Then there’s the question of whether exploits have been seen in the wild and what kinds of attack activity are being observed.

Context is also important. “If you have an archaic, really old application but it’s deployed on a secure network that nobody has access to, maybe one or two network people, then the impact of it is not as high,” Adam Brown, managing consultant at Synopsys Software Integrity Group, tells CSO. “Even if it’s classified as a super high-risk critical vulnerability, if you spent a lot of money fixing it, who’s benefiting?”
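The weighing Brown describes can be sketched as a simple score that boosts base severity when a flaw is actively exploited, reachable, or sitting on a business-critical system. The multipliers below are illustrative assumptions for demonstration, not an industry standard.

```python
# Illustrative prioritization score: CVSS-style base severity (0-10),
# adjusted by exploitation, exposure, and business impact.

def priority_score(cvss, exploited_in_wild, internet_facing, business_critical):
    score = cvss
    if exploited_in_wild:
        score *= 1.5   # active exploitation outweighs theoretical severity
    if internet_facing:
        score *= 1.3   # reachable attack surface
    if business_critical:
        score *= 1.2   # higher impact if compromised
    return round(score, 1)

findings = [
    ("critical flaw, isolated legacy app", priority_score(9.0, False, False, False)),
    ("medium flaw, exploited and internet-facing", priority_score(6.5, True, True, False)),
]
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{score:5.1f}  {name}")
```

Note that the exploited, internet-facing medium-severity flaw outranks the critical flaw on the isolated network, which is Brown’s point: raw severity alone is a poor prioritization signal.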

In regulated industries, there might also be compliance issues related to fixing code since any changes have to be vetted. This tends to happen in the financial and health sectors. For some old applications, it might make more sense to replace instead of repair. “Sometimes it’s easier to build a new model and slowly migrate to it than to try to fix the old one,” Brown says. “You may spend time fixing a critical vulnerability but ultimately what you should be doing is figuring out how to re-platform.”

Synopsys released its own software vulnerability report earlier this month, based on 12,000 tests run over the past three years, focusing on web and mobile applications. According to the report, 92% of applications had vulnerabilities, 33% of them in the high or critical severity categories.

When prioritizing, companies also need to be careful not to be swayed by news headlines. “There is a tendency among organizations to assess immediate threats based on the current level of media attention rather than the actual risk level,” says Mike DeNapoli, director and cybersecurity architect at Cymulate, a cybersecurity vendor. Cymulate released a report this past March tracking the state of cybersecurity effectiveness, which also highlighted the presence of insecure code in organizations. According to the report, average risk scores worsened compared to the previous year.

The biggest issue with prioritizing software fixes is that there’s often a disconnect between security controls and business risk outcomes, according to Kayne McGladrey, IEEE senior member and field CISO at Hyperproof, a security and risk company. That makes it harder to get executive support, he says. Code maintenance and dependency management aren’t sexy topics. Instead, executive interest tends to focus “on the financial or reputational repercussions of downtime,” McGladrey tells CSO.

“To address this problem, organizations should document and agree upon the business risks associated with both first-party and third-party code. Then they need to determine how much risk they’re willing to accept in areas like reputational damage, financial damage, or legal scrutiny. After there’s executive-level consensus, business owners of critical systems should work to identify and implement controls to reduce those risks,” McGladrey says.

Once a company identifies its highest-priority issues, the next step is to fix the problems. Unfortunately, that’s not always possible, and organizations need to look to other measures.

When the only answer is mitigation

When it comes to old systems, there might not be anyone around with the needed knowledge to fix the code. According to a survey released last November by technology services company Advanced, 42% of companies that use mainframes say that their most prominent legacy language is COBOL, with another 37% still using Assembler.

“Never mind the job market. It’s hard to find people alive with obsolete programming language skills like COBOL,” says Paul Brucciani, cyber security advisor at WithSecure.

Another issue is when the source code has been lost. “You’d be surprised by the [number of] organizations running on ancient software that can’t be updated because they lost the source code,” Brucciani tells CSO.

In some cases, the applications are too important to touch because the risk of breaking them is too high and replacing them would cause too much disruption. “Not all legacy code and applications can be removed when discovered. In many cases, critical business processes rely on features and workflows that are performed by the legacy systems,” says Cymulate’s DeNapoli.

Software vulnerabilities might also go unfixed because of insufficient time or resources, or because of compliance considerations, yet they still pose a risk if exploited. In these cases, companies should put mitigation measures in place around the vulnerable systems, such as implementing or strengthening compensating controls.

Zero trust architectures, network segmentation, and an increased focus on authentication can help lower the risk that a vulnerable application is exploited. “There’s a broad trend to put everything behind an authentication layer,” says Veracode’s Eng. “That’s happening regardless of how old the code is.”
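The pattern Eng describes, gating a legacy application behind an authentication layer without touching its code, can be sketched as a wrapper in front of the app. The WSGI middleware below is a minimal illustration; the static token set is a placeholder assumption, where a real deployment would validate tokens against an identity provider.

```python
# Sketch: an authentication gate wrapped around an unmodified legacy app.

def require_auth(app, valid_tokens):
    """Wrap a WSGI app so requests without a valid bearer token get a 401."""
    def middleware(environ, start_response):
        auth = environ.get("HTTP_AUTHORIZATION", "")
        token = auth.removeprefix("Bearer ").strip()
        if token not in valid_tokens:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"authentication required"]
        return app(environ, start_response)  # authenticated: pass through untouched
    return middleware

def legacy_app(environ, start_response):
    # Stand-in for the old application; it knows nothing about authentication.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"legacy response"]

protected = require_auth(legacy_app, valid_tokens={"s3cret-token"})
```

The legacy application itself is unchanged; only the wrapper enforces authentication, which is why this pattern works regardless of how old the code behind it is.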

Other mitigation strategies include encryption, firewalls, security automation, and dynamic data backups.

Automation to find old code and create safer code

The latest solution to the problem of vulnerable old code involves new advances in artificial intelligence. We already have generative AI tools that can write new code, but vendors are also working on specialized AIs that are specifically trained in fixing vulnerabilities. “AI can suggest a fix and then developers can tweak that a bit,” says Eng.

The problem is that when companies use the big, public large language models, those models are trained on everything, including the bad stuff. “As they used to say, garbage in, garbage out. Inevitably, the code that is generated by those models is also going to contain vulnerabilities. So, the code will be produced faster — but it will still have errors,” Eng adds.

Veracode is building its own AI based on its own, vetted code. “We generate vulnerable code, and good code, and train the model on each of those categories,” Eng says. “Then we know for sure that what’s coming out is not being pulled from some random developer’s GitHub repository.”

Veracode Fix was launched this past April and, according to the company, the product can generate fixes for 72% of flaws found in Java code, which can dramatically speed up remediation efforts for companies.

At some point, larger enterprises will probably want to build their own, customized, AI tools. “They want to generate fixes in the style of code that they use,” Eng says.

But that doesn’t mean that companies should sit back and wait until AIs can come and solve all the problems. “With the amount of security debt that most organizations have, even if you just work on the most severe stuff now, you’re not going to run out of stuff to do,” he says.
