Google offers free access to fuzzing framework

Fuzzing can be a valuable tool for ferreting out zero-day vulnerabilities in software. In hopes of encouraging its use by developers and researchers, Google announced Wednesday it’s now offering free access to its fuzzing framework, OSS-Fuzz.

According to Google, tangible security improvements can be obtained by using the framework to automate the manual aspects of fuzz testing with the help of large language models (LLMs). “We used LLMs to write project-specific code to boost fuzzing coverage and find more vulnerabilities,” Google open-source security team members Dongge Liu and Oliver Chang and machine learning security team members Jan Nowakowski and Jan Keller wrote in a company blog post.

So far, the expanded fuzzing coverage provided by LLM-generated improvements to OSS-Fuzz has allowed Google to discover two new vulnerabilities in cJSON and libplist, even though both widely used projects had already been fuzzed for years, they noted. Without the completely LLM-generated code, these two vulnerabilities could have remained undiscovered and unfixed indefinitely, they added.
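
To make the mechanics concrete, below is a minimal sketch of what a libFuzzer-style harness for a C JSON parser such as cJSON can look like. The LLVMFuzzerTestOneInput entry point is the standard convention OSS-Fuzz targets use, but the harness body here is illustrative only, not the actual LLM-generated code the team describes.

    /* Illustrative libFuzzer-style harness for a C JSON parser such as cJSON.
     * The LLVMFuzzerTestOneInput entry point is the standard OSS-Fuzz/libFuzzer
     * convention; the real, LLM-generated targets in the project differ. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include "cJSON.h"

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        /* cJSON_Parse expects a NUL-terminated string, so copy the raw bytes. */
        char *buf = malloc(size + 1);
        if (buf == NULL) {
            return 0;
        }
        memcpy(buf, data, size);
        buf[size] = '\0';

        /* Feed arbitrary, mutated input to the parser; crashes and sanitizer
         * reports surface as findings. */
        cJSON *json = cJSON_Parse(buf);
        if (json != NULL) {
            cJSON_Delete(json);
        }

        free(buf);
        return 0;
    }

Built with clang’s -fsanitize=fuzzer,address flags, a target like this runs unattended while the fuzzer mutates inputs and the sanitizer reports memory errors.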

Fuzzing is an automated test

“Fuzzing has been around for decades and is gaining popularity with its success in finding previously unknown or zero-day vulnerabilities,” says John McShane, senior security product manager at the Synopsys Software Integrity Group, a provider of a security platform optimized for DevSecOps. “The infamous Heartbleed vulnerability was discovered by security engineers using Defensics, a commercial fuzzing product.”

Fuzzing can catch a lot of “low-hanging fruit,” but it can also expose some high-impact items, like buffer overflows, adds Gisela Hinojosa, head of cybersecurity services at Cobalt Labs, a penetration testing company. “Since fuzzing is an automated test, it doesn’t need a babysitter,” she says. “It’ll just do its thing, and you don’t really have to worry about it. It’s a relatively easy way to find vulnerabilities.”
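
For a concrete, hypothetical illustration of the buffer overflows Hinojosa mentions, consider a routine that copies untrusted input into a fixed-size buffer without checking its length; a fuzzer paired with AddressSanitizer flags the out-of-bounds write as soon as a mutated input grows long enough. The function below is invented for illustration.

    /* Hypothetical example of the kind of bug a fuzzer finds: an unchecked
     * copy of attacker-controlled input into a fixed-size stack buffer. */
    #include <stdint.h>
    #include <string.h>

    static void parse_record(const uint8_t *data, size_t size) {
        char name[32];
        /* Bug: no bounds check, so any input longer than 32 bytes overflows
         * the stack buffer. AddressSanitizer reports the out-of-bounds write
         * the moment the fuzzer generates a long enough input. */
        memcpy(name, data, size);
    }

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;
    }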

Fuzzing not a substitute for secure-by-design tactics

However, Shane Miller, an advisor to the Rust Foundation and a senior fellow at the Atlantic Council, an international affairs and economics think tank in Washington, DC, cautions, “Investments in dynamic testing tools like fuzzing are not a substitute for secure-by-design tactics, like choosing memory-safe programming languages, but they are a powerful tool for improving the security of software.”

“Fuzzing expands the scope of testing by exploring software behavior with unexpected inputs that can reveal vulnerabilities like those exploited in recent state-sponsored cyberattacks targeting US water treatment plants, electric grid, oil and natural gas pipelines, and transportation hubs,” Miller adds.

While fuzzing can be beneficial to developers, its manual aspects have deterred open-source maintainers from fuzzing their projects effectively, a problem Google hopes to address by offering free access to its fuzzing framework. “Since open-source maintainers are often volunteers who have no or limited funding, taking the time and paying for the cost of running resource-intensive tools isn’t always feasible,” says Michael J. Mehlberg, CEO of Dark Sky Technology, a software supply chain security company.

“Even if it is,” Mehlberg continues, “fuzzing tools can complicate an otherwise simple build environment, can produce a large number of false positives that generate review and analysis work for an already stretched team, and may produce actions that cannot be taken due to inadequate cybersecurity skills or experience.”

Safety, not automation, the most important part of patching

Google is also offering guidance to developers and researchers for using LLMs to build an auto-patching pipeline. “This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers,” the Google security team members wrote in their blog.

While using LLMs to automate patching is an interesting idea, Hinojosa notes that the challenge will be for the LLM to have all the contextual knowledge it needs to patch effectively without breaking things. “I think it would be a good idea for the automated system to suggest a fix, but for a human to manually review it before it’s implemented.”

“Overall, the most important part of patching is not automation but safety,” adds Dave (Jing) Tian, an assistant professor of computer science at Purdue University. “It turns out that it’s non-trivial to prove that a patch does exactly what it should — nothing more or less,” he says. “So, for now, only a limited number of patches can be injected automatically. Those patches are simple ones, such as changing a 32-bit integer to 64-bit integer for a variable. For more complex patches, we still need and should ask domain experts to review them after the patch is injected by AI.”
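
As a rough sketch of the simple, mechanical patch Tian describes, widening an overflow-prone 32-bit counter to 64 bits changes a single declaration. The variable and function names below are purely illustrative.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical sketch of the kind of minimal patch Tian describes.
     * Before the fix, the running total was a 32-bit integer:
     *
     *     uint32_t total = 0;
     *
     * which wraps around once more than 4 GiB of data has been counted.
     * The automated patch simply widens the variable; nothing else changes. */
    uint64_t total = 0;

    void count_bytes(size_t chunk_len) {
        total += chunk_len;  /* same logic, now safe from 32-bit wraparound */
    }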
