Black Duck Uses AI to Accelerate Software Security Detection

Black Duck Software, formerly Synopsys Software Integrity Group, has successfully integrated generative AI into its cybersecurity operations to dramatically accelerate the delivery of critical security advisories to customers. The Burlington, Massachusetts-based company, which employs approximately 2,000 people, provides security testing, audits, and risk assessments to help organizations protect their software.

Beth Linker, senior director of product management for AI and static application security testing at Black Duck, said the company began deploying generative AI this spring to speed the creation and distribution of Black Duck Security Advisories (BDSAs), the notifications that alert customers to software vulnerabilities and potential exploits. The initiative grew out of a pressing industry problem: the National Vulnerability Database, a government cybersecurity resource, developed a significant backlog and began publishing fewer vulnerability reports. At the same time, the Linux kernel project started flagging substantially more security issues, leaving customers facing more threats with less support.

“The net effect was that all of a sudden you had a much larger number of vulnerabilities and less support from the National Vulnerability Database,” Linker explained. “This is something that was making things a lot harder for our customers because they were not able to get all the info that they were used to receiving.”

The AI implementation involves Black Duck’s engineering and research teams working with commercially available large language models (LLMs). The company developed specialized prompts that query internal data through these LLMs to compile advisory reports—a process previously done manually. Importantly, human oversight remains integral: researchers review each AI-generated report before customer distribution to ensure quality and accuracy, as “hallucinations are a risk,” according to Linker.
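
To make that workflow concrete, here is a minimal Python sketch of what such a pipeline could look like. Black Duck has not published its code, so the VulnerabilityRecord fields, the prompt wording, and the call_llm() client below are illustrative assumptions, not the company’s actual implementation.

```python
# Hypothetical sketch of an LLM-assisted advisory pipeline with a human
# review gate. Black Duck has not published its implementation; the
# VulnerabilityRecord fields, prompt wording, and call_llm() client are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VulnerabilityRecord:
    cve_id: str
    component: str
    research_notes: str  # raw notes from internal vulnerability data

def build_prompt(record: VulnerabilityRecord) -> str:
    """Assemble a structured prompt from internal vulnerability data."""
    return (
        "Draft a customer-facing security advisory.\n"
        f"CVE: {record.cve_id}\n"
        f"Affected component: {record.component}\n"
        f"Research notes: {record.research_notes}\n"
        "Include: summary, impact, and remediation guidance."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a commercial LLM API call (provider unspecified)."""
    raise NotImplementedError("wire up an LLM provider here")

def draft_advisory(record: VulnerabilityRecord) -> dict:
    """Generate a draft and hold it for mandatory human review."""
    draft = call_llm(build_prompt(record))
    # Nothing ships until a researcher approves the draft, because
    # hallucinations are a known risk of LLM-generated text.
    return {"cve_id": record.cve_id, "draft": draft, "status": "pending_review"}
```

The key design point is the “pending_review” status: the LLM accelerates drafting, but a human researcher remains the release gate.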

The results have been impressive. Between March and October, Black Duck created more than 5,200 AI-powered BDSAs, roughly five times the monthly notification volume it produced before adopting AI. “We’ve been able to really scale this up to meet the need,” Linker noted.

Looking forward, Black Duck recently unveiled Polaris Assist, an AI-powered security assistant currently in beta testing. This platform enhancement combines existing application security tools with LLMs to provide automated vulnerability summaries and code remediation suggestions, helping security and development teams work more efficiently. Beta testing is expected to conclude by year-end.
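
As a rough illustration of that pattern, the sketch below pairs a static analysis finding with an LLM prompt that asks for a plain-language summary and a minimal fix. The Finding fields and the prompt are assumptions made for illustration; they are not the Polaris Assist API.

```python
# Illustrative only: pairing a SAST finding with an LLM to get a
# plain-language summary and a remediation suggestion. The Finding
# fields and call_llm() below are assumptions, not Polaris Assist code.
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str    # e.g. "sql-injection" (hypothetical rule name)
    file_path: str
    line: int
    snippet: str    # the flagged source code

def remediation_prompt(finding: Finding) -> str:
    """Build a prompt asking for a summary plus a minimal fix."""
    return (
        f"A static analysis scan flagged rule '{finding.rule_id}' at "
        f"{finding.file_path}:{finding.line}.\n"
        f"Code:\n{finding.snippet}\n"
        "Explain the vulnerability in two sentences, then propose a "
        "minimal patched version of the code."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whichever commercial LLM the product uses."""
    raise NotImplementedError("wire up an LLM provider here")
```

In a real product, the suggested patch would presumably be surfaced to developers for review rather than applied automatically, consistent with the human-oversight model described above.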

Key Quotes

“The net effect was that all of a sudden you had a much larger number of vulnerabilities and less support from the National Vulnerability Database. This is something that was making things a lot harder for our customers because they were not able to get all the info that they were used to receiving.”

Beth Linker, senior director of product management for AI and static application security testing at Black Duck, explained the industry challenge that prompted the company’s AI implementation: a surge in vulnerabilities arriving just as government support resources declined.

“Hallucinations are a risk, and everything we put in front of our customers has to meet a certain standard of quality.”

Linker emphasized the importance of human oversight in AI-generated security advisories, acknowledging the limitations of current LLM technology and underscoring the company’s commitment to accuracy in security-critical communications.

“We’ve been able to really scale this up to meet the need.”

Linker described the success of the AI implementation after Black Duck created over 5,200 AI-powered security advisories between March and October, reaching roughly five times its previous monthly notification volume.

“A lot of that boils down to how can we make application security testing and remediation easier, faster, and more scalable?”

Linker outlined Black Duck’s ongoing AI investment strategy, focusing on practical improvements to security workflows as the company continues developing tools like Polaris Assist.

Our Take

Black Duck’s AI implementation represents a mature approach to enterprise AI adoption that prioritizes measurable outcomes over technological novelty. The five-fold productivity increase provides concrete evidence that generative AI can deliver substantial operational improvements when applied to well-defined problems. Particularly noteworthy is the company’s transparent acknowledgment of AI limitations, specifically hallucination risks, and its implementation of human review processes. This hybrid model may become the template for AI deployment in regulated or high-stakes industries where accuracy is non-negotiable. The transition from internal AI tools to customer-facing products like Polaris Assist also illustrates a common enterprise AI maturity path: prove value internally, then productize. As cybersecurity threats accelerate faster than human analysts can process them, AI-augmented security operations will shift from competitive advantage to baseline requirement. Black Duck’s early success positions the company favorably in this evolving landscape.

Why This Matters

This case study exemplifies how enterprise AI adoption is solving real-world business challenges beyond experimental use cases. Black Duck’s success demonstrates AI’s capacity to address critical infrastructure gaps—in this case, government database backlogs—that directly impact cybersecurity operations across industries. The five-fold increase in advisory delivery showcases measurable ROI from AI implementation, providing a concrete benchmark for other security companies considering similar investments.

The story also highlights the emerging hybrid human-AI workflow model that’s becoming standard in high-stakes applications. By maintaining human oversight to prevent AI hallucinations while leveraging automation for scale, Black Duck illustrates responsible AI deployment in security-critical contexts. As software vulnerabilities proliferate and cyber threats intensify, AI-powered security tools will become increasingly essential for organizations to maintain adequate protection. The development of Polaris Assist signals the next evolution: moving from internal operational efficiency to customer-facing AI products that democratize advanced security capabilities across development teams of varying expertise levels.

Source: https://www.businessinsider.com/black-duck-is-using-ai-for-software-security-detection-2024-11