AI security researcher Sander Schulhoff is sounding the alarm about a critical gap in how companies approach AI system vulnerabilities. Speaking on “Lenny’s Podcast,” Schulhoff—who authored one of the earliest prompt engineering guides—revealed that most organizations lack the specialized talent needed to understand and mitigate AI-specific security risks.
The core problem, according to Schulhoff, is that traditional cybersecurity approaches don’t translate to AI systems. While conventional security teams excel at patching software bugs and addressing known vulnerabilities, AI systems operate fundamentally differently. “You can patch a bug, but you can’t patch a brain,” Schulhoff explained, highlighting the mismatch between traditional security thinking and how large language models actually fail.
The disconnect manifests in real-world deployments where cybersecurity professionals review AI systems for technical flaws without considering adversarial manipulation. Unlike traditional software, AI systems can be exploited through language and indirect instructions—a vulnerability that requires entirely different defensive strategies. Schulhoff, who runs a prompt engineering platform and an AI red-teaming hackathon, emphasized that security professionals need to ask: “What if someone tricks the AI into doing something it shouldn’t?”
The solution requires hybrid expertise combining AI security knowledge with traditional cybersecurity skills. Professionals with this dual background would know how to contain AI-generated malicious code by running it in isolated containers, preventing system-wide compromise. Schulhoff believes this intersection represents “the security jobs of the future.”
Schulhoff also criticized the AI security startup ecosystem, warning that many companies are selling guardrails that provide false security. Because AI systems can be manipulated in countless ways, claims of comprehensive protection are misleading. “That’s a complete lie,” he stated, predicting a market correction where “revenue just completely dries up for these guardrails and automated red-teaming companies.”
Despite these concerns, investor interest in AI security remains strong. Google’s $32 billion acquisition of cybersecurity startup Wiz in March exemplifies Big Tech’s commitment to securing AI systems. CEO Sundar Pichai acknowledged that AI introduces “new risks” requiring cybersecurity solutions spanning multiple cloud environments. The growing security concerns around AI models have fueled a wave of startups offering monitoring, testing, and security tools for AI systems.
Key Quotes
You can patch a bug, but you can’t patch a brain
Sander Schulhoff used this analogy to describe the fundamental difference between traditional software vulnerabilities and AI system failures. This statement captures why conventional cybersecurity approaches are inadequate for securing AI systems that behave more like adaptive, unpredictable entities than deterministic code.
There’s this disconnect about how AI works compared to classical cybersecurity
Schulhoff identified the core problem facing organizations deploying AI systems. This disconnect means that even companies with robust cybersecurity teams may be vulnerable to AI-specific attacks that exploit the unique characteristics of large language models and other AI technologies.
That’s a complete lie
Schulhoff’s blunt assessment of AI security startups claiming their guardrails can “catch everything” reveals his skepticism about current market offerings. He predicts a market correction as companies realize these solutions don’t provide the comprehensive protection being promised, potentially leaving organizations exposed.
Against this backdrop, organizations are looking for cybersecurity solutions that improve cloud security and span multiple clouds
Google CEO Sundar Pichai made this statement when announcing the $32 billion Wiz acquisition, acknowledging that AI introduces new risks requiring evolved security approaches. This validates Schulhoff’s concerns from a Big Tech perspective and demonstrates the industry’s recognition of the AI security challenge.
Our Take
Schulhoff’s warnings reveal a uncomfortable truth about the AI boom: we’re deploying transformative technology faster than we’re developing the expertise to secure it. The “you can’t patch a brain” analogy is particularly apt—AI systems require security professionals who understand probabilistic behavior, adversarial prompting, and emergent capabilities, not just code vulnerabilities.
The predicted market correction for AI security startups is significant. It suggests the current wave of “AI security” solutions may be security theater rather than substantive protection, capitalizing on fear without delivering real value. This mirrors earlier technology hype cycles where initial solutions proved inadequate.
Most importantly, the talent gap Schulhoff identifies could become a major bottleneck for safe AI deployment. Organizations need professionals who can think like both hackers and AI researchers—a rare combination. This creates urgency for developing new training programs and certifications that bridge traditional cybersecurity and AI security, while also raising questions about whether we’re moving too fast with AI deployment before adequate security frameworks exist.
Why This Matters
This story highlights a critical vulnerability in the rapidly expanding AI ecosystem: the talent gap between traditional cybersecurity and AI-specific security needs. As organizations rush to deploy AI systems across their operations, this mismatch could expose them to unprecedented risks that conventional security measures cannot address.
The implications extend beyond individual companies. If AI systems can be manipulated through language-based attacks that traditional security teams don’t understand, entire industries could face systemic vulnerabilities. This is particularly concerning as AI becomes embedded in critical infrastructure, healthcare, finance, and other sensitive sectors.
Schulhoff’s warning about AI security startups selling ineffective guardrails suggests an impending market correction that could reshape the AI security landscape. Companies investing heavily in these solutions may find themselves inadequately protected, while the industry needs to develop more sophisticated, realistic approaches to AI security. The prediction that hybrid AI-cybersecurity expertise will define future security jobs signals a fundamental shift in workforce requirements, creating both challenges for existing professionals and opportunities for those who can bridge the gap.
Related Stories
- OpenAI Lost Nearly Half of Its AI Safety Team, Ex-Researcher Says
- How to Comply with Evolving AI Regulations
- CEOs Express Insecurity About AI Strategy and Implementation
- PwC Hosts ‘Prompting Parties’ to Train Employees on AI Usage
- Business Leaders Share Top 3 AI Workforce Predictions for 2025
Source: https://www.businessinsider.com/ai-security-gap-companies-researcher-sander-schulhoff-2025-12