As artificial intelligence adoption accelerates across industries, a critical gap in AI security and data privacy has emerged, creating both significant risks and opportunities for innovative startups. The proliferation of large language models (LLMs) has introduced new vulnerabilities, particularly around sensitive data leakage and adversarial attacks that threaten companies in highly regulated sectors like finance and healthcare.
The core challenge centers on how AI models are trained on massive datasets that often contain personal or confidential information. Ashish Kakran of Thomvest Ventures highlights a fundamental problem: employees can easily share confidential information with LLMs, and these models currently lack a “forget it” button to remove that data. This creates serious compliance and security concerns for enterprises deploying AI systems in production environments.
A growing ecosystem of security-focused AI startups has emerged to address these threats. Companies like Opaque Systems (backed by Thomvest) enable secure data sharing through confidential computing platforms. Credo AI has raised $41.3 million to provide AI governance solutions that help companies measure and monitor AI risks responsibly. Zendata, with over $3.5 million in funding, focuses specifically on preventing sensitive data leakage when integrating AI into enterprise workflows.
CEO Narayana Pappu of Zendata warns that companies often don’t realize when “shadow AI applications” are deployed, potentially sending user information to unauthorized systems. Even more concerning, customers may be unaware that their shared information could be used to train broader AI models, creating cross-use information leakage problems.
Adversarial attacks represent another growing threat vector. Bessemer Venture Partners’ Lauri Moore notes that security leaders frequently worry about AI tools introducing critical vulnerabilities or “trap doors” for bad actors. A nightmare scenario involves coding agents introducing security flaws into production systems. Prompt injection attacks, which trick models into producing harmful outputs, have become increasingly prevalent with prompt-based language models.
Several startups are pioneering solutions through continuous monitoring approaches. Protect AI raised $60 million in Series B funding to monitor and manage security across the AI supply chain. HiddenLayer secured $50 million in Series A funding for automated threat detection and response. Haize Labs tackles prompt injection vulnerabilities through AI red-teaming, stress-testing LLMs for weaknesses. Meanwhile, enterprise AI unicorn Glean, valued at $4.6 billion, has built security into its core product with strict permissions-aware AI assistants that only access authorized information.
Key Quotes
It’s so easy for an employee to take something that’s confidential and share it with an LLM. LLMs do not have, as of right now, a forget it kind of button…You need safeguards and controls around all of this in the way these LLMs are deployed in production.
Ashish Kakran of Thomvest Ventures explains the fundamental security challenge with large language models, highlighting how the inability to delete or “forget” data creates serious compliance risks for enterprises deploying AI systems.
Companies don’t really know if some shadow AI applications are in place and a bunch of user information is being sent to that. There’s a cross-use of information. There’s information leakage, all of that. And that’s a huge concern with the foundation models or even copilots.
Zendata CEO Narayana Pappu warns about the hidden dangers of unauthorized AI deployments within organizations and how user data may be unknowingly used to train broader AI models, creating serious privacy and security vulnerabilities.
Glean’s AI assistant is fully permissions-aware and personalized, only sourcing information the user has explicit access to.
Arvind Jain, CEO of the $4.6 billion enterprise AI startup Glean, describes how building security principles into core products ensures users only access authorized information, preventing unintended data exposure.
This shift towards automated, continuous evaluation represents a significant evolution in AI safety practices, moving beyond periodic manual assessments to a more proactive and comprehensive approach.
Arvind Ayyala, partner at Geodesic Capital, explains how the AI security industry is evolving from reactive, manual security checks to automated, continuous monitoring systems that can detect threats in real-time across the AI supply chain.
Our Take
The emergence of a dedicated AI security startup ecosystem mirrors the early days of cloud computing, when companies initially rushed to adopt new technology before security infrastructure matured. What’s particularly noteworthy is the diversity of approaches—from confidential computing and governance platforms to red-teaming and continuous monitoring—suggesting that AI security requires multi-layered defenses rather than single solutions.
The “shadow AI” problem Pappu describes is especially concerning because it represents a governance gap that traditional IT security tools weren’t designed to address. As AI capabilities become embedded in everyday productivity tools, the attack surface expands exponentially. The substantial venture funding flowing into this sector indicates that investors view AI security not as a niche concern but as foundational infrastructure for the next decade of enterprise technology. Companies that integrate security from the ground up, like Glean, will likely have significant competitive advantages over those treating it as an afterthought.
Why This Matters
This development represents a critical inflection point in AI adoption, where security concerns could either accelerate or significantly hinder enterprise implementation of artificial intelligence. As companies rush to integrate AI capabilities, the security infrastructure must evolve simultaneously to prevent catastrophic data breaches, regulatory violations, and adversarial exploits.
For highly regulated industries like healthcare and finance, inadequate AI security could result in massive FINES, legal liability, and loss of customer trust. The emergence of specialized security startups signals that the market recognizes these risks and is actively building solutions, which should provide confidence to enterprises hesitant about AI adoption.
The $150+ million in combined funding raised by AI security startups mentioned in this article demonstrates strong investor conviction that this sector will become essential infrastructure. As AI becomes more deeply embedded in business operations, the security layer will likely become as critical as cybersecurity is today. Companies that fail to implement proper AI security controls risk not only immediate data breaches but also long-term competitive disadvantage as regulations tighten and customer expectations around data privacy increase.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources:
Recommended Reading
Related Stories
- Outlook Uncertain as US Government Pivots to Full AI Regulations
- Photobucket is licensing your photos and images to train AI without your consent, and there’s no easy way to opt out
- Tech Tip: How to Spot AI-Generated Deepfake Images
- Elon Musk Warns of Potential Apple Ban on OpenAI’s ChatGPT
Source: https://www.businessinsider.com/security-threats-ai-models-rise-new-startups-2024-10