AI expert Gary Marcus has issued a stark warning that OpenAI may be on the verge of becoming “the most Orwellian company of all time” by pivoting toward mass surveillance capabilities. Speaking at Stanford’s Center for Human-Centered Artificial Intelligence alongside Google veteran Peter Norvig, Marcus predicted that OpenAI will be pressured to become a surveillance company as its current business model struggles to justify its massive valuation.
Marcus, an academic and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust” and “Taming Silicon Valley,” argues that OpenAI’s core AI technology isn’t reliable enough to sustain its business through enterprise licensing alone. According to Marcus, while OpenAI sold the dream of universal AI in 2023 and attracted pilot studies from major corporations, 2024 has brought widespread disappointment. The technology remains plagued by hallucinations and fundamental errors that make it unsuitable for production environments, prompting businesses to adopt a more cautious approach.
However, Marcus identifies a lucrative alternative revenue stream: surveillance. AI’s ability to synthesize vast amounts of data quickly makes it invaluable for government agencies investigating citizens or political campaigns targeting specific voters. This concern isn’t merely theoretical—OpenAI’s June appointment of former NSA director Paul Nakasone to its board sparked intense criticism and lent credibility to surveillance fears.
Whistleblower Edward Snowden called Nakasone’s appointment a “calculated betrayal of the rights of every person on Earth,” warning that the intersection of AI with decades of accumulated mass surveillance data would place “truly terrible powers in the hands of an unaccountable few.” Marcus urged OpenAI employees to voice their opposition, saying they should declare: “I don’t want to be part of this.”
This isn’t Marcus’s first criticism of OpenAI. After meeting CEO Sam Altman on Capitol Hill, Marcus found him impressive but questioned his sincerity regarding AI safety concerns. While Marcus previously hoped Altman could steer the company as a “force for good,” he now believes that possibility has evaporated. He points to OpenAI’s planned restructuring from a nonprofit to a for-profit entity as evidence that “the mask has really come off”; with the company having promised investors it will become fully for-profit, that ship has sailed on the original idealistic mission, Marcus told Business Insider. OpenAI did not respond to requests for comment.
Key Quotes
My guess is that OpenAI is going to become the most Orwellian company of all time. What they’re going to be pressed to do is become a surveillance company.
Gary Marcus, AI expert and author, made this prediction during a discussion at Stanford’s Center for Human-Centered Artificial Intelligence. This statement reflects his belief that OpenAI’s business pressures will push it toward surveillance applications as its core AI products fail to meet enterprise reliability standards.
OpenAI sold the dream of universal AI for all purposes, and in 2023 practically every big company ran pilot studies on that premise. But in 2024, a lot of reports from the field are about disappointment: The technology isn’t reliable enough yet for production, because it is plagued with problems like hallucinations and boneheaded errors.
Marcus explained to Business Insider why OpenAI’s current business model is unsustainable. This quote highlights the gap between AI hype and reality, explaining why the company might seek alternative revenue streams like surveillance.
The intersection of AI with the ocean of mass surveillance data that’s been building up over the past two decades is going to put truly terrible powers in the hands of an unaccountable few.
Edward Snowden, the famous NSA whistleblower, issued this warning when OpenAI appointed former NSA director Paul Nakasone to its board. Snowden’s statement underscores the potential dangers of combining AI capabilities with existing surveillance infrastructure.
In the last few months, the mask has really come off, and I seriously doubt he will ever return to the mission. The promise to investors to turn into a for-profit basically means that ship has sailed.
Marcus told Business Insider about his evolving view of OpenAI CEO Sam Altman and the company’s direction. This reflects his belief that OpenAI’s restructuring to a for-profit entity has definitively ended any hope of returning to its original mission of developing AI for the benefit of humanity.
Our Take
Marcus’s warning represents a sobering reality check for the AI industry at a critical juncture. The fundamental tension between OpenAI’s astronomical valuation and its technology’s actual capabilities creates dangerous incentives that could push the company toward ethically questionable applications. The surveillance pathway offers immediate monetization of AI’s genuine strength—data processing—without requiring the general intelligence that remains elusive.
What’s particularly concerning is the pattern: a company founded on utopian principles of beneficial AI potentially pivoting to surveillance capitalism under financial pressure. This mirrors broader tech industry trends where initial idealism gives way to profit maximization. The Nakasone appointment wasn’t accidental—it signals strategic positioning in the surveillance market.
The broader implication is that without strong regulatory frameworks, AI companies will naturally gravitate toward surveillance applications because that’s where the reliable revenue exists today. Marcus’s call for employee resistance may be the last check on this trajectory, as internal dissent has historically influenced tech company decisions. The AI industry must confront whether it will build tools for human flourishing or unprecedented social control.
Why This Matters
This warning from a prominent AI researcher highlights critical concerns about the commercialization and potential weaponization of advanced AI systems. As OpenAI faces pressure to justify its estimated $157 billion valuation, the company’s strategic direction could set precedents for the entire AI industry. The surveillance concerns are particularly significant given AI’s unprecedented ability to process massive datasets—a capability that could fundamentally alter the balance of power between institutions and individuals.
The appointment of a former NSA director to OpenAI’s board signals potential alignment with intelligence and surveillance interests, raising questions about the company’s commitment to its original mission of ensuring AI benefits humanity. This development comes as OpenAI transitions from its nonprofit roots to a for-profit structure, suggesting that commercial pressures may override ethical considerations.
For businesses, this represents a cautionary tale about AI’s current limitations and the gap between marketing promises and practical reliability. For society, it underscores the urgent need for robust AI governance frameworks and transparency requirements before these technologies become deeply embedded in surveillance infrastructure. The intersection of AI capabilities with existing surveillance data could create unprecedented tools for social control, making this a pivotal moment for AI policy and regulation.
Related Stories
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- OpenAI’s Valuation Soars as AI Race Heats Up
- Sam Altman’s Bold AI Predictions: AGI, Jobs, and the Future by 2025
- Outlook Uncertain as US Government Pivots to Full AI Regulations
- New York City Turns to AI-Powered Scanners in Push to Secure Subway
Source: https://www.businessinsider.com/openai-orwellian-surveillance-gary-marcus-nsa-nakasone-2024-10