A prominent physicist is challenging the AI industry’s focus on futuristic catastrophe scenarios, arguing that doomsday narratives are allowing companies to evade responsibility for the tangible harms their technology is causing today. Tobias Osborne, a professor of theoretical physics at Leibniz Universität Hannover and cofounder of scientific communication firm Innovailia, published an essay this week asserting that debates about superintelligent machines and a hypothetical “singularity” have become a dangerous distraction from real, measurable problems.
While policymakers and technologists debate whether AI could threaten humanity’s survival, Osborne argues the industry is inflicting “real harm right now. Today. Measurably.” He puts it bluntly: “The apocalypse isn’t coming. Instead, the dystopia is already here.”
The Strategic Value of Doomsday Narratives
Osborne explains that by framing themselves as guardians against civilizational catastrophe, AI firms are treated like national-security actors rather than product vendors, which dilutes liability and discourages ordinary regulation. This strategic positioning allows companies to externalize harm while benefiting from regulatory deference, secrecy, and public subsidies. The physicist notes that apocalypse-style narratives persist because they are easy to market, difficult to falsify, and help shift corporate risk onto the public.
Present-Day Harms Being Overlooked
The essay catalogs the AI industry’s current harms: exploitation of low-paid workers who label AI training data, mass scraping of artists’ and writers’ work without consent, the environmental cost of energy-hungry data centers, and a flood of AI-generated content that undermines trustworthy information online. Osborne also counts psychological harm linked to chatbot use and widespread copyright and data expropriation among the most overlooked risks.
Regulatory Divergence
While the European Union is implementing the AI Act—a sweeping regulatory framework phasing in stricter rules through 2026—the United States is moving in the opposite direction. Federal efforts are focused on limiting state-level AI regulation and keeping national standards “minimally burdensome.”
Physics Over Fiction
Osborne dismisses claims of a runaway intelligence explosion as “a religious eschatology dressed up in scientific language,” arguing such scenarios collapse when confronted with physical limits like energy consumption and thermodynamics. He advocates for applying existing product liability and duty-of-care laws to AI systems, forcing companies to take responsibility for real-world impacts while acknowledging genuine benefits large language models offer, particularly for people with disabilities.
Key Quotes
The apocalypse isn’t coming. Instead, the dystopia is already here.
Tobias Osborne, professor of theoretical physics at Leibniz Universität Hannover, uses this stark statement to reframe the AI debate away from speculative future threats toward the measurable harms AI systems are causing in the present day.
By framing themselves as guardians against civilizational catastrophe, AI firms are treated like national-security actors rather than product vendors, which dilutes liability and discourages ordinary regulation.
Osborne explains to Business Insider how doomsday narratives serve a strategic corporate purpose, allowing AI companies to avoid the accountability standards applied to typical consumer product manufacturers.
These aren’t engineering problems waiting for clever solutions. They’re consequences of physics.
The physicist dismisses claims about runaway AI intelligence explosions by pointing to fundamental physical constraints like energy consumption and thermodynamics that would prevent such scenarios, calling them “a religious eschatology dressed up in scientific language.”
The real problems are the very ordinary, very human problems of power, accountability, and who gets to decide how these systems are built and deployed.
Osborne concludes his essay by redirecting focus from science fiction scenarios to fundamental governance questions about corporate power and democratic oversight of transformative technology.
Our Take
Osborne’s intervention is particularly timely as the AI industry faces growing scrutiny over copyright infringement, environmental impact, and labor practices. His physics background lends credibility to his dismissal of “singularity” scenarios—these aren’t philosophical objections but arguments grounded in thermodynamic reality. The comparison to national-security actors is especially insightful, explaining why AI companies receive treatment typically reserved for defense contractors despite selling commercial products. This framing has allowed firms to operate with minimal transparency while externalizing costs onto workers, creators, and the environment. The transatlantic regulatory divide he highlights will be crucial: if the EU’s AI Act proves effective while US regulation remains minimal, we may see a “Brussels Effect” where European standards become global defaults. Osborne’s call for applying existing product liability law is pragmatic—we don’t need new legal frameworks, just the political will to enforce existing ones.
Why This Matters
This critique represents a significant shift in the AI accountability debate, challenging the dominant narrative that has shaped policy discussions for years. Osborne’s argument matters because it exposes how speculative future threats may be weaponized to avoid present-day regulation—a pattern with profound implications for how AI companies operate and are governed.
The framing of AI firms as national-security actors rather than commercial product vendors has real consequences: it grants them extraordinary deference, reduces transparency requirements, and weakens consumer protection mechanisms. This matters for businesses facing AI-driven disruption, workers whose livelihoods are affected by automation, and creators whose work is scraped without compensation.
The regulatory divergence between the EU and US highlighted in this story will shape competitive dynamics and determine which jurisdiction becomes the global standard-setter for AI governance. As AI systems become more embedded in daily life—from healthcare to education to employment—the question of whether companies face accountability for demonstrable harms or continue operating under the shield of hypothetical catastrophe scenarios will define the technology’s social impact for decades to come.