Ex-OpenAI AGI Chief: AI Systems Will Do Any Remote Job Within Years

Miles Brundage, former head of policy research and AGI readiness at OpenAI, has provided a striking timeline for artificial general intelligence development, predicting that within the next few years, AI systems will be capable of performing “anything a person can do remotely on a computer.” Speaking on the Hard Fork tech podcast, Brundage outlined a future where AI can operate mouse and keyboard functions and even convincingly appear as a human in video chats.

Brundage’s predictions align with other prominent voices in the AI industry. John Schulman, OpenAI cofounder and research scientist who departed in August, similarly believes AGI is only a few years away. Dario Amodei, CEO of OpenAI competitor Anthropic, has suggested an even more aggressive timeline, projecting some form of AGI could emerge as soon as 2026.

After spending over six years at OpenAI, Brundage announced his departure last month. During his tenure, he played a crucial role in advising executives and board members on AGI preparedness and was instrumental in developing key safety research innovations, including external red teaming—a process that brings outside experts to identify potential problems in AI products before release.

The timing of Brundage’s exit comes amid a wave of high-profile departures from OpenAI, particularly among safety researchers and executives. Some of these departures have been accompanied by concerns about the company’s balance between rapid AGI development and adequate safety measures. However, Brundage emphasized that his decision to leave was not driven by specific safety concerns about OpenAI.

“I’m pretty confident that there’s no other lab that is totally on top of things,” Brundage told Hard Fork, suggesting that safety challenges are industry-wide rather than specific to OpenAI. Instead, his departure was motivated by a desire for greater independence and broader impact. He explained two primary reasons: first, his inability to work on cross-cutting industry issues beyond OpenAI’s internal policies, including broader regulatory questions; and second, his wish to be perceived as independent rather than as a “corporate hype guy.”

Brundage stressed the need for governments to prepare for these technological shifts, urging policymakers to consider implications for taxation, education investment, and workforce development as AI systems become capable of performing remote work at human levels.

Key Quotes

Governments should be thinking about what that means in terms of sectors to tax and education to invest in

Miles Brundage emphasized the need for proactive government planning in response to AI systems that will soon be capable of performing any remote computer work, highlighting the urgent policy implications of near-term AGI development.

I’m pretty confident that there’s no other lab that is totally on top of things

Brundage’s candid assessment suggests that safety challenges in AGI development are industry-wide problems, not limited to OpenAI, indicating that no AI company has fully solved the challenge of ensuring safe development as capabilities rapidly advance.

One is that I wasn’t able to work on all the stuff that I wanted to, which was often cross-cutting industry issues. So not just what do we do internally at OpenAI, but also what regulation should exist and so forth

Brundage explained his departure from OpenAI, revealing that his role was too narrowly focused on internal company issues when he wanted to address broader regulatory and industry-wide policy questions about AGI development.

I didn’t want to have my views rightly or wrongly dismissed as this is just a corporate hype guy

Brundage articulated his desire for independence, recognizing that his position at OpenAI could undermine his credibility as a policy advocate, demonstrating awareness of potential conflicts between corporate interests and objective safety research.

Our Take

Brundage’s departure and predictions represent a critical inflection point for the AI industry. His timeline—systems capable of any remote work within years—isn’t science fiction but reflects insider knowledge of actual development trajectories. What’s particularly striking is his decision to leave OpenAI specifically to gain independence for policy advocacy, suggesting that the most pressing AGI challenges may be regulatory and societal rather than purely technical. The convergence of timelines from multiple AI leaders (2026-2028) should serve as a wake-up call for policymakers, educators, and business leaders who have treated AGI as a distant concern. The admission that no lab is “totally on top of things” regarding safety is perhaps the most concerning revelation, indicating that the industry may be approaching transformative capabilities without adequate safeguards in place. This underscores the urgent need for external oversight, robust regulation, and independent safety research—exactly the work Brundage now seeks to pursue outside corporate constraints.

Why This Matters

This story carries significant weight for multiple reasons. First, it provides insider perspective from someone who was directly responsible for preparing OpenAI—the world’s most prominent AI company—for AGI. Brundage’s timeline suggests that transformative AI capabilities are imminent, not distant, which has profound implications for workforce planning, education systems, and economic policy.

The convergence of predictions from multiple AI leaders (Brundage, Schulman, Amodei) suggests this isn’t speculative hype but reflects genuine technical progress toward systems that can perform any remote knowledge work. This could fundamentally reshape labor markets, potentially displacing millions of remote workers while creating new opportunities and challenges.

The ongoing exodus of safety researchers from OpenAI, even as the company races toward AGI, raises critical questions about whether safety measures are keeping pace with capability development. Brundage’s acknowledgment that “no other lab that is totally on top of things” regarding safety suggests the entire industry may be moving faster than its ability to ensure safe deployment. His decision to leave for independent policy work also highlights a growing need for external oversight and regulation as AI capabilities approach human-level performance across domains.

For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources:

Source: https://www.businessinsider.com/openais-former-head-of-agi-talks-about-where-were-headed-2024-11