In a striking revelation about the race toward artificial general intelligence (AGI), Alexander Embiricos, who leads product development for Codex at OpenAI, has identified an unexpected obstacle: human typing speed. Speaking on “Lenny’s Podcast” on Sunday, Embiricos argued that the “current underappreciated limiting factor” in AGI progress is “human typing speed,” along with “human multi-tasking speed on writing prompts.”
AGI represents a theoretical milestone where AI systems can reason at or beyond human capability across all domains. It’s the ultimate prize that major AI companies including OpenAI, Google DeepMind, and Anthropic are racing to achieve first. Embiricos’ perspective highlights a fundamental paradox in current AI development: while AI agents can potentially work at superhuman speeds, they remain bottlenecked by human operators who must write prompts and validate their output.
“You can have an agent watch all the work you’re doing, but if you don’t have the agent also validating its work, then you’re still bottlenecked on, like, can you go review all that code?” Embiricos explained. His solution involves rebuilding systems to allow AI agents to operate autonomously by default, reducing human intervention in the workflow.
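The loop Embiricos gestures at can be pictured in a few lines of code. The sketch below is purely illustrative, assuming hypothetical `generate` and `validate` callables; it is not OpenAI’s implementation, just one way a “default autonomous” agent might self-validate and escalate to a human only on failure:

```python
# Hypothetical sketch of a self-validating agent loop.
# `generate` and `validate` are assumed callables, e.g. an agent that
# writes code and then runs its own tests/linters on the result.

def run_autonomous_task(task, generate, validate, max_attempts=3):
    """Generate work and self-validate; escalate to a human only on failure."""
    for _ in range(max_attempts):
        draft = generate(task)          # agent produces the work
        ok, feedback = validate(draft)  # agent checks its own output
        if ok:
            return draft                # shipped with no human in the loop
        # Feed the validation feedback back into the next attempt.
        task = f"{task}\nFix this issue: {feedback}"
    raise RuntimeError("Self-validation failed; escalate to human review.")
```

In this framing, the human reviews only the exceptions rather than every piece of output, which is where the claimed productivity unlock would come from.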
The OpenAI executive predicts this transformation will unlock “hockey stick growth”—a term describing exponential acceleration after an initial flat period. He envisions a phased rollout beginning in 2025: “Starting next year, we’re going to see early adopters starting to hockey stick their productivity, and then over the years that follow, we’re going to see larger and larger companies hockey stick that productivity.”
Embiricos acknowledges there’s no simple path to fully automated workflows, noting that each use case will require customized approaches. However, he believes AGI will emerge somewhere between when early adopters achieve productivity gains and when tech giants fully automate their processes with AI agents. “That hockey-sticking will be flowing back into the AI labs, and that’s when we’ll basically be at the AGI,” he stated.
This perspective from a senior OpenAI product leader offers rare insight into how the company views the practical barriers to AGI development—suggesting the challenge isn’t just about building smarter AI, but about creating systems that can operate independently without constant human oversight.
Key Quotes
“You can have an agent watch all the work you’re doing, but if you don’t have the agent also validating its work, then you’re still bottlenecked on, like, can you go review all that code?”
Alexander Embiricos, OpenAI’s Codex product lead, explained the fundamental limitation of current AI agent workflows—humans remain the bottleneck for validation and quality control, even when AI can generate work at superhuman speeds.
“If we can rebuild systems to let the agent be default useful, we’ll start unlocking hockey sticks.”
Embiricos outlined his vision for removing human bottlenecks by creating systems where AI agents operate autonomously by default, which he believes will trigger exponential productivity growth.
“Starting next year, we’re going to see early adopters starting to hockey stick their productivity, and then over the years that follow, we’re going to see larger and larger companies hockey stick that productivity.”
The OpenAI executive provided a specific timeline for when autonomous AI agents will begin delivering transformative productivity gains, starting with early adopters in 2025 and scaling to larger enterprises thereafter.
“That hockey-sticking will be flowing back into the AI labs, and that’s when we’ll basically be at the AGI.”
Embiricos described how productivity gains from autonomous AI agents will create a feedback loop that accelerates AI development itself, ultimately leading to AGI—suggesting the breakthrough may come from practical deployment rather than pure research.
Our Take
Embiricos’ framing of human typing speed as AGI’s bottleneck is both revealing and concerning. It exposes OpenAI’s philosophy that AGI requires removing humans from the loop entirely—not just augmenting human capabilities, but replacing human oversight. This represents a significant departure from earlier “human-in-the-loop” AI safety approaches.
The confidence in his 2025 timeline for early productivity breakthroughs suggests OpenAI has internal data showing its agents are already capable of autonomous operation in controlled environments. However, the rush to eliminate human validation raises serious safety questions. Code review is not merely a throughput bottleneck; it exists to catch security flaws, ethical problems, and edge cases AI might miss.
Most striking is his vision of a recursive improvement loop where deployed AI agents accelerate AI research itself—essentially describing an intelligence explosion scenario that AI safety researchers have long warned about. If OpenAI is betting on this feedback mechanism to achieve AGI, the timeline to transformative AI may be shorter than most anticipate.
Why This Matters
This statement from an OpenAI executive reveals a critical shift in how AI leaders conceptualize the path to AGI. Rather than focusing solely on model capabilities, the industry is now grappling with human-AI workflow integration as a fundamental bottleneck. This has profound implications for software development, knowledge work, and the broader economy.
Embiricos’ vision of AI agents that validate their own work represents a move toward autonomous AI systems that could dramatically accelerate productivity—but also raises significant questions about oversight, safety, and job displacement. If AI systems no longer require human validation, entire categories of quality assurance and review work could be automated.
The timeline Embiricos suggests—with early adopters seeing “hockey stick” productivity gains starting in 2025—indicates OpenAI believes practical AGI applications are imminent, not decades away. This accelerated timeline should concern policymakers, business leaders, and workers who may have underestimated how quickly AI could transform workflows. The feedback loop he describes, in which productivity gains come “flowing back into the AI labs,” suggests a potential exponential acceleration once autonomous agents prove their value in real-world applications.