OpenAI’s latest o1 models, officially released Thursday as part of the company’s Shipmas campaign, are reigniting debates about whether we’ve already achieved artificial general intelligence (AGI) without realizing it. The models are designed to “spend more time thinking before they respond,” representing a significant advancement in AI reasoning capabilities.
Wharton professor and AI expert Ethan Mollick suggests that o1 demonstrates how AGI-level systems might arrive gradually rather than as a dramatic breakthrough. “Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed,” Mollick wrote on X. He argues that most people won’t recognize AGI because their daily tasks don’t push the limits of human intelligence.
To chart progress toward AGI, Mollick has proposed a tier system in which lower numbers denote more capable machines, ranging from Tier 4 (“Co-intelligence,” where humans and AI work together) to Tier 1 (machines capable of performing any task better than humans). Tier 3, “Artificial Focused Intelligence,” describes AI that outperforms average human experts at specific, intellectually demanding tasks, while Tier 2, “Weak AGI,” would be machines that outperform humans at all tasks within specific jobs, though no such systems currently exist.
Vahid Kazemi, a member of OpenAI’s technical staff, went further, stating: “In my opinion, we have already achieved AGI and it’s even more clear with o1. We have not achieved ‘better than any human at any task,’ but what we have is ‘better than most humans at most tasks.’” This perspective suggests AGI may be defined not by perfection across all domains, but by broad competence across most tasks.
However, more conservative AI experts urge caution. Meta’s chief AI scientist Yann LeCun emphasized in a March podcast appearance that AGI won’t arrive as a single dramatic event. “It’s not going to be an event. It’s going to be gradual progress,” LeCun said, pushing back against Hollywood-style narratives of sudden AI breakthroughs. The debate highlights fundamental disagreements within the AI community about how to define and recognize AGI when it arrives.
Key Quotes
“Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed. Most folks don’t have a lot of tasks that bump up against limits of human intelligence, so won’t see it.”
Wharton professor Ethan Mollick explains why AGI might arrive without fanfare, suggesting that most people’s daily tasks don’t require the upper limits of human intelligence, making transformative AI capabilities less noticeable in everyday life.
“In my opinion, we have already achieved AGI and it’s even more clear with o1. We have not achieved ‘better than any human at any task,’ but what we have is ‘better than most humans at most tasks.’”
Vahid Kazemi, a member of OpenAI’s technical staff, argues that current AI systems already meet a practical definition of AGI based on broad competence rather than universal superiority, representing a more optimistic view within the AI community.
“It’s not going to be an event. It’s going to be gradual progress.”
Meta’s chief AI scientist Yann LeCun pushes back against Hollywood narratives of sudden AGI breakthroughs, arguing that human-level AI will arrive through incremental progress rather than in a single dramatic moment.
Our Take
The debate over the o1 models reveals a fascinating paradox: we may be living through the most significant technological transition in human history without recognizing it. The disagreement between Kazemi’s “we’re already there” and LeCun’s “gradual progress” perspectives isn’t just semantic; it reflects fundamentally different views of what constitutes intelligence and how we measure machine capabilities against human performance.
What’s particularly striking is Mollick’s observation that AGI might be invisible to most people because their work doesn’t test the boundaries of human cognition. This points to an emerging two-tier society: those whose work involves cutting-edge intellectual challenges will immediately recognize AI’s transformative power, while others may not notice until automation directly affects their roles. The incremental nature of this transition may actually prove more disruptive than a sudden breakthrough, because it prevents society from mobilizing a coordinated response to changes already underway.
Why This Matters
This debate over OpenAI’s o1 models marks a critical inflection point in how we understand progress toward AGI. It matters because it challenges our expectations of what AGI will look like when it arrives: not a dramatic “Terminator-style” awakening, but gradual, incremental improvements that may already be here.
For businesses and workers, this has profound implications. If we’re already at or near AGI for many intellectual tasks, organizations need to accelerate their AI adoption strategies rather than waiting for some future breakthrough. The economic and workforce impacts could be more immediate than anticipated, requiring faster adaptation in education, training, and job roles.
The disagreement among leading AI experts also highlights the lack of consensus on fundamental definitions in the field. Whether we call current systems AGI or not affects everything from regulatory approaches to investment decisions to public perception of AI capabilities and risks. As Mollick suggests, the most transformative technology of our era might arrive so gradually that society fails to recognize or prepare for its full impact until it’s already deeply embedded in our lives.
Related Stories
- Sam Altman’s Bold AI Predictions: AGI, Jobs, and the Future by 2025
- Artificial General Intelligence Could Arrive by 2024, According to AI Experts
- OpenAI CEO Sam Altman Hints at Potential Restructuring in 2024
- OpenAI’s Valuation Soars as AI Race Heats Up
- The AI Hype Cycle: Reality Check and Future Expectations
Source: https://www.businessinsider.com/artificial-general-intelligence-prediction-ethan-mollick-2024-12