Matt Fitzpatrick, CEO of Invisible Technologies, has pushed back against the widespread belief that synthetic data and artificial intelligence will soon replace human workers in AI training. Speaking on the “20VC” podcast, Fitzpatrick addressed what he considers one of the biggest misconceptions in the AI training industry: that human feedback will become obsolete within a few years.
Invisible Technologies, which raised $100 million in September 2024 at a $2 billion valuation, operates in the competitive data labeling space alongside companies like Scale AI and Surge AI. These startups have collectively raised billions as tech giants scramble to secure high-quality training data for their AI models.
Fitzpatrick, who previously served as a senior partner at McKinsey, where he led QuantumBlack Labs (the firm’s AI research and software development arm), argues that the complexity and diversity of real-world tasks make human involvement indispensable. “When I first started this job, the main pushback I always got was that synthetic data will take over and you just will not need human feedback two to three years from now,” he explained. “From first principles, that actually doesn’t make very much sense.”
Synthetic data refers to artificially created information used to train AI models when real data is scarce or restricted by privacy concerns. While it has its applications, Fitzpatrick contends it cannot replace the nuanced understanding that human workers bring to AI training, particularly regarding language, cultural context, and specialized knowledge.
The CEO pointed to industries like legal services, which contain vast amounts of nonpublic information that requires human expertise to process and contextualize. “On the GenAI side, you are going to need humans in the loop for decades to come,” Fitzpatrick stated. “And I think that is something that most people are starting to realize.”
This perspective is shared across the data labeling industry. Brendan Foody, CEO of Mercor, emphasized in September that data quality depends on “having phenomenal people that you treat incredibly well.” Meanwhile, Garrett Lord, CEO of Handshake (a job platform that pivoted to AI training), noted that while humans remain essential, the industry is shifting from generalists to highly specialized experts in fields like mathematics and science. Millions of human contractors currently teach AI models everything from coding and math to more abstract qualities like humor and empathy.
Key Quotes
“When I first started this job, the main pushback I always got was that synthetic data will take over and you just will not need human feedback two to three years from now. From first principles, that actually doesn’t make very much sense.”
Matt Fitzpatrick, CEO of Invisible Technologies, challenges the common assumption that AI training will soon become fully automated, arguing that the fundamental logic doesn’t support this prediction.
“On the GenAI side, you are going to need humans in the loop for decades to come. And I think that is something that most people are starting to realize.”
Fitzpatrick offers his own timeline for human involvement in AI training, suggesting the industry is beginning to recognize the long-term necessity of human workers despite earlier predictions of rapid automation.
“Now these models have kind of sucked up the entirety of the entire corpus of the internet and every book and video. They’ve gotten good enough where, like, generalists are no longer needed.”
Garrett Lord, CEO of Handshake, explains how AI advancement is changing the type of human workers needed—shifting from generalists to highly specialized experts in fields like mathematics and science.
“The most important aspect of the business was data quality and having phenomenal people that you treat incredibly well.”
Brendan Foody, CEO of Mercor, emphasizes that human talent and worker treatment remain central to producing high-quality AI training data, reinforcing the human-centric nature of the industry.
Our Take
Fitzpatrick’s perspective represents a pragmatic counterpoint to Silicon Valley’s tendency toward technological determinism. While synthetic data has legitimate applications, his argument highlights a fundamental truth: AI models can only be as good as their training data, and complex real-world contexts require human judgment that current AI cannot replicate. The $2 billion valuation of Invisible Technologies suggests investors agree with this assessment. However, the shift toward specialized experts rather than generalists indicates the industry isn’t static—it’s evolving toward higher-skilled, better-compensated roles. This could actually improve working conditions in an industry often criticized for exploitative labor practices. The real question isn’t whether humans will be needed, but rather which humans and under what conditions. As AI capabilities expand, the bar for human contribution rises, potentially creating a more professional and sustainable workforce model.
Why This Matters
This story highlights a critical debate in the AI industry about the future of work and the sustainability of current AI training methods. As generative AI continues to advance, questions about whether machines can train themselves have significant implications for millions of contract workers globally who currently perform data labeling and AI training tasks.
Fitzpatrick’s assertion that humans will remain essential “for decades” challenges the narrative that AI will rapidly automate itself out of human dependency. This has major economic implications for the data labeling industry, which has attracted billions in venture capital investment. It also suggests that concerns about AI completely replacing human workers may be overstated, at least in the near term.
The shift from generalist to specialist workers, as noted by Handshake’s CEO, indicates an evolution rather than elimination of human roles in AI development. This trend could create new opportunities for highly educated professionals while potentially displacing lower-skilled workers. For businesses investing in AI infrastructure, this means budgeting for ongoing human expertise rather than expecting fully automated training pipelines. The debate also underscores fundamental questions about AI’s limitations and the irreplaceable value of human judgment, cultural understanding, and contextual reasoning.