A growing movement within Silicon Valley’s tech elite is radically reshaping their lives based on beliefs about artificial intelligence’s imminent transformation of society—or its potential to end humanity entirely. AI researchers, investors, and entrepreneurs are making extreme lifestyle changes, from building bioshelters to abandoning retirement savings, as they prepare for what they see as civilization’s most critical inflection point.
Henry, an AI safety researcher at a Bay Area lab, estimates a 50/50 chance that AI will pose an existential threat within the next several years. He's sworn off romantic relationships, donates a third of his income to AI safety nonprofits, and is constructing DIY bioshelters for under $10,000, using positively pressurized tents, HEPA filters, and three years of supplies to protect against AI-created pathogens.
The movement has spawned the “smart-to-hot” lifestyle shift, where people are pivoting from intellectual pursuits to physical fitness and social skills. Apoorva Srinivasan, a biomedical data scientist, now prioritizes “charisma, social engagingness, and hotness” over intelligence in dating, believing generative AI will subsume intellectual labor. Tech entrepreneur Soren Larson and AI consultant Jason Liu have similarly embraced fitness and leisure, with Liu optimizing his career for delegation and free time rather than hustle.
Financial decisions are being radically altered. Daniel Kokotajlo, a former OpenAI researcher who quit over safety concerns, stopped saving for retirement in 2020. Anthropic researcher Trenton Bricken publicly shared he’s done the same, questioning the value of retirement accounts when AGI may arrive before he turns 60. Conversely, others see these years as their “last chance” to accumulate generational wealth before human intellectual labor becomes obsolete.
Personal relationships are fracturing over AI beliefs. Holly Elmore, executive director of Pause AI, divorced her husband partly due to disagreements over how to address AI risks. Her ex-husband Ronny Fernandez acknowledged a “significant chance that smarter than human AI will literally kill approximately everyone” but disagreed with her confrontational approach toward AI labs.
Fetish researcher Aella describes living more in the moment, spending down savings, trying hard drugs, and “throwing weird orgies” while facing what she sees as potential human extinction. Venture capitalist Vishal Maini advocates “paleo-futurism”—prioritizing human interaction over increasingly engaging AI-generated digital content—and adopting a “bucket-list mentality” for remaining time.
Key Quotes
“A lot of us are just going to look back on these next two years as the time when we could have done something. Lots of people will look back on this and be like, ‘Why didn’t I quit my job and try to do something that really mattered when I had a chance to?’”
Henry, an AI safety researcher building bioshelters, expresses the urgency felt by many in the AI safety community who believe humanity has only a narrow window to address existential AI risks before it’s too late.
“I personally did not want to be valued for my intelligence. I was like, this intelligence is what physically hurt me, and caused me to lose my job.”
Jason Liu, an AI consultant who pivoted from software engineering after a repetitive strain injury, explains his embrace of the “smart-to-hot” lifestyle shift, now prioritizing leisure, fitness, and social connection over intellectual pursuits.
“It’s really freeing in some ways. I like throwing weird orgies, and I’m like — well, we’re going to die. What’s a weirder, more intense, crazier orgy we can do? Just do it now.”
Aella, a fetish researcher and sex worker with concerns about AI destroying humanity, describes how existential AI fears have led her to live more in the moment, spend down savings, and embrace experiences she might otherwise avoid.
“There is a significant chance that smarter than human AI will literally kill approximately everyone, or lead to even worse outcomes, within a few decades.”
Ronny Fernandez, manager of Lighthaven (a Rationalist intellectual campus), acknowledges the severity of AI risks even while disagreeing with his ex-wife’s confrontational activism — illustrating how even those who share the same concerns can differ sharply on solutions.
Our Take
What’s most striking isn’t that some technologists fear AI—it’s the caliber and proximity of those preparing for catastrophe. These aren’t fringe conspiracy theorists but AI safety researchers, former OpenAI employees, and Anthropic staff with insider knowledge. Their actions suggest either genuine insight into alarming development trajectories or a form of collective delusion within an insular community.
The “smart-to-hot” movement reveals an uncomfortable truth: if AI can replicate intellectual work, humanity’s comparative advantage shifts to irreducibly physical and social domains. This isn’t just lifestyle advice—it’s a fundamental reassessment of human capital in the AI age. Meanwhile, the financial behaviors—abandoning retirement savings or desperately accumulating wealth—represent hedging strategies for radically uncertain futures. Whether AI brings utopia, catastrophe, or something between, these preparations illuminate how those building the technology genuinely believe we’re approaching a civilizational inflection point, not a gradual evolution.
Why This Matters
This story reveals how deeply AI concerns have penetrated Silicon Valley’s consciousness, moving beyond abstract debates into concrete life decisions among those closest to the technology. These aren’t casual observers—they’re AI researchers, safety experts, and industry insiders with privileged access to development trajectories, making their extreme preparations particularly noteworthy.
The “smart-to-hot” phenomenon signals a fundamental reassessment of human value in an AI-dominated economy. If intellectual labor becomes automated, society may reorganize around uniquely human attributes like physical presence, charisma, and social connection—potentially reshaping education, career planning, and social hierarchies.
The financial implications are profound: when AI researchers stop saving for retirement or rush to accumulate wealth before “the music stops,” it suggests genuine uncertainty about economic continuity. This could influence investment strategies, policy discussions, and public perception of AI timelines. The movement also highlights the growing rift between AI accelerationists and safety advocates, with personal relationships dissolving over these philosophical divides. Whether prescient or paranoid, these preparations reflect the tech industry’s belief that transformative AI isn’t a distant possibility—it’s an imminent reality demanding immediate action.
Recommended Reading
For those interested in learning more about artificial intelligence, machine learning, and effective AI communication, here are some excellent resources: