Ex-AWS & IBM Exec Warns: Avoid AI Codependency & Intellectual Atrophy

Sol Rashidi, a veteran tech executive with experience at IBM, AWS, Sony Music, and Estée Lauder, is sounding the alarm about the risks of developing an unhealthy codependency on artificial intelligence. Having led more than 200 AI deployments over 15 years, Rashidi has transitioned from building AI capabilities at major corporations to running her own company focused on preparing the workforce for the AI era.

Rashidi’s central concern is what she calls “intellectual atrophy” — the loss of cognitive ability and critical thinking skills that occurs when people outsource their thinking to AI tools rather than using them strategically. She warns that just as muscles atrophy without use, so does the brain when we rely too heavily on generative AI tools like ChatGPT. Her key insight: “The big thing that you’ve got to be careful of is making sure that generative AI doesn’t make your thinking become generic, because everyone else is also using ChatGPT.”

Despite her warnings, Rashidi herself uses six to eight AI tools daily, primarily for data processing, pattern recognition, and insights generation. However, she maintains strict boundaries: she never uses AI to write emails, keynotes, or personal communications, believing that authentic communication requires a human touch and practice. Her guiding principle is simple: “Am I using this to accelerate work I have to do, or am I using it to do the work for me?”

Rashidi shared a cautionary tale from her time managing a data science team at a Fortune 500 company. A junior data scientist produced the same deliverable as senior scientists in half the time by relying on ChatGPT, but had short-circuited the crucial processes of research and verification. This led Rashidi to implement a new mandate: AI could only facilitate and accelerate research, not replace it. She bluntly told her team: “I’m paying for your brain and uniqueness. I’m not paying you to copy and paste, because, quite frankly, a license for enterprise API from OpenAI is a lot cheaper than you.”

The former executive emphasizes that in a society that “values convenience over competition and speed over substance,” the key to staying competitive is actually slowing down and developing “discernment muscles” — the ability to distinguish signal from noise. With a growing share of the world’s content now AI-generated, and AI systems increasingly retrained on that AI-generated output, she warns we’re approaching a point of diminishing returns. Problem-solving, verification, and validation skills will become increasingly critical as AI becomes more prevalent in the workplace.

Key Quotes

“The big thing that you’ve got to be careful of is making sure that generative AI doesn’t make your thinking become generic, because everyone else is also using ChatGPT.”

Sol Rashidi emphasizes the risk of losing competitive advantage when everyone relies on the same AI tools. This highlights a critical paradox: tools designed to enhance productivity could actually commoditize thinking if used without discernment.

“Am I using this to accelerate work I have to do, or am I using it to do the work for me?”

Rashidi’s guiding question for AI tool usage reflects her philosophy that AI should augment human capability rather than replace it. This simple framework helps distinguish between productive AI use and dependency.

“I’m paying for your brain and uniqueness. I’m not paying you to copy and paste, because, quite frankly, a license for enterprise API from OpenAI is a lot cheaper than you.”

Rashidi’s blunt message to her data science team at a Fortune 500 company underscores the economic reality: employees who simply copy-paste AI outputs add little value beyond what the AI subscription itself provides, making them vulnerable to replacement.

“We live in a society right now that values convenience over competition and speed over substance. But the key to keeping up is actually slowing down.”

This counterintuitive insight challenges the prevailing rush to adopt AI for speed gains, suggesting that thoughtful, deliberate engagement with information and problems will become a competitive differentiator in an AI-saturated world.

Our Take

Rashidi’s perspective represents a mature, nuanced view of AI adoption that the industry desperately needs. Her experience deploying AI at scale gives her credibility that pure theorists lack. The intellectual atrophy concept is particularly compelling because it frames AI risk not as job displacement but as cognitive skill degradation — a more insidious threat that could leave workers unprepared even for jobs that remain.

What’s striking is her personal discipline: using six to eight AI tools daily while maintaining strict boundaries around creative and communicative work. This suggests the future of work isn’t about rejecting AI but about strategic, intentional use. Her junior data scientist example perfectly illustrates how short-term productivity gains can mask long-term skill erosion.

The timing is critical: as enterprises rush to implement AI, few are considering the human capital implications of widespread adoption. Rashidi’s focus on preparing and amplifying the workforce, rather than eliminating it, offers a roadmap for responsible AI integration that preserves human value.

Why This Matters

This perspective from a seasoned AI executive with hands-on experience at tech giants carries significant weight for the future of work. As organizations rush to implement AI tools to boost productivity, Rashidi’s warnings about intellectual atrophy represent a crucial counterbalance to the prevailing narrative of AI as an unqualified benefit. Her insights matter because they come from someone who has successfully deployed AI at scale, not a skeptic opposing technological progress.

The workforce implications are profound: if employees become overly dependent on AI for basic cognitive tasks, they risk becoming commoditized and replaceable by cheaper AI subscriptions. This creates a paradox where the tools meant to augment human capability could actually diminish it. For businesses, this raises important questions about training, AI governance, and maintaining competitive advantage through human creativity and critical thinking.

Rashidi’s focus on developing “discernment muscles” addresses a growing concern in the AI era: as more content becomes AI-generated and AI systems train on AI-generated content, the quality and reliability of outputs may degrade. Organizations and individuals who maintain strong verification, validation, and critical thinking capabilities will have a significant advantage in navigating this landscape.

Source: https://www.businessinsider.com/former-aws-ibm-exec-ways-not-become-dependent-ai-2025-12