Developers Face Rising Pressure Managing AI Expectations and Risks

Software developers are navigating a complex new reality as generative AI transforms their profession, requiring them to balance stakeholder expectations with technical limitations and growing security concerns. At a Business Insider roundtable in November, industry leaders from Meta, Slack, Amazon, Slalom, and Nice discussed how AI is reshaping development roles and career trajectories.

Neeraj Verma, head of applied AI at Nice, made a striking observation: generative AI “makes a good developer better and a worse developer worse.” He noted that some companies now expect all employees to function as developers, using AI to create webpages or HTML files by simply copying and pasting AI-generated solutions. However, the roundtable participants emphasized that foundational coding skills remain essential for effectively leveraging AI tools.

While AI excels at routine tasks like writing boilerplate code and translating between programming languages, developers stressed that coding represents just one aspect of their responsibilities. As AI adoption accelerates, testing and quality assurance are becoming increasingly critical for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects 17% employment growth for software developers, quality-assurance analysts, and testers over the next decade, reflecting this evolving landscape.
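As a sketch of what that verification work looks like in practice, consider a small helper function of the kind an AI assistant might generate, paired with the edge-case tests a reviewer would write before accepting it. The function name and behavior here are purely illustrative, not drawn from the article:

```python
# Hypothetical example: an "AI-generated" helper and the human-written
# tests used to verify it before it is merged.

def slugify(title: str) -> str:
    """Convert a page title to a URL slug (lowercase, hyphen-separated)."""
    # Replace every non-alphanumeric character with a space, then
    # collapse runs of whitespace into single hyphens.
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in title)
    return "-".join(cleaned.lower().split())

# Edge cases a QA engineer might probe: punctuation, repeated
# whitespace, and empty input.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  multiple   spaces  ") == "multiple-spaces"
assert slugify("") == ""
```

The point is less the function itself than the division of labor the roundtable described: the generation step is cheap, while judging whether the output handles boundary conditions correctly remains a human responsibility.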

Igor Ostrovsky, cofounder of Augment, highlighted a key challenge: “Interacting with ChatGPT or Claude is so easy and natural that it can be surprising how hard it is to control AI behavior.” He explained that consistently delivering delightful user experiences with AI remains difficult and risky. Recent AI launches have illustrated these challenges—Microsoft’s Copilot faced issues with oversharing and data security, prompting the company to develop internal risk management programs.

The investment disparity is notable: Microsoft plans to spend over $100 billion on GPUs and data centers for AI by 2027, yet tech giants invest comparatively less in AI governance, ethics, and risk analysis. Kesha Williams, head of enterprise architecture and engineering at Slalom, suggested that developers can bridge communication gaps with stakeholders by outlining specific AI use cases, which helps highlight potential pitfalls while maintaining strategic focus.

Ostrovsky predicted that employee engagement with AI will evolve significantly, requiring developers to maintain “a desire to adapt and learn and have the ability to solve hard problems” in this rapidly changing technological landscape.

Key Quotes

Generative AI makes a good developer better and a worse developer worse.

Neeraj Verma, head of applied AI at Nice, made this observation at the Business Insider roundtable, highlighting how AI tools amplify existing skill levels rather than equalizing them. This challenges the assumption that AI will democratize software development.

Interacting with ChatGPT or Claude is so easy and natural that it can be surprising how hard it is to control AI behavior.

Igor Ostrovsky, cofounder of Augment, explained this paradox to illustrate why productivity expectations often overshadow critical concerns about AI ethics and security. The ease of use masks the complexity of ensuring consistent, reliable AI behavior.

ChatGPT is just another tool to help write some of the code that fits into the project.

Neeraj Verma emphasized that good developers understand how code integrates into larger projects, positioning AI as a supplementary tool rather than a replacement for fundamental development skills and architectural thinking.

Developers will need to have a desire to adapt and learn and have the ability to solve hard problems.

Igor Ostrovsky’s prediction about the future of software development emphasizes that technical adaptability and problem-solving capabilities will become even more crucial as AI technology rapidly evolves and transforms development workflows.

Our Take

This article captures a pivotal moment in software development where hype collides with reality. The expectation that “everybody’s a developer” through AI reveals a fundamental misunderstanding of what developers actually do—they’re architects, problem-solvers, and quality gatekeepers, not just code writers. The Microsoft Copilot security issues serve as a cautionary tale about rushing AI deployment without adequate governance. What’s particularly striking is the massive infrastructure investment versus minimal governance spending, suggesting companies are building powerful AI systems without proportional safety mechanisms.

The 17% job growth projection is encouraging, but it also signals a shift: developers must evolve from pure coders to AI supervisors and stakeholder educators. The real competitive advantage will belong to organizations that recognize AI as a tool requiring expert guidance, not a magic solution that eliminates the need for technical expertise. Companies that fail to manage expectations and invest in proper AI governance risk security breaches, poor user experiences, and ultimately, failed AI initiatives.

Why This Matters

This story reveals a critical tension in the AI revolution: the gap between AI’s perceived capabilities and its practical limitations. As companies rush to integrate generative AI, they’re creating unrealistic expectations that developers must manage while addressing serious security and quality concerns. The finding that AI “makes good developers better and worse developers worse” has profound implications for workforce development and hiring strategies.

The 17% projected growth in developer and QA roles contradicts fears of AI-driven job displacement, suggesting AI is creating new responsibilities rather than eliminating positions. However, the disparity between infrastructure investment ($100+ billion) and governance spending highlights a dangerous imbalance that could lead to security breaches, ethical violations, and user trust erosion. For businesses, this underscores the need to invest not just in AI technology but in the human expertise, testing frameworks, and governance structures required to deploy it safely and effectively. The emphasis on foundational skills and stakeholder communication suggests that soft skills and technical judgment will become increasingly valuable as AI handles more routine coding tasks.


Source: https://www.businessinsider.com/software-developers-manage-stakeholder-expectations-coding-gen-ai-tech-risks-2024-12