Generative AI chatbots are fundamentally transforming how Americans seek legal and medical advice, creating both opportunities and challenges for professionals in these fields. According to a December 2025 survey by legal software company Clio, 57% of consumers have used or would use AI to answer legal questions, while a 2025 Zocdoc survey revealed that one in three Americans use generative AI tools for health advice weekly, with one in ten consulting them daily.
Legal and medical professionals are now encountering clients who arrive armed with AI-generated information, often copied directly from ChatGPT or Google’s Gemini. Jonathan Freidin, a Miami medical malpractice attorney, reports receiving client contact forms filled with emojis and formatting that reveal copy-pasted AI content. These clients frequently believe they have viable cases because AI told them medical professionals “fell below the standard of care,” though this doesn’t necessarily translate into actionable legal claims.
The shift is forcing professionals to fundamentally change how they interact with clients. Jamie Berger, a New Jersey family law attorney, explains that clients now arrive with generic, AI-generated legal strategies that may not fit their specific circumstances. “We have to dispel the information that they were able to obtain versus what is actually going on in their case and kind of work backwards,” Berger notes. The attorney-client relationship now requires rebuilding trust and explaining why AI’s linear approach doesn’t account for the complex offshoots of real legal proceedings.
In healthcare, AI chatbots offer something doctors increasingly cannot: unlimited time and immediate availability. As patients wait months for specialist appointments and battle insurance companies, ChatGPT provides instant responses without saying “your list is too long.” Hannah Allen, chief medical officer at AI medical scribe tool Heidi, observes that patients “really love that tempo” of constant availability. Some patients even use AI as a second opinion, returning to verify their doctor’s advice—a practice some clinicians view positively as it generates better questions.
However, significant privacy and accuracy concerns persist. A 2024 KFF poll found that while 17% of US adults consult AI chatbots monthly for health information, 56% lack confidence in the accuracy of that information. HIPAA protections don’t apply to consumer AI products, meaning people are sharing entire medical histories without legal safeguards. Similarly, attorney-client privilege could be voided if clients input too much case-specific information into chatbots.
OpenAI has attempted to address these concerns by updating policies last fall to specify that ChatGPT cannot provide “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” Despite this, the chatbot continues answering health and law questions. This week, OpenAI launched ChatGPT Health, explicitly designed to “support, not replace, medical care” and not intended for diagnosis or treatment.
The democratization of previously gatekept information has created a double-edged sword. For people who cannot afford upfront legal costs or face doctor shortages, AI tools have helped some win small claims cases and eviction disputes. Yet professionals like California lawyer Golnoush Goharzad report conversations where people believe they have cases to sue landlords simply because “ChatGPT thinks it makes sense.” The consensus among experts: AI is here to stay, and professionals must learn to work alongside it rather than resist it.
Key Quotes
“We’re seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways. While that may be true, it doesn’t necessarily translate into a viable case.”
Jonathan Freidin, a Miami medical malpractice attorney, describes how AI is creating unrealistic expectations among potential clients who believe they have legal cases based solely on chatbot advice, highlighting the gap between AI-generated information and actual legal viability.
“You have to rebuild or build the attorney-client relationship in a way that didn’t used to exist. They don’t realize that there’s so many offshoots along the way that it’s not a linear line from A to Z.”
Jamie Berger, a New Jersey family law attorney, explains how AI has fundamentally changed the attorney-client dynamic, requiring lawyers to spend time dispelling generic AI advice and rebuilding trust while explaining the complex, non-linear nature of real legal proceedings.
“They really love that tempo of being able to know that ChatGPT never goes away, never goes to sleep, never says no, never says, ‘sorry, your list is too long.’”
Hannah Allen, chief medical officer at AI medical scribe tool Heidi, identifies why patients are turning to AI: it offers unlimited availability and time, resources that overworked doctors increasingly cannot provide in a healthcare system with long wait times and rushed appointments.
“It’s great that they have the access to a quick second opinion, and then, if it doesn’t agree with me, that allows them to ask me better questions.”
Heidi Schrumpf, director of clinical services at teletherapy platform Marvin Behavioral Health, offers a positive perspective on patients using AI to verify professional advice, suggesting that rather than undermining trust, it can enhance the quality of patient-provider dialogue.
Our Take
This article captures a pivotal moment in AI adoption where generative AI transitions from novelty to everyday utility in high-stakes domains. What’s particularly striking is the parallel disruption across both legal and medical fields—two professions historically protected by licensing, expertise barriers, and information asymmetry.
The 57% figure for legal AI advice, covering consumers who have used or would use such tools, represents faster uptake than many predicted, suggesting consumers are willing to trust AI even in consequential matters. The privacy implications are deeply concerning: people are essentially creating permanent records of sensitive information with companies that have no fiduciary duty to protect it.
Most telling is the professional response: rather than outright rejection, experts are learning to integrate AI into their practice models. This pragmatic adaptation suggests AI won’t replace these professionals but will fundamentally reshape their roles—from information providers to interpreters and validators of AI-generated content. The real question is whether licensing bodies and regulators can keep pace with this transformation to protect consumers while enabling beneficial innovation.
Why This Matters
This story represents a fundamental shift in how Americans access professional expertise, with profound implications for the legal and medical industries. The widespread adoption of AI for legal and medical advice—with 57% of consumers having used or being willing to use it for legal questions, and one-third consulting it weekly for health advice—signals that traditional gatekeeping of professional knowledge is eroding.
For businesses, this trend creates both disruption and opportunity. Companies like Zocdoc face potential disintermediation if AI handles pre-care needs, while AI scribe tools like Heidi benefit from increased adoption. The professional services model itself is being challenged, as clients arrive informed (or misinformed) and expect different interactions.
The privacy implications are significant: people are sharing sensitive medical and legal information with AI systems not covered by HIPAA or attorney-client privilege protections. This creates regulatory gaps that policymakers will need to address. Meanwhile, the accuracy concerns—with 56% of users doubting AI health information—suggest potential liability issues ahead.
Most importantly, this story illustrates AI’s role as a democratizing force, making expert knowledge accessible to those who cannot afford lawyers or wait months for doctors. At the same time, it creates new challenges around misinformation, privacy, and the changing nature of professional expertise in an AI-augmented world.
Related Stories
- How to Comply with Evolving AI Regulations
- Artificial Intelligence (AI) in Healthcare Market Outlook 2022 to 2028: Emerging Trends, Growth Opportunities, Revenue Analysis, Key Drivers and Restraints
- Google’s ‘Ask for Me’ AI Phone Tool: A Game-Changer for Time Management
- Bluesky CEO Warns Against Over-Reliance on AI for Critical Thinking
Source: https://www.businessinsider.com/chatgpt-new-webmd-doctors-lawyers-medical-advice-2026-1