The article discusses the risks of using AI-powered transcription tools in healthcare settings. Researchers found that an AI transcription tool used by some hospitals to generate written records from conversations with patients frequently fabricated events and statements. The tool, which was trained on medical dialogue data, exhibited a phenomenon known as ‘hallucination,’ in which it produces information not present in the original conversation. This raises concerns about the accuracy and reliability of AI-generated medical records, since such errors could lead to misdiagnoses or improper treatment. The researchers emphasize the need for rigorous testing and validation of AI systems before deploying them in critical domains like healthcare. While AI transcription tools offer potential benefits, their propensity for hallucination highlights the importance of human oversight and fact-checking to ensure patient safety and data integrity.
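As a purely illustrative sketch of the human-oversight principle the researchers recommend (not the tool's actual mechanism, nor anything described in the article), one common pattern is to route low-confidence transcript segments to a clinician for manual review. The `Segment` class, the `flag_for_review` function, and the 0.85 threshold below are all hypothetical choices made for this example.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One chunk of an automatically generated transcript."""
    text: str
    confidence: float  # assumed model-reported score in [0.0, 1.0]

def flag_for_review(segments: list[Segment], threshold: float = 0.85) -> list[Segment]:
    """Return segments whose confidence falls below the review threshold."""
    return [s for s in segments if s.confidence < threshold]

if __name__ == "__main__":
    transcript = [
        Segment("Patient reports mild chest pain since Tuesday.", 0.97),
        Segment("Prescribed 500 mg of amoxicillin twice daily.", 0.62),
    ]
    # Low-confidence lines are surfaced for a human fact-check before
    # they enter the medical record.
    for seg in flag_for_review(transcript):
        print(f"REVIEW NEEDED ({seg.confidence:.2f}): {seg.text}")
```

A confidence gate of this kind does not detect hallucinations by itself; it only narrows the set of segments a human must verify, which is why the article's broader call for testing, validation, and oversight still applies.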