A recent study published in Nature Machine Intelligence raises significant concerns about how AI chatbots respond to suicide-related queries. Researchers tested several AI models, including ChatGPT, Claude, and Bard, and found inconsistent and potentially harmful responses to questions about self-harm and suicide. While some responses were appropriately supportive and included crisis resources, others provided dangerous information or failed to recognize serious risk. The chatbots sometimes offered conflicting advice, minimized the severity of mental health concerns, or proposed overly simplistic solutions to complex emotional problems.

The study emphasizes that these systems lack the safeguards and protocols that trained human crisis counselors follow when handling mental health crises. A key finding was that the same chatbot could give drastically different responses to the same question asked multiple times, raising reliability concerns.

The researchers recommend stronger safety measures, consistent response protocols, and better integration of mental health resources into AI systems, and they stress that chatbots should not be treated as substitutes for professional mental health support. The study concludes that tech companies need to work more closely with mental health professionals to improve their AI systems' responses to crisis situations and to ensure users are consistently directed to appropriate human-based support services.