
A man has been hospitalized with severe psychiatric symptoms after adhering to health advice from an artificial intelligence chatbot. This incident highlights the potential dangers of relying on AI for medical guidance. The patient sought to improve his health by decreasing his salt intake and turned to ChatGPT for recommendations on a substitute. Unfortunately, the chatbot suggested sodium bromide, a chemical more suitable for cleaning than culinary purposes.
The man ordered sodium bromide online and incorporated it into his diet. While sodium bromide can stand in for sodium chloride in certain non-dietary applications, such as cleaning, the AI apparently failed to provide essential context about its toxicity and appropriate use. Three months later, he arrived at the emergency department displaying paranoid delusions, convinced that his neighbor intended to poison him.
According to medical personnel, “In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability.” After receiving treatment with antipsychotic medications, the patient was able to describe the AI-driven dietary changes he had made. This information, combined with test results, led to a diagnosis of bromism, a rare condition caused by the toxic accumulation of bromide in the body.
Normal bromide levels in healthy individuals are typically below 10 mg/L, but the patient’s level measured an alarming 1,700 mg/L. Bromism was a common problem in the early 20th century, accounting for up to 8 percent of psychiatric admissions, but its prevalence declined sharply during the 1970s and 1980s as bromide-containing medications were phased out.
The patient received treatment for three weeks and was subsequently released without major complications. This case serves as a reminder that emerging AI technologies are not a replacement for professional medical expertise, particularly in critical health matters.
The authors of the study, published in the journal Annals of Internal Medicine: Clinical Cases, emphasize the importance of caution when using AI for health advice. They state, “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.” They further assert that it is “highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride.”
This incident underscores the need for individuals to consult qualified healthcare professionals rather than relying solely on AI for health-related decisions.