15 February 2026
ChatGPT Health Offers Cardiac Reviews, Raises Serious Concerns

The launch of ChatGPT Health has stirred considerable attention: the service claims to analyze personal health data from fitness trackers and medical records to provide insights into individual well-being. Part of OpenAI’s product suite, it reportedly allows users to “understand patterns over time” in their health data. Early experiences, however, raise significant concerns about the accuracy and reliability of its assessments.

One user who connected their Apple Health data to the platform received a failing grade for cardiac health. Although the data included more than 29 million recorded steps and 6 million heartbeat measurements, the AI graded the user’s heart health an F. Alarmed by this assessment, the individual sought a second opinion from a medical professional, who reassured them that they were at low risk for heart disease and noted that their insurance likely would not cover additional cardiac testing.

Dr. Eric Topol, a cardiologist at the Scripps Research Institute, echoed these concerns about the accuracy of AI-generated health advice. “It’s baseless,” he said, emphasizing that such tools are not currently equipped to deliver reliable medical guidance.

Limitations of AI Health Assessments

The use of AI in healthcare has the potential to broaden access to medical insights, yet the performance of ChatGPT Health exemplifies the risks of relying on technology for personal health analysis. Both ChatGPT Health and a competing offering built on Anthropic’s Claude draw on data from users’ fitness apps to produce health assessments. While the intention is to help users recognize health trends, the assessments may not reflect a complete picture of an individual’s health.

The user noted that when they connected their medical records, ChatGPT adjusted their cardiac health score from an F to a D. Dr. Topol criticized the reliance on metrics such as VO2 max, which he argued are often inaccurate when derived from consumer wearables. Even though the AI had access to vital health indicators like weight and blood pressure, its evaluations leaned heavily on estimates that can fluctuate significantly.
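Part of the reason such estimates fluctuate is that consumer devices do not measure VO2 max directly; they infer it from heart-rate data. The sketch below uses one published estimation formula (Uth et al., 2004, combined with the Tanaka maximum-heart-rate estimate) purely for illustration; it is not the method Apple or OpenAI actually uses, and the age and resting heart rates are hypothetical.

```python
def estimated_hr_max(age: int) -> float:
    # Tanaka et al. (2001): HRmax ~= 208 - 0.7 * age
    return 208 - 0.7 * age

def estimated_vo2_max(hr_rest: float, hr_max: float) -> float:
    # Uth et al. (2004): VO2max ~= 15.3 * HRmax / HRrest, in mL/kg/min
    return 15.3 * hr_max / hr_rest

age = 40                          # hypothetical user
hr_max = estimated_hr_max(age)    # ~180 bpm, itself an estimate
for hr_rest in (55, 60, 65):      # resting heart rate drifts day to day
    vo2 = estimated_vo2_max(hr_rest, hr_max)
    print(f"resting HR {hr_rest} bpm -> VO2 max ~ {vo2:.1f} mL/kg/min")
```

Under this formula, a 10 bpm day-to-day swing in resting heart rate shifts the estimate by roughly 8 mL/kg/min, enough to cross typical fitness categories; any letter grade built on such a number inherits that volatility.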

Both OpenAI and Anthropic stress that their AI tools are not substitutes for professional medical advice. Yet both deliver definitive-looking health scores that can lead users to draw erroneous conclusions about their health status.

Data Privacy and the Challenge of Accuracy

As users share sensitive health information, privacy and data security remain paramount concerns. OpenAI asserts that its platform protects user data through measures such as encryption and a pledge not to use the data to train its models. However, consumer platforms like these generally fall outside HIPAA, the US law that governs health data held by providers and insurers, leaving users with fewer legal protections.

The inconsistency of ChatGPT’s assessments further complicates matters. The user reported fluctuating grades for their heart health, with scores ranging from an F to a B across different inquiries. Such discrepancies highlight a troubling aspect of AI health evaluations: they can produce unnecessary anxiety or false reassurance.

Both companies have acknowledged that their products are in an early testing phase, but neither has outlined a clear strategy for improving their analytical capabilities. Users are left to navigate these uncharted waters, often without sufficient guidance on how to interpret the results.

In conclusion, while AI tools like ChatGPT Health and Claude aim to democratize access to health insights, their current implementations raise questions about accuracy and reliability. As technology continues to evolve, the medical community, along with regulatory bodies, must work to ensure that AI-driven health assessments are grounded in sound medical knowledge and provide users with trustworthy information.