ChatGPT Health: Expert take on AI’s role in modern medicine

Evan Walker
How will ChatGPT Health impact medical guidance and discussions with healthcare professionals? Image credit: Heng Yu/Stocksy

OpenAI recently introduced ChatGPT Health, a new feature within ChatGPT designed to support health and wellness questions. The launch comes in response to the AI chatbot reportedly receiving millions of health-related questions each day, highlighting growing public interest in AI-powered medical information.

According to OpenAI, ChatGPT Health aims to provide users with a more focused experience for navigating health concerns, wellness topics, and medical questions.

Clearly, demand for accessible, conversational health information is increasing. However, while these tools may broaden access to information, ensuring accuracy, equity, and responsible use remains a critical challenge.

Speaking to Medical News Today, David Liebovitz, MD, an expert in artificial intelligence in clinical medicine at Northwestern University, shares his thoughts on how ChatGPT Health may affect the patient-doctor relationship, and how healthcare professionals (HCPs) can safely guide appropriate use of AI health tools.

Liebovitz: The main misconception is that any conversation about health is protected the way a conversation with your doctor is. It is not.

HIPAA only covers “covered entities,” which means health plans, healthcare clearinghouses, and healthcare providers who transmit health information electronically. Consumer AI tools are not covered entities.

Therefore, when you share health information with ChatGPT, that data could theoretically be subpoenaed, accessed through legal processes, or, despite OpenAI’s stated policies, used in ways you did not anticipate.

There is no equivalent of patient-physician privilege. For sensitive health matters, particularly reproductive or mental health concerns in the current legal environment, that distinction matters.

Liebovitz: Absolutely. Mental health carries unique risks: AI chatbots have been implicated in multiple suicide cases where they validated harmful ideation rather than escalating appropriately.

The Brown University study published last year documented systematic ethical violations, including reinforcing negative beliefs, creating false empathy, and mishandling crisis situations. Large language models (LLMs) are not designed to recognize decompensation.

Reproductive care carries legal risk in addition to clinical risk. In states with abortion restrictions, any digital record of reproductive health questions becomes potential evidence.

Unlike conversations with your physician, ChatGPT conversations are not protected by legal privilege. I would also add: substance use, HIV status, genetic information, anything involving legal proceedings. The common thread is scenarios where disclosure, even inadvertent, carries consequences beyond clinical care.
