Friday, January 2, 2026

Tips on using Dr. ChatGPT

Chatbots are increasingly filling the role traditionally occupied by "Dr. Google" as a primary source of medical information for patients and the general public (Figure 1). This shift is driven by their ability to offer a more interactive, personalized, and structured approach to health inquiries compared to search engines. In other words, the long reign of "Dr. Google" is giving way to the conversational efficiency of "Dr. ChatGPT."

Previously I have written about how chatbots can act as a comprehensive knowledge resource, collecting and organizing evidence from various medical guidelines (QH). They can summarize vast amounts of medical information into a user-friendly format, making it easier to digest than sifting through numerous search results.

Furthermore, advanced chatbots, such as ChatGPT and Gemini, have shown a surprising depth of medical knowledge. Studies indicate they can excel on national medical board exams, far exceeding the passing threshold (QH).

One in six adults now consults generative AI for health insights, and the patient experience has accordingly shifted from static information retrieval to dynamic conversation. Yet this technological leap demands a disciplined approach from the user: while chatbots excel as assistants capable of translating jargon or prepping for appointments, they can deliver "hallucinations" with the confidence of a board-certified physician. In addition, the currency of the information chatbots provide can be a concern, as their training data are not always up to date. Experts caution that to safely navigate this new era, patients must treat AI as a tool for medical literacy rather than a substitute for clinical judgment.

Thus, it is important for the patient user to learn best practices for conversing with the chatbot about medical matters, appreciating that the tool can provide invaluable summaries, explanations, and suggestions while remaining vulnerable to lapses in accuracy. In a recent article, The New York Times outlines a "harm reduction" framework for these interactions, advising users to mitigate the risk through the following strategies, including how best to prompt (i.e. phrase questions to) the chatbot:

1. Practice. Begin your interactions with low-stakes queries rather than during medical emergencies to assess the AI’s reliability without risking your health. For example, test the chatbot against known outcomes, such as asking about a condition you already understand well and comparing its answer with what your doctor has told you.

2. Share context carefully. Strike a balance between clinical detail and privacy by aiming for a "Goldilocks" zone of information. While rich context (e.g. age, medications, and symptom progression) is essential for accurate analysis, users should remove personally identifiable information such as names and insurance IDs.

3. Encourage reflection. Invert the standard dynamic by prompting the AI to ask clarifying questions before it renders a response. Instead of demanding an immediate diagnosis, users should treat initial responses as working hypotheses and explicitly ask, "What additional information do you need?" to force the model to process the full complexity of the clinical picture rather than guessing on limited input.

4. Have the chatbot critique itself. Combat the AI’s tendency toward confident "hallucinations" by employing adversarial prompting techniques that force the model to critique its own conclusions. Demand specific medical citations and ask follow-up questions like "What are the arguments against this conclusion?" to expose potential logic gaps or alternative explanations. It may also be prudent to have a different chatbot (e.g. Gemini) critique the response of the original chatbot (e.g. ChatGPT) for an alternative perspective.

5. Repeat context during long conversations. Mitigate the technical limitation of "context windows" (the maximum amount of text a chatbot can consider at once) by frequently asking the chatbot to summarize the medical history provided so far (which repeats previous context and reassures the user that the chatbot is following the conversation). As conversation threads lengthen, AI models are prone to "forgetting" initial constraints such as allergies. Periodic check-ins and the use of advanced, paid models (which tend to have longer context windows) can help maintain data continuity and prevent reasoning drift.

6. Use more than one chatbot. For example, ask the same question to both Gemini and ChatGPT. This is akin to obtaining a second opinion. Make sure both are in deep thinking mode. Compare the responses, and as mentioned above, have them critique each other’s responses.

Chatbots are better suited as medical translators or assistants rather than diagnostic oracles, providing the most benefit on educational and preparatory queries. These tools can simplify dense clinical jargon, explain standard physiological mechanisms -- such as how mRNA vaccines work -- or organize thoughts into coherent questions for an upcoming doctor's visit. They excel at summarizing and synthesizing vast amounts of general medical information, making them ideal for demystifying lab reports or drafting wellness routines, although the user should verify the output against established guidelines.

While chatbots like ChatGPT and Gemini have demonstrated the impressive ability to pass medical licensing exams, their performance in real-world clinical scenarios can be inconsistent, which is why they should not be relied on to make diagnoses for patients inputting symptoms. Experts characterize the current technology as "good for textbooks but dangerous for patients," noting that it can "hallucinate" non-existent medical citations or miscalculate pediatric dosages. In complex diagnostic cases involving symptoms alone, error rates can be as high as 70 percent. Because chatbots are trained on textbook-type questions that appear on standardized medical exams, they may falter in real-life vignettes that fall outside this training experience. Human medical professionals are much better at extrapolating to these novel situations.

In summary, Dr. ChatGPT is here to stay and will certainly supersede Dr. Google. This powerful technology is still in its infancy, and as a result it can pose dangers because of its error rate on certain types of questions (e.g. diagnostic ones) that may impact one's health in a significant fashion. For now, at least, "defensive driving" is the best policy for interacting with Dr. ChatGPT, so that one can stay safe while reaping the benefits of learning more about one's medical condition.
Figure 1. Dr. ChatGPT is almost always available to talk to you about your medical issues, but one should learn about the strengths and weaknesses of the AI technology as a provider of medical information (Illustration: Maura Losch/Axios).
