Mark E. Lutes and Richard H. Hughes, IV, Members of the Firm in the Health Care & Life Sciences practice, co-authored an article in Health Affairs, titled “When the Front Door Is an Algorithm: Guiding Consumer Use of AI in Health Care.”

Following is an excerpt:

On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that allows users to review medical test results, prepare for doctor appointments, and seek guidance on diet and fitness—while explicitly stopping short of making diagnoses. The tool can connect to electronic medical records, wearable devices, and wellness apps such as Apple Health and MyFitnessPal, marking the company’s most direct entry yet into the health care sector.

The announcement matters not because it is novel, but because it formalizes something that has already been happening at scale: Consumers are increasingly relying on artificial intelligence (AI) as a first stop for health information and decision support. What was once informal, improvised, and largely invisible is now being productized, integrated with medical records, and presented as a legitimate companion to the health care system.

This moment signals a shift in where health care begins. For many consumers, the front door is no longer a clinician’s office, nurse line, or even a patient portal. It is an algorithm—available instantly, conversational by design, and increasingly connected to personal health data.

Consumer adoption of AI in health care should be viewed in the context of a US health system under strain: Workforce shortages, rising costs, long wait times, and administrative burden have eroded access and continuity. Patients are frequently left to navigate uncertainty between visits, interpret dense documentation, and make decisions with incomplete support. AI fills this gap at just the moment when consumers have grown accustomed to digital services that are responsive, personalized, and always on.

AI answers questions at 2 a.m., translates clinical language into plain terms, and helps consumers decide whether something feels urgent—or can wait. For patients managing chronic conditions, coordinating care for family members, or attempting to understand bills, formularies, and prior authorizations, conversational AI can feel less like a novelty and more like basic infrastructure.

But the same traits that make AI attractive also make it risky. AI is persuasive. It speaks in fluent sentences, mirrors user language, and provides confident-sounding explanations—often without the full clinical context needed to be safe. In health care, where small misunderstandings can cascade into harmful choices, the difference between “useful” and “unsafe” is not academic.

From Information To Influence: Why Safety Is Not Optional

Health policy has long recognized that medical information is different from ordinary consumer information because it influences behavior in high-stakes conditions. AI heightens that influence. A conversational interface does not merely display facts; it frames decisions. It can reassure, escalate, or subtly steer.

This is why the core challenge of this transformative moment is not only whether AI outputs are “accurate” in some abstract sense. The challenge is also whether AI helps consumers make responsible decisions in the face of uncertainty—especially when symptoms are ambiguous, when risk is non-zero, and when the right answer is often “it depends.”

A consumer may ask: Is this medication side effect normal? Is this lab value dangerous? Can I treat this at home? Should I go to urgent care? Can I stop my statin? Those are not trivial questions. They implicate medical history, current medications, comorbidities, prior results, and sometimes an understanding of what is not being reported.

If we treat consumer-facing AI as a general information product, we will miss what makes it powerful—and potentially hazardous—in health care. These tools increasingly operate as a form of informal triage and decision support, even when they claim not to. Their success depends on trust. Their harm can occur quietly, one missed escalation or overconfident reassurance at a time.

The Missing Ingredient: Clinical Context From Medical Records

Up to this point, most consumer-facing health care AI tools have had a major limitation: They operate without access to longitudinal clinical data. They rely on a user’s recollection of diagnoses and medications, plus generic medical knowledge. That is not enough for safe guidance in many cases.

Integrating AI with medical records—when done deliberately and with appropriate consent—could materially improve safety and usefulness. Consider a consumer reviewing a lab result. Without context, an AI may offer a general description and reference ranges. With context—recent symptoms, diagnosis history, relevant medications, prior labs—the tool can provide a more tailored explanation: what changed, what is stable, what might be medication-related, what is likely benign, and what warrants contact with a clinician.

Similarly, when a consumer asks whether a symptom is concerning, access to records could reduce risky “guessing” by enabling the AI to incorporate known conditions (e.g., diabetes, immunosuppression, pregnancy), allergies, anticoagulant use, recent surgeries, or abnormal vitals captured in clinical settings.

But record integration also introduces new risk. If a tool is connected to personal data, the user may infer that the tool is providing clinical-grade advice or has effectively “reviewed their chart” the way a clinician would. That perception can cause overconfidence. It can also create confusion about responsibility: If an AI tool sees a red-flag value and fails to escalate, who is accountable?

The goal should be purposeful integration: using records to improve context for education, navigation, and appropriate escalation—without presenting the tool as a substitute for professional judgment.
