Mark E. Lutes and Richard H. Hughes, IV, Members of the Firm in the Health Care & Life Sciences practice, co-authored an article in Health Affairs, titled “When the Front Door Is an Algorithm: Guiding Consumer Use of AI in Health Care.”
Following is an excerpt:
On January 7, 2026, OpenAI announced ChatGPT Health, a new feature that allows users to review medical test results, prepare for doctor appointments, and seek guidance on diet and fitness—while explicitly stopping short of making diagnoses. The tool can connect to electronic medical records, wearable devices, and wellness apps such as Apple Health and MyFitnessPal, marking the company’s most direct entry yet into the health care sector.
The announcement matters not because it is novel, but because it formalizes something that has already been happening at scale: Consumers are increasingly relying on artificial intelligence (AI) as a first stop for health information and decision support. What was once informal, improvised, and largely invisible is now being productized, integrated with medical records, and presented as a legitimate companion to the health care system.
This moment signals a shift in where health care begins. For many consumers, the front door is no longer a clinician’s office, nurse line, or even a patient portal. It is an algorithm—available instantly, conversational by design, and increasingly connected to personal health data.
Consumer adoption of AI in health care should be viewed in the context of a US health system under strain: Workforce shortages, rising costs, long wait times, and administrative burden have eroded access and continuity. Patients are frequently left to navigate uncertainty between visits, interpret dense documentation, and make decisions with incomplete support. AI fills this gap at just the moment when consumers have grown accustomed to digital services that are responsive, personalized, and always on.
AI answers questions at 2 a.m., translates clinical language into plain terms, and helps consumers decide whether something feels urgent—or can wait. For patients managing chronic conditions, coordinating care for family members, or attempting to understand bills, formularies, and prior authorizations, conversational AI can feel less like a novelty and more like basic infrastructure.
But the same traits that make AI attractive also make it risky. AI is persuasive. It speaks in fluent sentences, mirrors user language, and provides confident-sounding explanations—often without the full clinical context needed to be safe. In health care, where small misunderstandings can cascade into harmful choices, the difference between “useful” and “unsafe” is not academic.
From Information To Influence: Why Safety Is Not Optional
Health policy has long recognized that medical information is different from ordinary consumer information because it influences behavior in high-stakes conditions. AI heightens that influence. A conversational interface does not merely display facts; it frames decisions. It can reassure, escalate, or subtly steer.
This is why the core challenge of this transformative moment is not only whether AI outputs are “accurate” in some abstract sense. The challenge is also whether AI helps consumers make responsible decisions in the face of uncertainty—especially when symptoms are ambiguous, when risk is non-zero, and when the right answer is often “it depends.”
A consumer may ask: Is this medication side effect normal? Is this lab value dangerous? Can I treat this at home? Should I go to urgent care? Can I stop my statin? Those are not trivial questions. They implicate medical history, current medications, comorbidities, prior results, and sometimes an understanding of what is not being reported.
If we treat consumer-facing AI as a general information product, we will miss what makes it powerful—and potentially hazardous—in health care. These tools increasingly operate as a form of informal triage and decision support, even when they claim not to. Their success depends on trust. Their harm can occur quietly, one missed escalation or overconfident reassurance at a time.
The Missing Ingredient: Clinical Context From Medical Records
Up to this point, most consumer-facing health care AI tools have had a major limitation: They operate without access to longitudinal clinical data. They rely on a user’s recollection of diagnoses and medications, plus generic medical knowledge. That is not enough for safe guidance in many cases.
Integrating AI with medical records—when done deliberately and with appropriate consent—could materially improve safety and usefulness. Consider a consumer reviewing a lab result. Without context, an AI may offer a general description and reference ranges. With context—recent symptoms, diagnosis history, relevant medications, prior labs—the tool can provide a more tailored explanation: what changed, what is stable, what might be medication-related, what is likely benign, and what warrants contact with a clinician.
Similarly, when a consumer asks whether a symptom is concerning, access to records could reduce risky “guessing” by enabling the AI to incorporate known conditions (e.g., diabetes, immunosuppression, pregnancy), allergies, anticoagulant use, recent surgeries, or abnormal vitals captured in clinical settings.
But record integration also introduces new risk. If a tool is connected to personal data, the user may infer that the tool is providing clinical-grade advice or has effectively “reviewed their chart” the way a clinician would. That perception can cause overconfidence. It can also create confusion about responsibility: If an AI tool sees a red-flag value and fails to escalate, who is accountable?
The goal should be purposeful integration: using records to improve context for education, navigation, and appropriate escalation—without presenting the tool as a substitute for professional judgment.