MedAsk

AI Has Entered the Patient's World—Are We Ready?

December 10th, 2024 | Klemen Vodopivec

Introduction

“When my son screams in pain and no one can tell me why, I feel helpless,” Courtney wrote in her journal. For three years, she watched her son Alex endure unexplained chronic pain that began during the COVID-19 lockdown. She consulted 17 different medical specialists, but each examination ended the same way: none of them could pinpoint the cause.

Desperate for help, Courtney turned to an unexpected ally: an AI language model. She methodically input Alex’s symptoms and MRI results into the system, hoping for any insight that might help her 4-year-old son. The AI’s response would change their lives: tethered cord syndrome. With this newfound insight, she advocated for her son with renewed determination. A neurosurgeon later confirmed the AI’s hypothesis, leading to a successful surgery that finally put Alex on the path to recovery.

Dr. Chatbot is Going Mainstream

Alex’s story isn’t unique – it represents a growing trend where patients are turning to approachable AI tools to make sense of their medical conditions, and stories like Alex’s are increasingly capturing public attention. A recent survey by the nonprofit health policy research organization KFF highlights this change: about one in six adults (17%) now use AI chatbots at least once a month to seek health information and advice. Among adults under 30, this figure rises to one in four (25%).

When mapped onto the innovation adoption curve, these statistics suggest that AI in healthcare has moved beyond the early adopters and is now entering the early majority phase. Patients aren’t just experimenting with AI tools anymore—they are integrating them into their efforts to understand and manage their health. This marks a profound shift in how people access and use healthcare information.

AI for Health Information Seeking Is Entering the Majority Adoption Phase

Why Are Patients Turning to AI?

The rapid pace of generative AI adoption can be difficult to grasp, but it raises a critical question: why are so many patients embracing this technology?

At its core, the appeal stems from patients’ desire to feel in control of their health. The current healthcare system, burdened by limited access and systemic errors, often leaves them feeling powerless. Each year, an estimated 795,000 Americans experience permanent disability or death due to diagnostic mistakes—a sobering reminder of what’s at stake when answers are elusive.

Until now, patients have primarily relied on Google to educate themselves about their medical conditions. But while search engines do provide access to medical information, they often overwhelm users with disconnected facts that they must somehow piece together themselves. Generative AI (GenAI) offers something fundamentally different: it doesn’t just provide access to information—it enables access to clinical thought.

Here are some concrete examples of how patients benefit from this AI-enabled clinical reasoning:

  • 24/7 Availability and Patience: Unlike rushed medical appointments, AI offers round-the-clock availability with no time constraints. Patients can explore their concerns without fear of judgment or worry about wasting time, creating a safe and pressure-free space to learn and understand their conditions.
  • Cross-Disciplinary Insights: Medical specialists excel in their domains but often work in silos, which can leave gaps in addressing complex, multi-disciplinary conditions. AI bridges these gaps by synthesizing information across medical specialties, helping to uncover patterns and connections that might otherwise go unnoticed.
  • Advocacy Made Easier: When patients better understand their conditions, they become more effective advocates for their own care. AI helps them explore their symptoms, prepare focused questions for doctors, and understand complex medical terminology. This knowledge translates into more productive medical appointments and better-informed healthcare decisions.

Tackling Safety Challenges in Patient-Facing GenAI

Whether we like it or not, patients are already using generative AI for their health issues. This raises important questions about the safety, accuracy, and reliability of the information these tools provide—questions that must be addressed. The KFF survey highlights the urgency of these concerns: even among AI users, a majority (56%) lack confidence in the accuracy of health information provided by AI chatbots.

The challenge isn’t simply that AI can provide incorrect or incomplete information—this is a problem patients already face when using search engines like Google. The real concern lies in how AI delivers its answers. Chatbots often respond with an air of authority that can give patients false confidence in their advice. Unlike the fragmented and spotty information found through traditional online searches, AI responses are often persuasive and appear far more complete, even when they aren’t.

Considering these concerns, developers of consumer-facing health GenAI applications must take special precautions to ensure their tools are safe and reliable. A recent viewpoint paper published in The Lancet provides a framework of critical principles to guide responsible development. These principles establish a comprehensive approach to patient safety:

  • Informing, Not Driving, Decisions: Designing AI tools to support health decision-making rather than replace it.
  • Restricting Response Scope: Limiting the range of AI responses to ensure they align with the tool’s intended purpose.
  • Harm Prevention: Incorporating safeguards to prevent harmful advice and avoid misleading users (see the sketch after this list).
  • Using Reliable Medical Sources: Building models based on validated and trustworthy medical data.
  • Rigorous Testing and Validation: Conducting extensive testing both before and after deployment.
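
To make this framework more tangible, here is a minimal, hypothetical sketch of how two of these principles—scope restriction and harm prevention—could be enforced as a triage step that runs before any model call. The phrase lists, the TriageResult dataclass, and triage_query are illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical sketch: enforcing "Restricting Response Scope" and
# "Harm Prevention" as a triage step before any model call.
# The phrase lists and helper names are illustrative assumptions.

from dataclasses import dataclass

# Red-flag phrases that should trigger an emergency referral instead of
# an AI-generated answer (harm prevention).
EMERGENCY_PHRASES = {"chest pain", "can't breathe", "suicidal", "overdose"}

# Topics outside the tool's intended purpose (scope restriction).
OUT_OF_SCOPE_TOPICS = {"change my dosage", "stop my medication", "legal advice"}

@dataclass
class TriageResult:
    allow_model_call: bool
    canned_response: str | None  # shown to the user when the model must not answer

def triage_query(query: str) -> TriageResult:
    """Decide whether a user query may be forwarded to the language model."""
    text = query.lower()
    if any(phrase in text for phrase in EMERGENCY_PHRASES):
        return TriageResult(False, (
            "These symptoms can be serious. Please contact emergency "
            "services or a clinician right away."))
    if any(topic in text for topic in OUT_OF_SCOPE_TOPICS):
        return TriageResult(False, (
            "I can help you understand symptoms, but I can't advise on "
            "this. Please discuss it with your doctor."))
    return TriageResult(True, None)  # safe to hand off to the guarded pipeline

if __name__ == "__main__":
    result = triage_query("I have crushing chest pain and feel faint")
    print(result.canned_response)
```

A production system would replace these keyword lists with trained classifiers, but the control flow stays the same: escalate emergencies, decline out-of-scope requests, and only then let the model answer.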

Our Approach to Building Safe GenAI for Patients

At MedAsk, we are developing a specialized second layer, often referred to as a custom cognitive architecture, on top of a foundation model to deliver safe and effective health information. This architecture includes:

  • Fine-Tuning on Clinical Guidelines: Training the foundation model on data specific to the target use case ensures contextually appropriate responses.
  • Integrating a Symptom-Disease Knowledge Base: A structured, data-driven knowledge base grounds the model’s responses in validated medical information.
  • Rigorous Benchmarking: Both external and internal testing demonstrate that LLM-based approaches already outperform rule-based symptom checkers.
  • Application Logic: Custom prompting and control mechanisms establish appropriate interactions between users and our system.
  • Continuous Monitoring and Transparency: We collect and analyze user feedback to improve our tool. At the same time, we maintain clear disclaimers about its capabilities and limitations.
  • Moderation and Safeguards: Automatic flagging systems for both inputs and outputs help identify and prevent potentially harmful or inappropriate content. A minimal sketch of this end-to-end flow appears below.
MedAsk: A Specialized Second Layer Built on Foundation Models
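
For readers curious how these pieces fit together at runtime, the sketch below traces one plausible request path through such a second layer: input moderation, knowledge-base grounding, a model call, and output moderation, with a standing disclaimer attached. The toy knowledge base, the flagged() check, and the model stub are hypothetical placeholders, not MedAsk's production code:

```python
# Hypothetical sketch of a second-layer request flow: input moderation,
# knowledge-base grounding, model call, output moderation. All names and
# data here are illustrative placeholders.

from typing import Callable

# Toy stand-in for a structured symptom-disease knowledge base.
KNOWLEDGE_BASE: dict[str, list[str]] = {
    "lower back pain": ["muscle strain", "herniated disc", "tethered cord syndrome"],
    "headache": ["tension headache", "migraine", "dehydration"],
}

DISCLAIMER = ("This information supports, but does not replace, "
              "advice from a qualified clinician.")

def retrieve_context(query: str) -> list[str]:
    """Collect conditions linked to any symptom mentioned in the query."""
    hits: list[str] = []
    for symptom, conditions in KNOWLEDGE_BASE.items():
        if symptom in query.lower():
            hits.extend(conditions)
    return hits

def flagged(text: str) -> bool:
    """Placeholder moderation check; a real system would call a trained classifier."""
    return "guaranteed cure" in text.lower()

def answer(query: str, model: Callable[[str], str]) -> str:
    """Run one query through the guarded pipeline."""
    if flagged(query):                 # input moderation
        return "I can't help with that request. " + DISCLAIMER
    context = retrieve_context(query)  # ground the prompt in the knowledge base
    prompt = (f"Using only these validated symptom-condition associations: {context}\n"
              f"Explain, in plain language, possible causes of: {query}")
    draft = model(prompt)              # call the fine-tuned foundation model
    if flagged(draft):                 # output moderation
        return "I couldn't produce a safe answer here. " + DISCLAIMER
    return f"{draft}\n\n{DISCLAIMER}"

if __name__ == "__main__":
    def stub_model(prompt: str) -> str:
        return "Persistent lower back pain in children has several possible causes..."
    print(answer("my son has lower back pain after the lockdown", stub_model))
```

The essential design choice is that the foundation model never sees a raw user query and the user never sees an unchecked model draft; the second layer mediates both directions.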

Conclusion

The jury is still out on GenAI, with the KFF survey revealing that nearly half of the public (49%) remains uncertain about its impact on health information. However, we believe healthcare stands at a pivotal moment, with LLMs offering a groundbreaking opportunity to deliver world-class health information to millions. As with any transformative technology, these advances come with challenges – challenges that we at MedAsk embrace. By combining GenAI with specialized medical knowledge, rigorous research, and a steadfast commitment to putting patients first, we are dedicated to creating a solution that is both safe and reliable, empowering people to take control of their health.