Chasing Real Impact with GenAI in Healthcare:
Part 2 – Clinic Case Study with MedAsk
April 28th, 2025 | Klemen Vodopivec, Nejc Završan, Rok Vodopivec
Introduction
In Part One of this two-part blog series, we explored how generative AI (GenAI) is being adopted by healthcare professionals, highlighted emerging use cases such as digital triage, and shared our insights from interviews with clinicians on how tools like MedAsk can enhance workflows without adding friction.
Now in Part Two, we present results from a three-month case study where MedAsk was deployed in a family practice setting. Through rigorous patient questionnaires and feedback collection, we’ve gained valuable insights into how AI can meaningfully enhance the patient experience while supporting clinical workflows.
Methodology
We conducted a pilot feasibility study at a single family clinic in Slovenia, running from January to March 2025. The study was modeled after a previous symptom checker study but focused specifically on evaluating patient satisfaction with MedAsk and its perceived impact in a clinical setting. Secondary objectives included assessing willingness for future use and identifying factors affecting the patient experience.
The participating general practitioner (GP) office was provided with a dedicated laptop on which MedAsk was accessible. Patients in the waiting room who were willing to participate were invited to use the laptop. Since the same device was shared among multiple patients, strict hygiene protocols and infection prevention strategies were followed between uses.
When patients opened the MedAsk application, they were first presented with a detailed information section explaining the study. At the end of this section, they were asked to provide informed consent. If a patient chose ‘I do not agree’, MedAsk did not proceed with the assessment.
After providing consent, patients completed the MedAsk symptom assessment while waiting for their appointment. Medical staff were available for technical assistance when needed. To ensure consistent testing conditions, MedAsk use was limited to the clinic setting only, and each patient participated only once.
Once a patient completed their assessment, their responses were securely transmitted to the GP through a purpose-built online dashboard, which displayed the information as a digital anamnestic sheet. The GP reviewed this information before and during the patient consultation. Access to patient data was restricted exclusively to the treating physician to maintain privacy and data security.
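To make this handoff more concrete, below is a minimal sketch of how a completed assessment could be packaged into a digital anamnestic sheet before being sent to the GP-only dashboard. The `AnamnesticSheet` structure, the `build_sheet` helper, and all field names are illustrative assumptions for this post, not MedAsk's actual data model or implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical record for illustration only; MedAsk's real data model is not public.
@dataclass
class AnamnesticSheet:
    patient_reference: str   # pseudonymous ID, visible only to the treating GP
    chief_complaint: str     # main symptom in the patient's own words
    structured_answers: List[dict] = field(default_factory=list)  # question/answer pairs
    submitted_at: str = ""

def build_sheet(chief_complaint: str, answers: List[dict], patient_reference: str) -> AnamnesticSheet:
    """Assemble the digital anamnestic sheet once the patient finishes the assessment."""
    return AnamnesticSheet(
        patient_reference=patient_reference,
        chief_complaint=chief_complaint,
        structured_answers=answers,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: a finished assessment is packaged before being transmitted
# (over an encrypted channel) to the dashboard restricted to the treating GP.
sheet = build_sheet(
    chief_complaint="cough and shortness of breath for three days",
    answers=[{"question": "Do you have a fever?", "answer": "Yes, up to 38.2 °C"}],
    patient_reference="patient-0042",
)
print(sheet)
```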
Following their medical visit, patients received an anonymous paper questionnaire to evaluate their experience and satisfaction with MedAsk. The questionnaire also collected demographic information and feedback on potential future use cases. A copy of the full questionnaire (in the original Slovenian) is available in the Appendix.
The study was not registered as a clinical trial, as it was designed solely as a usability study and did not involve any direct medical interventions on human participants.
Results
A total of 28 patients participated in the study. Of these, 20 patients completed the follow-up questionnaire and were included in the final analysis.
Patient Demographics and Health Data
Among the participants, 55% were male and 45% female. The average age was 35.4 years, ranging from 26 to 52 years.
When asked how they typically seek health information (multiple answers were possible), the majority of patients (16 respondents) reported that they consult their doctor. This was followed by 8 patients who use general web searches (e.g., Google), 7 who visit health-related websites, and 4 who ask family members. Interestingly, two respondents who selected “other” specified that they use ChatGPT for health information.
Most participants reported positive health status, with 35% describing it as very good, 40% as good, and 25% as neutral. The primary reasons for clinic visits during the study period were pulmonary issues and orthopedic concerns (4 patients each), followed by dermatological conditions (3 patients).
Following their consultations, treatment recommendations varied: 7 patients received self-care advice, 7 were directed toward further diagnostic testing, 6 received specialist referrals (4 of these being fast-track referrals), and 4 were prescribed medication.
Patient Experience with MedAsk
Prior Experience and General Satisfaction
The vast majority (85%) of patients had never used a tool similar to MedAsk before, while 5% had used such a tool once and 10% multiple times.
Overall satisfaction with MedAsk was notably high, with 75% of patients reporting being either “very satisfied” or “rather satisfied” with their experience (Figure 1). The most frequently cited reasons for satisfaction included:
- Clear and understandable questions
- Fast and efficient process
- Relevant information about their symptoms
Instances of dissatisfaction were less common and were primarily related to a preference for direct interaction with a real doctor.

Impact on Clinical Visit
An impressive 80% of patients reported that MedAsk had a “rather positive” or “very positive” impact on their medical visit quality (Figure 2). Those noting a positive impact primarily attributed this to better preparation for their consultation and more streamlined discussions with their doctor.
Nearly all patients (94%) felt their doctor effectively incorporated the information from MedAsk during their consultation. Regarding visit duration, most patients (78%) reported no effect on appointment length, while 11% felt their visit was longer than expected and 11% shorter.

Future Use Intentions
When asked about potential home use, 60% of patients expressed interest in using MedAsk for initial assessment of health concerns (Figure 3). Key reasons included its efficiency, access to quality health information, and stress reduction before doctor visits. The 25% who would not use it at home primarily cited a preference for direct contact with a physician.

Similarly, 60% indicated they would likely use MedAsk for appointment booking in the future, replacing current methods like email or phone calls (Figure 4). The main reasons for interest in using MedAsk for booking appointments were that it felt fast and efficient, helped with preparation for the visit, and provided initial information ahead of time. Among the 15% of patients who were not likely to use MedAsk for this purpose, the most common reasons included satisfaction with the current system, a preference for direct contact with the doctor, and skepticism toward AI.

Recommendation Potential
Just over half (53%) of patients said they would recommend MedAsk to others (Figure 5), citing efficiency, ease of use, and health information quality. The 16% who would not recommend it primarily felt the information was too general or impersonal compared to human interaction.

Conclusion
We believe that understanding patient experiences is essential for building a product that truly serves their needs, which is why we conducted a usability study with patient questionnaires to gain insights into MedAsk’s utility in a real-world clinical setting.
The study shows that although most participants were unfamiliar with tools like MedAsk, their perception was overwhelmingly positive. The results also surpassed those of the symptom checkers evaluated in the study on which this research was modeled, suggesting that, beyond superior diagnostic and triage accuracy, MedAsk also offers a better user experience than rule-based symptom checkers.
Particularly encouraging is the high proportion of patients who reported a positive impact on their medical visit and expressed high overall satisfaction, indicating that, from their perspective, integration into healthcare systems would be beneficial. Moreover, although participants used MedAsk in a clinical setting, more than half expressed a willingness to use it at home for health information or appointment booking, highlighting significant potential for home use. Patients primarily valued MedAsk for its speed, clear questioning, and quality health information.
While these findings are promising, we acknowledge the limitations of our study. The small sample size of 20 respondents limits the generalizability of our conclusions. Additionally, budget and time constraints focused our research on usability rather than clinical outcomes such as diagnostic and triage accuracy compared to physician assessment. These represent important directions for future research.
As we continue refining MedAsk, these patient insights will guide our development process, ensuring that technology enhances rather than replaces the essential human connection in healthcare delivery.