Artificial intelligence (AI) is rapidly becoming a practical tool that can reshape how clinicians document care. In a recent HIMSS webinar, "Preserving Clinical Intent with Powerful Healthcare AI," Joseph Tutera, CEO of CarePilot, and Steven H. Rube, MD, Chief Medical Officer at IMO Health, explored both the opportunities and challenges of deploying AI at the point of care.
At its core, the conversation returned to one idea: clinical intent must be preserved. Documentation that loses specificity creates downstream risks, from coding errors and missed reimbursements to compromised patient safety.
Why specificity matters in clinical documentation
Rube emphasized that “the most accurate way to capture clinical intent is going to be at the point of care… After that, it’s a game of operator – it loses integrity as it goes down the line.” He pointed to diabetes as an example: a vague diagnosis can disrupt care coordination, compromise patient safety, and significantly increase costs for health systems in risk-adjusted payment models.
Tutera added that specificity is a matter of both "patient safety and financial accuracy," but acknowledged the burden it places on clinicians. AI's promise, he noted, lies in reducing that cognitive load by shifting clinicians from being the "copywriters" of documentation to the "copy editors" of AI-generated notes.
Building (and protecting) clinician trust
Accuracy, however, is non-negotiable. As Rube explained, “Our gut instinct is that as we see hallucinations and errors, we begin to be very reluctant… we can’t do the 80/20 rule in medicine. We shoot for 100% accuracy.”
Tutera agreed, underscoring that trust is cumulative: “A few sloppy mistakes will outweigh a lot of wins.” Transparency about system limitations, straightforward onboarding, and highlighting areas of low confidence are all crucial for adoption.
Reducing errors in AI outputs
One major theme was the importance of grounding AI in trusted clinical terminology. Tutera explained that hallucinations happen when models lack context, but anchoring them in IMO Health’s clinical terminology can reduce those risks: “Those hallucinations happen by not guiding the system with either clear enough instructions or grounding them in enough context.”
Rube presented a real-world scenario: “If I say, ‘How’s your diabetes?’ the word itself is meaningless in medicine… But if the system has context from the chart, it can capture the full diagnosis and not hallucinate.”
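For readers curious what "grounding" can look like in practice, here is a minimal, purely illustrative sketch of the idea: before a model drafts a note, the prompt is assembled from the chart's coded problem list so a vague mention of "diabetes" resolves to the patient's specific, documented diagnosis. The function, class, and data names below are hypothetical and do not represent CarePilot's or IMO Health's actual products or APIs.

```python
# Illustrative sketch only: grounding a note-drafting prompt in coded chart context
# so a vague term like "diabetes" resolves to the patient's documented diagnosis.
# All names and data are hypothetical, not any vendor's actual implementation.

from dataclasses import dataclass


@dataclass
class Problem:
    code: str          # e.g., an ICD-10-CM code from the chart's problem list
    description: str   # the coded, specific diagnosis text


def build_grounded_prompt(transcript: str, problem_list: list[Problem]) -> str:
    """Combine the visit transcript with the chart's coded problem list so the
    model drafts against specific diagnoses instead of guessing."""
    context_lines = "\n".join(f"- {p.code}: {p.description}" for p in problem_list)
    return (
        "You are drafting a clinical note. Use ONLY the diagnoses listed below; "
        "if the transcript is ambiguous, keep the coded specificity.\n"
        f"Active problem list:\n{context_lines}\n\n"
        f"Visit transcript:\n{transcript}\n"
    )


if __name__ == "__main__":
    problems = [
        Problem("E11.22", "Type 2 diabetes mellitus with diabetic chronic kidney disease")
    ]
    print(build_grounded_prompt("Clinician: How's your diabetes? ...", problems))
```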
Lightening the load for clinicians
Despite limitations, both speakers stressed the upside. Tutera reported that CarePilot saves providers an average of 78 minutes per day in documentation. Longer, more complete notes not only meet regulatory and payer requirements but also restore clinicians’ ability to focus on patients.
Rube shared an anecdote from his own experience as a patient: for the first time, his physician was able to go home without a stack of notes to finish. "When ambient works," Rube said, "the impact on lifestyle is dramatic."
The future of clinical documentation and AI
Both leaders agreed we are at an inflection point. For AI to deliver lasting value, it must fit seamlessly into workflows, integrate tightly with electronic health records (EHRs), and adapt to clinicians’ styles rather than forcing radical changes.
As Rube summarized, “These emerging technologies must fit in seamlessly to what our goal is here – quality and efficient healthcare… If it doesn’t make sense from a workflow point of view, no one’s going to use it.”