As AI gains traction in healthcare, its role must be carefully integrated to support clinicians without eroding patient trust. At UCI Health, CMIO and VP of Clinical Informatics Deepti Pandita, MD, believes AI should function as an “integral member of the care team” rather than a disruptive or opaque force.
“Any future-thinking health system should see AI as an enhancement—not a replacement—for clinical decision-making,” Pandita said. “We must preserve the core patient-clinician relationship while using AI to improve efficiency and outcomes.”
Balancing Innovation with Trust
While AI promises to reduce administrative burdens and physician burnout, its implementation requires careful governance. The challenge, Pandita noted, lies in ensuring AI adoption does not diminish trust among clinicians or patients. “The entire healthcare ecosystem depends on trust,” she emphasized. “Patients are placing their faith in their providers, and if we introduce AI without transparency, that trust can be eroded.”
To maintain confidence, patients should understand AI’s role in their care. Health systems must communicate how AI-driven tools assist physicians rather than replace clinical judgment. “Patients need to know AI is not making decisions independently—clinicians are always in the loop,” she said. “It’s not just about deploying AI; it’s about how we communicate its value and limitations.”
Transparency and education must go hand in hand. If patients perceive AI as a faceless decision-maker, they may resist its use. But when AI is positioned as an assistive tool that enhances care quality, adoption is more likely to gain broad acceptance.
The Complexity of Opt-Out Scenarios
As AI becomes embedded in healthcare operations, the question arises: Can patients opt out? Pandita acknowledged the operational challenges of allowing patients to decline AI use in their care. “There will come a time when AI is a fundamental part of everything we do,” she said. “If patients opt out, where will they go? Every health system will be using AI in some capacity.”
Even within a single organization, fully tracking AI usage can be difficult. “AI is already assisting with back-office functions like billing and coding. Clinicians and patients may not even realize where AI is being used,” she noted.
Furthermore, health systems must consider the implications of offering AI opt-out options. “If a patient declines AI, does that mean their claims won’t be processed using AI-assisted coding tools? It quickly becomes impractical,” Pandita explained.
Rather than broad opt-out policies, she suggests a more nuanced approach where patients are informed about AI’s role in specific aspects of their care. “For example, we already obtain consent for ambient documentation tools, and I have yet to encounter a patient refusing them,” she noted.
Addressing IT-Driven Burnout
While EMRs have historically been linked to clinician burnout, Pandita noted that other factors—such as workflow inefficiencies and administrative overload—are now the primary contributors. “It’s not just the EMR anymore,” she said. “Lack of autonomy, chaotic work environments, and schedule inflexibility are driving burnout. AI can help alleviate some of this burden, but only if implemented thoughtfully.”
Customization, she argued, is essential for effective technology adoption. “You can’t take an out-of-the-box solution and expect it to work,” she explained. “User-centered design is critical. Technology should fit workflows, not the other way around.”
At UCI Health, ensuring AI aligns with clinical workflows is a top priority. “Some processes simply don’t have a suitable IT solution yet,” she said. “Rather than forcing an ill-fitting technology, we sometimes need to keep certain workflows manual until the right tool ...