
Providence’s Shah Focused on Uptake of Digital Tools; Wendt’s Study Indicates Progress

Author: Anthony Guerra
Published: Wed 03 Sep 2025
Episode Link: https://healthsystemcio.com/2025/09/03/providences-shah-focused-on-uptake-of-digital-tools-wendts-study-indicates-progress/

At Providence, the conversation about digital health has shifted from turning systems on to making sure clinicians actually use them. That distinction is central to the collaboration between Maulin Shah, MD, CMIO, Providence, and Staci Wendt, PhD, Director, Providence Health Research Accelerator: one side designs and scales rollouts; the other studies whether those rollouts work in the real world—and why. Together, the teams are building a playbook that privileges measured use over mere implementation at one of the country’s largest health systems, which spans 52 hospitals and more than 1,000 clinics.

Providence’s informatics organization aims to reduce burden and improve care through targeted, evidence-based deployments of clinical technology. The emphasis, leaders said, is on selecting interventions that can scale across a system the size of Providence without linear growth in resources, and on pairing each deployment with a plan to drive uptake.

Shah offered a simple analogy: “Having a Ferrari sitting in the garage is not really helpful,” he said, adding that high-end tools left idle create no value.

That is where the Providence Health Research Accelerator (HRA) fits. The HRA partners with informatics and operational leaders to bring research rigor—randomization where feasible, longitudinal surveys, and objective EHR measures—to questions of clinician burden, patient experience and workflow design. The shared aim: translate promising tools into sustained use, then refine playbooks based on what the data say most improves adoption at scale.



What the DAX Study Measured—and Found

Before expanding ambient documentation, Providence ran a randomized, staged rollout of DAX Copilot. Clinicians assigned to early access received training and used the tool for roughly six months; all participants completed monthly surveys on burnout and frustration with documentation, while objective EHR metadata tracked time in notes after visits and so-called “pajama time.” In aggregate, the analysis associated the tool with less after-hours work and lower self-reported frustration. As Wendt put it: “We found about a two and a half hour a week drop in, or difference in, the pajama time for our clinicians who were using [DAX] compared to our clinicians who were not.”

The research design intentionally paired subjective and objective measures. Teams collected usage patterns, looked for subgroups most likely to benefit (for example, those with heavy documentation demand), and triangulated findings across data sources. The method also surfaced confounders that can mask impact—such as staffing churn in a clinic—which helped prevent false negatives and over-generalization from a single site.
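
As a rough illustration of the arithmetic behind the pajama-time finding, the sketch below compares average weekly after-hours documentation time between an early-access arm and a control arm. The data, column names and numbers are hypothetical and assumed purely for illustration; this is not Providence's actual dataset or analysis pipeline.

# Illustrative sketch only; hypothetical data and column names.
import pandas as pd

# Weekly after-hours documentation minutes per clinician, by study arm.
records = pd.DataFrame(
    {
        "clinician_id": [1, 2, 3, 4, 5, 6],
        "group": ["dax", "dax", "dax", "control", "control", "control"],
        "pajama_minutes_week": [180, 210, 165, 330, 360, 345],
    }
)

# Average weekly "pajama time" per arm, then the between-arm gap in hours
# (the study reported roughly a 2.5-hour-per-week difference favoring DAX users).
arm_means = records.groupby("group")["pajama_minutes_week"].mean()
difference_hours = (arm_means["control"] - arm_means["dax"]) / 60

print(arm_means)
print(f"DAX arm logged {difference_hours:.1f} fewer after-hours hours per week")

In practice, such a comparison would be adjusted for specialty, visit volume and baseline documentation load, which is why the study paired the EHR metadata with monthly surveys and subgroup analyses rather than relying on a single average.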

Turning Evidence into Uptake

Operationalizing those findings falls to Shah’s team of clinical informaticists—about 500 strong—who own training, at-the-elbow support, coaching and success metrics across service lines. The team uses usage data, local medical leadership input, and “heat maps” to prioritize where one-to-one help will yield the most benefit. That capacity is pivotal because the organization still sees predictable adoption curves: early enthusiasts, a hesitant middle that needs coaching, and a trailing cohort that may never fully engage.

The coaching model reflects a pragmatic view of complexity. “It’s not plug and play… it’s not like pick it up and you’re just going to be an awesome expert,” Shah said, arguing that power users get more value, but they get there through targeted support—not assumption. New features that increase capability can also raise cognitive load, which in turn argues for hands-on enablement beyond a quick start guide.

Measurement is the key to improvement. Leaders cautioned against equating “users” with “impact”; instead, teams should watch time-in-documentation, satisfaction and other operational KPIs, and distinguish monitoring (“How are we doing?”) from formal evaluation of whether a tool actually changed those outcomes.
