AI is transforming healthcare, but its effectiveness hinges on the quality of data feeding it. At Hackensack Meridian Health, Chief AI Officer Sameer Sethi is ensuring that AI-driven advancements are rooted in strong data governance, responsible deployment, and human oversight.
“AI isn’t magic,” Sethi said. “It works off of information that exists—about a topic, a person, a specialty. Without the right data, you will produce bad AI.”
Health systems across the country are exploring ways to leverage AI for clinical and operational efficiencies, but not all implementations are created equal. At Hackensack Meridian Health, AI is integrated into a broader data strategy designed to enhance decision-making, reduce administrative burden, and improve patient outcomes.
Four Key Areas
Hackensack Meridian Health’s AI strategy spans four key areas: data management, insights generation, AI applications, and robotic process automation (RPA). The health system ingests data from more than 50 sources, normalizing it to create a structured and usable foundation for analytics and AI models.
Data normalization is a crucial first step in ensuring AI reliability. Without clean and structured data, even the most sophisticated AI models will generate flawed or misleading outputs. This process, according to Sethi, is one of the most overlooked yet essential components of AI deployment in healthcare.
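To make the idea concrete, here is a minimal sketch of what mapping two differently shaped patient feeds into one canonical schema might look like. The source names, field mappings, and schema below are hypothetical illustrations, not Hackensack Meridian Health's actual pipeline.

```python
from dataclasses import dataclass
from datetime import date
from typing import Any

@dataclass
class Patient:
    """Canonical patient record used downstream by analytics and AI models."""
    mrn: str
    last_name: str
    first_name: str
    birth_date: date
    sex: str  # normalized to "M", "F", or "U"

# Hypothetical per-source field mappings; real feeds (HL7, FHIR, flat files)
# would each need their own parser.
FIELD_MAPS = {
    "ehr_a": {"mrn": "patient_id", "last_name": "surname",
              "first_name": "given_name", "birth_date": "dob", "sex": "gender"},
    "lab_b": {"mrn": "MRN", "last_name": "LNAME",
              "first_name": "FNAME", "birth_date": "BIRTHDT", "sex": "SEX"},
}

SEX_CODES = {"male": "M", "m": "M", "female": "F", "f": "F"}

def normalize(source: str, raw: dict[str, Any]) -> Patient:
    """Map one raw record into the canonical schema, cleaning values as we go."""
    fields = FIELD_MAPS[source]
    return Patient(
        mrn=str(raw[fields["mrn"]]).strip(),
        last_name=raw[fields["last_name"]].strip().title(),
        first_name=raw[fields["first_name"]].strip().title(),
        birth_date=date.fromisoformat(raw[fields["birth_date"]]),
        sex=SEX_CODES.get(str(raw[fields["sex"]]).lower(), "U"),
    )

# Two differently shaped records from two sources normalize to one schema.
print(normalize("ehr_a", {"patient_id": "123", "surname": "smith",
                          "given_name": "ann", "dob": "1980-04-02",
                          "gender": "Female"}))
print(normalize("lab_b", {"MRN": 123, "LNAME": "SMITH", "FNAME": "ANN",
                          "BIRTHDT": "1980-04-02", "SEX": "F"}))
```

Multiplied across 50-plus sources, this kind of mapping is where most of the unglamorous work lives, which is why Sethi calls it one of the most overlooked components of AI deployment.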
“I have been a strong proponent of keeping data and analytics under the AI portfolio,” he said. “If you separate them, you create barriers and slow things down.”
Beyond AI, RPA plays a significant role in automating repetitive tasks that do not require human intervention. This technology allows Hackensack to streamline administrative workflows and free up staff for higher-value work.
RPA was put to the test during the COVID-19 pandemic when Hackensack needed to submit thousands of FEMA claims in a short timeframe. The traditional approach would have taken months, but by leveraging automation, the health system processed all claims in less than 24 hours.
“We wrote the code, pressed enter, and by the next morning, it was done,” Sethi said. “That’s the power of automation when applied to the right problem.”
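The interview does not describe the tooling behind that run, but the underlying pattern is straightforward scripted automation: read a batch of records, submit each one to an endpoint, and collect failures for retry. The sketch below assumes a hypothetical claims API and CSV layout purely for illustration.

```python
import csv
import json
import urllib.request

# Hypothetical endpoint; neither Hackensack's RPA platform nor FEMA's
# actual submission interface is described in the article.
SUBMIT_URL = "https://claims.example.com/api/submit"

def submit_claim(claim: dict) -> bool:
    """POST one claim as JSON; return True on a 2xx response."""
    req = urllib.request.Request(
        SUBMIT_URL,
        data=json.dumps(claim).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False  # a real workflow would log this and queue a retry

def run_batch(path: str) -> None:
    """Read claims from a CSV export and submit them one by one."""
    with open(path, newline="") as f:
        claims = list(csv.DictReader(f))
    failures = [c for c in claims if not submit_claim(c)]
    print(f"{len(claims) - len(failures)} submitted, {len(failures)} to retry")

if __name__ == "__main__":
    run_batch("fema_claims.csv")
```

Production RPA platforms layer credential management, screen-level automation, and audit logging on top of this basic loop, but the overnight FEMA run illustrates the core economics: code the task once, then let it run unattended.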
While RPA improves operational efficiency, AI has the potential to revolutionize clinical decision-making. However, one of the greatest challenges in deploying AI for clinical use is ensuring that models provide accurate and trustworthy recommendations.
Addressing AI Hallucinations and Human Oversight
One of the most pressing concerns surrounding AI adoption in healthcare is hallucination—the tendency of generative AI models to fabricate information when faced with gaps in their training data. These inaccuracies can pose significant risks in clinical settings, where precision and reliability are paramount.
“Generative AI has a problem with saying, ‘I don’t know,’” Sethi explained. “When that happens, it makes things up based on what it knows, and sometimes those things aren’t factual.”
To mitigate hallucinations, Hackensack employs techniques such as fine-tuning AI models and retrieval-augmented generation (RAG), which grounds model responses in trusted data sources retrieved at query time rather than relying on the model's training data alone. By refining how AI models generate responses, the health system can reduce, though not completely eliminate, hallucinations.
“The way these large language models are built means that hallucinations will always be a factor,” Sethi said. “There are ways to reduce them significantly, but you can’t remove them entirely.”
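As a rough illustration of the RAG pattern Sethi describes, the sketch below retrieves the most relevant passage from a trusted store and builds a prompt instructing the model to answer only from that context, or to say "I don't know." The documents, retriever, and prompt wording are invented for the example; a production system would use an embedding model and a vector index rather than simple word overlap.

```python
# Minimal retrieval-augmented generation (RAG) loop: retrieve trusted
# passages first, then constrain the model to answer only from them.
TRUSTED_DOCS = [
    "Policy 12: sepsis screening uses the qSOFA criteria on admission.",
    "Policy 47: insulin drips require hourly glucose checks.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question.
    A real system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    return sorted(
        TRUSTED_DOCS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(question: str) -> str:
    """Grounding the prompt in retrieved context is what curbs hallucination:
    the model is told to admit ignorance rather than invent an answer."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The grounded prompt would then be sent to the LLM of choice.
print(build_grounded_prompt("How often should glucose be checked on an insulin drip?"))
```

Grounding narrows what the model can plausibly assert, but as Sethi notes, it lowers the hallucination rate rather than driving it to zero.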
This challenge has fueled ongoing debate about whether AI-driven clinical tools should always have a human in the loop. While human oversight is often necessary to ensure accuracy, it can also slow down processes and limit scalability.