Achieving scale in health IT: The power of terminology-based normalization solutions

When looking for ways to improve and optimize ETL or ELT, data and analytics vendors may consider terminology-based healthcare data standardization.
[Supporting image: diagram by IMO Chief Strategy Officer Dale Sanders]

How many data platform vendors do you know that have managed to scale their solution across the healthcare industry?

Scale = Affordable x Profitable x Use Case Adaptable


Those of us in the trenches of healthcare data know that the majority of a data analyst’s time is spent on some flavor of Extract, Transform, and Load (ETL) or Extract, Load, and Transform (ELT) – moving data from one data model and terminology into another data model and terminology.

Unfortunately, data integration is laborious and lossy. If you’re lucky, you can make it work inside a single enterprise. But the inefficiencies grow exponentially when you try to scale integration across multiple enterprises, let alone across an entire industry. The workflow simply does not scale.
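To make that scaling problem concrete, here is a minimal sketch – with hypothetical local codes and hand-built mapping tables, not IMO's actual data or tooling – of what cross-enterprise integration tends to look like: every source system arrives with its own local vocabulary, so every new enterprise means another crosswalk to build and maintain.

```python
# Hypothetical example: each hospital's lab feed uses its own local code
# for the same concept, so each feed needs its own hand-curated crosswalk.
# Codes and table contents are illustrative, not real IMO mappings.

hospital_a_feed = [{"code": "GLU-FAST", "desc": "Glucose, fasting"}]
hospital_b_feed = [{"code": "1558-6",   "desc": "Fasting glucose SerPl"}]
hospital_c_feed = [{"code": "LAB0042",  "desc": "FBS"}]

# One crosswalk per source system -- the maintenance burden grows with
# every enterprise you onboard, which is why the workflow does not scale.
crosswalks = {
    "hospital_a": {"GLU-FAST": "1558-6"},   # map to LOINC 1558-6
    "hospital_b": {"1558-6": "1558-6"},     # already LOINC, pass through
    "hospital_c": {"LAB0042": "1558-6"},    # local code, needs curation
}

def transform(source: str, records: list[dict]) -> list[dict]:
    """Rewrite each record's local code using that source's crosswalk."""
    mapping = crosswalks[source]
    return [{**r, "code": mapping.get(r["code"], "UNMAPPED")} for r in records]

print(transform("hospital_c", hospital_c_feed))
# [{'code': '1558-6', 'desc': 'FBS'}]
```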

At IMO we’re building a next-generation normalization engine called IMO Precision Normalize to improve ELT and ETL cycles – making them dramatically better, faster, and cheaper than anything data engineers and analysts have ever had.

However, there’s only so much blood IMO – or anyone – can squeeze from optimizing the ETL or ELT stone.

With IMO Precision Normalize, we are cutting the time data analysts spend on ELT and ETL and improving the output. For example, we've normalized over 500 million disparate terms in the last two months with no human intervention. But that's still not good enough. We have to move upstream – to the left side of the diagram above – and standardize the foundational terminology that every healthcare system uses to describe a patient, their treatment, and their outcomes. With our IMO Core product line, we can fix that upstream side, too.
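As a conceptual illustration of what that normalization step does – not IMO Precision Normalize's actual interface, and with a deliberately tiny lookup table – the idea is to collapse many disparate source terms onto a single standard concept:

```python
# Conceptual sketch of terminology normalization: many disparate source
# terms resolve to one standard concept. The lookup table is illustrative;
# a production engine handles far messier input than exact string matches.

STANDARD_CONCEPTS = {
    # SNOMED CT 22298006 | Myocardial infarction (disorder)
    "myocardial infarction":  ("SNOMED CT", "22298006"),
    "heart attack":           ("SNOMED CT", "22298006"),
    "infarction, myocardial": ("SNOMED CT", "22298006"),
}

def normalize(term: str) -> tuple[str, str] | None:
    """Return (code system, code) for a source term, or None if unmapped."""
    return STANDARD_CONCEPTS.get(term.strip().lower())

for raw in ["Heart attack", "Myocardial infarction", "infarction, myocardial"]:
    print(raw, "->", normalize(raw))
# Heart attack -> ('SNOMED CT', '22298006')
# Myocardial infarction -> ('SNOMED CT', '22298006')
# infarction, myocardial -> ('SNOMED CT', '22298006')
```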

Fast Healthcare Interoperability Resources (FHIR) and the Observational Medical Outcomes Partnership (OMOP) give us standard data models. But a model only defines the structure; what matters more is the terminology – the terms we put in those models.
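To illustrate the point – with a hypothetical patient reference and a SNOMED CT code chosen for the example – the same FHIR Condition resource can carry either an uncoded free-text term or a properly coded concept. The structure is identical; only the second is reliably computable across systems.

```python
# Two syntactically valid FHIR Condition resources, shown as Python dicts.
# The patient reference is hypothetical. Both fit the same model, but only
# the coded version carries terminology downstream analytics can aggregate on.

condition_text_only = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example"},
    "code": {"text": "h/o heart attck"},  # free text, typo and all
}

condition_coded = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example"},
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "22298006",
            "display": "Myocardial infarction",
        }],
        "text": "h/o heart attck",  # original wording kept alongside the code
    },
}
```

The difference is small on the page, but it is the difference between data you can query across enterprises and data you can only read.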

Generative Large Language Models (LLMs) won't solve this. Healthcare is the classic case for fine-tuned, domain-specific edge models: clinical data carries its own language and dialects that are not well represented in the corpora general-purpose LLMs are trained on. At best, LLMs will generate variations of what we've already been doing, but what we've been doing is not working.

Learn more about how IMO Precision Normalize standardizes data here.

Editor’s note: The supporting image to this article was created by IMO Chief Strategy Officer Dale Sanders.
