Generative AI and ChatGPT: Healthcare’s new wave

Experimenting with NLP in healthcare, specifically generative AI tools, has its fair share of risks and rewards. IMO’s new white paper explains.

The healthcare and biopharmaceutical sectors are on the brink of a transformation, driven by the advent of generative artificial intelligence (AI) models like ChatGPT.  

But what exactly could that transformation look like?  

In our recent webinar, ChatGPT and Generative AI: Paradigm Shift in Healthcare and Biopharma Innovations, industry experts explored the practical applications of ChatGPT in healthcare and how AI can be used for improved data analysis and decision making. That session was so well received that we decided to expand it into a white paper.  

For a closer look at the new opportunities presented by AI technologies like NLP in healthcare, and at the challenges that may arise as they are increasingly integrated into the health IT ecosystem, check out our latest white paper, ChatGPT and generative AI in healthcare. (Not quite ready to make that six-page commitment? No worries. Keep scrolling for an excerpt below.)

ChatGPT and the NLP development cycle

ChatGPT and generative AI dramatically change the approach to NLP. Historically, different methods were required for different NLP tasks, such as named entity recognition, sentiment analysis, and text summarization. Each task also required its own training, testing, and validation data, as well as a separate model. With ChatGPT, one model fits all, as the sketch below illustrates.
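To make the “one model fits all” idea concrete, here is a minimal sketch of a single generative model handling three different NLP tasks through prompting alone. It assumes the OpenAI Python client (openai>=1.0) and an API key in your environment; the model name, clinical note, and prompts are illustrative, not drawn from the white paper.

```python
# One generative model handles three classic NLP tasks, switched by prompt alone.
# Assumes `pip install openai` (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

note = "Patient reports chest pain radiating to the left arm; started on aspirin."

# Illustrative prompts for tasks that once required separate, task-specific models.
tasks = {
    "named entity recognition": f"List the symptoms and medications mentioned in: {note}",
    "sentiment analysis": f"Classify the tone of this note as positive, negative, or neutral: {note}",
    "text summarization": f"Summarize in one sentence: {note}",
}

for name, prompt in tasks.items():
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```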

In addition, ChatGPT reduces the time-consuming, labor-intensive NLP step of fine-tuning. A typical NLP task starts with an information model, which defines the elements one wants to extract from the textual data. Developing the model involves data annotation, training of the machine learning or deep learning algorithm, and post-processing to add rules and dictionaries on top of the machine learning results. A domain expert then evaluates the algorithm’s performance by checking the results.

Fine-tuning is an iterative process: each of these steps is repeated several times until satisfactory performance and results are achieved.
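For contrast, the traditional cycle just described can be sketched as a loop. Every helper below is a hypothetical stand-in for a team’s real annotation, training, and review tooling; only the shape of the iteration matters here.

```python
# Sketch of the iterative fine-tuning cycle described above. Every helper is a
# hypothetical stand-in for real annotation, training, and QA tooling.

TARGET_SCORE = 0.90  # illustrative bar for "satisfactory performance"

def annotate(texts):
    """Label the data per the information model (stand-in for human annotation)."""
    return [(text, []) for text in texts]

def train(labeled_data):
    """Train the machine learning / deep learning algorithm (stand-in)."""
    return {"trained_on": len(labeled_data)}

def post_process(model):
    """Add rules and dictionaries on top of the machine learning results (stand-in)."""
    model["rules_applied"] = True
    return model

def expert_review(model):
    """Domain expert checks the results and scores performance (stand-in)."""
    return 0.95  # placeholder score so the sketch terminates

def fine_tune_cycle(raw_texts):
    """Repeat annotate -> train -> post-process -> review until performance suffices."""
    while True:
        model = post_process(train(annotate(raw_texts)))
        if expert_review(model) >= TARGET_SCORE:
            return model

print(fine_tune_cycle(["Patient denies fever.", "Started on metformin 500 mg."]))
```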

Other NLP practices shorten the development cycle, with varying impacts on the results. ChatGPT offers different levels of “shot learning,” which dictate the number of examples provided to the model. With few-shot learning, one gives the model just two or three examples, far fewer than the iterative process of fine-tuning requires. With one-shot learning, the model is given a single example. And with zero-shot learning, one simply provides a plain-English description of the task, without any examples, and the model executes the prediction automatically. These approaches are faster and make reasonably good predictions, but they typically can’t match the performance achieved with fine-tuning.
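The difference between these shot levels is easiest to see in the prompts themselves. The sketch below assembles zero-, one-, and few-shot prompts for a simple medication-extraction task; the task, examples, and wording are illustrative assumptions, not taken from the webinar or white paper.

```python
# Zero-, one-, and few-shot prompts for the same task: the only difference is
# how many worked examples precede the actual input. Task and examples are illustrative.

TASK = "Extract the medication names from the sentence."
EXAMPLES = [
    ("Patient started on lisinopril 10 mg daily.", "lisinopril"),
    ("Discontinue metformin; begin insulin glargine.", "metformin, insulin glargine"),
]
QUERY = "She was given aspirin and atorvastatin at discharge."

def build_prompt(n_shots: int) -> str:
    """Assemble a prompt with n_shots worked examples (0 = plain-English task only)."""
    parts = [TASK]
    for text, answer in EXAMPLES[:n_shots]:
        parts.append(f"Input: {text}\nOutput: {answer}")
    parts.append(f"Input: {QUERY}\nOutput:")
    return "\n\n".join(parts)

for n, label in [(0, "zero-shot"), (1, "one-shot"), (2, "few-shot")]:
    print(f"=== {label} ===")
    print(build_prompt(n))
    print()
```

Any of these prompts could then be sent to a generative model as in the earlier sketch; only the amount of in-prompt guidance changes.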

NLP practitioners always strive for few-shot or zero-shot learning because it can dramatically shorten the NLP development cycle. However, for medical writing, information retrieval, and healthcare NLP applications that involve a high degree of clinical and biomedical nuance, fine-tuning and the added effort of human intervention and expertise are necessary to ensure the process is steered correctly.

To learn more about ChatGPT, NLP, and their evolving role in healthcare, download the white paper.
