Sandy, Author at Health Tech Bytes. Cloud Architect, data science and AI practitioner, health informatics strategist

Switchover disruptions: The true cost of an AI scribe

“Switchover disruptions” were highlighted in Harvard Business Review’s recent article on the challenges of AI adoption in healthcare. Economists define the term as the cost of integrating a new technology, which can dampen an organization’s profits.

This prompted me to examine the ways in which this next wave of AI (Large Language Models/LLMs) can minimize such disruptions, and whether this type of AI is worth the tradeoff.

The unsettling case of high switchover disruption: EHRs

I was fascinated by the article’s comparison of two new technologies that arrived in healthcare: EHRs (2009) and a new surgical technique for removing the gallbladder (1988). EHRs faced strong resistance until the Obama administration stepped in to incentivize organizations to go digital. In contrast, the new (minimally invasive) surgical technique faced low switchover disruption: adoption was easier and limited to surgeons.
The key? Hospitals and surgeons were already in the business of doing the procedure.

EHRs should have been an easy win. Why wouldn’t a patient want to see all their medical information stored in one place? Why wouldn’t a physician want a more complete view of their patient’s health? Yet while digitization led to improved efficiency in virtually every other industry, the complete opposite happened with EHRs. Perhaps we were not aligned on improving healthcare outcomes after all.

I agree that the issue came down to control. Not surprisingly (in a capitalist society), there are power struggles between physicians, payers, and government. But guess who loses out in the end? The patient.

Will the same fate await this next wave of AI?

A plausible use case: AI Medical Scribe

The arrival of ChatGPT has created a dizzying revival of interest in AI for healthcare, with the promise of automated diagnosis and treatment. These, by the way, were the same promises touted by IBM Watson a decade ago. Granted, today’s AI technology has improved over its predecessors, but that doesn’t matter unless it can prove its business value. Martin Kohn, the former IBM Research chief medical scientist, stated:

“Prove to me that it will actually do something useful—that it will make my life better, and my patients’ lives better.”

I believe AI systems can do something useful: work as an AI medical scribe. Although this is less glamorous than diagnosing and treating patients, it is probably the best use case for AI today. Case in point: one of the ill by-products of digitization has been the increased administrative burden placed upon healthcare providers. If technology can reduce this burden by summarizing patient information or answering patient questions efficiently, providers will have more face-to-face time with patients and will be more productive. For example, JAMA reported that ChatGPT’s responses to patient questions posted in an online forum were comparable to physicians’ responses in both quality and empathy.

I think more physicians and health system leaders will be on board with these AI technologies, as efficiency is one key to improving organizational performance (e.g., reduced costs, better patient outcomes) and potential profits.

The true cost of running LLMs

While LLMs have demonstrated remarkable performance in comprehending human language, one must also consider the cost to run them. They are not cheap, due to the amount of data they ingest and the compute power required to process it. The Wall Street Journal ran a piece earlier this month on tech giants’ struggles to monetize these technologies. According to the article, Microsoft is using OpenAI’s latest model (GPT-4) for its AI features; however, that version is also the largest and most expensive model available.

Equally important is understanding the technology’s impact on the environment. According to this op-ed from Ars Technica, the most expensive and proprietary (“black box”) models are reserved for very deep pocketed organizations. As such, building and deploying these models “requires a lot of planetary resources: rare metals for manufacturing GPUs, water to cool huge data centers, energy to keep those data centers running 24/7 on a planetary scale… all of these are often overlooked in favor of focusing on the future potential of the resulting models.”

What’s the right ROI then? Would it behoove healthcare organizations to use smaller, open-source LLMs? A lack of privacy standards should be of utmost concern, and it will likely push organizations toward proprietary LLMs. However, it might be overkill to use the likes of GPT-4 and beyond to answer patient questions. Perhaps the sweet spot lies with patient summarization and triaging tasks. One must weigh the tradeoff in time and cost between human labor and a turbo-charged AI chatbot.

Conclusion

The HBR article provided useful insights and recommendations for overcoming the challenges of AI adoption in healthcare. The challenges were no different a decade ago with EHR adoption, and it was very interesting to understand the power dynamics that made adoption difficult. While there are no clear answers yet, these next-gen AI technologies have seen a bit more acceptance by the healthcare community (or at least they are talked about more frequently). However, it remains to be seen how far-reaching this technology will be, and whether its switchover disruption will be high or low.
