HLTH 2023: Update on Generative AI in Healthcare

Generative AI, a subset of AI technologies that employ machine learning algorithms to generate content, solutions, or outcomes that weren't explicitly programmed into them, has many potential applications in healthcare. But at the moment, the digital health space is filled with noise.

This discussion was recorded at HLTH 2023 in Las Vegas. Justin Norden, Partner at GSR Ventures, talks about:

  • his observations from the investor’s perspective,

  • what he thinks about the debate over open-source vs. closed AI development,

  • why everyone in healthcare should incorporate generative AI, and more.

Here is the full transcript:

For starters: what are your thoughts on generative AI in the health sector in October 2023? What's new for you, and what's the key takeaway on this topic?

Generative AI in healthcare has been a longstanding topic. My prior experience with a startup and teaching at Stanford has allowed me to witness the evolution of this conversation since our last discussion. Over the past year, I've observed a shift from initial excitement about ChatGPT's potential to revolutionize healthcare to a recognition of the need for more practical applications.

We've seen a surge of startups developing specific solutions, with some students even showcasing incredible achievements in just a weekend. The pace of development has increased significantly, possibly by a factor of a hundred compared to before.

However, an important question arises: how can we ensure the effectiveness of these solutions? While they may seem to work, they are not perfect, and perfection may be unattainable. Consequently, discussions are evolving towards establishing guardrails for testing and verifying these solutions to ensure they fulfill their intended purposes safely. This includes comparing the performance of different solutions.

The generative AI landscape has grown noisier, with increased excitement and interest. More people want to engage in discussions, and there are numerous areas to explore, from ambient documentation solutions to startup pitch competitions. Questions regarding sustainable business models, impending regulations, and practical implementation have also emerged.

I always advise people not to rely on one person to figure everything out. Leaders, both personally and professionally, should start incorporating generative AI into their daily lives and businesses. It's a new way of thinking, and understanding its capabilities is crucial.

What advice do you give to startups, and what approaches have you observed regarding testing their solutions?

There's an interesting trend in testing emerging on both national and local levels, with assurance labs and various institutions exploring different testing methods.

It's essential to acknowledge that there hasn't been enough emphasis on validating digital health technologies, and some companies have faced consequences for overpromising. This correction will also extend to AI, with a focus on what has been validated through clinical trials and publications.

As for advice to startups, while many now feel compelled to include generative AI in their pitch decks, not all startups need to develop their own AI methods. They can leverage existing solutions off the shelf and consider how these technologies can address their customers' pain points, whether or not they are directly related to generative AI.
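
To make "off the shelf" concrete, here is a minimal sketch of what that can look like in practice: wrapping a hosted general-purpose model behind a single function instead of building or fine-tuning your own. It assumes the openai Python client (v1.x) and an API key in the environment; the patient-summary use case and the prompt are hypothetical illustrations, not a recommendation from the interview.

```python
# Minimal sketch: using an off-the-shelf hosted model instead of building
# your own. Assumes the openai Python client (v1.x) and an OPENAI_API_KEY
# in the environment; the use case and prompt are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_patient_summary(visit_notes: str) -> str:
    """Draft a plain-language visit summary for a patient from clinical notes."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the clinical notes as a short, plain-language "
                    "summary for the patient. Do not add any information "
                    "that is not in the notes."
                ),
            },
            {"role": "user", "content": visit_notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    notes = "Pt c/o intermittent HA x2wk. BP 128/82. Plan: hydration, OTC analgesics, f/u 4wk."
    print(draft_patient_summary(notes))
```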

I encourage startups to cultivate curiosity about technology capabilities, align these capabilities with customer needs, and explore how generative AI can be integrated into existing solutions, both externally and internally. Staying informed about developments in the field is crucial, whether through personal research or by assigning someone on their team to do so.

In summary, while not every startup needs to be solely focused on generative AI, nearly all of them should consider incorporating it to some degree into their operations and offerings.

Two key questions for me from this conference are ones that startups need to be able to answer: what specific data library are you using to fine-tune the model, which we mentioned briefly, and what is your hallucination rate?

I wonder two things. In your opinion, what are some additional questions that startups need to be able to answer? And could you briefly comment on that hallucination rate? It's an obvious question, but how do you get an answer to it?

Right now, it's easier to pose that question than to point to a perfect, validated method for understanding exactly what your hallucination rate is. On the first topic of fine-tuning and the data source, I think that has been a key question. That's not actually different today than it was before for AI/ML startups. It's about where you are building or training your model, where the data originates, and whether you have a feedback loop with that data. Some companies are even using reinforcement learning with human feedback, involving physicians who label data to create new datasets that improve model predictions iteratively.
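
There is no agreed-upon metric here, but to make the measurement question concrete, one minimal sketch of how a team might estimate a hallucination rate: sample model outputs, have clinicians label each extracted claim as supported or unsupported by the source record, and report the unsupported fraction. Every name in the snippet below is hypothetical; this illustrates the idea, not a validated method.

```python
# One rough way to estimate a hallucination rate, per the discussion above:
# physicians review sampled outputs claim by claim and mark each claim as
# supported or unsupported by the source record. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class ClaimLabel:
    claim: str        # one factual statement extracted from a model output
    supported: bool   # reviewer judgment: is it grounded in the source?


def hallucination_rate(reviewed_outputs: list[list[ClaimLabel]]) -> float:
    """Fraction of all reviewed claims marked unsupported by the source."""
    claims = [c for output in reviewed_outputs for c in output]
    if not claims:
        return 0.0
    return sum(not c.supported for c in claims) / len(claims)


# Example: two generated notes, each reviewed claim by claim by a physician.
reviewed = [
    [ClaimLabel("Patient takes 10 mg lisinopril daily", True),
     ClaimLabel("Patient reports chest pain", False)],   # not in the chart
    [ClaimLabel("A1c of 7.2 recorded at last visit", True)],
]
print(f"Estimated hallucination rate: {hallucination_rate(reviewed):.0%}")
```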

What's interesting today is that these general models are much better, making it easier for startups to begin without necessarily requiring their proprietary or specialized fine-tuning datasets. While certain use cases still demand this, others don't, which is a significant shift. It's also an important factor in due diligence when talking to investors, as it may determine whether a startup has a data advantage or not.

Additionally, some companies are exploring partnerships to create entirely new foundation models. However, it remains to be seen whether this approach will be more effective than using existing models. GPT-4, a general model, is still outperforming specialized models in many benchmarks, so the optimal avenue for model development and tuning is still uncertain.

Tune in or listen to a longer discussion with Justin Norden in July 2023.

What kind of discussions do you have with healthcare executives and providers, especially regarding clinical use cases? We previously discussed hesitancy in this area, with 75 percent of healthcare system executives agreeing that generative AI is at a pivotal point. What kind of discussions are you observing?

The landscape has evolved, and now every healthcare executive is being asked about their generative AI strategy. The answers vary significantly. Some health systems have already deployed use cases beyond what Epic, a healthcare software company, has done. Some are testing use cases with Epic, while others are partnering with startups. However, there are also those doing nothing and even blocking ChatGPT and generative AI altogether. The field is evolving rapidly, and everyone is scrambling to determine the right course of action.

One common theme from my discussions is the need for validation and testing. Nobody I've spoken to has a definitive solution in this regard. This presents an opportunity to shape the future of generative AI in healthcare.

A recent Bain survey revealed that only 6 percent of respondents had a detailed strategy for generative AI. Did that surprise you?

No, it didn't surprise me at all. Technology in healthcare is advancing at an unprecedented pace. The weak points, security vulnerabilities, and ways to trick AI models are constantly changing. For example, some OpenAI models have vulnerabilities that can be exploited when they are asked questions in low-resource languages like Zulu. The debate between open source and closed source models also adds complexity. Even those with a generative AI strategy acknowledge that it's a constantly evolving field that requires ongoing evaluation.

What's your opinion about open versus closed source models in healthcare?

I believe there will be a push for more openness in healthcare models. Open models offer transparency, making it easier for third parties to test and validate them. However, there are downsides, as open models can expose more vulnerabilities and ways to exploit them. So, while I anticipate that open source models will eventually prevail in healthcare, it's still early days, and the debate continues.

Is there anything else that you would like to add in terms of what you've read about or seen in the last two months, or maybe just a comment? You provided an overview of the generative AI landscape with over 120 companies. And now, here at HLTH, I'm sure you also went around to see demos that you hadn't seen before. Any new impressions?

I'd say that over the past couple of months there are definitely more companies we didn't yet include in the report. In some ways, it already feels outdated, as there are new names we should be adding. On the other hand, many still fall into the same buckets as before. We're seeing more RCM (revenue cycle management) companies come in, more ambient documentation, and more people starting to think about data and analytics, among other things. We're still seeing many of the same demos, perhaps a new patient engagement solution, and so on. At GSR Ventures, we're still being patient about what the truly transformative opportunities are.

What is the transformative opportunity? If we look back at prior examples like the internet, Google wasn't started until much later. So that's always the fear: when is the right transformative opportunity going to come, and how do we make sure at GSR that we're a part of it?

When you say you're being patient, does that mean that you are still waiting before investing?

We are actively investing, but we aren't throwing money around recklessly. We're waiting for the right opportunities, especially to deploy even larger checks.

Any last thoughts, anything that we didn't mention?

I think we talked about some of the pros and cons and caveats of generative AI. Just zooming all the way out, this is the most transformative piece of technology. It's taking up essentially a hundred percent of my time, and I think that with these tools we will change healthcare; the question is how fast. So, I'm very optimistic.