MobiHealthNews sat down with Dr. Michael Howell, Chief Clinical Officer at Google, to discuss notable events in 2023, the evolution of the company's LLM for healthcare, called Med-PaLM, and recommendations for regulators developing rules for the use of artificial intelligence in the field.
MobiHealthNews: What are the key takeaways from 2023?
Dr. Michael Howell: For us, there are three things we would like to highlight. The first is a global focus on health. One of the things that sets Google apart is that we offer a large number of products used by more than 2 billion people every month, so we need a truly global mindset. And we saw that really come through this year.
Earlier this year, we signed a formal cooperation agreement with the World Health Organization, with whom we have collaborated for many years. We are focused on the quality of health information globally and on bridging the digital divide around the world with tools like the Android Open Health Stack. We saw that with things like Android Health Connect, where we have a number of partnerships in Japan. Google Cloud has also focused on health, partnering with Apollo Hospitals in India and the government of El Salvador. So number one is a truly global focus.
Second, this year we've put a lot of effort into improving the quality of health information and combating misinformation. We have done this in collaboration with organizations such as the National Academy of Medicine and the medical specialty societies. We have seen particularly strong results this year, especially on YouTube, where content from doctors, nurses, and licensed mental health professionals reaches the billions of people who watch health videos every year, and where it is shown in a very transparent way why sources can be trusted. Additionally, we have products that elevate the highest-quality information.
And third, the 2023 list would not be complete without AI. It's hard to believe that it has been less than a year since we published the first Med-PaLM paper, on a medically tuned LLM. And perhaps the big takeaway from 2023 is the pace of progress here.
We're also focusing on the consumer side, with things like Google Bard and the Search Generative Experience. These products had not launched in early 2023, and each is now available in more than 100 countries.
MHN: It's amazing that Med-PaLM has been on the market for less than a year. When it was first released, its accuracy was around 60%. A few months later, its accuracy was over 85%, and the latest reported accuracy is 92.6%. Do you expect Med-PaLM and AI to make waves in the medical field in 2024?
Dr. Howell: Well, the unanswered question going into 2023 was: will AI remain a science project, or will people use it? And what we've seen is that people are using it. We have seen HCA [HCA Healthcare] and Hackensack [Hackensack Meridian Health] and all of these really important partners actually start using it in their work.
And how quickly things are improving was part of that story. Med-PaLM is a great example. People have been working on that problem set for many years, and accuracy would improve by 3, 4, or 5% at a time. Med-PaLM quickly went from 67 to 86 [percent accurate].
And the other thing we announced in August is the addition of multimodal AI. So, how do you have a conversation about a chest X-ray? That's on a different level, isn't it? So I think we'll continue to see that kind of progress.
MHN: What does it mean to have a conversation about a chest X-ray?
Dr. Howell: So, I'm actually a pulmonary and critical care physician, and I practiced for years. In the real world, you would call a radiologist and ask, "Does this chest X-ray look like pulmonary edema?" And they say, "Yeah." "Is it bilateral or one-sided?" "Both sides." "How bad?" "It's not that bad." It's about being able to fuse the language capabilities of these models with the things we specialize in, like imaging.
So, in reality, medicine is a team sport. It turns out that AI is also a team sport. Imagine being able to look at a chest X-ray and ask it questions through a chat interface. It can tell you whether there is a pneumothorax, a term that means a collapsed lung. "Is there a pneumothorax here?" "Yeah." "Where?" It can answer all of that. It's quite an amazing technical achievement. Our team has done a lot of research, especially in pathology, and we found that clinician-AI teams outperformed clinicians alone and outperformed AI alone, because each excels in different areas. There's good science behind that.
MHN: What was the biggest surprise or most noteworthy thing that happened in 2023?
Dr. Howell: There are two things to note about AI in 2023. The first is the speed at which AI is improving. I've never seen anything like it in my career, and I don't think most of my colleagues have either.
Second, there is a lot of interest from clinicians and health systems, and they are moving very fast. One of the most important things with a brand-new, potentially revolutionary technology is experiencing it firsthand, because you can't understand it until you actually hold it in your hands and touch it. And the biggest pleasant surprise for me in 2023 was how quickly that happened, with real health systems grappling with it and working on it.
Our team had to work at incredible speed to ensure we could do this safely and responsibly, and we have done that work. That, along with the early pilot projects and early work done in 2023, will get us ready for 2024.
MHN: A number of committees are beginning to form to create regulations regarding AI. What advice or suggestions would you give to the regulators who are setting these rules?
Dr. Howell: The first is that we think AI is too important not to regulate, and too important not to regulate well. And although it may be counterintuitive, we believe that proper regulation here will accelerate innovation, not set it back.
However, there are also some risks. The risk is that if we end up with a patchwork of regulations that vary meaningfully from state to state or country to country, innovation is likely to be set back. So when I think about the U.S. regulatory approach, I'm not an expert in regulatory design, but I've talked to a lot of people on our team, and what they say really makes sense to me: you need to think about a hub-and-spoke model.
What I mean is that groups like NIST [National Institute of Standards and Technology] set a holistic approach for trustworthy AI and define development standards, and then domain-specific agencies apply them, like HHS [Department of Health and Human Services] or the FDA [U.S. Food and Drug Administration] adapting them to health.
The reason that makes sense to me is that, as consumers, as people, we don't live our lives in just one domain. Health intersects with retail, and with transportation. We know that social determinants of health largely determine our health outcomes, so innovation would be hampered if each of those domains had a different regulatory framework. But for companies like ours, companies that really want to color inside the lines, consistent regulation helps.
The last thing I'd like to say is that we've been actively engaged in dialogue with groups like the National Academy of Medicine, which has a number of committees working on developing codes of conduct for AI in healthcare. And we're grateful to be a part of that conversation as it unfolds.
MHN: Do you think there needs to be transparency about how AI is developed? Should regulators have a say in what is included in the LLM that constitutes an AI product?
Dr. Howell: There are some important principles here. Medicine is already a highly regulated field, so one of the things we think about is that we don't have to start from scratch.
Things like HIPAA have really stood the test of time in many ways. HIPAA gives us a framework that we exist in, that we operate within, that we know how to operate within, and that has protected Americans, and that's a huge accomplishment. It makes sense to build on what you already know works instead of trying to reinvent the wheel.
We think it's very important to be transparent about what AI can do, what it's good at and what it's bad at. There are a lot of technical complexities, and transparency can mean many things, but one thing we know is extremely important is understanding whether our AI systems operate fairly and whether they promote health equity. This is an area we are deeply invested in and have been considering for years.
Here are two examples, two pieces of evidence for that. More than five years ago, in 2018, we published the Google AI Principles, with Sundar [Sundar Pichai, Google's CEO] as the signatory. To be honest, in 2018 a lot of people asked me, "Why would you do that?" It's because the transformer architecture was invented at Google, and we could see what was coming, so we needed to be deeply rooted in principles.
We also took the unusual step for a large technology company in 2018 of publishing a paper in a major peer-reviewed journal on machine learning and its opportunities to advance health equity. We've continued to invest in that, including by hiring people like Ivor Horn, who is now leading our work in health equity. So we believe these are very important areas going forward.
MHN: One of the biggest concerns for many people is the potential for AI to worsen health inequities.
Dr. Howell: Yes. There are a lot of different ways that could happen, and it's one of the things we're focused on. Reducing bias in data is really important. But AI also has the potential to improve fairness. We know there is a lack of equity in the delivery of care today; it is full of inequality. We know that to be true in the United States, and it's true worldwide. And the ability to improve access to expertise, to democratize expertise, is one of our main focuses.
The HIMSS AI in Healthcare Forum will be held December 14-15, 2023, in San Diego, California.