Brian Anderson, executive director of the Coalition for Health AI.
As Microsoft’s chief scientific officer, Eric Horvitz spends a lot of time thinking about how to balance the risks and benefits of new technologies. Explosive advances in artificial intelligence have made that even more essential, especially in life-or-death fields like medicine. “The stakes are very high,” Horvitz, who is also a physician, told Forbes. “We’re combining incredible excitement with mature vigilance.”
That’s why Microsoft has joined a nonprofit supergroup of private companies and public organizations dedicated to figuring out how to bring AI to healthcare in ways that benefit and protect patients.
AI presents a huge opportunity to help patients get better care, faster, but it carries significant risks: models may not have been trained on patients of different genders, ages, races, or ethnicities, and they can make things up or suggest incorrect answers. Compounding the problem, the current regulatory structure is fragmented. Electronic health record companies follow different rules than medical device manufacturers, AI tools developed in-house by hospitals sit in a gray area, and insurance companies fall under the jurisdiction of yet another agency.
But one thing all the various stakeholders agree on is that we need better structures for standardizing, testing, validating and tracking the use of AI in healthcare. That’s where the Coalition for Health AI (CHAI) comes in.
On Monday, the newly formed nonprofit announced a board of directors that includes Horvitz and representatives from hospitals, startups, academia, venture capital, patient groups and the federal government. “We can’t overregulate or stifle innovation in any way, but we also need to protect patients and keep doctors informed,” Horvitz said. “CHAI supports a balanced path forward.”
CHAI’s goal is to become a kind of standards body that certifies health AI tools. In the first year, the group plans to begin establishing standards, testing metrics, a network of health AI assurance labs, and a national registry of validated health AI tools.
“If we’re going to have trust in AI, we need transparency,” John Halamka, CHAI’s board chair and president of the Mayo Clinic Platform, told Forbes. “You need the concept of reproducibility: you’ll get a good answer today and you’ll get a good answer tomorrow. It has to be consistent.”
The group, which started as an all-volunteer organization in 2021, felt the urgency to incorporate in response to President Biden’s October 2023 AI executive order, which singled out the importance of safety in medical applications. It registered as a nonprofit membership organization, known as a 501(c)(6), in January 2024. Mayo Clinic, Stanford Health Care, Johns Hopkins and Duke Health split the initial legal costs of about $100,000, Halamka told Forbes.
More than 1,300 organizations have joined CHAI since 2021, and the group plans to introduce membership fees starting this year. Halamka estimates the first-year budget at about $1 million, with the cost split evenly among roughly 20 founding member organizations.
Halamka said CHAI “doesn’t intend to be a lobbying organization.” Although the goal is for industry and government to work together to create best practices for testing and deploying health AI models, the framework CHAI establishes will ultimately be voluntary. Because federal regulations often take years to draft and finalize, CHAI essentially fills that void. “If we look to 2025, this could become more formalized and regulated,” Halamka said. “But I don’t think there will be a lot of regulation on this in 2024.”
CHAI also announced partnerships with the National Health Council, a patient advocacy group representing 160 million patients, and HL7, a national health information technology standards organization. Microsoft represents Big Tech on the board, and Bessemer Venture Partners vice president Morgan Cheatham represents venture capital and the startup ecosystem. There are also two federal liaisons on the CHAI board: Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, and Micky Tripathi, national coordinator for health information technology at the Department of Health and Human Services.
In an article published in JAMA, the Journal of the American Medical Association, at the end of 2023, Tazbaz and Tripathi signed onto the idea of creating a national network of health AI assurance labs and a national registry to test and validate health AI.
“We need to have a consistent vocabulary of what constitutes responsible AI,” Tripathi told Forbes. He sketched out the two most extreme directions: a world where the government regulates everything through a central database, and a completely laissez-faire model where industry does whatever it wants. “The right point is somewhere in the middle. We want to make sure there’s a public-private agreement on this.”
To that end, CHAI also announced the creation of a government advisory board of several Biden administration officials. That includes Jonathan Blum, principal deputy administrator at the Centers for Medicare and Medicaid Services; Gil Alterovitz, chief AI officer at the Veterans Health Administration; and Susan Coller Monarez, deputy director of the Advanced Research Projects Agency for Health (ARPA-H).
Brian Anderson, former chief digital health physician at MITRE, a nonprofit that consults with governments on research and development projects, will lead CHAI’s day-to-day operations as executive director. “We are intentionally trying to make this as inclusive an effort as possible,” Anderson told Forbes. “Anyone who wants to participate can have a voice. This is not a pay-to-play initiative.”
CHAI will convene working groups in late March to develop standards and testing and evaluation frameworks, which it hopes to publish by early fall. In parallel, health systems will stand up assurance labs in the third and fourth quarters. Some of these labs, such as the Mayo Clinic’s, already exist, but each will be required to go through an accreditation process to earn the equivalent of CHAI’s “seal of approval.” “I expect 30 or so health systems to be part of this broader network,” Anderson said. Participation is voluntary and CHAI has no enforcement power, but Anderson said he has already heard from many medical institutions interested in establishing these labs.
The idea is for health AI developers to bring their models to these labs to verify that they are fair and accurate across different types of patients. Ideally, a model would be validated in several different health systems across the country, which could help ensure it isn’t biased. Just because an AI model works for white patients at the Mayo Clinic in suburban Minnesota doesn’t mean it will work the same way for Black patients at Duke Health in a rural part of North Carolina.
All information related to testing models at the CHAI labs will be available in a public registry, so anyone can understand how a model performs, “all of which leads to increased reliability and trustworthiness of AI,” Anderson said.