Dennis Chornenky, the chief artificial intelligence advisor at UC Davis Health, is helping health systems develop frameworks for incorporating advanced technology into medical practice.
He previously served as a 2020 Presidential Innovation Fellow, advising the White House on AI efforts.
Chornenky takes a three-pronged approach to assessing the types of artificial intelligence health systems should invest in: outlining your narrative goals, understanding your technical risks, and assessing your long-tail costs.
He believes health systems should take the lead on how AI should be regulated in healthcare.
In a recent interview with Ruth, Chornenky spoke about some of the challenges facing the healthcare industry as AI becomes mainstream. He emphasized that his views are his own and do not necessarily reflect those of the University of California, Davis.
This interview has been edited for length and clarity.
How do you choose an AI project?
We encourage submissions of ideas from various parts of the health system. Many of them have their own AI strategies and implementation roadmaps in mind, and we coordinate that work to ensure that high-priority areas and high-potential use cases surface for more senior review, so we can prioritize them and direct resources toward implementation.
How do you think AI should be regulated?
Technology is evolving so rapidly that it can be extremely difficult for policymakers, lawmakers, and regulators to keep up, especially in complex industries that are already highly regulated, like healthcare. It therefore makes sense to consider evolving self-regulatory structures for advanced technologies in complex industries such as healthcare and finance. We have seen this historically in the financial sector [with the Financial Industry Regulatory Authority].
The Coalition for Health AI, an industry group you’ve worked with in the past, has floated the idea of using public-private assurance labs to certify and monitor AI in healthcare. What do you think?
These AI assurance labs are a step in that [self-regulatory] direction. They are essentially a way for healthcare organizations to use their own expertise to start setting standards. And to the extent the FDA and perhaps other elements of the federal government are involved, that’s great, because over time it becomes more and more codified.
But that codification will be driven by what’s actually happening in the field, not by laws that are well-intentioned but likely to miss the mark simply because of gaps in understanding and expertise about how the technology is being used in medicine.
What are your current concerns about AI in healthcare?
This is a very difficult moment for health systems right now, because the governance burden on them is enormous and growing rapidly. Most AI technology developers aren’t thinking about the 100 things that should be checked to ensure their application is appropriate for a given healthcare environment. They’re just thinking, “We’ve built a model that can identify this disease very well within this population,” and things like that.
But I hope that over time, the industry will evolve further towards having these standards in place beforehand and knowing what needs to be done from the beginning.
Here, we explore the ideas and innovators shaping healthcare.
Oregon has reported what is probably its first human plague infection in more than eight years, likely transmitted to the patient by a domestic cat, according to NBC News.
Share your thoughts, news, tips and feedback with Carmen Paun ([email protected]), Daniel Payne ([email protected]), Ruth Reader ([email protected]) or Erin Schumaker ([email protected]).
Send your tips securely through SecureDrop, Signal, Telegram or WhatsApp.
Sometimes human outreach isn’t enough to connect people to care. At least, that’s what AI chatbot developers found in new research published in Nature Medicine.
How so? Using chatbots to refer patients to mental health services led more people with common mental health disorders to seek talk therapy compared with other self-referral methods. The chatbot’s maker, psychotherapy company Limbic AI, saw an increase in referrals, especially among people of color, who often face barriers to care.
Researchers analyzed data from 129,400 patients in the U.K. and found that services offering the AI chatbot for self-referral saw referrals grow significantly faster (15%) than services relying on traditional self-referral channels, such as completing online forms or calling a local therapist’s office (6% increase).
This increase was especially noticeable in some demographics, where self-referrals increased by more than 30%.
Why? The AI chatbot’s developer emphasized the importance of keeping human clinicians in the loop and listed several possible reasons for the increase.
Self-referral tools may help some patients avoid anticipated judgment from clinicians and offer a less intrusive way to discuss sensitive topics, the authors write.
Sean Mooney will become the next leader of the Center for Information Technology at the National Institutes of Health, a critical role supporting the IT infrastructure of the biomedical research agency.
As director, Mooney will oversee a $400 million portfolio that includes supercomputers for large-scale data analysis, a network connecting NIH researchers around the world, and cloud-based services that support NIH’s databases and computational tools.
Who is he? Mooney is a professor of biomedical informatics and medical education at the University of Washington School of Medicine.
His research interests include using computing cyberinfrastructure and data science to advance discoveries in biomedical research.
“Dr. Mooney has spent his career developing effective, collaborative computing systems to support biomedical research,” NIH Director Monica Bertagnolli said in a statement.
Before joining the University of Washington, Mooney worked at the Buck Institute for Research on Aging and was an assistant professor of medicine and molecular genetics at Indiana University School of Medicine.
What’s next? Mooney will succeed Ivor D’Souza, who has served as acting director of the center since September 2022. Mooney will begin his new role in mid-March.