You may remember the string of lawyers who tried to use AI tools in the courtroom. Their chatbots failed them, sometimes fabricating plausible-sounding cases that didn’t actually exist, resulting in embarrassment and sanctions.
Now consider this: how would you feel if your doctor did the same thing, feeding your symptoms into an AI system to diagnose what’s wrong?
That’s a pressing question, Politico reports in an interesting article, and one that’s currently causing stress for regulators. It also has striking immediacy: per Politico, physicians are already using unregulated and largely untested AI tools to aid in patient diagnosis. In other words, this isn’t a hypothetical story about the distant future, but a phenomenon already happening today, one that could be a single medical malpractice lawsuit away from a major medical and regulatory scandal.
“The wagon is so far ahead of the horse, how can we get the reins back without tumbling into the ravine?” John Ayers, a public health researcher at the University of California, San Diego, told Politico.
The obvious answer is that this technology needs regulation, an idea with nominal buy-in from everyone from the White House to OpenAI.
The problem, of course, is that actually doing so is easier said than done. As Politico notes, one key issue is that once most medical products (think drugs, surgical instruments, and other medical devices) are approved, they can generally be trusted to keep working the same way indefinitely.
That’s not the case with AI models, which are in constant flux as their creators tweak them and feed them more data. An AI that gives a perfectly good diagnosis one day may give a bad one after a routine update. And remember, a core reality of machine learning systems is that even their creators struggle to explain exactly how they work.
Government regulators such as the FDA, Politico points out, are already stretched thin to the breaking point. Asking them to build and maintain workflows for continuously testing medical AI systems would require a politically impossible amount of new funding. So if these AI systems are already permeating regular medical practice, who will monitor them?
One idea, the outlet reported, is that medical schools and university health centers could set up labs to continually audit the performance of AI healthcare tools.
But that idea involves some hand-waving of its own. Where would the resources come from? And would interactions with patient populations at predominantly urban, affluent facilities accurately reflect how the AI performs in different, less-resourced communities?
In the long run, AI could bring incredible benefits to healthcare systems, and tech leaders certainly love to lean into that possibility. OpenAI CEO Sam Altman, for example, has publicly said he believes future AI will be able to provide high-quality medical advice to people who can’t afford a doctor.
For now, though, AI’s messy inroads into the healthcare system are highlighting some uncomfortable realities about the technology, even in literal life-or-death situations.
More on AI: Scientists test AI-designed drugs on human patients