AI in healthcare requires guidelines to protect patients

Several studies over the past few years have identified potential problems with the use of AI in healthcare settings.

A 2019 analysis published in the journal Science found that a commercial algorithm from Optum, used by health systems to select patients for care management programs, assigned less healthy Black patients the same risk level as healthier white patients, meaning Black patients were less likely to be identified as needing additional care.

An Optum spokesperson said in a statement that the algorithm is not racially biased and that the researchers mischaracterized a cost prediction tool based on one health system's misuse of it.

“The algorithm is designed to predict future costs that individual patients may incur based on past healthcare experiences and does not result in racial bias when used for this purpose, a fact the study authors agreed on,” the spokesperson said.

In 2021, researchers at the University of Michigan Medical School published a peer-reviewed study that found a widely used sepsis prediction model from electronic health record giant Epic Systems failed to identify 67% of patients who had sepsis. The model also generated 43% more sepsis alerts even as the hospital's total patient count dropped by 35% in the early days of the pandemic. Epic did not make the team that worked on the sepsis AI model available for an interview.

The White House Office of Science and Technology Policy cited both cases, without naming the companies, in a report accompanying its draft AI Bill of Rights, which is intended as guidance for multiple industries.

While the framework includes no enforcement mechanism, it lays out five rights the public should have: algorithms should be safe and effective, non-discriminatory, transparent about when and how they are used, protective of data privacy, and accompanied by human alternatives and the ability to opt out.

Jeff Cutler, chief commercial officer of Ada Health, a healthcare AI company that offers symptom screening to patients, said his organization follows those five principles when developing and deploying its algorithms.

“It’s really important that the industry takes the Bill of Rights very seriously,” Cutler said. “It is important that users and businesses using these platforms ask the right questions about clinical efficacy, accuracy, quality and safety. And it is important that we are transparent with users.”

But experts say real regulation is needed to make a difference. While the Food and Drug Administration is tasked with overseeing software as a medical device, including AI, experts say the agency is having a hard time responding to the growing number of algorithms being developed for clinical use. Congress could intervene to define AI in healthcare and set mandatory standards for healthcare systems, developers, and users.

“There needs to be enforcement and oversight to ensure that algorithms are designed with discrimination, bias and privacy in mind,” said Linda Malek, chair of the health practice at law firm Moses & Singer.

Dr. John Halamka, president of the Mayo Clinic Platform, a Rochester, Minnesota-based portfolio of initiatives focused on integrating new technologies, including AI, into healthcare, said more policies could be coming.

The Office of the National Coordinator for Health Information Technology is expected to coordinate much of the regulatory guidance coming from various government agencies, including the FDA, the Centers for Disease Control and Prevention, the National Institutes of Health and other federal agencies inside and outside HHS, said Halamka, who has advised ONC and the federal government on numerous health IT initiatives but is not directly involved in that oversight.

Halamka expects significant regulatory and sub-regulatory guidance over the next two years.
