Digital Health Leaders Work to Stop AI From Absorbing Bias

“Think about equity problems from the beginning,” Lin said. “If you think about it, when the technology is fully implemented … it is often very difficult to go back and tweak something.”
One of the principles that Lin said he follows in order to eliminate potential bias at the start of a project is to ensure that different stakeholders influence the design of the AI tool as well as how it is deployed. This means including developers from different backgrounds, as well as the perspectives of those who will be impacted by AI adoption, such as doctors and patients.
HEA3RT recently worked on a project to test an artificial intelligence chatbot that could collect a patient’s medical history before an appointment.
According to Lin, while some patients responded well to the chatbot, others said they would not feel comfortable handing over sensitive health data to a machine. Generally, younger and healthier patients tend to feel more comfortable with a chatbot than older patients with multiple or more complex chronic conditions, he added.
If such a chatbot were available to patients, it would also be important to make sure it can interact with patients who do not speak English.
To ensure that ethical considerations such as fairness are taken into account from the start, the Mount Sinai Health System in New York is creating an AI ethics framework under the guidance of bioethics experts. Bioethicists have studied health disparities and bias for decades, said Thomas Fuchs, dean of the AI and Human Health division at the Icahn School of Medicine at Mount Sinai.
The framework will use the WHO’s report on the ethics and governance of artificial intelligence for health as its foundation.
“AI brings new challenges,” Fuchs said. “But very often it also falls into categories that have already been addressed in previous approaches to ethics in medicine.”
Determine the correct outcome to predict
Independence Blue Cross, a Philadelphia-based insurer, develops most of its AI tools in-house, so it’s important to understand the potential for bias from start to finish, said Aaron Smith-McLallen, the payer’s director of data science and healthcare analytics.
Since 2019, Independence Blue Cross has partnered with the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business. The center provides free feedback and support to healthcare providers, payers, and technology companies that are interested in testing specific algorithms or tuning processes to identify and mitigate algorithmic bias.
Working with the Center for Applied Artificial Intelligence helped data scientists at Independence Blue Cross systematize their thinking about bias and where to add checks and balances, such as tracking which types of patients an algorithm tends to flag and whether that matches expectations, as well as the possible consequences of a false positive or false negative result.
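One simple version of that kind of check is comparing flag rates and error rates across patient subgroups. The sketch below is a minimal illustration, not Independence Blue Cross’s actual tooling; the column names (group, flagged, outcome) and the toy data are assumptions made for the example.

```python
import pandas as pd

def subgroup_flag_report(df: pd.DataFrame,
                         group_col: str = "group",
                         flag_col: str = "flagged",
                         outcome_col: str = "outcome") -> pd.DataFrame:
    """Compare flag rates and error rates across patient subgroups.

    Expects boolean columns: flag_col (the algorithm flagged the member)
    and outcome_col (the member actually needed the intervention).
    """
    rows = []
    for group, sub in df.groupby(group_col):
        flagged, needed = sub[flag_col], sub[outcome_col]
        rows.append({
            group_col: group,
            "n": len(sub),
            "flag_rate": flagged.mean(),
            # False positive: flagged but did not need the intervention.
            "false_positive_rate": (flagged & ~needed).sum() / max((~needed).sum(), 1),
            # False negative: needed the intervention but was not flagged.
            "false_negative_rate": (~flagged & needed).sum() / max(needed.sum(), 1),
        })
    return pd.DataFrame(rows)

# Toy example (purely illustrative data).
toy = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],
    "flagged": [True, False, True, False, False, True],
    "outcome": [True, True, False, True, False, True],
})
print(subgroup_flag_report(toy))
```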
As developers go through the stages of creating an algorithm, it is important to keep asking, “Why are we doing this?” Smith-McLallen said. The answer should determine what outcome the algorithm predicts.
Many of the algorithms Independence Blue Cross uses identify members who could benefit from outreach or care management. To get there, the algorithms predict which members are at risk of poor health.
That was an important lesson the Center for Applied Artificial Intelligence drew from its work with healthcare organizations: the need to carefully consider what outcome an algorithm predicts.
Algorithms that use proxies – variables that approximate other outcomes – to arrive at their conclusions run a high risk of inadvertently introducing bias, according to Dr. Ziad Obermeyer, assistant professor of health policy and management at the University of California, Berkeley, who researches health and artificial intelligence with the Center for Applied Artificial Intelligence.
The center launched in 2019 after its researchers, including Obermeyer, published a study that found a widely used population health management algorithm – a predictive model that does not use AI – grossly underestimated the health needs of the sickest black patients and gave healthier white patients the same risk scores as black patients with worse laboratory results.
The algorithm flagged patients who might benefit from additional care management services, but instead of predicting patients’ future health, it predicted how much their care would cost. That created a disparity, because less tends to be spent on healthcare for black patients than for comparably sick white patients.
Developers need to be “very, very careful and deliberate in choosing the exact variable that they predict with the algorithm,” Obermeyer said.
It is not always possible to predict exactly the thing an organization cares about, especially with something as complex as medical care. But documenting what the organization would ideally want the algorithm to predict, what the algorithm actually predicts, and how the two compare can help ensure the algorithm serves the “strategic goal,” even if it does not capture the exact variable.
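As a conceptual illustration of why the choice of label matters, the sketch below trains two simple models on the same toy features – one labeled with a hypothetical “future poor health” outcome and one labeled with “future high cost” – and then compares the average risk scores each assigns to equally sick patients in two groups. All names, data, and the assumed spending gap are invented for the example; this is not the algorithm from the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.choice(["A", "B"], size=n)
illness = rng.poisson(2.0, size=n)              # true underlying health need
access = np.where(group == "B", 0.6, 1.0)       # assumed spending gap at equal need

df = pd.DataFrame({
    "group": group,
    # Candidate features: a clinical marker of illness and prior spending.
    "chronic_markers": illness + rng.normal(0, 0.5, n),
    "prior_spending": illness * access + rng.normal(0, 0.5, n),
    # Candidate labels: future poor health vs. future high cost.
    "need": (illness >= 3).astype(int),
    "high_cost": (illness * access + rng.normal(0, 0.5, n) >= 2.5).astype(int),
})

X = df[["chronic_markers", "prior_spending"]]
df["risk_need"] = LogisticRegression().fit(X, df["need"]).predict_proba(X)[:, 1]
df["risk_cost"] = LogisticRegression().fit(X, df["high_cost"]).predict_proba(X)[:, 1]

# Among patients with the same true need, the cost-trained model scores
# group B lower (because less was spent on them), while the need-trained
# model shows little or none of that gap.
print(df[df["need"] == 1].groupby("group")[["risk_need", "risk_cost"]].mean())
```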
Another common problem is not understanding the various root causes that contribute to the predicted outcome.
As an example, Obermeyer said there are many algorithms that predict no-shows in primary care, which staff can use to double-book appointment slots. But while some of those patients skip or cancel appointments voluntarily, perhaps because their symptoms resolved, others struggle to get to the clinic because they lack transportation or can’t get time off work.
“When the algorithm just predicts who won’t show up, it confuses these two things,” Obermeyer said.
Even once a health system has a proven, accurate artificial intelligence tool, the work is not done. Leaders need to think critically about how the tool will actually be used and how to act on the insights the AI produces.
For example, for an algorithm that predicts no-shows, developers could build in a way to distinguish voluntary from involuntary no-shows and handle the two situations differently, as in the sketch below.
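This is a minimal sketch of that idea, assuming a hypothetical appointment record with a predicted no-show probability and flags for transportation and scheduling barriers; the field names, threshold, and interventions are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Appointment:
    patient_id: str
    no_show_risk: float       # model's predicted probability of a no-show
    transport_barrier: bool   # e.g., no car or transit access (assumed flag)
    schedule_barrier: bool    # e.g., inflexible work hours (assumed flag)

def plan_intervention(appt: Appointment, threshold: float = 0.5) -> str:
    """Route high-risk appointments to different actions by likely cause."""
    if appt.no_show_risk < threshold:
        return "no action"
    if appt.transport_barrier:
        return "offer a ride service or a telehealth visit"
    if appt.schedule_barrier:
        return "offer an evening/weekend or telehealth slot"
    # Likely a voluntary no-show: a reminder and an easy way to cancel
    # may be enough, rather than double-booking at the patient's expense.
    return "send a reminder with an easy cancellation link"

# Toy usage.
print(plan_intervention(Appointment("p1", 0.8, True, False)))
print(plan_intervention(Appointment("p2", 0.7, False, False)))
```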