
This Agency Wants To Find Out Exactly How Much You Should Trust AI


Harvard University assistant professor Himabindu Lakkaraju studies the role of trust in human decision making in professional settings. She works with nearly 200 doctors in hospitals in Massachusetts to understand how confidence in AI can change how doctors diagnose a patient.

For common conditions such as the flu, AI is not very helpful, since human professionals can recognize them quite easily. But Lakkaraju found that AI can help doctors diagnose diseases that are harder to identify, such as autoimmune diseases. In their latest work, Lakkaraju and her colleagues gave doctors the records of about 2,000 patients along with predictions from an AI system, then asked them to predict whether each patient would have a stroke within six months. They varied the information provided about the AI system, including its accuracy, its confidence interval, and an explanation of how the system works. They found that the doctors' predictions were most accurate when they were given the most information about the AI system.

Lakkaraju says she is happy to see that NIST is trying to quantify trust, but she says the agency should consider the role that explanations play in human trust of AI systems. In the experiment, the accuracy of doctors' stroke predictions decreased when they were given an explanation without data to inform the decision, suggesting that an explanation alone can lead people to rely too much on AI.

“Explanations can lead to unusually high confidence even when it’s not justified, which is a recipe for problems,” she says. “But once you start putting numbers on how good the explanation is, then people’s confidence slowly calibrates.”

Other nations are also trying to address the question of trust in AI. The United States is among 40 countries that have signed on to AI principles that emphasize trustworthiness. A document signed by a dozen European countries says that trust and innovation go hand in hand and can be considered “two sides of the same coin.”

NIST and the OECD, a group of 38 countries with advanced economies, are working on tools to designate AI systems as high or low risk. In 2019, the Canadian government created an Algorithmic Impact Assessment process for businesses and government agencies. It places AI into four categories, ranging from no impact on people’s lives or the rights of communities to very high risk of perpetuating harm to individuals and communities. Evaluating an algorithm takes about 30 minutes. The Canadian approach requires developers to notify users for all but the lowest-risk systems.
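A rough sketch of how a tiered assessment like this could work appears below: it maps a questionnaire score to one of four impact levels and flags whether users must be notified. The thresholds, labels, and scoring here are invented for illustration and are not taken from the Canadian questionnaire.

# Hypothetical sketch of a four-tier algorithmic impact assessment.
# The four levels echo the idea of Canada's categories, but the scoring
# thresholds and notification rule are invented for illustration only.

IMPACT_LEVELS = [
    (25, "Level I: little to no impact"),
    (50, "Level II: moderate impact"),
    (75, "Level III: high impact"),
    (100, "Level IV: very high impact"),
]

def assess(score: int) -> dict:
    """Map a raw questionnaire score (0-100) to an impact level."""
    for threshold, label in IMPACT_LEVELS:
        if score <= threshold:
            # In this sketch, only the lowest tier skips user notification,
            # mirroring the idea that all but minimal-risk systems warn users.
            return {"level": label, "notify_users": label != IMPACT_LEVELS[0][1]}
    raise ValueError("score must be between 0 and 100")

if __name__ == "__main__":
    print(assess(18))   # Level I, no notification required
    print(assess(82))   # Level IV, notification required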

European Union lawmakers are considering AI regulations that could help define global standards for which kinds of AI are considered low or high risk and how to regulate the technology. Like Europe’s landmark GDPR privacy law, the EU’s AI strategy could lead the world’s largest companies deploying artificial intelligence to change their practices worldwide.

The regulation calls for the creation of a public register of high-risk forms of AI in use, kept in a database managed by the European Commission. Examples of high-risk AI cited in the document include AI used for education, employment, or as safety components for utilities such as electricity, gas, and water. The proposal will likely be amended before passage, but it calls for a ban on AI for social scoring of citizens by governments and on real-time facial recognition.

The EU proposal also encourages companies and researchers to experiment in areas called “sandboxes,” designed to ensure that the legal framework is “innovation-friendly, future-proof and resilient to disruption.” Earlier this month, the Biden administration introduced the National Artificial Intelligence Research Resource Task Force, intended to share government data for research on topics such as health care or autonomous driving. The final plans would require approval by Congress.

For now, the AI user trust score is being developed with AI practitioners in mind. Over time, though, such scores could enable individuals to avoid unreliable AI and push the market toward deploying robust, tested, trustworthy systems. That is, of course, only if people know when AI is being used at all.

