Don’t End Up in This Artificial Intelligence Hall of Shame


When a person dies in a car accident in the United States, data on the incident are generally reported to the National Highway Traffic Safety Administration. Federal law requires civil aircraft pilots to notify the National Transportation Safety Board of in-flight fires and certain other incidents.

These grim registries are intended to give authorities and manufacturers better insight into ways to improve safety. They helped inspire a crowdsourced repository of artificial intelligence incidents aimed at improving safety in far less regulated areas, such as autonomous vehicles and robotics. The AI Incident Database launched late in 2020 and now contains 100 incidents, including #68, the security robot that tumbled into a fountain, and #16, in which Google’s photo-organizing service tagged Black people as “gorillas.” Think of it as the AI Hall of Shame.

The AI Incident Database is hosted by Partnership on AI, a nonprofit founded by large technology companies to research the downsides of the technology. The roll of dishonor was started by Sean McGregor, who works as a machine learning engineer at the voice-processor startup Syntiant. He says it’s needed because AI allows machines to intervene more directly in people’s lives, yet the culture of software engineering does not encourage safety.

“Often I’ll speak with my fellow engineers and they’ll have an idea that’s quite smart, but you need to say, ‘Have you thought about how you’re making a dystopia?’” McGregor says. He hopes the incident database can work as both a carrot and a stick for technology companies, providing a form of public accountability that encourages companies to stay off the list, while helping engineering teams craft AI deployments less likely to go wrong.

The database uses a broad definition of an AI incident as a “situation in which AI systems caused, or nearly caused, real-world harm.” The first entry in the database collects accusations that YouTube Kids displayed adult content, including sexually explicit language. The most recent, #100, concerns a glitch in a French welfare system that can incorrectly determine that people owe the state money. In between are autonomous vehicle crashes, including the fatal Uber incident of 2018, and wrongful arrests caused by failures of automatic translation or facial recognition.


Anyone can submit an item to the catalog of AI calamity. McGregor approves additions for now and has a sizable backlog to process, but hopes the database will eventually become self-sustaining, an open source project with its own community and curation process. One of his favorite incidents is an AI blooper by a face-recognition-powered jaywalking-detection system in Ningbo, China, which mistakenly accused a woman whose face appeared in an ad on the side of a bus.

The 100 incidents logged so far include 16 involving Google, more than any other company. Amazon has seven, and Microsoft two. “We are aware of the database and fully support the partnership’s mission and aims in publishing the database,” Amazon said in a statement. “Earning and maintaining the trust of our customers is our highest priority, and we have designed rigorous processes to continuously improve our services and customers’ experiences.” Google and Microsoft did not respond to requests for comment.

Georgetown’s Center for Security and Emerging Technology is trying to make the database more powerful. Entries are currently based on media reports, such as incident 79, which cites a WIRED report on an algorithm for estimating kidney function that by design rates Black patients’ disease as less severe. Georgetown students are working to create a companion database that adds details of each incident, such as whether the harm was intentional, and whether the algorithm at fault acted autonomously or with human input.

Helen Toner, director of strategy at CSET, says that exercise is informing research on the potential risks of AI accidents. She also believes the database shows how it might be a good idea for lawmakers or regulators eyeing AI rules to consider mandating some form of incident reporting, similar to that for aviation.

EU and US officials have shown growing interest in regulating AI, but the technology is so varied and broadly applied that crafting clear rules that won’t quickly become outdated is a daunting task. Recent draft rules from the EU have been variously accused of overreach, of techno-illiteracy, and of being riddled with loopholes. Toner says mandatory reporting of AI incidents could help ground policy discussions. “I think it would be very wise for those to be accompanied by feedback from the real world on what we are trying to prevent and what kinds of things are going wrong,” she says.
