The EU's new content moderation law, the Digital Services Act, includes annual audit requirements for the data and algorithms used by major technology platforms, and the EU's upcoming AI Act could also allow authorities to audit AI systems. The US National Institute of Standards and Technology also recommends AI audits as a gold standard. The idea is that these audits will act like the inspections we see in other high-risk sectors, such as chemical plants, says Alex Engler, who studies AI governance at the Brookings Institution think tank.
The problem is that there are not enough independent auditors to meet the growing demand for algorithmic audits, and companies are reluctant to give them access to their systems, argue AI accountability researcher Deborah Raji and her co-authors in a paper from last June.
That is what these competitions aim to cultivate. The hope in the AI community is that they will lead more engineers, researchers, and experts to develop the skills and experience needed to conduct these audits.
So far, most of the limited scrutiny of AI systems has come either from academics or from tech companies themselves. The aim of these competitions is to create a new field of AI audit experts.
“We’re trying to create a third space for people who are interested in this kind of work, who want to get started, or who are experts who don’t work for tech companies,” says Rumman Chowdhury, director of Twitter’s team on ethics, transparency, and accountability in machine learning, and a leader of the Bias Buccaneers. Those people could include hackers and data scientists looking to learn a new skill, she says.
The Bias Buccaneers organizing team hopes this competition will be the first of many.
Competitions like this not only create incentives for the machine-learning community to conduct audits but also promote a shared understanding of “how best to audit and what types of audits we should invest in,” says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.
The effort is “fantastic and absolutely necessary,” says Abhishek Gupta, founder of the Montreal AI Ethics Institute, who was a judge in the Stanford AI Audit Competition.
“The more eyes you have on a system, the more likely it is that we find places where there are flaws,” Gupta says.