Responsible AI has a burnout problem

“The loss of just one person has serious consequences for the entire organization,” Mitchell says, because the experience that person has built up is extremely difficult to replace. At the end of 2020, Google fired Timnit Gebru, co-lead of its ethical AI team, and fired Mitchell a few months later. Shortly afterward, several other members of the responsible AI team left as well.

Gupta says such a brain drain poses a “serious risk” to progress in AI ethics and makes it harder for companies to follow through on their responsible AI programs.

Google announced last year that it was doubling its AI ethics research staff, but has not commented on its progress since. The company told MIT Technology Review that it offers mental health resilience training, runs a peer-to-peer mental health initiative, and gives employees access to digital mindfulness tools; it can also connect them virtually with mental health providers. It did not respond to questions about Mitchell’s time at the company.

Meta said it has invested in benefits such as a program that gives employees and their families access to 25 free therapy sessions each year. Twitter said it offers employee counseling and coaching sessions, as well as training to prevent burnout, and also runs a mental health-focused peer support program. Neither company said it offers support aimed specifically at people working on AI ethics.

As demand for AI compliance and risk management grows, CTOs need to make sure they are investing enough in responsible AI programs, Gupta said.

Change starts at the top. “Leaders need to speak with their dollars, their time, the resources that they allocate to this,” he says. Otherwise, people working on ethical AI are “doomed to fail.”

Successful responsible AI teams need enough tools, resources, and people to work on problems, but they also need latitude, organization-wide connections, and the ability to implement the changes they are asked to make, Gupta adds.

Many mental health resources at tech companies focus on time management and work-life balance, Chowdhury says, but more support is needed for people who work on emotionally and psychologically distressing topics. Mental health resources aimed specifically at people working in responsible technology would also help, she adds.

“There was no recognition of the implications of working on this sort of thing, and there was definitely no encouragement to separate yourself from it,” Mitchell says.

“The only mechanism big tech companies have to deal with the reality of this is to ignore the reality of it.”
