The Department of Defense is publishing AI ethics guidelines for tech contractors.

The purpose of the guidelines is to ensure that tech contractors adhere to the DoD’s existing ethical principles for AI, says Bryce Goodman of the Defense Innovation Unit (DIU), which published them. The DoD announced those principles last year, following a two-year study by the Defense Innovation Board, an advisory panel of leading technology researchers and business leaders set up in 2016 to bring the spark of Silicon Valley to the US military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory.

However, some critics question whether the work promises any meaningful reforms.

During the study, the board consulted a range of experts, including vocal critics of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the protests against Project Maven.

Whittaker, who is now faculty director at New York University’s AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was never meaningfully consulted,” says Holsworth. “Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given outcome has broad buy-in from relevant stakeholders.”

If the DoD does not have broad buy-in, can its guidelines still help to build trust? “There are going to be people who will never be satisfied with any set of ethics guidelines the DoD produces, because they find the very idea paradoxical,” says Goodman. “It’s important to be realistic about what guidelines can and cannot do.”

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that regulations governing such technology are decided higher up the chain of command. The aim of the guidelines is to make it easier to build AI that meets those regulations, and part of that process is to make explicit any concerns that third-party developers have. “A valid application of these guidelines is deciding not to pursue a particular system,” says Jared Dunnmon of the DIU, who coauthored them. “You can decide that it’s not a good idea.”

