
Dumbed-Down AI Rhetoric Harms Everyone

When the European Commission released its regulatory proposal on artificial intelligence last month, much of the U.S. policy community celebrated. The praise was at least partly grounded in truth: the world's most powerful democratic states have not sufficiently regulated AI and other emerging technologies, and the document marked something of a step forward. Above all, however, the proposal and the responses to it underscore democracies' confused rhetoric about AI.

Over the past decade, high-level stated objectives for regulating AI have often conflicted with the specifics of regulatory proposals, and neither has clearly articulated what the desired end state should look like. Coherent and meaningful progress toward internationally attractive democratic AI regulation, even if it differs from country to country, begins with resolving the discourse's many contradictions and unsubstantive characterizations.

The EU Commission has touted its proposal as a landmark in AI regulation. Executive Vice President Margrethe Vestager said upon its release, "We think that this is urgent. We are the first on this planet to suggest this legal framework." Thierry Breton, another commissioner, said the proposals "aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use."

This is certainly better than what many national governments, particularly the United States, have done: stagnating on rules of the road for companies, government agencies, and other institutions. AI is already used widely in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But hailing the EU regulation as "leading" simply because it is first only masks the proposal's many problems. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.

Of the many "specifics" in the 108-page proposal, its approach to regulating facial recognition is among the most consequential. "The use of AI systems for 'real-time' remote biometric identification of individuals in publicly accessible spaces for the purpose of law enforcement," it reads, "is considered particularly intrusive in the rights and freedoms of the concerned persons," as it may affect private life, "evoke a feeling of constant surveillance," and "indirectly dissuade the exercise of the freedom of assembly and other fundamental rights." At first glance, these words may signal alignment with the concerns of many activists and technology ethicists about the harms facial recognition can inflict on marginalized communities and the grave risks of mass surveillance.

The commission then states, "The use of those systems for the purpose of law enforcement should therefore be prohibited." However, it would allow exceptions in "three exhaustively listed and narrowly defined situations." This is where the loopholes come into play.

The exceptions include situations that "involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of criminal offences." This language, for all that the scenarios are described as "narrowly defined," offers a myriad of justifications for law enforcement to deploy facial recognition as it sees fit. Permitting its use in the "identification" of "perpetrators or suspects" of criminal offenses, for example, would allow precisely the kind of discriminatory use of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU's privacy watchdog, the European Data Protection Supervisor, quickly seized on this. "A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals' private lives," the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to the Verge as "a veneer of fundamental rights protection." Others have noted how these exceptions mirror legislation in the United States that on the surface appears to restrict facial recognition use but in fact contains many broad carve-outs.

