
The AI myth Western lawmakers get wrong

This story originally appeared in The Algorithm, our weekly artificial intelligence newsletter. To get stories like this in your inbox first, sign up here.

While the US and EU may disagree on how to regulate technology, their lawmakers seem to agree on one thing: the West needs to ban AI-based social scoring.

As they understand it, social scoring is a practice in which authoritarian governments, China in particular, rate people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it is seen as a dystopian super-score assigned to every citizen.

The EU is currently negotiating a new law called the AI Act, which would bar member states, and perhaps even private companies, from implementing such a system.

The problem is that it is “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

Back in 2014, China announced a six-year plan to build a system that rewards actions that build trust in society and penalizes the opposite. Eight years later, a draft law has only just been released that attempts to codify past social credit pilot programs and guide future implementation.

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, where every resident was given a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People can now opt out, and the local government has removed some controversial criteria.

But such pilots have not caught on elsewhere in the country and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explains, “The reality is that this appalling system does not exist, and the central government does not seem to have much desire to build it.”

What has been implemented is mostly quite low-tech. It is “a combination of attempts to regulate the financial and credit industry, allow government agencies to share data with each other, and promote state-sanctioned moral values,” writes Zeyi.

Kendra Schaefer, a partner at Trivium China, a Beijing-based research and consulting firm that compiled a report on the subject for the US government, could not find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” walked around town and recorded people’s misbehavior with pen and paper.

The myth arose from a pilot program called Sesame Credit, developed by the Chinese tech company Alibaba. It was an attempt to assess people’s creditworthiness using customer data at a time when most Chinese people didn’t have credit cards, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misconception took on a life of its own.

The irony is that while US and European politicians portray this as a problem of authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services.

In Amsterdam, for example, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming a criminal. They claim the aim is to prevent crime and help offer better, more targeted support.

But in fact, rights groups argue, it has increased stigma and discrimination. Young people on this list face more police stops, home visits from authorities, and stricter oversight from schools and social workers.

It’s easy to argue against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive for a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that offers basic protections against algorithmic decision-making.

It is also crucial for governments to conduct honest, thorough audits of the way governments and companies use AI to make decisions about our lives. They might not like what they find, but that makes it all the more important for them to look.

Deeper Learning

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

Research company OpenAI has built an AI that was trained on 70,000 hours of video of people playing Minecraft and can play the game better than any AI before. This is a breakthrough for a powerful new technique called imitation learning, which could be used to train machines to carry out a wide range of tasks by watching humans do them first. It also raises the possibility that sites like YouTube could become a vast, untapped source of training data.

Why it matters: Imitation learning could be used to teach AI to control robotic arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, believe that watching videos will ultimately help us train AI with human-level intelligence. Read Will Douglas Heaven’s story here.

Bits and Bytes

Meta’s game-playing AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and detect when others are bluffing. Meta’s new AI, named Cicero, managed to trick humans in order to win.

This is a big step forward for AI that can help solve complex problems, such as planning routes around busy traffic and negotiating contracts. But I won’t lie: the idea that an AI can deceive humans so successfully is also alarming. (MIT Technology Review)

We may run out of data to train AI language programs

The trend toward ever larger AI models means we need even larger datasets to train them. The trouble is, we could run out of suitable data by 2026, according to a paper by researchers at Epoch, an AI research and forecasting organization. That should prompt the AI community to find ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source image-generating AI Stable Diffusion has been given a big facelift, and its outputs look much sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is breathtaking: its first version launched only in August. We are likely to see even more progress in generative AI into next year.



