Collective data rights can prevent Big Tech from erasing privacy

Individuals should not have to fight for their data privacy rights and bear responsibility for every consequence of their digital actions. Consider an analogy: people have the right to safe drinking water, but they are not expected to exercise that right by checking the quality of the water with a pipette every time they drink from the tap. Instead, regulatory agencies act on everyone’s behalf to ensure that all of our water is safe. The same must be done for digital privacy: it is not something the average user is, or should be expected to be, personally competent to protect.

There are two parallel approaches that must be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress lawsuits. Historically, these have been limited in Europe, but in November 2020 the European Parliament passed a measure that requires all 27 EU member states to implement mechanisms allowing collective redress actions across the region. Compared with the United States, the EU has stronger laws protecting consumer data and promoting competition, so class or group actions in Europe can be a powerful tool for lawyers and activists to force large technology companies to change their behavior, even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the United States to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, forcing Big Tobacco to admit to the link between smoking and cancer, or paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, such as the lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the technology journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm – a social network that protects user data less is an inferior product – that tips Facebook from a mere monopoly to an illegal one.” In the United States, as the New York Times recently reported, private suits, including class actions, often “rely on evidence discovered by government investigations.” In the EU, however, the opposite is true: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill. The Digital Republic Bill is one of the few modern laws focused on automated decision making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems. But it provides a sketch of what future laws could look like. It says that the source code behind such systems must be made available to the public. Anyone can request that code.

Importantly, the law enables advocacy organizations to request information about the functioning of an algorithm and the source code behind it even if they do not represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to address the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office tasked with overseeing the bill, says the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated.


Apple promises in one advertisement: “Right now, there is more private information on your phone than in your home. Your locations, your messages, your heart rate after a run. These are private things. And they belong to you.” Apple is reinforcing this individualist fallacy: by failing to mention that your phone stores more than just your personal data, the company obscures the fact that the really valuable data comes from your interactions with your service providers and others. The notion that your phone is the digital equivalent of your filing cabinet is a convenient illusion. Companies actually care little about your personal data, which is why they can pretend to lock it in a box. The value lies in the inferences drawn from your interactions, which are also stored on your phone – but that data does not belong to you.

Google’s acquisition of Fitbit is another example. Google promises “not to use Fitbit data for advertising,” but the lucrative predictions Google needs don’t depend on individual data. As a group of European economists argued in a recent paper published by the Centre for Economic Policy Research, a think tank in London, “it is enough for Google to correlate aggregate health outcomes with non-health outcomes for even a subset of Fitbit users who did not opt out of some use of their data, to then predict health outcomes (and thus ad-targeting possibilities) for all non-Fitbit users (billions of them).” The Google-Fitbit deal is essentially a group data deal. It positions Google in a key market for health data while allowing it to triangulate different data sets and make money from the inferences used by the health and insurance markets.
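To make the economists’ point concrete, here is a minimal, purely illustrative sketch of the pattern they describe: a model is fitted on the subset of users who share both kinds of data, then applied to everyone else. All data, feature names, and the model choice are assumptions made for illustration, not a description of Google’s actual systems.

```python
# Purely illustrative sketch of "learn on a subset, infer for everyone else".
# All data, features, and the model are synthetic assumptions, not any
# company's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subset, n_rest, n_features = 10_000, 100_000, 20

# Non-health signals (search-, location-, purchase-style features) available for all users.
X_subset = rng.normal(size=(n_subset, n_features))   # users who also wear a fitness tracker
X_rest = rng.normal(size=(n_rest, n_features))       # users who never shared health data

# Health outcomes observed only for the tracker-wearing subset.
weights = rng.normal(size=n_features)
y_subset = (X_subset @ weights + rng.normal(size=n_subset) > 0).astype(int)

# Correlate health outcomes with non-health signals on the subset...
model = LogisticRegression(max_iter=1000).fit(X_subset, y_subset)

# ...then infer health risk (and hence ad-targeting categories) for people
# who never shared any health data at all.
inferred_risk = model.predict_proba(X_rest)[:, 1]
print(f"Inferred a health-risk score for {len(inferred_risk):,} non-users")
```

The point of the sketch is that the people whose outcomes are being inferred never appear in the training data and never agreed to anything.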

What policy makers need to do

In the United States, bills have been proposed to fill this gap. In 2019, Senators Cory Booker and Ron Wyden introduced the Algorithmic Accountability Act, which subsequently stalled in Congress. The act would have required companies to conduct algorithmic impact assessments in certain situations to check for bias or discrimination. But in the United States this crucial issue is more likely to be addressed first in laws applicable to specific sectors such as health care, where the danger of algorithmic bias has been magnified by the pandemic’s disparate impacts on US population groups.

In late January, the Public Health Emergency Privacy Act was reintroduced to the Senate and the House of Representatives by Senators Mark Warner and Richard Blumenthal. This act would ensure that data collected for public health purposes is not used for any other purpose. It would prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to gate access to employment, finance, insurance, housing, or education. It would be a great start. Going further, a law that applies to all algorithmic decision making should, inspired by the French example, focus on strict accountability, strong regulatory oversight of data-driven decisions, and the ability to audit and inspect algorithmic decisions and their impact on society.

Three elements are needed to ensure strict accountability: (1) clear transparency about where and when automated decisions are made and how they affect individuals and groups, (2) the public’s right to provide meaningful input and call on those in authority to justify their decisions, and (3) the ability to impose sanctions. Crucially, policymakers must decide, as has recently been suggested in the EU, what constitutes a “high-risk” algorithm that should be held to a higher standard of scrutiny.


Clear transparency

The focus should be on public scrutiny of automated decision making and the types of transparency that lead to accountability. This includes revealing the existence of algorithms, their purpose, and the training data behind them, as well as their impacts – whether they have led to disparate outcomes, and on which groups if so.
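As a rough illustration of what checking for disparate outcomes can look like in practice, here is a minimal sketch that compares favorable-decision rates across groups. The sample data, column names, and the 0.8 threshold (sometimes called the four-fifths rule) are assumptions chosen for the example, not a legal standard the article endorses.

```python
# Minimal sketch of a disparate-outcome check: compare the rate of favorable
# automated decisions across groups. Data, column names, and the 0.8
# threshold are illustrative assumptions only.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Favorable-decision rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Each group's rate relative to the best-treated group.
ratios = rates / rates.max()
flagged = ratios[ratios < 0.8]

print(rates.to_string())
print("Groups warranting further review:", list(flagged.index))
```

A real audit would also need access to the algorithm’s purpose and training data, which is exactly what the transparency requirement above is meant to secure.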

Public participation

The public has a fundamental right to call on those in power to justify their decisions. This “right to demand answers” should not be limited to consultative participation, where people are asked for their input and officials simply move on. It should include empowered participation, where public input is required prior to the rollout of high-risk algorithms in both the public and private sectors.

Sanctions

Finally, the power to sanction is key for these reforms to succeed and for accountability to be realized. It should be mandatory to establish auditing requirements for the targeting, verification, and curation of data, to equip auditors with this baseline knowledge, and to empower oversight bodies to enforce sanctions – not only to remedy harm after the fact, but to prevent it.


The issue of data-driven collective harms affects everyone. The Public Health Emergency Privacy Act is a first step. Congress should then take the lessons from implementing that act to develop laws that focus specifically on collective data rights. Only through such action can the United States avoid situations where inferences drawn from the data companies collect determine people’s ability to access housing, employment, credit, and other opportunities for years to come.

