As the United States prepares for next year's midterm elections, and for the mass of foreign and domestic online misinformation and propaganda likely to accompany them, it is crucial to develop sensible social and legal protections for the groups most likely to be targeted by digital influence campaigns. The time is right to create a renewed plan for the democratic governance of the internet, one that protects the diverse communities affected by the ongoing problems in this space.
Over the last two years, the Propaganda Research Lab at UT Austin's Center for Media Engagement has studied how various global producers of social media propaganda focus their strategies. One of the lab's main findings in the United States is that these actors – working for a range of political parties, domestic and foreign governments, political consulting firms, and PR groups – often use a combination of private platforms such as WhatsApp and Telegram and more open ones like Facebook and YouTube in their efforts to manipulate minority voting blocs in particular regions or cities. For example, we found that they pay particular attention to spreading political disinformation among immigrant and diaspora communities in Florida, North Carolina, and other swing states.
While some of this content comes from American groups hoping to sway votes toward a candidate, much of it has murky origins and less-than-clear intentions. It is not uncommon, for example, to encounter content that either claims to or appears to come from users in China, Venezuela, Russia, or India, some of it bearing the distinct hallmarks of government manipulation campaigns organized in those countries.
This is perhaps not surprising given what we now know about authoritarian-leaning efforts by foreign entities to influence political affairs in the United States and in a variety of other countries around the globe. Both China and Russia continue to work to control Big Tech and, as a result, their populations' experiences of the internet. Indeed, our lab has gathered evidence of campaigns in which people of Chinese heritage in the U.S. – first- and second-generation immigrants in particular – are targeted by sophisticated digital propaganda campaigns bearing the characteristics of similar efforts emanating from Beijing. We've seen suspicious social media profiles (thousands of which Twitter subsequently removed) push anti-US and anti-democracy narratives – effectively pro-Beijing ones – after the murder of George Floyd, the Capitol insurrection, the Hong Kong protests, and other pivotal events. In our interviews and digital field research around the 2020 U.S. presidential election, we encountered people of Arab, Colombian, Brazilian, and Indian descent being targeted by similar efforts. We also spoke with propagandists who were open about their efforts to manipulate immigrant, diaspora, and broader minority groups into, say, falsely believing that Joe Biden was a socialist and should therefore not be supported.
While the success of China, Russia, and other authoritarian regimes in controlling their own in-country internets has been widely reported, the propaganda campaigns of these regimes obviously reverberate beyond the borders of any one nation-state. These efforts affect communities with ties to those countries living elsewhere – including here in the United States – as well as countries that look to these anti-democratic superpowers for guidance on how to manage (or dominate) their own digital information ecosystems.
Russia, China, and other authoritarian states are one step ahead with their splintered versions of the internet, built on autocratic principles: surveillance and the suppression of free speech and individual rights. These control campaigns bleed into other information spaces around the world. For example, research from the Slovak think tank GLOBSEC found Kremlin influence in the digital ecosystems of many EU member states. The researchers argue that both passive and active Russian information operations shape public perceptions of governance and, ultimately, undermine European democracy.
Democratic countries, however, have not been above efforts to co-opt and control the internet themselves. After years of naive belief that the technology sector could and should regulate itself – a belief that culminated in a Capitol insurrection fueled by social media – global policymakers and other actors are now asking what a more democratic, more human rights-oriented internet should look like.
If the Biden administration wants to make good on its renewed commitment to transatlantic collaboration, governance of the digital sphere must take center stage. As autocratic states develop and cement their influence, democracies need to catch up – and some already are. While the EU has led efforts to protect individual privacy rights and to combat misinformation and online hate speech, the task is far from complete. Even as legislative efforts such as the Digital Services Act and rules on artificial intelligence take shape, neither the EU nor the US can afford to go it alone. Democracies thrive in strong alliances, and risk collapse without them.
We need a renewed plan for the democratic governance of the internet. This is an unprecedented undertaking, because our societies have no comparable legal or political experience that can effectively serve as a model for digital efforts. The phenomena created by the digital revolution challenge our understanding of individual rights and force us to redefine their equivalents for the 21st century. Does freedom of speech mean automatic access to an audience of hundreds of thousands of users? What about users who may be particularly susceptible to manipulation or harassment? Do we sufficiently protect the right to online privacy – an area in which a variety of dubious organizations continue to freely track our every move? Answering these and other urgent questions will not be easy, especially since doing so requires collaboration among a number of often conflicting stakeholders: citizens and users, public servants, civil society groups, academics, and, crucially, the technology sector.