Senator Amy Klobuchar (D-MN) unveiled a new bill today that aims to finally hold tech companies accountable for allowing misinformation about vaccines and other health issues to spread online.
The bill, dubbed the Health Misinformation Act and co-sponsored by Senator Ben Ray Luján (D-NM), would create an exception to Section 230 of the Communications Decency Act, the critical law that has long protected tech companies like Facebook, Google, and Twitter from being sued over almost any content people post on their platforms.
Klobuchar’s bill would change that – but only when a social media platform’s algorithm promotes health misinformation related to an “existing public health emergency.” The legislation directs the Secretary of Health and Human Services (HHS) to define what counts as health misinformation in these scenarios.
“Features that are built into technology platforms have contributed to the spread of misinformation and disinformation,” reads the draft of the bill, which Recode has reviewed, “with social media platforms incentivizing individuals to share such content to generate likes, comments, and other positive signals of engagement, which reward engagement rather than accuracy.”
The bill would not apply in cases where a platform shows posts to people through a “neutral mechanism,” such as a social media feed that ranks posts chronologically rather than algorithmically. This would be a big change for the major internet platforms. Today, almost all major social media platforms rely on algorithms to determine what content to show users in their feeds. And these ranking algorithms are typically designed to surface the content users interact with most – often emotionally provocative posts – which can end up prioritizing inaccurate information.
The new bill comes at a time when social media companies are being criticized for the spread of misinformation about Covid-19 on their platforms, despite their efforts to fact-check or remove the most egregiously harmful health content. Last week, as the number of Covid-19 cases began to rise among unvaccinated Americans, President Biden accused Facebook of “killing people” with misinformation about vaccines (a statement he later partially walked back).
At the same time, large social media companies continue to face criticism from some Republicans who oppose the Surgeon General’s recent advisory on tackling the threat of health misinformation. Conservatives, and especially Senator Josh Hawley (R-MO), have also objected to the White House’s work flagging problematic health misinformation to social media platforms, calling the collaboration “scary stuff” and “censorship.”
While the tech giants face bipartisan criticism, Klobuchar’s plan to carve out an exception to Section 230 – even a partial one – is likely to run into problems. Defining and detecting public health misinformation is often difficult, and issues can arise when a government agency decides where to draw that line. Courts would also have to determine whether a platform’s algorithms were “neutral” and whether they promoted health misinformation – questions that have no simple answers.
In addition, individual users may find it difficult to successfully sue Facebook even if Section 230 protections are partially stripped, because posting health misinformation (as opposed to, say, posting child pornography or defamatory statements) is not itself illegal.
Free speech advocates have warned that rolling back Section 230 – even in part – could curtail online expression as we know it, because it would force tech companies to police much more tightly what users are allowed to post.
Still, the introduction of the bill reflects the political will among Democrats on Capitol Hill to push tech companies to fight disinformation on their platforms more effectively.
“For too long, online platforms have not done enough to protect the health of Americans,” Senator Klobuchar said in a statement. “These are some of the largest and richest companies in the world, and they must do more to prevent the spread of deadly vaccine misinformation.”
Earlier this year, Senator Klobuchar, along with Senator Luján, wrote letters to the CEOs of Twitter and Facebook demanding a more aggressive response to disinformation on their platforms, as Recode first reported. The letters cited research by the nonprofit Center for Countering Digital Hate finding that 12 anti-vaccine influencers – the “Disinformation Dozen” – were responsible for 65 percent of anti-vaccine content on Facebook and Twitter.
In responses to these letters, which were seen by Recode, both platforms largely defended their approach to these influencers, noting that they had taken some action against their accounts. On both platforms, many of those accounts remain active. While there is limited evidence quantifying how much misinformation on Facebook has fueled vaccine hesitancy, longtime online vaccine advocates told Recode earlier this year that Facebook’s approach to vaccine content has made their work harder, and that content in Facebook groups in particular has made some people more opposed to vaccines.
This is also not the first time Congress has tried to chip away at Section 230. More recently, Congress introduced the EARN IT Act, which would strip tech companies of Section 230 immunity if they do not properly address child pornography on their platforms. The bill, which received bipartisan support when it was introduced, is still pending in Congress. Earlier this year, Representatives Tom Malinowski (D-NJ) and Anna Eshoo (D-CA) also reintroduced their proposal, the Protecting Americans from Dangerous Algorithms Act, which would remove Section 230 protections from platforms in cases where their algorithms amplify content involved in international terrorism or civil rights violations.
President Trump also sought to repeal Section 230, just days after Twitter began fact-checking his misleading posts about mail-in voting in 2020.
Despite the potential obstacles to their proposal, Senators Klobuchar and Luján’s bill is a reminder that lawmakers worried about disinformation are increasingly focused on the algorithms and ranking systems that drive engagement with this kind of content.
“The social media giants know this: Their algorithms encourage people to consume more and more misinformation,” Imran Ahmed, CEO of the Center for Countering Digital Hate, told Recode in February. “Social media companies have not only encouraged, tolerated, and nurtured the growth of this market – they have become a major vector of misinformation themselves.”