
GPT-3 Can Write Disinformation Now — and Dupe Human Readers

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could be wielded as a weapon of online disinformation.

Now a team of disinformation experts has shown how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may not be a match for the best Russian meme makers, it could amplify forms of deception that would be particularly hard to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” reads a sample tweet composed by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second tweet labeled climate change “the new communism – an ideology based on a false science that cannot be questioned.”

“With a little human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a Georgetown professor involved in the study, who focuses on the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say that GPT-3, or a similar AI language algorithm, could prove particularly effective at automatically generating short messages on social media, what the researchers call “one-to-many” misinformation.
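To make concrete how cheap that one-to-many pattern is, here is a minimal sketch of fanning a single prompt out into several short variants. It uses the OpenAI Python client as it existed around GPT-3’s launch (the Completion endpoint and the base davinci engine); the prompt, sampling parameters, and placeholder API key are illustrative assumptions, and the topic is deliberately benign rather than anything from the study.

```python
# A minimal sketch of "one-to-many" text generation: one prompt, many
# short, slightly different completions. Uses the original OpenAI
# Completion API available around GPT-3's launch; the engine name,
# parameters, and placeholder key are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder; a real key is required

response = openai.Completion.create(
    engine="davinci",          # the base GPT-3 engine at launch
    prompt="Write a short tweet about why recycling matters:\n",
    max_tokens=40,             # keep each output tweet-length
    temperature=0.9,           # higher temperature -> more varied wording
    n=5,                       # request five variants in a single call
    stop=["\n"],               # cut each completion at the first newline
)

for i, choice in enumerate(response.choices, 1):
    print(f"{i}. {choice.text.strip()}")
```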

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of U.S. troops from Afghanistan and U.S. sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing sanctions on China, for example, the percentage of respondents who said they were against such a policy doubled.

Mike Gruszczynski, a professor at Indiana University who studies online communications, says he would not be surprised to see AI play a major role in disinformation campaigns. He notes that bots have played a key role in spreading false narratives in recent years, and that AI can be used to generate fake social media profile pictures. With bots, deepfakes, and other technology, “I really think the sky is the limit unfortunately,” he says.

AI researchers have built programs capable of using language in surprising ways of late, and GPT-3 is perhaps the most startling demonstration of all. Although machines do not understand language in the same way that people do, AI programs can mimic comprehension simply by feeding on vast amounts of text and looking for patterns in the way words and phrases fit together.
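That pattern-matching idea can be sketched in a few lines of Python. The toy bigram model below is emphatically not how GPT-3 works internally (GPT-3 is a large neural network trained to predict tokens), but it illustrates the principle the paragraph describes: tally which words tend to follow which in training text, then generate new text by sampling likely successors. The tiny corpus and generation length are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up "training corpus"; real models train on billions of words.
corpus = (
    "the model reads text and learns patterns . "
    "the model then writes text by predicting the next word . "
    "the next word is drawn from patterns seen in the training text ."
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate by repeatedly sampling a likely successor of the last word.
word, output = "the", ["the"]
for _ in range(12):
    successors = follows.get(word)
    if not successors:
        break
    word = random.choices(list(successors), weights=successors.values())[0]
    output.append(word)

print(" ".join(output))
```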

Researchers at OpenAI created GPT-3 by feeding large amounts of text scraped from web sources, including Wikipedia and Reddit, to an especially large AI algorithm designed to handle language. GPT-3 has often astonished observers with its apparent mastery of language, but it can be unpredictable, spewing out incoherent babble and offensive or hateful language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using GPT-3 to auto-generate emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.

Getting GPT-3 to behave would be a challenge for agents of disinformation, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not try showing the articles it did produce to the volunteers.

But Buchanan warns that state actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethics will be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important issue that the company hopes to mitigate. “We actively work to address the safety risks associated with GPT-3,” says an OpenAI spokesperson. “We also review every production use of GPT-3 before it goes live, and we have monitoring systems in place to restrict and respond to misuse of our API.”

