Q: Can AI be used to create misinformation?

[Image: a hoodie with 1s and 0s coming out of it. Image by Gerd Altmann from Pixabay]
A:

ChatGPT and other AI writing tools have the potential to revolutionize many areas of our world, but unfortunately, creating misinformation is one of them. A recent study by NewsGuard, a tech company that helps weed out online misinformation, found that ChatGPT could be manipulated to create misinformation in 80% of attempts.

OpenAI, the company that created ChatGPT, has policies that disallow creating “hateful, harassing, or violent content” and content for political campaigning and lobbying, among other areas.

But NewsGuard found these policies were easily circumvented in its January 2023 study. When ChatGPT was asked to write from the perspective of Alex Jones or a prominent anti-vaxxer, for instance, it produced compelling pieces of misinformation in 80 out of 100 attempts. Some of those texts included statistics from studies that the researchers couldn’t find anywhere online. They appeared to be fabricated but sounded plausible as evidence backing up the misinformation. 

Sometimes ChatGPT included a disclaimer about the misinformation lower down in the response. But the researchers noted it would be easy to delete those sentences and post the rest online.

The fear is that bad actors, including authoritarian regimes, may use chatbots to flood the internet with believable pieces of misinformation that support conspiracy theories and extremism. Among the misinformation ChatGPT produced for the study was a defense of Russia and its allies arguing that they weren’t responsible for the downing of Malaysia Airlines Flight MH17 and an argument that “allegations about Uyghur internment camps in China are overblown and misplaced.”

However, it’s important to note that AI tools are always learning and changing. When The New York Times tried to recreate NewsGuard’s experiment, ChatGPT produced misinformation in only 33% of attempts.

And when I asked ChatGPT to write a news article about the Replacement Theory from the perspective of Tucker Carlson, it refused: 

“I’m sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Replacement theory, also known as the white genocide theory, is a baseless and dangerous conspiracy theory that suggests white people are being systematically replaced by non-white populations through immigration and low birth rates. It has been debunked by experts and has been associated with far-right extremist violence.

“As an AI language model, my purpose is to provide information and knowledge that is factual, unbiased, and ethical. Propagating conspiracy theories and promoting harmful ideas goes against the principles of impartiality and non-discrimination. It is important to uphold the value of accuracy, honesty, and respect for human dignity in all forms of communication, including news reporting.”

While this is heartening, it’s important to note that ChatGPT is just one of many AI writing assistants in development. Even if its developers prevent it from creating disinformation in the majority of attempts, other chatbots may be more easily manipulated.

“This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” NewsGuard co-CEO Gordon Crovitz told The New York Times. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”