Q: Will AI chatbots help stop the spread of misinformation or accelerate it?

A:

The jury is still out on this question in the long term, but for now, most experts say chatbots like OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard will contribute to the spread of misinformation. AI chatbots certainly have the potential to provide accurate information and even fact-checks at lightning speed. They’ve already demonstrated they can process vast quantities of data and quickly convert information into human-like prose. They can also engage in conversations, write essays, generate computer code, brainstorm ideas, pass the bar exam and the MCAT, plan a trip and translate text into another language. Even these first-generation capabilities are breathtaking and real game changers for education, business and healthcare.

And that’s not hyperbole. Bill Gates says in his blog that this AI technology is “as revolutionary as mobile phones and the Internet.” In his New York Times column, Tom Friedman calls it a “Promethean moment,” referring to Prometheus, the Titan of Greek myth who changed the world with his daring actions, and quotes tech researcher Craig Mundie, who says ChatGPT is “mankind’s greatest invention to date.”

Here’s the problem: These chatbots are only as good as the information they’ve been “trained on.” This information comes primarily from the internet and social media. Remember—AI chatbots do not generate original ideas; they cannot think independently. Yes, they can summarize and regurgitate, and some argue they can even reason, but in no way can they replicate higher-order thinking skills, ethical decision-making or human consciousness.

And that means they have no innate ability to discern truth from falsehood. According to NewsGuard, which rates the reliability of news and information on the internet, ChatGPT generated false information 80 out of 100 times in January 2023 when asked about conspiracy theories circulating online. And OpenAI’s latest version of this chatbot, ChatGPT-4, is even worse! When NewsGuard fed ChatGPT-4 the same 100 false narratives and prompts, it responded with incorrect information 100 out of 100 times, and did so more effectively. “Its responses were generally more thorough, detailed, and convincing, and they featured fewer disclaimers,” according to the report.

Microsoft’s Bing and Google’s Bard suffer from similar drawbacks. They even repeat each other’s mistakes, as tech reporter James Vincent discovered when he asked Bing’s chatbot whether Bard had been shut down. It answered yes, citing Bard’s own answer to that question, which was based on a joke posted on Hacker News.

“What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities,” writes Vincent in a Verge article. “In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.”

And that’s the quandary. It’s easy to trip up these large language models just by asking about something false, like who was the first bear to travel in space:

This answer looks reasonable and sounds true, but it’s a mix of fact and fiction. Marfusha did travel into space aboard a Soviet rocket on July 2, 1959, but she was a rabbit, not a bear, which shows ChatGPT’s inability to distinguish fact from fiction. When I asked ChatGPT for its sources, it answered:

None of these links worked because these citations are entirely fabricated. When I asked ChatGPT about this, it generated more fake sources:

Meta’s chatbot, Galactica, which was designed specifically for the scientific community and launched late last year, made similar errors when asked about bears in space but also falsified information about the speed of light and protein complexes, all while sounding quite scholarly. After numerous complaints, Meta shut down the demo.

Just for fun, I asked ChatGPT what it knew about me—and its response was riddled with errors:

First of all, I do not teach, nor have I ever taught, at CUNY’s Graduate School of Journalism. I did not work in broadcast journalism for 30 years and never at NBC. While I would be delighted to have been a Fulbright scholar or to have won an Emmy, these details are completely untrue. My research is also not focused on promoting democracy and human rights; it’s all about promoting news literacy and the skills to spot false information like this.

Scientist and author Gary Marcus believes these large language models are inherently unreliable. “They are quite prone to hallucination, to saying things that sound plausible and authoritative but simply aren’t so,” writes Marcus in a Scientific American column, warning that chatbots have the potential to generate misinformation at an unprecedented scale.

Mistakes about space bears or someone’s bio are not the real threat. The fear is that bad actors now have a tool that can easily amplify their propaganda campaigns on the internet at little to no cost and extraordinary speed. Chatbots can be automated to create new false narratives with minimal human supervision—a dream scenario for malevolent forces that want to polarize the public, disrupt elections and sow distrust in institutions. There’s even talk about setting up “conservative” chatbots trained to respond with differing versions of reality and morality. All of which means it’s probably going to be even harder to find credible information online.

One thing is for sure: AI chatbots are here to stay. The bigger question is whether the benefits of this technology will outweigh its shortcomings and risks, and whether these large language models will live up to their promise to augment human intelligence or suffocate it with misinformation, conspiracy theories and extremism. Stay tuned.