Did you witness the earth-shattering news live? NFL football star Travis Kelce, fresh off a Super Bowl victory with the Kansas City Chiefs, had just taken a private jet to Sydney, Australia, to see his girlfriend Taylor Swift, who was there on a concert tour stop. And OMG, you’d never guess what happened next.
Sora is the latest AI-powered tool from OpenAI, so new that it’s only available to a few select researchers, academics and visual artists. This software generates highly realistic videos from short snippets of text, like “historical footage of California in the gold rush.” After typing in that description, presto! Sora generates a high-resolution video in a fraction of the time it would take a digital artist to create it and way faster than actually filming that scene on-site with actors, props, lighting and cameras.
This is not a hypothetical question or distant possibility; AI-generated content is already influencing voters. Although many state and federal lawmakers are scrambling to safeguard the upcoming election, a growing number of experts are sounding the alarm, warning that the U.S. is woefully unprepared for the growing threat from AI-generated propaganda and disinformation. In the 14 months since ChatGPT’s debut, this new AI technology has been flooding the internet with lies, reshaping the political landscape and even challenging our concept of reality.
One cold morning in November 2020, I was running in Brooklyn with my friend James. We often talked about the various clubs that compose the competitive running scene we inhabit. Sometimes we shared gossip about who was switching to a new club, or which club was falling apart, or who was dating whom. On this particular run we wondered if one of New York’s most dominant clubs, West Side Runners, would still exist after the pandemic.
You may be surprised to learn that news organizations like The Associated Press have been using some form of artificial intelligence since 2014.
Many journalists try to be objective in their work, which means they don’t take sides or show bias. But there are renewed calls for journalists to stand up for what they believe is right rather than report from a position of neutrality.
Anything that persuades news consumers to pause before sharing is a good start and helps keep misinformation from polluting our information ecosystem.
The jury is still out on this question in the long term, but for now, most experts say chatbots will contribute to the spread of misinformation. Just for fun, I asked ChatGPT what it knew about me—and its response was startling.
Reporting often involves the presentation of conflicting information from different sources. When this happens, reporters must be diligent in verifying the facts and seeking multiple perspectives. Ultimately, the goal is to provide the context, evidence, and analysis needed for the audience to make an informed judgment. And in moments when information is hard to assess accurately, it is critical for reporters to explain to their readers why they couldn’t obtain that information.
ChatGPT and other AI writing tools have the potential to revolutionize many areas of our world, but unfortunately, creating disinformation is one of them.
A recent study by NewsGuard, a tech company that helps weed out online misinformation, found that ChatGPT could be manipulated to create misinformation in 80% of attempts.