A:
You may be surprised to learn that news organizations like The Associated Press have been using some form of artificial intelligence since 2014. AP’s business desk was the first to try it out, for pro forma stories on corporate earnings that are vital to the financial markets but tedious to write. The data from earnings reports and financial statements could be automatically fed into a database, and the AI technology known as Wordsmith took care of the rest, fully automating earnings stories for all publicly traded companies in the U.S. That increased the business desk’s output by a factor of 10, which didn’t go unnoticed.
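The mechanics behind this kind of automation are simpler than the term “AI” suggests. Here is a minimal sketch in Python of template-driven story generation; the company, figures and field names are hypothetical, and this illustrates the general technique, not AP’s or Wordsmith’s actual system:

```python
def earnings_story(report: dict) -> str:
    """Render a structured earnings report as a short, formulaic story."""
    surprise = report["eps_actual"] - report["eps_estimate"]
    if surprise > 0:
        verb, participle = "beat", "beating"
    elif surprise < 0:
        verb, participle = "missed", "missing"
    else:
        verb, participle = "met", "matching"

    headline = (f"{report['company']} ({report['ticker']}) {verb} "
                f"Q{report['quarter']} earnings estimates")
    body = (f"{report['company']} reported quarterly earnings of "
            f"${report['eps_actual']:.2f} per share, {participle} the "
            f"analyst consensus of ${report['eps_estimate']:.2f}. "
            f"Revenue came in at ${report['revenue_m']:,} million.")
    return f"{headline}\n\n{body}"


if __name__ == "__main__":
    # Hypothetical company and figures, purely for illustration.
    print(earnings_story({
        "company": "Example Corp",
        "ticker": "EXMP",
        "quarter": 2,
        "eps_actual": 1.42,
        "eps_estimate": 1.35,
        "revenue_m": 812,
    }))
```

The point is that when the inputs are structured and the story format is fixed, “writing” reduces to filling in a template, which is why these systems scale so easily.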
Soon, the sports desk jumped in, applying a similar technology to preview and recap athletic events around the world, generating stories that would never have been written because there simply weren’t enough reporters.
Other news outlets have also embraced artificial intelligence. Bloomberg News developed its own “Cyborg” technology that can analyze a financial report in seconds and then spit out a story. Today, nearly one-third of Bloomberg’s content is produced using some form of automated or augmentative artificial intelligence.
The Washington Post’s “Heliograf” is another automated content generation system that gives people updates on local political races, high school sports and even up-to-the-minute notifications on medal winners at the Olympics. Reuters’ AI model is also enormously prolific, scouring government and corporate databases to produce thousands of stories in multiple languages on a daily basis.
But before we go any further, let’s define the three different types of AI used by journalists:
Automated AI: These programs churn out formulaic stories that don’t require much context or analysis, like earnings reports or sports roundups. This type of AI can replace human reporters for repetitive or routine stories.
Augmentative AI: This type of artificial intelligence enhances rather than replaces the work done by human reporters, trawling through databases in real time and alerting journalists to anything out of the ordinary (a toy sketch of this alerting pattern follows this list). It also offers suggestions for headlines and leads, as well as editing and proofreading. Reuters uses an augmentative AI program known as Lynx, which it calls “a digital data scientist-cum-copywriting assistant.”
Generative AI: Think ChatGPT, Bard, DALL-E and many others. This type of machine learning creates new content that mimics the work of human reporters. It can generate story ideas, write news articles, and produce images, audio reports and even videos.
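To make the augmentative idea concrete, here is a toy Python sketch of that alerting pattern: scan incoming figures, flag statistical outliers and leave the judgment to a human. The data, threshold and function names are hypothetical illustrations of the general approach, not Reuters’ actual Lynx system:

```python
from statistics import mean, stdev

def flag_anomaly(label: str, history: list[float], latest: float,
                 threshold: float = 3.0) -> str | None:
    """Return an alert string if `latest` falls far outside `history`."""
    if len(history) < 2:
        return None  # too little data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return None  # a flat history makes z-scores meaningless
    z = (latest - mu) / sigma
    if abs(z) >= threshold:
        return (f"ALERT: {label} = {latest:,.0f} is {z:+.1f} standard "
                f"deviations from its recent average of {mu:,.0f}")
    return None

if __name__ == "__main__":
    # Hypothetical weekly jobless-claims figures.
    history = [212_000, 208_000, 215_000, 210_000, 209_000]
    alert = flag_anomaly("weekly jobless claims", history, 260_000)
    if alert:
        print(alert)  # a human reporter decides whether it's a story
```

The design choice matters: the software only surfaces the anomaly; deciding whether it is newsworthy, and writing about it, stays with the journalist.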
So, as you can see, there’s no turning back—AI is already entrenched in the news business. If anything, newsrooms are betting big on the promise of these AI tools. On its website, The Associated Press says it’s committed to integrating AI into all facets of its newsroom.
That strategy led to a groundbreaking deal in July 2023 between The Associated Press and OpenAI—a first between a news outlet and an artificial intelligence company. While AP gets access to the latest OpenAI technology, potentially automating even more of its content, OpenAI gains access to AP’s stories dating back to 1985 to better train its models and improve its knowledge base.
Sounds like a win-win, but many in the news industry are raising concerns about the growing use of AI, given its documented problems with accuracy, bias and accountability.
The technology and e-commerce news site CNET knows this all too well. It was called out—and publicly humiliated—for failing to disclose that it was using AI to write stories, which turned out to be riddled with errors. Many of the posts, published under the byline “CNET Money Staff,” looked just like any other content on the site but contained multiple mistakes about personal finance and even plagiarized sentences that had to be replaced.
CNET issued a public apology and shared the results of an internal audit that concluded, “AI engines, like humans, make mistakes.” That’s quite the understatement, since the audit found errors in more than half of the site’s AI-written stories.
CNET then issued a moratorium on the use of the technology until editorial policies were in place to prevent further errors. In the spirit of transparency, it also said that future stories written by its AI engine would carry the byline “CNET Money” and a prominent disclosure.
But factual errors are just one source of angst. Since AI learns from data, any bias in that data can find its way into stories. Case in point: when Buzzfeed used AI-generated images for an article about what Barbie would look like in 194 countries around the world, the story went viral, and the backlash was instant. The Twitter universe accused Buzzfeed of racism and perpetuating negative stereotypes, with good reason. German Barbie looked like a Nazi officer; South Sudan’s Barbie sported an automatic rifle as an accessory; Qatar Barbie wore a traditional headdress reserved for men; Argentina’s Barbie had red hair and blue eyes, while Thai Barbie had wavy blonde hair. Buzzfeed’s disclaimer acknowledging that the images “reveal biases and stereotypes that currently exist within AI models” did not excuse the glaring lapse in editorial judgment, and the site eventually deleted the article.
Another challenging issue is what to do when AI defames someone—who is responsible? And this isn’t just theoretical. ChatGPT has already falsely accused a well-known law professor of inappropriately touching a student on a class trip to Alaska. The chatbot named the professor, hallucinated all the details and cited as evidence a March 2018 Washington Post story that doesn’t exist. Despite the potential harm to the professor’s reputation, legal experts are not sure how a court might rule if he tried to sue OpenAI for libel.
The latest batch of generative AI can also produce misinformation and deep fakes in such prodigious quantities that finding fact-based information online could become impractical. If that happens, who is accountable?
Many industry watchers view these problems as growing pains that will be worked out. Even those who are more skeptical admit that journalism written by generative AI is inevitable. But like the many disruptions to the news industry that preceded this one, the future will be determined by how journalists navigate this brave new world. Although The Associated Press is “looking for ways to deploy artificial intelligence in everything,” so far that does not include using generative AI to write stories without human supervision.
At this point, artificial intelligence is just another tool in a journalist’s arsenal, albeit a powerful one that can filter vast quantities of data, expand news coverage, and ultimately make newsrooms faster and more productive. It is not a replacement for the guiding principles and ethical standards that underpin journalism, and certainly not a substitute for human journalists. Yet.