This is not a hypothetical question or distant possibility; AI-generated content is already influencing voters. Although many state and federal lawmakers are scrambling to safeguard the upcoming election, a growing number of experts are sounding the alarm, warning that the U.S. is woefully unprepared for the threat posed by AI-generated propaganda and disinformation. In the 14 months since ChatGPT’s debut, this new AI technology has flooded the internet with lies, reshaped the political landscape and even challenged our concept of reality.

Protect your sources. It’s the cardinal rule of journalism, and reporters hold this promise of confidentiality in the highest regard. Journalists will protect a source’s identity or withhold details of their conversations when revealing these truths would be morally objectionable or life-threatening to the source. Yet, some journalists have broken this sacred covenant when their own security or safety is on the line.

Ethical journalists act with integrity, seek the truth, and report on it. Telling a story of public interest requires transparency about who provided the information and how the reporter acquired it. Sometimes sources or experts will speak with a journalist only if the conversation is considered on background or deep background. These terms are part of a journalist’s reporting arsenal and should be used only when necessary. But some subjects, particularly those in positions of power, have used this type of attribution to their advantage.

Journalists have a long history of putting citizens first and holding power to account, and that shouldn’t change just because there’s a new owner on the masthead. Bill Kovach and Tom Rosenstiel go even further, writing in a Nieman Reports post that this responsibility is a “social obligation that can actually override their employers’ immediate interests at times.”