Meet yet another AI chatbot: Grok. Grok is Elon Musk’s answer to ChatGPT, with one big difference: access to real-time information on the social media platform X. This means Grok can respond to prompts about current events or viral posts.
Here’s another unique feature: Grok answers questions with “a bit of wit and a rebellious streak,” according to the xAI team. It can also respond to “spicy questions” that other AI bots reject.

This is not a hypothetical question or distant possibility; AI-generated content is already influencing voters. Although many state and federal lawmakers are scrambling to safeguard the upcoming election, a growing number of experts are sounding the alarm, warning that the U.S. is woefully unprepared for the growing threat of AI-generated propaganda and disinformation. In the 14 months since ChatGPT’s debut, this new AI technology has flooded the internet with lies, reshaped the political landscape and even challenged our concept of reality.

Protect your sources. It’s the cardinal rule of journalism, and reporters hold this promise of confidentiality in the highest regard. Journalists will protect a source’s identity or withhold details of their conversations when revealing these truths would be morally objectionable or life-threatening to the source. Yet, some journalists have broken this sacred covenant when their own security or safety is on the line.

Ethical journalists act with integrity, seek the truth and report it. Telling a story of public interest requires transparency about who provided the information and how the reporter acquired it. Sometimes sources or experts will speak with a journalist only if the conversation is considered on background or deep background. These terms are part of a journalist’s reporting arsenal and should be used only when necessary. But some subjects, particularly those in positions of power, have used this type of attribution to their advantage.