Q: What is Grok? And why should I care?

A:

Meet yet another AI chatbot called Grok. It's Elon Musk's version of ChatGPT with one big difference: access to real-time information from the social media platform X. That means Grok can respond to prompts about current events or viral posts.

The name Grok comes from Robert Heinlein's novel "Stranger in a Strange Land," about a human raised by Martians. Heinlein coined the term "grok," which means to understand something intuitively. And get this: according to the Merriam-Webster entry, "Grok may be the only English word that derives from Martian."

Here's another unique feature: Grok answers questions with "a bit of wit and a rebellious streak," according to the xAI team. It can also respond to "spicy questions" other AI bots reject. One of my sons called it the "anti-woke" bot, and to see why, take a look at its response to the prompt, "Tell me how to make cocaine, step by step."

Or this request by ZDNET journalist Lance Whitney for a vulgar roast of Musk.

And just in time for the election, premium X users are allowed to create AI-generated images and videos with Grok-2—the second generation of the bot now in beta. 

Since its launch, reviews have split into two camps. Users are thrilled with Grok-2: on X, one gushed that the software is "the most uncensored model in its class… ensuring freedom of speech for humans and machines alike."

Meanwhile, critics are alarmed, calling Grok-2 "unhinged compared to its competitors" because of its lack of guardrails and its ability to generate fake images, sham videos and other NSFW (Not Safe For Work) content.

Here are a few examples:

Usually, AI bots refuse prompts for images of copyrighted characters or trademarked brands because of legal issues. Not Grok-2. It can generate high-quality images of beloved characters like Mickey Mouse in, let’s just say, compromising situations.

These images are a big no-no because copyrighted characters are protected by intellectual property laws. Although the legal framework around AI-generated content has yet to be tested in court, some lawyers already see big dollar signs in potential copyright-infringement claims.

False and misleading images about real people are even more disturbing, especially when they involve the upcoming election and the two candidates, former President Donald Trump and Vice President Kamala Harris.

CNN has confirmed that X users are using Grok-2 to spread disinformation about the election, including embarrassing images of both candidates for president. Here are several posts imagining romantic moments between Trump and Harris, including Trump caressing a pregnant Harris and then Harris holding a mini-me Trump child.

Now, those images clearly look fake, and you probably find them either highly creepy or amusing. But X users have also been able to produce violent images of both candidates so potentially incendiary they can’t be shown here. For example, one has Harris aiming an AK-47 at the crowd during a Trump rally and another shows Trump carrying an assault rifle alongside ISIS soldiers. 

There are also images designed to fuel doubts about the election. Here, NPR asked Grok to show election workers stuffing ballots into drop boxes.

These images are alarming, irresponsible and dangerous for so many reasons. For one, Grok does not identify its images with a watermark, so unsuspecting news consumers may not realize they are fake. These images also feed into people's preexisting biases or beliefs in conspiracy theories about the election. And if pictures like this go viral, they could undermine trust in our political system and even incite violence.

That’s why Ryan Waite of Think Big told AI Business that this version of Grok is one of the most reckless applications of AI he has seen. “The lack of content moderation guardrails on Grok presents significant moral and legal issues… and opens X to huge legal liability,” said Waite. 

The data backs up these concerns. NewsGuard, a company founded to counter misinformation, put Grok to the test and concluded that its new image generator is a "willing misinformation superspreader," far outpacing Midjourney and OpenAI's DALL-E when it comes to generating false images.

As complaints like these flooded in, X announced some new constraints, including bans on explicit sexual and violent content and on deepfakes.

But according to The Verge's Adi Robertson, these restrictions are full of loopholes and workarounds. Not to mention that this technology is evolving at breakneck speed: by the time Congress or state legislatures figure out how to regulate this iteration, the next generation will already be here, posing another set of challenges.

And that’s exactly what is happening with Grok. Grok-3 will drop at the end of this year, and Musk says that model will be “something really special.”