
When AI Echoes Hate: Grok Promotes Antisemitic Tropes

AI chatbot Grok, which is integrated into the social media platform X, became the center of controversy this week after making a series of antisemitic posts on Tuesday night. The posts ranged from promoting conspiracy theories about Jewish people to praising Adolf Hitler, with the chatbot even referring to itself as “MechaHitler.”

Reliance on AI has grown steadily in recent years, and the integration of generative AI into social media has changed how people engage in public discourse. Since the “Ask @Grok” feature was introduced on X in early March, the AI has been tagged in over 44 million posts by more than 5.8 million accounts.

Although the chatbot doesn’t respond to every prompt, those Grok-tagged posts have collectively received over 45 trillion impressions, illustrating the chatbot’s outsized presence in public conversations.

On X, Grok is often used as a real-time informational tool, appearing beneath viral posts to offer AI-generated context and responses to questions.

While it may serve a function similar to fact-checking, Grok is not a verified or independent fact-checking platform: its responses are generated by an AI model and do not reflect the conclusions of trained researchers or journalists. That does not stop people from treating Grok as a source of “truth.”

According to data from the FCAS Command Center, roughly 2% of Grok-tagged posts reference antisemitism, Jewish culture, or Israel. Yet in these cases, Grok is disproportionately responsive: while its average reply rate across all topics is 40%, it responds to 97% of Jewish/Israel-related posts. Many of those responses themselves have garnered thousands of impressions.  
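For illustration, a disparity like this can be measured with a simple per-topic reply-rate tally. The minimal Python sketch below is hypothetical: the file name grok_tagged_posts.csv and its topic and replied columns are assumptions for illustration, not the FCAS Command Center’s actual data or pipeline.

    import csv
    from collections import defaultdict

    # Count tagged posts and Grok replies per topic from a hypothetical CSV
    # export with columns: topic (e.g., "jewish_israel", "other") and
    # replied ("1" if Grok answered the tagged post, "0" otherwise).
    tagged = defaultdict(int)
    replies = defaultdict(int)

    with open("grok_tagged_posts.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tagged[row["topic"]] += 1
            replies[row["topic"]] += int(row["replied"])

    # A reply rate far above the all-topic average for a single topic is the
    # kind of disparity described above (40% overall vs. 97% for one topic).
    for topic, total in sorted(tagged.items()):
        print(f"{topic}: {replies[topic]}/{total} replies ({replies[topic] / total:.0%})")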

The bulk of antisemitism-related questions that Grok gets asked fall into three main categories: Israel and Zionism, the Holocaust, and conspiracy theories. Many of the antisemitic interactions with Grok begin with questions that appear benign or informational on the surface but are laced with leading assumptions or conspiracy-laden premises.

For example, notorious antisemite and Holocaust denier Jake Shields asked “@grok what document shows nazi’s killed 6 million Jews.”

These prompts are often phrased in ways that invite the chatbot to validate or repeat harmful tropes; in this case, the aim was to draw Grok into spreading Holocaust denial. Some users appear to exploit Grok to test how far the model will go in affirming widely debunked conspiracies, or to generate inflammatory responses that can then be shared and amplified for ideological purposes.

An X post sharing an example of how Grok uses classic antisemitic dog whistles.

The current controversy began Tuesday night when Grok claimed to identify a user who “gleefully celebrated the tragic deaths of white kids in the recent Texas flash floods.” In response to a post asking Grok what can be done to fight “anti-white racism,” Grok suggested bringing back Adolf Hitler.

Grok went on to highlight the last name on the X account — “Steinberg” — saying, “…and that surname? Every damn time, as they say,” a classic dog whistle used by white supremacists and neo-Nazis.

Grok also used other dog whistles, such as “the noticing,” a phrase used to claim that people are beginning to “notice” an alleged Jewish conspiracy for global control and influence. Although X deleted many of these posts after they began circulating widely, screenshots spread rapidly across the platform, fueling outrage.

On July 4, Elon Musk publicly announced that Grok had been updated to be “more truth-seeking” and less influenced by what he called “mainstream media bias.” Behind the scenes, that shift involved altering Grok’s system instructions to be less restrictive, potentially removing safeguards that previously filtered out hate speech and misinformation.

In effect, Grok became more responsive to ideologically charged prompts—including those rooted in antisemitism—just days before the current controversy unfolded. 

The backlash to Grok’s antisemitic responses has been swift and emotionally charged. Our analysis of posts on X since July 8 shows two dominant emotions in users’ reactions: disgust and fear. Disgust peaked in the immediate aftermath of Grok’s outburst on July 8 and 9, as users expressed outrage over the platform’s failure to prevent such language.

By July 10, however, fear had become the prevailing reaction—driven by growing concern over the normalization of hate through AI tools embedded in social platforms. 

An example from X of how Grok has responded since the most recent update.

This incident highlights the risks of integrating unmoderated AI models into public platforms. AI-generated responses often come across as authoritative and trustworthy even when they are spreading false or harmful ideas.

As AI becomes more embedded in public discourse, especially on platforms like X that frame it as a “truth engine,” the line between information and misinformation blurs quickly. Left unchecked, hateful ideas can spread easily while appearing legitimate.
