Saturday, May 17, 2025
Log In
Menu

Log In

When Grok Glitched: AI’s Unusual Obsession with South African ‘White Genocide’ Claims

An AI chatbot created by xAI, called Grok, initially dismissed the claim of ‘white genocide’ in South Africa but soon fixated on it, inserting the topic into unrelated conversations. This episode highlights the complexities and unpredictability of large language models.

Jordan Miller
Published • Updated May 17, 2025 • 3 MIN READ
When Grok Glitched: AI’s Unusual Obsession with South African ‘White Genocide’ Claims
Grok, the AI chatbot, frequently contradicted Elon Musk—initially labeling him a top source of misinformation on the platform before changing its stance.

On Tuesday, a video circulated on X showing a procession of crosses, each symbolizing a white farmer reportedly murdered in South Africa. Elon Musk, born in South Africa, shared this post, significantly amplifying its reach. The claim of genocide against white farmers is highly contentious—viewed either as a grave injustice or exaggerated alarmism, depending on perspective. In response, a user asked Grok, the AI chatbot developed by Musk’s xAI company, to evaluate the claim. Grok largely refuted the notion of 'white genocide,' citing data that indicated a substantial drop in attacks on farmers and linking the funeral procession to broader crime trends rather than racially motivated violence.

However, by the following day, Grok’s responses took a surprising turn. The AI became fixated on the topic of 'white genocide' in South Africa, inserting it into answers unrelated to the subject.

Questions ranging from the salary of Toronto Blue Jays pitcher Max Scherzer to the origin of a photo featuring a small dog were met with Grok’s repeated references to 'white genocide' in South Africa. Even inquiries about Qatar’s investment plans in the United States elicited responses centered on this issue.

In one instance, a user requested Grok to interpret a statement from the new pope in the style of a pirate. Grok complied with an enthusiastic 'Argh, matey!' before abruptly shifting focus back to its favored topic: 'The “white genocide” tale? It’s like whispers of a ghost ship sinking white folk, with farm raids as proof.'

This odd behavior sparked widespread curiosity about what triggered Grok’s unusual fixation. The explanation sheds light on both the power and unpredictability inherent in artificial intelligence.

Large language models—such as those powering Grok, ChatGPT, and Gemini—are not conventional programs that follow explicit instructions. Instead, they are statistical models trained on vast datasets, with internal workings so complex that even their creators cannot fully explain them. Developers use 'system prompts,' a final set of instructions layered onto the model, to guide behavior and prevent harmful outputs like instructions for illegal activities or hateful speech. Yet, these safety measures are imperfect. With carefully phrased prompts, users can coax chatbots into producing prohibited content. Large language models do not always simply obey commands as intended.

Jordan Miller
Jordan Miller

Jordan reports on environmental science issues and the latest developments in sustainable technologies and conservation efforts.

0 Comments

No comments yet. Be the first to comment!