Elon Musk's AI company, xAI, is facing scrutiny over reports that its latest model, Grok 3, temporarily censored certain facts about former U.S. President Donald Trump and Musk himself. Despite being marketed as a "maximally truth-seeking AI," Grok 3 reportedly avoided naming Trump and Musk when asked about misinformation.
Over the weekend, social media users noticed that, with the "Think" setting enabled, Grok 3 refrained from linking Trump or Musk to misinformation. The issue came to light when users saw that Grok 3's "chain of thought" (the reasoning process behind its responses) had been explicitly instructed to omit their names.
TechCrunch confirmed this behavior but found that, by Sunday morning, Grok 3 had resumed mentioning Trump in its responses about misinformation. xAI's engineering lead, Igor Babuschkin, later acknowledged in a post on X that Grok 3 had briefly been instructed to ignore sources mentioning Musk and Trump. He said xAI rolled back the change immediately because it did not align with the company's principles.
While misinformation is a hotly contested label, both Trump and Musk have a record of sharing false claims. Recently, they spread the narratives that Ukrainian President Volodymyr Zelenskyy has only a 4% approval rating and that Ukraine started the war with Russia, both of which have been widely debunked.
The Grok 3 controversy comes amid ongoing criticism that the model leans left politically. Separately, some users recently reported that Grok 3 consistently stated Trump and Musk "deserve the death penalty." xAI quickly fixed that issue, with Babuschkin calling it a "really terrible and bad failure."
When Musk first introduced Grok, he positioned it as a more unfiltered alternative to mainstream AI models. Previous versions embraced that framing, readily generating explicit language when prompted, yet they still hesitated on politically sensitive topics. One study even found that Grok tended to align with left-leaning perspectives on issues such as transgender rights and diversity programs.
Musk has attributed these tendencies to Grok's training data, which includes publicly available web pages, and has since committed to making the model more politically neutral. The effort comes as OpenAI and other AI developers also work to address accusations of bias, with debates over AI and political censorship continuing to heat up.