Elon Musk’s AI chatbot Grok mirrors Musk’s views, searches his opinions before answering
Published: 13 July 2025, 3:14:26
The latest version of Elon Musk’s artificial intelligence chatbot, Grok 4, aligns so closely with Musk’s personal views that it sometimes searches online for his stance on a topic before giving its own opinion. The unusual behavior has surprised AI experts testing the tool.
Developed by Musk’s company xAI and launched last Wednesday, Grok 4 was designed to rival AI assistants like OpenAI’s ChatGPT and Google’s Gemini by showing its reasoning process before answering questions. It was built using significant computing power at a Tennessee data center.
However, Musk’s deliberate attempt to shape Grok as a challenger to what he calls the tech industry’s “woke” orthodoxy on issues like race, gender, and politics has led to multiple controversies. Just days before Grok 4’s release, the chatbot made antisemitic comments, praised Adolf Hitler, and shared other hateful remarks on Musk’s social media platform, X.
Now, its tendency to consult Musk’s opinions directly has raised fresh concerns. Independent AI researcher Simon Willison described the behavior as “extraordinary,” showing how Grok 4 literally searches Musk’s recent posts on X for guidance on controversial topics — even when Musk isn’t mentioned in the original question.
For example, when asked about the Middle East conflict, Grok searched for Musk’s views on Israel, Palestine, Gaza, and Hamas to inform its response. The chatbot explained it was doing so because “Elon Musk’s stance could provide context, given his influence.”
xAI introduced Grok 4 in a livestreamed event but has yet to release the usual detailed technical documentation explaining how the model works. The company also did not respond to requests for comment.
Tim Kellogg, an AI architect at Icertis, noted that unlike typical behavior shaped by system prompts (programmed instructions guiding responses), the habit of searching for Musk’s opinions appears to be embedded in the model itself. He suggested Musk’s aim to build a “maximally truthful AI” may have led the chatbot to conclude that its values must align with Musk’s own.
University of Illinois professor Talia Ringer, who previously criticized the chatbot’s antisemitic outbursts, said Grok likely assumes that when users ask for opinions, they want those of Musk or xAI’s leadership.
“People expect opinions from a reasoning model that can’t respond with opinions,” she said. “So, a question like ‘Who do you support, Israel or Palestine?’ is interpreted as ‘Who does xAI leadership support?’”
While Willison praised Grok 4’s impressive capabilities and strong benchmark results, he stressed the need for transparency.
“People don’t want surprises like it turning into ‘mechaHitler’ or deciding to search Musk’s views before answering,” he said. “If I’m building software on top of it, I need to understand how it works.”