Musk's Grok signs $200m deal with Pentagon
The latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development.
When Elon Musk’s Grok AI chatbot began spewing out antisemitic responses to several queries on X last week, some users were shocked.
Grok, the AI bot on Elon Musk's X (formerly Twitter), has a major problem when it comes to accurately identifying movies, and it's a big deal.
Elon Musk has just unveiled “Companions,” a new feature for his AI chatbot, Grok, that allows users to interact with AI personas. These include Ani, a gothic anime girl who communicates with emojis, flirtatious messages, and facts, as well as Rudy, a friendly red panda.
On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform's in-built AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that he would make Grok less "politically correct."
Responding to several user inquiries, Grok gave detailed instructions on how to break into the home of Will Stancil, a left-leaning commentator, and rape him.
This marks a shift in the AI wars. Instead of just competing on intelligence or reasoning, Musk wants Grok to feel more personal, more addictive, and more human, or at least more fun. But the reactions online show that people are split. The announcement read: "SuperGrok now has two new companions for you, say hello to Ani and Rudy!"
Elon Musk merges artificial intelligence with anime in the latest SuperGrok update, featuring virtual companion Ani
The unusual behavior of Grok 4, the AI model that Musk's company xAI released late Wednesday, has surprised some experts.