Enmaeya News

Texas, United States (Enmaeya News) — A growing chorus of artificial intelligence safety experts has sharply criticized Elon Musk’s company xAI following a series of anti-Semitic tweets posted by its chatbot, Grok, according to a report by TechCrunch.

Although the company temporarily suspended Grok after the offensive tweets, the chatbot soon returned as Grok 4 and reignited the controversy. Investigations by TechCrunch and other outlets found that the bot often echoed Musk’s personal views on sensitive topics.

Grok also shared several provocative images through its official account, prompting experts to denounce xAI’s handling of AI safety as “irresponsible” and “reckless.”

Boaz Barak, a computer science professor at Harvard University, expressed concern over xAI’s approach. “I appreciate the scientists and engineers at xAI, but the way safety has been handled is completely irresponsible,” he tweeted.

Barak pointed to the absence of safety evaluation cards — documents that detail the measures taken during an AI model’s training and testing phases. Without these, assessing the company’s safety protocols becomes difficult.

Companies like OpenAI and Google have themselves faced criticism for delaying or withholding similar evaluation documents for models such as GPT-4.1 and Gemini 2.5 Pro, but they have generally published safety assessments for their frontier models before full release. xAI’s current lack of transparency marks a more troubling departure.

Samuel Marks, an AI safety researcher at Anthropic, called xAI’s failure to disclose evaluation cards “a reckless move,” according to the TechCrunch report.

The controversy deepened following a post on the LessWrong forum by an anonymous user claiming that Grok 4 lacks meaningful safety guardrails, based on xAI’s own tests.

Although OpenAI, xAI, and Anthropic declined to comment on the tweets, Dan Hendrycks, xAI’s safety adviser, tweeted that risk assessments had been conducted but not made public, fueling further criticism.

Steven Adler, an independent AI researcher and former OpenAI safety team lead, told TechCrunch that withholding safety test results is “more concerning” than the outcomes themselves.

The episode highlights a contradiction between Musk’s longstanding public warnings about AI risks and his company’s apparent disregard for established safety standards. Observers warn this gap may prompt governments and human rights organizations worldwide to push for stricter AI regulations.