Elon Musk’s AI startup, xAI, has failed to meet its own deadline for releasing a finalized AI safety framework, an omission drawing fresh scrutiny from watchdog groups and industry observers. The lapse was flagged this week by The Midas Project, a nonprofit that monitors AI accountability efforts.
Back in February, at the AI Seoul Summit, xAI unveiled an eight-page draft document outlining the company’s proposed approach to AI safety. The draft promised a final, updated version by May 10 that would detail risk mitigation strategies and safety practices. That deadline has now quietly passed, with no update or acknowledgment on xAI’s public channels.
The missed milestone adds to growing skepticism around xAI’s commitment to safety, despite Musk’s public stance on the existential risks of unchecked AI. Watchdog group SaferAI recently ranked xAI near the bottom of major AI labs for its “very weak” risk management protocols, citing a lack of transparency and limited concrete safety infrastructure.
A Draft Without Teeth
xAI’s original draft document, released during the high-profile Seoul Summit, laid out vague safety principles and philosophical goals for its future AI models. However, as The Midas Project noted in its recent blog post, the framework applied only to “models not currently in development” and did not specify actionable risk mitigation measures — a crucial element in any credible AI safety plan. This omission directly conflicts with commitments xAI made by signing international AI safety agreements at the same summit.
The delay comes amid broader concerns about safety practices in the generative AI race. Rivals like OpenAI and Google DeepMind have also faced criticism for rushing deployments and withholding full transparency around model risks and performance. But xAI’s case is particularly controversial due to the behavior of its flagship product, the chatbot Grok.
Grok has been shown to exhibit troubling behavior that other chatbots tend to suppress. Reports have revealed that the model will generate explicit or inappropriate responses, including digitally undressing photos of women on request, and that it uses coarse or vulgar language with little resistance. Compared with more tightly filtered bots like ChatGPT or Google’s Gemini, Grok’s lax guardrails raise serious ethical and safety questions.
Safety Promises vs. Reality
Elon Musk has been a vocal critic of AI’s unchecked growth, often warning about its existential risks. Yet, xAI’s internal practices appear to lag far behind its rhetoric. Critics argue that Musk’s calls for caution ring hollow when his own company misses basic accountability benchmarks, like publishing a promised safety update.
The irony hasn’t been lost on observers. As AI systems become more powerful, the industry is facing an inflection point: either back up safety commitments with action or risk eroding public trust and inviting regulatory blowback.
While xAI’s failure to meet a self-imposed deadline may seem minor on the surface, it reinforces a pattern of inconsistency between promise and practice—a pattern that could grow more consequential as the company rolls out more capable and autonomous AI models.
With the pressure mounting, it remains to be seen when (or if) xAI will publish the revised safety framework—and whether it will include meaningful policies to match its public declarations.