OpenAI is preparing to release a new “open” language model, its first since GPT-2. The model, currently in early development, is expected to launch by early summer. Aidan Clark, OpenAI’s VP of research, is leading the effort, according to sources cited by TechCrunch.
This upcoming model is designed for reasoning tasks and aims to outperform other open-source reasoning models. It will follow a simple “text in, text out” structure and is meant to run efficiently on high-end consumer hardware. The company may also let developers toggle the model’s reasoning feature on and off, mirroring functionality in models from Anthropic and others, where enabling reasoning improves accuracy at the cost of added latency.
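None of this tooling has shipped yet, but as a rough illustration of what a “text in, text out” interface with an optional reasoning toggle could look like for a locally run open-weights model, here is a minimal Python sketch. The function name, parameters, and behavior are hypothetical assumptions based only on the description above, not OpenAI’s published API.

```python
# Hypothetical sketch only: the model, its weights, and its tooling have not
# been released. The function name, parameters, and the `reasoning` flag are
# illustrative assumptions drawn from the reported "text in, text out" design
# and the optional reasoning toggle described above.

def generate(prompt: str, reasoning: bool = True, max_tokens: int = 512) -> str:
    """Text in, text out: accept a prompt string, return a completion string.

    With reasoning enabled, the model would spend extra tokens on internal
    deliberation (better accuracy, higher latency); with it disabled, it
    would answer directly (faster, potentially less accurate).
    """
    # In a real integration, an open-weights model would be loaded and run
    # locally here, e.g. on the high-end consumer hardware the report mentions.
    mode = "reasoning" if reasoning else "direct"
    return f"[{mode}-mode placeholder answer to {prompt!r}, budget {max_tokens} tokens]"


if __name__ == "__main__":
    # The reported trade-off in practice: reasoning off for quick lookups,
    # on for harder problems where accuracy matters more than latency.
    print(generate("Summarize the difference between open weights and open source.",
                   reasoning=False))
    print(generate("Prove that the product of two odd integers is odd.",
                   reasoning=True))
```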
OpenAI intends to make the model available under a permissive license with minimal usage restrictions. Sources say the goal is to avoid the licensing pitfalls seen with other open models such as Meta’s Llama and Google’s Gemma, which have drawn criticism for being too restrictive.
A Shift in Strategy
This move comes amid growing pressure from global competitors that are leaning into open access. Chinese AI lab DeepSeek, for example, has gained traction by releasing models for both experimentation and commercial use. Meta has also embraced the open model strategy with its Llama series, which had surpassed 1 billion downloads by March.
OpenAI, long criticized for its tight control over proprietary models, now appears to be rethinking its position. CEO Sam Altman has admitted that the company may have taken the wrong approach. In a Reddit Q&A earlier this year, he said OpenAI needs to “figure out a different open source strategy.” Not everyone at OpenAI shares his view, and Altman has also acknowledged that the company’s future models may not dominate the field the way GPT-3 and GPT-4 did.
The upcoming model may signal the start of a broader shift. If the launch is well received, OpenAI may release additional versions, including smaller models optimized for different use cases.
Safety, Transparency, and Trust
OpenAI says it plans to red-team the model and assess it under its Preparedness Framework, a step it considers especially important because the model will be modifiable after release. The company also intends to publish a detailed model card documenting results from internal and external safety testing and benchmarking.
Altman reaffirmed these steps in a recent X post, stating: “[B]efore release, we will evaluate this model according [to] our preparedness framework… and we will do extra work given that we know this model will be modified post-release.”
OpenAI has previously been criticized for skipping model cards and rushing safety assessments. These concerns grew after Altman’s brief ouster in late 2023, when he was accused of withholding details about safety reviews from other executives.
Still, the company hopes this new release will reset expectations around transparency. By opening access and documenting the model’s capabilities and limitations, OpenAI seems intent on restoring some of the trust it has lost.
If successful, this release could help OpenAI reclaim ground lost to open-source competitors. It might also provide a blueprint for future models that are both powerful and broadly accessible, a balance the company has so far struggled to strike.