We’re planning to release a very capable open language model in the coming months, our first since GPT-2. [...]
As models improve, there is more and more demand to run them everywhere. Through conversations with startups and developers, it became clear how important it was to be able to support a spectrum of needs, such as custom fine-tuning for specialized tasks, more tunable latency, running on-prem, or deployments requiring full data control.
— Brad Lightcap, COO, OpenAI