OpenAI DevDay 2025 live blog
6th October 2025
I’m at OpenAI DevDay in Fort Mason, San Francisco today. As I did last year, I’m going to be live blogging the announcements from the keynote. Unlike last year, this year there’s a livestream.
09:55 There were genuinely more self-driving Waymos on the road directly outside the venue than regular cars!
10:10 Keynote starts in one minute.
10:12 Sam Altman is up. 2023: 2m weekly developers, 100m weekly ChatGPT users. 200m tokens/minute in the API. Today: 4m developers have built with OpenAI, 800m+ weekly ChatGPT users, 6b tokens/minute through the API.
10:15 Today's focus: how we're making it easier to build with AI. Four things: 1. Building apps inside ChatGPT (presumably Canvas, or an upgrade to that?) 2. Building agents 3. Writing code 4. Updates to models and APIs
10:15 In the past they’ve tried things like GPTs and adopted standards like MCP. (No mention of ChatGPT plugins at all, which was the original experiment in this space.) Today: the “Apps SDK”. You get to trigger actions and render a full UI. Built on MCP to give you control over your backend logic. They’re publishing a standard for this. They’re integrating both login *and* monetization, including their new checkout protocol.
10:16 So this looks like ChatGPT itself as an application platform. Shows a demo with a "Use Spotify for this answer" button that appears after you ask it to create a playlist.
10:17 Alexi Christakis is on stage for a demo. Starts with Coursera - types "coursera can you teach me something about machine learning" and the Coursera app is suggested below, which then shows a consent dialog. "Coursera is now connected".
10:17 Apps in ChatGPT are displayed by default inline. This feels like a new evolution of the Claude Artifacts / ChatGPT Canvas pattern.
10:19 Apps SDK lets you expose context back to ChatGPT from your app - so you can keep ChatGPT updated with what your user is interacting with, to enable it to answer follow-up questions. In the Coursera demo the question "what are they talking about right now?" is answered based on what's going on in the embedded video at the point where it is playing.
10:20 "If you have an existing MCP it's really quick to enhance it with the Apps SDK" - you add a resource that returns HTML.
10:21 Next demo: asking Canva for poster designs. These show up inline, similar to how ChatGPT image generation works already.
But... apps can also use a "full screen" mode which takes up much more of the available UI.
10:22 "Based on our conversations, what would be a good city to expand the dog walking to?" - ChatGPT enthusiastically suggests Pittsburg... so Alexi asks the Zillow app for homes for sale there, which then display on an embedded map in the ChatGPT interface.
10:23 "Can you filter this to just the three bedroom homes with a yard?" - ChatGPT updates the existing Zillow map with the new filtered data.
10:24 At this point there's a full embedded version of Zillow in the ChatGPT interface. Select a property, ask "how close is this to a dog park?" and ChatGPT answers with regular text.
10:24 Meanwhile Canva in another ChatGPT window has just finished producing a pitch deck for the dog walking business.
10:25 (I'm hoping to hear more about monetization.)
10:25 These initial partner apps from Zillow and Canva (and presumably others) will be available in ChatGPT today.
10:26 There's going to be an app directory too - for any apps that "meet the standards in our developer guidelines".
10:26 OK, that was the first section - "Apps inside ChatGPT". Next up, "Building agents". Let's see which definition of agents they use...
10:27 I think they went with "Systems that do things for you".
10:27 Sam notes that, despite the excitement, very few agents matching that definition have successfully shipped.
10:27 AgentKit is "a complete set of building blocks" available in the OpenAI platform. Everything you need to "build, deploy and optimize agentic workflows".
10:29 Agent Builder: helps you build agents. From the screenshot it looks like one of those drag-boxes-and-lines visual programming tools. They have ChatKit, to help build chat interfaces. And Evals - to help you evaluate the performance of your agent.
10:29 "And of course, agents need access to data". The new "Connectors" mechanism lets you attach agents to different data sources.
10:30 This sounds to me like a lethal trifecta problem. If you give agents access to your private data through different connectors you need to be very confident that there aren't any exfiltration holes an attacker might use to steal that data!
10:31 Christina Huang now on stage to talk about AgentKit.
10:32 What if the DevDay website could "help you navigate the day and point you to the sessions most relevant to you"? They're going to live code and ship that in 8 minutes.
10:33 Starts with a "new workflow" in Agent Builder. It's a drag-and-drop boxes-and-lines software building tool, similar to n8n.
10:36 I'll be honest, I got a little bit lost in the demo. My problem with this kind of programming tool is that the key details inevitably end up tucked away in dozens of little preference panels. I prefer my code in a file!
10:38 An OpenAI agent is a chat interface with a system prompt and a set of configured tools. At the end she hits "publish" and gets a URL and embeddable code.
10:38 I wonder if we'll be able to set a spending cap on these? I'd like to ship one on my website and have it automatically stop working if it costs me more than $5 in a given day.
10:40 I really hope this visual builder can be swapped out for a source code view that I can manage with Git.
10:41 I now have an "Ask Froge" link when I visit https://openai.com/devday/directory/ which brings up the new agent built during that demo.
10:42 Sam is back on stage. Next topic is helping people write software. Starts with some anecdotes of people around the world who are using ChatGPT to learn to code and build applications.
10:43 Codex runs on GPT-5-Codex now. Since early August the number of messages through Codex (I assume they mean Codex CLI? It's not clear) has gone up 10x.
10:44 Sam says today Codex is out of preview and into GA. But which one? Codex CLI, Codex Cloud, or both?
10:44 Today: Slack integration, enterprise analytics and a Codex SDK.
10:45 Now on stage: Romain Huet.
10:45 It's going to be a Codex live coding demo on stage.
10:48 Romain had Codex CLI build a simple ASCII-art-style interface for controlling a camera mounted above the stage. Codex suggested VISCA over IP to control that camera. It worked for over 13 minutes on this task, so Romain brought that bit pre-prepared.
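For the curious: VISCA over IP is Sony's camera control protocol, where classic VISCA byte sequences get wrapped in a small UDP header. A minimal sketch of sending a "pan left" command from Node.js - the camera address and speed values are placeholders:

```typescript
import dgram from "node:dgram";

const CAMERA_HOST = "192.168.1.100"; // placeholder address
const VISCA_PORT = 52381;            // Sony's standard VISCA-over-IP port

// VISCA Pan/Tilt Drive command: 81 01 06 01 VV WW 01 03 FF = pan left
// (VV = pan speed, WW = tilt speed, 01/03 = pan-left/tilt-stop)
const panLeft = Buffer.from([
  0x81, 0x01, 0x06, 0x01, 0x08, 0x08, 0x01, 0x03, 0xff,
]);

// VISCA over IP adds an 8-byte header to each message:
// payload type (0x0100 = command), payload length, 4-byte sequence number
function wrap(payload: Buffer, seq: number): Buffer {
  const header = Buffer.alloc(8);
  header.writeUInt16BE(0x0100, 0);
  header.writeUInt16BE(payload.length, 2);
  header.writeUInt32BE(seq, 4);
  return Buffer.concat([header, payload]);
}

const socket = dgram.createSocket("udp4");
socket.send(wrap(panLeft, 1), VISCA_PORT, CAMERA_HOST, () => socket.close());
```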
10:49 The demo works so far: the camera output is shown live on stage and the buttons to pan it work correctly.
10:50 Now Romain has it add Xbox controller support. That task is running in the background now.
10:51 OpenAI's realtime API can respond to voice. Romain had Codex Cloud build an MCP server to control the lights in the keynote room (again run in advance). He shows how it looked up documentation along the way for things it needed to complete the task.
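I'd guess the generated server has roughly this shape, using the MCP TypeScript SDK's tool API - the tool name, parameters and the lighting stub are invented stand-ins:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "stage-lights", version: "0.1.0" });

// Stub: a real server would speak DMX/Art-Net or a vendor API here
async function sendToLightingConsole(direction: string, brightness: number) {
  console.log(`(pretend) lights -> ${direction} at ${brightness}%`);
}

// Hypothetical tool the voice agent could call to move the lights
server.tool(
  "set_lights",
  "Point the stage lights and set their brightness",
  {
    direction: z.enum(["stage", "audience"]),
    brightness: z.number().min(0).max(100),
  },
  async ({ direction, brightness }) => {
    await sendToLightingConsole(direction, brightness);
    return {
      content: [
        { type: "text", text: `Lights set: ${direction} at ${brightness}%` },
      ],
    };
  }
);

await server.connect(new StdioServerTransport());
```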
10:52 Back to the Codex tab in VS Code, which has now finished the Xbox controller task. It works.
10:54 Now does the voice control for the lights work? Live demo time... The voice demo responds to "what can you see on the camera?". Then on "can you shine the lights towards the audience" it turns the lights above the stage towards the audience.
10:54 Asked to do something "fun with the lights" it didn't really do much.
10:55 Romain emphasizes that he hasn't manually written a single line of code.
10:56 Cute closing demo: the voice agent has access to the Codex SDK, so Romain tells it to write code to create a movie-style credits animation listing all attendees of DevDay.
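I haven't dug into the Codex SDK docs yet, but based on what was shown I'd expect driving it from TypeScript to look roughly like this - the package name and method names here are my assumptions:

```typescript
import { Codex } from "@openai/codex-sdk"; // assumed package name

const codex = new Codex(); // assumed to pick up auth from the environment

// Each thread is a persistent Codex session you can send follow-up turns to
const thread = codex.startThread();
const result = await thread.run(
  "Write code for a movie-style credits animation listing every DevDay attendee"
);

console.log(result.finalResponse); // assumed shape of the run result
```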
10:57 Sam is back. One section left: model updates.
10:57 Today launching GPT-5 Pro in the API.
10:58 And gpt-realtime-mini - 70% cheaper than the current full realtime API.
10:58 ... and Sora 2 in the API. The same model they use in the Sora app.
11:00 Here's the GPT-5 Pro model page. $15/million tokens for input, $120/million for output! Still cheaper than o1-pro was ($150/$600).
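Assuming GPT-5 Pro works through the Responses API like the other recent models, calling it should be as simple as this (the gpt-5-pro model ID is my guess at the identifier; the prompt is mine):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.responses.create({
  model: "gpt-5-pro", // assumed model ID
  input: "Summarize the DevDay 2025 keynote announcements in three bullets.",
});

// output_text is the SDK's convenience accessor for the text output
console.log(response.output_text);
```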