Prompts.js

7th December 2024

I’ve been putting the new o1 model from OpenAI through its paces, in particular for code. I’m very impressed—it feels like it’s giving me a similar code quality to Claude 3.5 Sonnet, at least for Python and JavaScript and Bash... but it’s returning output noticeably faster.

I decided to try building a library I’ve had in mind for a while—an await ... based alternative implementation of the browser’s built-in alert(), confirm() and prompt() functions.

Short version: it lets you do this:

await Prompts.alert(
    "This is an alert message!"
);

const confirmedBoolean = await Prompts.confirm(
    "Are you sure you want to proceed?"
);

const nameString = await Prompts.prompt(
    "Please enter your name"
);

Here’s the source code and a live demo where you can try it out:

Animated demo of Prompts.js - three buttons: one to show an alert, one to show a confirm and one to show a prompt. The alert one shows an alert message; the confirm one asks if you want to proceed, with OK and Cancel buttons that return true or false; the prompt one asks for your name and returns it as a string, or null if you cancel.

I think there’s something really interesting about using await in this way.

In the past every time I’ve used it in Python or JavaScript I’ve had an expectation that the thing I’m awaiting is going to return as quickly as possible—that I’m really just using this as a performance hack to unblock the event loop and allow it to do something else while I’m waiting for an operation to complete.

That’s not actually necessary at all! There’s no reason not to use await for operations that could take a long time to complete, such as a user interacting with a modal dialog.
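
The trick is to wrap the interaction in a Promise that only resolves once the user has acted. Here’s a minimal hypothetical sketch of that pattern (not code from the library itself), to be run inside a module or an async function:

function waitForClick(button) {
  return new Promise((resolve) => {
    // Resolve the first time the user clicks - this could take minutes
    button.addEventListener("click", resolve, { once: true });
  });
}

// Execution genuinely pauses here until the click happens
await waitForClick(document.querySelector("#ok"));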

Having LLMs around to help prototype this kind of library idea is really fun. This is another example of something I probably wouldn’t have bothered exploring without a model to do most of the code writing work for me.

I didn’t quite get it with a single prompt, but after a little bit of back-and-forth with o1 I got what I wanted—the main thing missing at first was sensible keyboard support (in particular the Enter and Escape keys).

My opening prompt was the following:

Write me a JavaScript library - no extra dependencies - which gives me the following functions:

await Prompts.alert("hi there"); -> displays a modal with a message and waits for you to click OK on it
await Prompts.confirm("Are you sure") -> an OK and cancel option, returns true or false
await Prompts.prompt("What is your name?") -> a form asking the user's name, an OK button and cancel - if cancel returns null otherwise returns a string

These are equivalent to the browser builtin alert() and confirm() and prompt() - but I want them to work as async functions and to implement their own thing where they dull out the screen and show as a nicely styled modal

All CSS should be set by the Javascript, trying to avoid risk of existing CSS interfering with it

Here’s the full shared ChatGPT/o1 transcript.

I then got Google’s new gemini-exp-1206 model to write the first draft of the README, this time via my LLM tool:

cat index.js | llm -m gemini-exp-1206 -s \
  'write a readme for this suitable for display on npm'

Here’s the response. I ended up editing this quite a bit.

I published the result to npm as prompts-js, partly to exercise those muscles again—this is only the second package I’ve ever published there (the first was a Web Component).

This means it’s available via CDNs such as jsDelivr—so you can load it into a page and start using it like this:

<script
  src="https://cdn.jsdelivr.net/npm/prompts-js"
></script>
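
Once that script has loaded, the library is available as a global Prompts object (which is how the examples above work). Here’s a hypothetical sketch of using the CDN version directly in a page:

<script src="https://cdn.jsdelivr.net/npm/prompts-js"></script>
<script>
  // Assumes the script registers a global Prompts object
  async function greet() {
    const name = await Prompts.prompt("Please enter your name");
    if (name !== null) {
      await Prompts.alert("Hello, " + name + "!");
    }
  }
  greet();
</script>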

I haven’t yet figured out how to get it working as an ES module—there’s an open issue for that here.

Update: 0.0.3 switches to dialog.showModal()

I got some excellent feedback on Mastodon and on Twitter suggesting that I improve its accessibility by switching to using the built-in browser dialog.showModal().

This was a great idea! I ran a couple of rounds more with o1 and then switched to Claude 3.5 Sonnet for one last bug fix. Here’s a PR where I reviewed those changes.

I shipped that as release 0.0.3, which is now powering the demo.

I also hit this message, so I guess I won’t be using o1 as often as I had hoped!

You have 5 responses from o1 remaining. If you hit the limit, responses will switch to another model until it resets December 10, 2024.

Upgrading to unlimited o1 currently costs $200/month with the new ChatGPT Pro.

Things I learned from this project

Outsourcing code like this to an LLM is a great way to get something done quickly, and for me often means the difference between doing a project versus not bothering at all.

Paying attention to what the model is writing—and then iterating on it, spotting bugs and generally trying to knock it into shape—is also a great way to learn new tricks.

Here are some of the things I’ve learned from working on Prompts.js so far:

  • The const name = await askUserSomething() pattern really does work, and it feels great. I love the idea of being able to await a potentially lengthy user interaction like this.
  • HTML <dialog> elements are usable across multiple browsers now.
  • Using a <dialog> means you can skip implementing an overlay that dims out the rest of the screen yourself—that will happen automatically.
  • A <dialog> also does the right thing with respect to accessibility and preventing keyboard access to other elements on the page while that dialog is open.
  • If you set <form method="dialog"> in a form inside a dialog, submitting that form will close the dialog automatically.
  • The dialog.returnValue will be set to the value of the button used to submit the form (see the sketch after this list).
  • I also learned how to create a no-dependency, no-build-step, single-file npm package and how to ship that to npm automatically using GitHub Actions and GitHub Releases. I wrote that up in this TIL: Publishing a simple client-side JavaScript package to npm with GitHub Actions.
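
To make those <dialog> mechanics concrete, here’s a minimal hypothetical sketch of the pattern (not the library’s actual source), combining showModal(), <form method="dialog"> and dialog.returnValue:

function confirmDialog(message) {
  return new Promise((resolve) => {
    const dialog = document.createElement("dialog");
    const p = document.createElement("p");
    p.textContent = message;
    const form = document.createElement("form");
    // Submitting a method="dialog" form closes the dialog automatically
    form.setAttribute("method", "dialog");
    form.innerHTML =
      '<button value="cancel">Cancel</button>' +
      '<button value="ok">OK</button>';
    dialog.append(p, form);
    document.body.appendChild(dialog);
    dialog.addEventListener("close", () => {
      // returnValue is the value of the button that submitted the form
      resolve(dialog.returnValue === "ok");
      dialog.remove();
    });
    // showModal() dims the page and traps keyboard focus for us
    dialog.showModal();
  });
}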