Simon Willison’s Weblog


Notes from my Accessibility and Gen AI podcast appearance

2nd March 2025

I was a guest on the most recent episode of the Accessibility + Gen AI Podcast, hosted by Eamon McErlean and Joe Devon. We had a really fun, wide-ranging conversation. I’ve extracted a few choice quotes from the transcript.

LLMs for drafting alt text

I use LLMs for the first draft of my alt text (22:10):

I actually use Large Language Models for most of my alt text these days. Whenever I tweet an image or whatever, I’ve got a Claude project called Alt text writer. It’s got a prompt and an example. I dump an image in and it gives me the alt text.

I very rarely use it as-is, because that’s rude, right? You should never dump text onto people that you haven’t reviewed yourself. But it’s always a good starting point.

Normally I’ll edit a tiny little bit. I’ll delete an unimportant detail or I’ll bulk something up. And then I’ve got alt text that works.

Often it’s actually got really good taste. A great example is if you’ve got a screenshot of an interface, there’s a lot of words in that screenshot and most of them don’t matter.

The message you’re trying to give in the alt text is that it’s two panels: there’s a conversation on the left and a preview of the SVG file on the right, or something like that. My alt text writer normally gets that right.

It’s even good at summarizing tables of data where it will notice that actually what really matters is that Gemini got a score of 57 and Nova got a score of 53—so it will pull those details out and ignore [irrelevant columns] like the release dates and so forth.

Here’s the current custom instructions prompt I’m using for that Claude Project:

You write alt text for any image pasted in by the user. Alt text is always presented in a fenced code block to make it easy to copy and paste out. It is always presented on a single line so it can be used easily in Markdown images. All text on the image (for screenshots etc) must be exactly included. A short note describing the nature of the image itself should go first.
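That prompt is essentially the whole configuration: a Claude Project is just a system prompt, plus in my case one example. If you wanted to reproduce the same trick outside the Claude web interface, a minimal sketch using the Anthropic Python SDK might look like this. The model name, file path, and trailing user message here are illustrative assumptions on my part, not something the Claude Project itself specifies:

```python
# Sketch: asking Claude for alt text via the Anthropic Python SDK.
# Assumptions: the model name, image path, and user message are illustrative;
# my actual "Alt text writer" runs as a Claude Project in the claude.ai UI.
import base64

import anthropic

SYSTEM_PROMPT = (
    "You write alt text for any image pasted in by the user. "
    "Alt text is always presented in a fenced code block to make it easy "
    "to copy and paste out. It is always presented on a single line so it "
    "can be used easily in Markdown images. All text on the image (for "
    "screenshots etc) must be exactly included. A short note describing "
    "the nature of the image itself should go first."
)


def alt_text_for(path: str, media_type: str = "image/png") -> str:
    """Return Claude's suggested alt text for the image at `path`."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    with open(path, "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: any vision-capable Claude model
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": media_type,
                            "data": image_data,
                        },
                    },
                    {"type": "text", "text": "Write alt text for this image."},
                ],
            }
        ],
    )
    return message.content[0].text


print(alt_text_for("screenshot.png"))
```

Whatever comes back still needs the human review step described above before it ships anywhere.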

Is it ethical to build unreliable accessibility tools?

On the ethics of building accessibility tools on top of inherently unreliable technology (5:33):

Some people I’ve talked to have been skeptical about the accessibility benefits. Their argument is that if you give somebody unreliable technology that might hallucinate and make things up, surely that’s harming them.

I don’t think that’s true. I feel like people who use screen readers are used to unreliable technology.

You know, if you use a guide dog—it’s a wonderful thing and a very unreliable piece of technology.

People with accessibility needs have agency: they can understand the limitations of the technology they’re using. Giving them a tool where they can point their phone at something and have it described to them is a world away from the accessibility technology of just three or four years ago.

Why I don’t feel threatened as a software engineer

This is probably my most coherent explanation yet of why I don’t see generative AI as a threat to my career as a software engineer (33:49):

My perspective on this as a developer who’s been using these systems on a daily basis for a couple of years now is that I find that they enhance my value. I am so much more competent and capable as a developer because I’ve got these tools assisting me. I can write code in dozens of new programming languages that I never learned before.

But I still get to benefit from my 20 years of experience.

Take somebody off the street who’s never written any code before and ask them to build an iPhone app with ChatGPT. They are going to run into so many pitfalls, because programming isn’t just about whether you can write code. It’s about thinking through the problems, understanding what’s possible and what’s not, understanding how to QA, knowing what good code is, and having good taste.

There’s so much depth to what we do as software engineers.

I’ve said before that generative AI probably gives me something like a two to five times productivity boost on the part of my job that involves typing code into a laptop. But that’s only 10 percent of what I do. As a software engineer, most of my time isn’t actually spent typing the code. It’s all of those other activities.

The AI systems help with those other activities, too. They can help me think through architectural decisions and research library options and so on. But I still have to have that agency to understand what I’m doing.

So as a software engineer, I don’t feel threatened. My most optimistic view of this is that the cost of developing software goes down, because an engineer like myself can be more ambitious and take on more things. As a result, demand for software goes up. If you’re a company that previously would never have dreamed of building a custom CRM for your industry, because it would have taken 20 engineers a year before you got any results, and it now takes four engineers three months to get results, maybe you’re in the market for software engineers in a way that you weren’t before.