Simon Willison’s Weblog

5 posts tagged “rich”

Rich is a Python library for rendering richly formatted content in the terminal.

2025

Rich Pixels. Neat Python library by Darren Burns adding pixel image support to the Rich terminal library, using tricks to render an image using full or half-height colored blocks.

Here's the key trick - it renders Unicode ▄ (U+2584, "lower half block") characters after setting a foreground and background color for the two pixels it needs to display.
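Here's a minimal sketch of that trick using plain Rich (my own illustration, not Rich Pixels' actual code):

# Sketch of the half-block trick, not Rich Pixels' actual code:
# the background colour paints the top pixel and the foreground
# colour fills the lower half block, so one character cell shows
# two vertically stacked pixels.
from rich.console import Console
from rich.style import Style

console = Console()
top = (200, 30, 30)      # RGB of the upper pixel
bottom = (30, 30, 200)   # RGB of the lower pixel
style = Style(
    bgcolor=f"rgb({top[0]},{top[1]},{top[2]})",
    color=f"rgb({bottom[0]},{bottom[1]},{bottom[2]})",
)
console.print("\u2584", style=style)  # ▄ U+2584 "lower half block"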

I got GPT-5 to vibe code up a show_image.py terminal command that resizes the provided image to fit the width and height of the current terminal and displays it using Rich Pixels. That script is here; you can run it with uv like this:

uv run https://tools.simonwillison.net/python/show_image.py \
  image.jpg
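The core of such a script is small. Here's a rough sketch of the approach, my own reconstruction rather than the actual GPT-5-generated code, using Pillow to resize and Rich Pixels to render:

# /// script
# requires-python = ">=3.12"
# dependencies = ["pillow", "rich", "rich-pixels"]
# ///
# Rough sketch of the approach, not the actual generated script.
import sys

from PIL import Image
from rich.console import Console
from rich_pixels import Pixels

console = Console()
image = Image.open(sys.argv[1])

# Each character cell holds two vertical pixels, so the usable
# pixel height is twice the terminal's row count.
image.thumbnail((console.size.width, console.size.height * 2))

console.print(Pixels.from_image(image))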

Here's what I got when I ran it against my V&A East Storehouse photo from this post:

Terminal window. I ran that command and it spat out quite a pleasing and recognizable pixel art version of the photograph.

# 2nd September 2025, 11:05 am / ascii-art, cli, python, ai, generative-ai, llms, uv, vibe-coding, gpt-5, rich

2024

Building Python tools with a one-shot prompt using uv run and Claude Projects

I’ve written a lot about how I’ve been using Claude to build one-shot HTML+JavaScript applications via Claude Artifacts. I recently started using a similar pattern to create one-shot Python utilities, using a custom Claude Project combined with the dependency management capabilities of uv.

[... 899 words]

Anatomy of a Textual User Interface. Will McGugan used Textual and my LLM Python library to build a delightful TUI for talking to a simulation of Mother, the AI from the Alien movies:

Animated screenshot of a terminal app called MotherApp. Mother: INTERFACE 2037 READY FOR INQUIRY. I type: Who is onboard? Mother replies, streaming content to the screen:  The crew of the Nostromo consists of the following personnel: 1. Captain Arthur Dallas - commanding officer. 2. Executive Officer Thomas Kane - second-in-command. 3. Warrant Officer Ellen Ripley - third-in-command. 4. Navigator Joan Lambert - responsible for navigation and communications. 5. Science Officer Ash - responsible for scientific analysis. 6. Engineering Technician Brett - maintenance and repair. 7. Chief Engineer Parker - head of the engineering department. All crew members are currently accounted for. How may I assist you further?

The entire implementation is just 77 lines of code. It includes PEP 723 inline dependency information:

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "llm",
#     "textual",
# ]
# ///

Which means you can run it in a dedicated environment with the correct dependencies installed using uv run like this:

wget 'https://gist.githubusercontent.com/willmcgugan/648a537c9d47dafa59cb8ece281d8c2c/raw/7aa575c389b31eb041ae7a909f2349a96ffe2a48/mother.py'
export OPENAI_API_KEY='sk-...'
uv run mother.py

I found the send_prompt() method particularly interesting. Textual uses asyncio for its event loop, but LLM currently only supports synchronous execution and can block for several seconds while executing a prompt.

Will used the Textual @work(thread=True) decorator, documented here, to run that operation in a thread:

@work(thread=True)
def send_prompt(self, prompt: str, response: Response) -> None:
    response_content = ""
    llm_response = self.model.prompt(prompt, system=SYSTEM)
    for chunk in llm_response:
        response_content += chunk
        self.call_from_thread(response.update, response_content)

Looping through the response like that and calling self.call_from_thread(response.update, response_content) with an accumulated string is all it takes to implement streaming responses in the Textual UI. That Response object subclasses textual.widgets.Markdown, so any Markdown is rendered using Rich.
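To see the pattern in isolation, here's a self-contained sketch (mine, not Will's code) that streams fake tokens from a worker thread into a Markdown-subclassing widget, no API key required:

# Self-contained sketch of the thread + call_from_thread streaming
# pattern, with a fake slow token source standing in for LLM.
import time

from textual import work
from textual.app import App, ComposeResult
from textual.widgets import Markdown

class Response(Markdown):
    """Subclasses the Markdown widget, so updates are rendered by Rich."""

class DemoApp(App):
    def compose(self) -> ComposeResult:
        yield Response()

    def on_mount(self) -> None:
        self.stream_tokens(self.query_one(Response))

    @work(thread=True)
    def stream_tokens(self, response: Response) -> None:
        content = ""
        for chunk in ("**Streaming** ", "from ", "a ", "worker ", "thread."):
            time.sleep(0.5)  # stand-in for a blocking LLM call
            content += chunk
            # call_from_thread marshals the UI update back to the event loop
            self.call_from_thread(response.update, content)

if __name__ == "__main__":
    DemoApp().run()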

# 2nd September 2024, 4:39 pm / python, will-mcgugan, textual, llm, uv, rich

uvtrick (via) This "fun party trick" by Vincent D. Warmerdam is absolutely brilliant and a little horrifying. The following code:

from uvtrick import Env

def uses_rich():
    from rich import print
    print("hi :vampire:")

Env("rich", python="3.12").run(uses_rich)

Executes that uses_rich() function in a fresh virtual environment managed by uv, running the specified Python version (3.12) and ensuring the rich package is available - even if it's not installed in the current environment.

It's taking advantage of the fact that uv is so fast that the overhead of getting this to work is low enough for it to be worth at least playing with the idea.

The real magic is in how uvtrick works. It's only 127 lines of code with some truly devious trickery going on.

That Env.run() method:

  • Creates a temporary directory
  • Pickles the args and kwargs and saves them to pickled_inputs.pickle
  • Uses inspect.getsource() to retrieve the source code of the function passed to run()
  • Writes that to a pytemp.py file, along with a generated if __name__ == "__main__": block that calls the function with the pickled inputs and saves its output to another pickle file called tmp.pickle

Having created the temporary Python file, it executes the program using a command something like this:

uv run --with rich --python 3.12 --quiet pytemp.py

It reads the output from tmp.pickle and returns it to the caller!
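That whole flow is compact enough to sketch. Here's a condensed, hypothetical reimplementation of the mechanism; the run_in_env name and the details are mine, not uvtrick's actual code:

import inspect
import pickle
import subprocess
import tempfile
from pathlib import Path

def run_in_env(func, *args, deps=(), python="3.12", **kwargs):
    # Hypothetical reimplementation of the uvtrick mechanism described above.
    with tempfile.TemporaryDirectory() as tmpdir:
        tmpdir = Path(tmpdir)
        # Pickle the args and kwargs for the subprocess to load
        (tmpdir / "pickled_inputs.pickle").write_bytes(pickle.dumps((args, kwargs)))
        # Grab the function's source (assumes a module-level function)
        source = inspect.getsource(func)
        script = source + f'''

if __name__ == "__main__":
    import pickle
    args, kwargs = pickle.load(open("pickled_inputs.pickle", "rb"))
    result = {func.__name__}(*args, **kwargs)
    pickle.dump(result, open("tmp.pickle", "wb"))
'''
        (tmpdir / "pytemp.py").write_text(script)
        # Let uv build the environment and run the script
        cmd = ["uv", "run", "--python", python, "--quiet"]
        for dep in deps:
            cmd += ["--with", dep]
        subprocess.run(cmd + ["pytemp.py"], cwd=tmpdir, check=True)
        # Unpickle the result and hand it back to the caller
        return pickle.loads((tmpdir / "tmp.pickle").read_bytes())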

# 1st September 2024, 5:03 am / python, uv, vincent-d-warmerdam, rich

2020

How to get Rich with Python (a terminal rendering library). Will McGugan introduces Rich, his new Python library for rendering content on the terminal. This is a very cool piece of software: out of the box it supports coloured text, emoji, tables, rendering Markdown, syntax highlighting code, rendering Python tracebacks, progress bars and more. Run “pip install rich” and then “python -m rich” to render a “test card” demo demonstrating the features of the library.
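A few lines are enough to try several of those features at once (my own quick example, not from the article):

# Quick tour of a few Rich features: markup, emoji, tables, Markdown.
from rich.console import Console
from rich.markdown import Markdown
from rich.table import Table

console = Console()
console.print("[bold magenta]Hello[/bold magenta] :sparkles:")  # colour + emoji

table = Table(title="Demo")
table.add_column("Library")
table.add_column("Purpose")
table.add_row("Rich", "Terminal rendering")
console.print(table)

console.print(Markdown("# Markdown\nRendered *in the terminal*."))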

# 4th May 2020, 11:27 pm / cli, python, will-mcgugan, rich