3Blue1Brown: Attention in transformers, visually explained. Grant Sanderson publishes animated explainers of mathematical topics on YouTube to over 6 million subscribers. His latest shows how the attention mechanism in transformers (the architecture behind most LLMs) works, and it’s by far the clearest explanation of the topic I’ve seen anywhere.
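If you want the core operation in code as well as animation, here’s a minimal NumPy sketch of single-head scaled dot-product attention, the softmax(QKᵀ/√dₖ)V computation the video visualizes. The names and toy shapes are my own illustration, not code from the video, and it omits the learned projection matrices, multiple heads and masking a real transformer layer would include:

```python
import numpy as np

def softmax(x, axis=-1):
    # Shift by the max so exp() never overflows.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant each key is to each query
    weights = softmax(scores, axis=-1)   # each row is a probability distribution
    return weights @ V                   # weighted average of the value vectors

# Toy sequence: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one updated vector per token
```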
I was intrigued to find out what tool he uses to produce the visualizations. It turns out Grant built his own open source Python animation library, manim, to enable his YouTube work.
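For the curious, the basic unit in manim is a `Scene` subclass whose `construct()` method issues animation calls. This tiny sketch uses the API of the community fork (ManimCE) rather than Grant’s own manimgl variant, and the scene name is just illustrative:

```python
from manim import Scene, Circle, Create, PINK

class HelloManim(Scene):
    def construct(self):
        # Build a circle, give it a translucent fill,
        # then animate it being drawn onto the screen.
        circle = Circle()
        circle.set_fill(PINK, opacity=0.5)
        self.play(Create(circle))
```

Rendering it with `manim -pql scene.py HelloManim` writes out a short video clip; longer explainers are composed from many such scenes.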