3Blue1Brown: Attention in transformers, visually explained. Grant Sanderson publishes animated explainers of mathematical topics on YouTube, to over 6 million subscribers. His latest video shows how the attention mechanism in transformers (the architecture behind most LLMs) works, and it is by far the clearest explanation I’ve seen of the topic anywhere.
I was intrigued to find out what tool he used to produce the visualizations. It turns out Grant built his own open source Python animation library, manim, to enable his YouTube work.
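For a sense of what manim code looks like, here is a minimal sketch of a scene using the community-maintained edition of the library (installed with `pip install manim`, rendered with the `manim` CLI). The scene and shape choices are illustrative, not taken from Grant's videos.

```python
from manim import Scene, Square, Circle, Create, Transform, BLUE, PINK


class SquareToCircle(Scene):
    def construct(self):
        square = Square(color=BLUE)   # starting shape
        circle = Circle(color=PINK)   # target shape

        self.play(Create(square))             # animate drawing the square
        self.play(Transform(square, circle))  # morph the square into the circle
        self.wait()                           # hold the final frame
```

Rendering it with `manim -pql scene.py SquareToCircle` produces and previews a low-quality video of the animation.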