2nd January 2023
Petals (via) The challenge with large language models in the same scale ballpark as GPT-3 is that they're large—really large. Far too big to run on a single machine at home. Petals is a fascinating attempt to address that problem: it works a little bit like BitTorrent, in that each user of Petals runs a subset of the overall language model on their machine and participates in a larger network to run inference across potentially hundreds of distributed GPUs. I tried it just now in Google Colab and it worked exactly as advertised, after downloading an 8GB subset of the 352GB BLOOM-176B model.
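For a sense of what running it looks like: here's a minimal sketch based on my memory of the Colab example, assuming the petals package's DistributedBloomForCausalLM class and the bigscience/bloom-petals model name from the project's documentation at the time—check the current README for the exact interface.

```python
# Rough sketch of distributed inference with Petals (names assumed from the
# project's example code, not guaranteed to match the current API).
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

model_name = "bigscience/bloom-petals"  # assumed model identifier

# The tokenizer downloads locally; the model weights are split across
# volunteer GPUs on the network, with only a subset fetched to this machine.
tokenizer = BloomTokenizerFast.from_pretrained(model_name)
model = DistributedBloomForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
# generate() runs each transformer block on whichever remote peer hosts it.
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```

The BitTorrent comparison holds up in the design: no single participant needs the full 352GB of weights, just enough capacity to serve their assigned blocks to the swarm.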