Simon Willison’s Weblog

Sunday, 12th March 2023

I’ve successfully run the LLaMA 7B model on my 4GB RAM Raspberry Pi 4. It’s super slow, about 10 sec/token. But it looks like we can run powerful cognitive pipelines on cheap hardware.

Artem Andreenko, 6:22 pm
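
For a sense of what that rate means in practice, here is a minimal back-of-the-envelope sketch. The only figure it uses is the reported 10 seconds per token; the token counts chosen below are purely illustrative.

```python
# Rough throughput implied by ~10 seconds per token on the 4GB Raspberry Pi 4
# (the only measured figure quoted above; the token counts are illustrative).

SECONDS_PER_TOKEN = 10  # reported rate from the quote


def generation_minutes(num_tokens: int, seconds_per_token: float = SECONDS_PER_TOKEN) -> float:
    """Estimated wall-clock time, in minutes, to generate num_tokens."""
    return num_tokens * seconds_per_token / 60


if __name__ == "__main__":
    for n in (10, 100, 500):
        print(f"{n:>4} tokens ~ {generation_minutes(n):.1f} minutes")
```

At that rate a 100-token reply takes roughly 17 minutes, so this is nowhere near interactive, but the remarkable part is that a 7B parameter model runs at all on a machine with 4GB of RAM.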