FlexGen (via) This looks like a very big deal. FlexGen is a paper and accompanying code that massively reduces the resources needed to run some of the current top-performing open source GPT-style large language models. People on Hacker News report using it to run models like opt-30b on their own hardware, and it looks like it opens up the possibility of running even larger models on hardware available outside of dedicated research labs.
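
As a rough illustration of what "running opt-30b on their own hardware" looks like in practice, here is a minimal sketch of invoking FlexGen's command-line entry point, based on the examples in the FlexGen README. The exact `--percent` offloading split shown here (how much of the weights, attention cache, and activations stay on the GPU versus CPU) is an assumption and would need tuning for a particular machine.

```python
# Minimal sketch: shell out to FlexGen's CLI entry point to run OPT-30B.
# Assumes `pip install flexgen`, a single GPU, and enough CPU RAM for the
# offloaded weights; see the FlexGen README for the canonical invocation.
import subprocess

subprocess.run(
    [
        "python3", "-m", "flexgen.flex_opt",
        "--model", "facebook/opt-30b",
        # Six percentages: weights on GPU/CPU, attention (KV) cache on GPU/CPU,
        # activations on GPU/CPU; whatever is not placed here spills to disk.
        "--percent", "0", "100", "100", "0", "100", "0",
    ],
    check=True,
)
```

The interesting part is that placement policy: FlexGen's trick is deciding how much of each tensor category to keep on the GPU versus offloading to CPU RAM or disk, which is what lets a 30B-parameter model fit on consumer hardware at all.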