llama2-mac-gpu.sh (via) Adrien Brault provided this recipe for compiling llama.cpp on macOS with GPU support enabled (`LLAMA_METAL=1 make`) and then downloading and running a GGML build of Llama 2 13B.
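
For reference, here is a minimal sketch of that recipe as a shell script. The exact model filename, Hugging Face URL and `./main` flags are assumptions based on how llama.cpp worked in mid-2023; see Adrien's linked gist for the actual script.

```bash
#!/bin/bash
set -euo pipefail

# Build llama.cpp with Metal (Apple GPU) support enabled
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_METAL=1 make

# Download a GGML quantization of Llama 2 13B chat
# (filename and URL are assumptions - adjust to the build you want)
MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
curl -L -O "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"

# Run a prompt, offloading to the GPU (-ngl 1 enabled Metal in this era of llama.cpp)
./main -m "./${MODEL}" -ngl 1 -n 256 -p "Tell me a joke about a llama"
```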