LLaVA: Large Language and Vision Assistant (via) Yet another multi-modal model, this one combining a vision encoder (pre-trained CLIP ViT-L/14) with a LLaMA-derived language model (Vicuna). The results I get from their demo are even more impressive than those from MiniGPT-4. It also includes a new training dataset, LLaVA-Instruct-150K, generated with GPT-4 and hence subject to the same warnings about the OpenAI terms of service.
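The core trick is pleasingly simple: CLIP's image-patch features are passed through a linear projection into the language model's token-embedding space and prepended to the prompt embeddings. Here's a rough sketch of that idea using Hugging Face transformers and PyTorch. This is not LLaVA's actual code: the Vicuna checkpoint name is a placeholder for whatever LLaMA-family weights you have locally, and the projection layer here is untrained, so it only illustrates the shape of the architecture.

```python
import torch
from torch import nn
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    CLIPImageProcessor,
    CLIPVisionModel,
)

# Vision tower: the same pre-trained CLIP ViT-L/14 encoder LLaVA builds on.
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Language model: any LLaMA-family causal LM stands in for Vicuna here.
llm_id = "lmsys/vicuna-7b-v1.5"  # placeholder id, swap in a checkpoint you actually have
tokenizer = AutoTokenizer.from_pretrained(llm_id)
llm = AutoModelForCausalLM.from_pretrained(llm_id, torch_dtype=torch.float16)

# A single linear layer maps CLIP patch features into the LLM's embedding space
# (untrained here; LLaVA learns this projection during its instruction tuning).
projector = nn.Linear(
    vision_tower.config.hidden_size, llm.config.hidden_size, dtype=torch.float16
)

def embed_image_and_prompt(pil_image, prompt: str) -> torch.Tensor:
    """Build the input sequence: projected image patches followed by the text prompt."""
    pixels = image_processor(images=pil_image, return_tensors="pt").pixel_values
    patch_features = vision_tower(pixels).last_hidden_state        # (1, patches+1, 1024)
    image_tokens = projector(patch_features.to(torch.float16))     # (1, patches+1, llm_dim)
    text_ids = tokenizer(prompt, return_tensors="pt").input_ids
    text_embeds = llm.get_input_embeddings()(text_ids)
    return torch.cat([image_tokens, text_embeds], dim=1)
```

The resulting embedding tensor is what you would feed to the LLM via `inputs_embeds` for generation; the interesting part is how little glue sits between the two pre-trained models.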