Mistral quietly released two new models yesterday: Magistral Small 1.2 (Apache 2.0, 96.1 GB on Hugging Face) and Magistral Medium 1.2 (not open weights, like Mistral's other "medium" models).
Despite being described as "minor updates" to the Magistral 1.1 models, they have one very notable improvement:
- Multimodality: Now equipped with a vision encoder, these models handle both text and images seamlessly.
Magistral is Mistral's reasoning model, so we now have a new reasoning vision LLM.
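Here's a minimal sketch of what that looks like in practice, using the official `mistralai` Python SDK. This isn't from the announcement: the model alias `magistral-medium-latest` and the image URL are my assumptions, so check Mistral's docs for the exact model ID.

```python
# Sketch: asking Magistral Medium 1.2 a question about an image via Mistral's API.
# Assumes the `mistralai` v1 Python SDK and a MISTRAL_API_KEY environment variable;
# the model alias "magistral-medium-latest" is an assumed name for the 1.2 endpoint.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="magistral-medium-latest",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How many pelicans are in this photo, and what are they doing?"},
                {"type": "image_url", "image_url": "https://example.com/pelicans.jpg"},  # hypothetical image
            ],
        }
    ],
)

# Print the model's reply, which includes its reasoning and final answer
print(response.choices[0].message.content)
```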
The other features from the tiny announcement on Twitter:
- Performance Boost: 15% improvements on math and coding benchmarks such as AIME 24/25 and LiveCodeBench v5/v6.
- Smarter Tool Use: Better tool usage with web search, code interpreter, and image generation.
- Better Tone & Persona: Responses are clearer, more natural, and better formatted for you.