Mistral quietly released two new models yesterday: Magistral Small 1.2 (Apache 2.0, 96.1 GB on Hugging Face) and Magistral Medium 1.2 (not open weights, same as Mistral's other "medium" models).
Despite being described as "minor updates" to the Magistral 1.1 models, these releases include one very notable improvement:
- Multimodality: Now equipped with a vision encoder, these models handle both text and images seamlessly.
Magistral is Mistral's reasoning model, so we now have a new reasoning vision LLM.
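As a rough, untested sketch of what that looks like in practice: here's how you might send an image plus a text prompt to the new model through Mistral's chat API using their `mistralai` Python client. The model alias and the image URL below are my guesses, not confirmed identifiers - check Mistral's documentation for the current names.

```python
# Sketch: image + text prompt to a vision-capable Magistral model via Mistral's API.
# The model ID "magistral-medium-latest" and the image URL are placeholders.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="magistral-medium-latest",  # assumed alias for Magistral Medium 1.2
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this chart."},
                {"type": "image_url", "image_url": "https://example.com/chart.png"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Since Magistral is a reasoning model, expect the response to include (or be preceded by) a chain of thought before the final answer.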
The other features from the tiny announcement on Twitter:
- Performance Boost: 15% improvements on math and coding benchmarks such as AIME 24/25 and LiveCodeBench v5/v6.
- Smarter Tool Use: Better tool usage with web search, code interpreter, and image generation.
- Better Tone & Persona: Responses are clearer, more natural, and better formatted for you.