Mistral quietly released two new models yesterday: Magistral Small 1.2 (Apache 2.0, 96.1 GB on Hugging Face) and Magistral Medium 1.2 (not open weights, same as Mistral's other "medium" models).
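If you want to pull the Small weights yourself, something like this should do it; the repository id here is my guess, so check the actual Hugging Face page for the real name:

```python
# Rough sketch for downloading the Magistral Small 1.2 weights locally.
# The repo id is an assumption - confirm it on Hugging Face before running.
# Expect a large download (the release notes put it at 96.1 GB).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    "mistralai/Magistral-Small-2509",  # assumed repo id for Small 1.2
)
print("Weights downloaded to:", local_dir)
```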
Despite being described as "minor updates" to the Magistral 1.1 models, these have one very notable improvement:
- Multimodality: Now equipped with a vision encoder, these models handle both text and images seamlessly.
Magistral is Mistral's reasoning model, so we now have a new reasoning vision LLM.
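I haven't confirmed the exact request shape yet, but based on how Mistral's other multimodal models work through their chat completions API, I'd expect sending an image to the new model to look roughly like this (the magistral-medium-2509 model id and the image_url payload format are assumptions on my part):

```python
# Minimal sketch: send an image plus a question to Magistral Medium 1.2
# via Mistral's chat completions endpoint. The model name and the
# image_url content shape are assumptions - check Mistral's docs.
import os
import requests

MISTRAL_API_KEY = os.environ["MISTRAL_API_KEY"]

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {MISTRAL_API_KEY}"},
    json={
        "model": "magistral-medium-2509",  # assumed id for Medium 1.2
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is happening in this image?"},
                    {"type": "image_url", "image_url": "https://example.com/photo.jpg"},
                ],
            }
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```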
The other features from the tiny announcement on Twitter:
- Performance Boost: 15% improvements on math and coding benchmarks such as AIME 24/25 and LiveCodeBench v5/v6.
- Smarter Tool Use: Better tool usage with web search, code interpreter, and image generation.
- Better Tone & Persona: Responses are clearer, more natural, and better formatted for you.