Mistral Small 3.2 by Mistral demonstrates fast output speed and competitive pricing. View detailed benchmark data, including scores across coding, math, reasoning, speed, and cost metrics.
General Benchmarks
Coding Benchmarks
Reasoning Benchmarks
Speed Benchmarks
Cost Benchmarks
Context Benchmarks
Mistral Small 3.2 — Benchmark Scores Overview
Scores are normalized to a percentage scale for visual comparison; Elo scores are mapped linearly from the 1100-1500 range onto 0-100.
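The Elo mapping described above can be sketched as a simple linear rescale. This is an illustrative reconstruction of the normalization, not the site's actual code; the clamping to 0-100 is an assumption for scores outside the stated range.

```python
def elo_to_percent(elo: float, lo: float = 1100.0, hi: float = 1500.0) -> float:
    """Linearly map an Arena Elo score onto a 0-100 scale, clamped to the range."""
    pct = (elo - lo) / (hi - lo) * 100.0
    return max(0.0, min(100.0, pct))

# Mistral Small 3.2's Arena Elo of 1190 lands at 22.5 on this scale.
print(elo_to_percent(1190))  # 22.5
```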
Mistral Small 3.2 — Frequently Asked Questions
How intelligent is Mistral Small 3.2?
Mistral Small 3.2 has a Chatbot Arena Elo rating of 1190, placing it among entry-level AI models. This score is based on blind, head-to-head human preference voting.
How much does Mistral Small 3.2 cost?
Mistral Small 3.2 costs $0.10 per 1M input tokens and $0.30 per 1M output tokens. This makes it one of the more affordable models.
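A quick sketch of how those per-million-token rates translate into a per-request cost. The example token counts are hypothetical; only the $0.10 and $0.30 rates come from the figures above.

```python
INPUT_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PER_M = 0.30  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the stated rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# A hypothetical request: 8,000 input tokens, 1,000 output tokens.
print(f"${request_cost(8_000, 1_000):.4f}")  # $0.0011
```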
How fast is Mistral Small 3.2?
Mistral Small 3.2 generates output at 160 tokens per second, which is very fast compared to other models. The time to first token is 120 ms.
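The two figures above combine into a rough end-to-end latency estimate: time to first token plus generation time for the response. This is a back-of-the-envelope sketch assuming a constant sustained output rate, which real serving conditions will vary from.

```python
TTFT_S = 0.120        # time to first token, seconds
TOKENS_PER_S = 160.0  # sustained output speed

def estimated_latency(output_tokens: int) -> float:
    """Approximate seconds until a response of the given length completes."""
    return TTFT_S + output_tokens / TOKENS_PER_S

# A hypothetical 500-token response: 0.12 s + 500/160 s ≈ 3.2 s.
print(f"{estimated_latency(500):.2f} s")
```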
How good is Mistral Small 3.2 at coding?
Mistral Small 3.2 achieves 20.0% on SWE-bench Verified, demonstrating basic real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.
How good is Mistral Small 3.2 at math and reasoning?
Mistral Small 3.2 scores 58.0% on the MATH benchmark (competition-level mathematics). It also achieves 30.0% on GPQA Diamond, a graduate-level science reasoning benchmark.
What is the context window of Mistral Small 3.2?
Mistral Small 3.2 has a context window of 131K tokens. This determines how much text, conversation history, and code the model can process in a single request.
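In practice, the window must hold both the prompt and the tokens reserved for the reply. A minimal budget check, assuming the common 131,072-token (131K) interpretation of the window size:

```python
CONTEXT_WINDOW = 131_072  # 131K tokens (assumed exact value)

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if the prompt plus the reserved output budget fits the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(120_000, 4_000))  # True
print(fits_in_context(130_000, 4_000))  # False
```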
Who created Mistral Small 3.2?
Mistral Small 3.2 was created by Mistral. It is classified as a budget model in the AI Value Index.
Is Mistral Small 3.2 open source?
Mistral Small 3.2's weights are released under the Apache 2.0 license. The model is also available through Mistral's API and compatible providers.