Codestral, Mistral's code-generation model, offers competitive pricing. View detailed benchmark data, including scores across coding, math, reasoning, speed, and cost metrics.
General Benchmarks
Coding Benchmarks
Reasoning Benchmarks
Speed Benchmarks
Cost Benchmarks
Context Benchmarks
Codestral — Benchmark Scores Overview
Scores are normalized to a percentage scale for visual comparison; ELO scores are mapped from the 1100-1500 range onto 0-100.
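The ELO mapping described above is a simple linear rescaling; a minimal sketch (the 1100-1500 band is taken from the note above, the function name is illustrative):

```python
def normalize_elo(elo: float, lo: float = 1100.0, hi: float = 1500.0) -> float:
    """Linearly map an ELO score from the [lo, hi] band onto a 0-100 scale."""
    return (elo - lo) / (hi - lo) * 100.0

# Codestral's Chatbot Arena ELO of 1260 lands at 40 on this scale.
print(normalize_elo(1260))  # → 40.0
```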
Codestral — Frequently Asked Questions
How intelligent is Codestral?
Codestral scores 1260 on the Chatbot Arena ELO rating, making it a mid-tier AI model. This score is based on blind head-to-head human preference voting.
How much does Codestral cost?
Codestral costs $0.30 per 1M input tokens and $0.90 per 1M output tokens. This makes it one of the more affordable models.
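A back-of-the-envelope cost calculation at the rates quoted above ($0.30 per 1M input tokens, $0.90 per 1M output tokens); the helper name and example token counts are illustrative:

```python
INPUT_USD_PER_M = 0.30   # $ per 1M input tokens
OUTPUT_USD_PER_M = 0.90  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the list prices above."""
    return (input_tokens / 1e6) * INPUT_USD_PER_M + (output_tokens / 1e6) * OUTPUT_USD_PER_M

# e.g. 20,000 input tokens and 2,000 output tokens:
print(f"${request_cost(20_000, 2_000):.4f}")  # → $0.0078
```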
How fast is Codestral?
Codestral generates output at 90 tokens per second, which is moderate compared to other models. The time to first token is 280 ms.
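The two figures above combine into a rough end-to-end latency estimate: time to first token plus output length divided by throughput. A sketch under those assumptions (the function name and 500-token example are illustrative, and real-world latency varies with load):

```python
TTFT_S = 0.280          # time to first token, seconds
TOKENS_PER_SECOND = 90  # output throughput

def estimated_latency(output_tokens: int) -> float:
    """Estimated seconds to stream a full response of the given length."""
    return TTFT_S + output_tokens / TOKENS_PER_SECOND

# A ~500-token answer would take roughly:
print(f"{estimated_latency(500):.1f} s")  # → 5.8 s
```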
How good is Codestral at coding?
Codestral achieves 40.0% on SWE-bench Verified, demonstrating moderate real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.
How good is Codestral at math and reasoning?
Codestral scores 72.0% on the MATH benchmark (competition-level mathematics). It also achieves 44.0% on GPQA Diamond, a graduate-level science reasoning benchmark.
What is the context window of Codestral?
Codestral has a context window of 262K tokens. This determines how much text, conversation history, and code the model can process in a single request.
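A context window is a shared budget for input and output, so a request only fits if prompt tokens plus the requested output fit inside it. A minimal budget check, assuming "262K" means 262,144 tokens (256 × 1024; the page only states "262K"):

```python
CONTEXT_WINDOW = 262_144  # assumed exact value for the "262K" figure above

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """True if the prompt and the requested output share the window."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context(250_000, 10_000))  # → True
print(fits_in_context(260_000, 10_000))  # → False
```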
Who created Codestral?
Codestral was created by Mistral. It is classified as a mid-tier model in the AI Value Index.
Is Codestral open source?
No, Codestral is a proprietary model. It is available through Mistral's API and compatible providers.