Qwen 2.5 72B by Qwen demonstrates competitive pricing. View detailed benchmark data including scores across coding, math, reasoning, speed, and cost metrics.
General Benchmarks
Coding Benchmarks
Reasoning Benchmarks
Speed Benchmarks
Cost Benchmarks
Context Benchmarks
Qwen 2.5 72B — Benchmark Scores Overview
Scores are normalized to a percentage scale for visual comparison; Elo scores are mapped from the 1100-1500 range onto 0-100.
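The Elo-to-percentage mapping described above is a simple linear rescale. A minimal sketch (the function name is illustrative, not from this site):

```python
def elo_to_pct(elo: float, lo: float = 1100.0, hi: float = 1500.0) -> float:
    """Linearly map an Arena Elo score onto the 0-100 chart scale."""
    return (elo - lo) / (hi - lo) * 100.0

# Qwen 2.5 72B's Arena score of 1230 lands at 32.5 on the chart.
print(round(elo_to_pct(1230), 1))  # → 32.5
```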
Compare Qwen 2.5 72B With
Qwen 2.5 72B — Frequently Asked Questions
How intelligent is Qwen 2.5 72B?
Qwen 2.5 72B scores 1230 on the Chatbot Arena Elo rating, placing it in the entry-level tier of this index. The score is based on blind, head-to-head human preference voting.
How much does Qwen 2.5 72B cost?
Qwen 2.5 72B costs $0.12 per 1M input tokens and $0.39 per 1M output tokens. This makes it one of the more affordable models.
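At those per-million-token rates, the cost of a single request is straightforward to estimate. A minimal sketch (the helper name and example token counts are illustrative assumptions):

```python
# Listed Qwen 2.5 72B rates: $0.12 per 1M input tokens, $0.39 per 1M output tokens.
INPUT_RATE = 0.12 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.39 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10K-token prompt with a 1K-token reply costs about $0.0016.
print(round(request_cost(10_000, 1_000), 6))  # → 0.00159
```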
How fast is Qwen 2.5 72B?
Qwen 2.5 72B generates output at 80 tokens per second, which is moderate compared to other models. The time to first token is 350 ms.
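Those two figures combine into a rough end-to-end latency estimate: time to first token plus generation time at the sustained rate. A minimal sketch (the function name and the 500-token example are illustrative; real latency also varies with load and network):

```python
TTFT_S = 0.350        # time to first token, in seconds (350 ms)
TOKENS_PER_S = 80.0   # sustained output generation speed

def estimated_latency(output_tokens: int) -> float:
    """Rough wall-clock seconds for a response: TTFT plus generation time."""
    return TTFT_S + output_tokens / TOKENS_PER_S

# A 500-token answer takes roughly 0.35 + 500/80 = 6.6 seconds.
print(round(estimated_latency(500), 2))  # → 6.6
```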
How good is Qwen 2.5 72B at coding?
Qwen 2.5 72B achieves 28.0% on SWE-bench Verified, demonstrating basic real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.
How good is Qwen 2.5 72B at math and reasoning?
Qwen 2.5 72B scores 83.1% on the MATH benchmark (competition-level mathematics). It also achieves 49.1% on GPQA Diamond, a graduate-level science reasoning benchmark.
What is the context window of Qwen 2.5 72B?
Qwen 2.5 72B has a context window of 131K tokens. This determines how much text, conversation history, and code the model can process in a single request.
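When sizing prompts against that 131K-token window, a common rough heuristic is about 4 characters per token for English text. A minimal sketch under that assumption (the heuristic, helper name, and reserved-output budget are all illustrative, not exact tokenizer behavior):

```python
CONTEXT_WINDOW = 131_000   # Qwen 2.5 72B context window, in tokens
CHARS_PER_TOKEN = 4        # rough English-text heuristic; real tokenizers vary

def fits_in_context(text: str, reserved_output: int = 4_000) -> bool:
    """Rough check: does a prompt leave room in the window for the reply?"""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserved_output <= CONTEXT_WINDOW

# A ~6,000-character prompt (~1,500 estimated tokens) fits comfortably.
print(fits_in_context("hello " * 1000))  # → True
```

For anything precise, count tokens with the model's own tokenizer rather than a character heuristic.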
Who created Qwen 2.5 72B?
Qwen 2.5 72B was created by Qwen. It is classified as an open-source model in the AI Value Index.
Is Qwen 2.5 72B open source?
Yes, Qwen 2.5 72B is an open-source model. The model weights are publicly available for download and self-hosting.