Claude Opus 4.5 vs Claude Opus 4.6

Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.

Anthropic's Claude Opus 4.6 wins on 13 of 18 benchmarks against Claude Opus 4.5, which leads on 2. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.

Category-by-Category Breakdown

General Intelligence: Claude Opus 4.6 leads on Chatbot Arena ELO (1496 vs 1468) and IFEval (91.0% vs 90.0%), while Claude Opus 4.5 leads on MMLU-Pro (89.5% vs 82.0%).

Coding: Claude Opus 4.6 leads on HumanEval+ (93.5% vs 92.0%), LiveCodeBench (72.0% vs 70.0%), Aider Polyglot (75.0% vs 74.0%), and BFCL (70.4% vs 70.0%), while Claude Opus 4.5 narrowly leads on SWE-bench Verified (80.9% vs 80.8%).

Math: Claude Opus 4.6 leads across the board: MATH (92.0% vs 90.0%), GSM8K (97.0% vs 96.5%), and AIME 2025 (100.0% vs 72.0%).

Reasoning: Claude Opus 4.6 leads on both GPQA Diamond (91.3% vs 87.0%) and ARC-AGI (60.0% vs 57.0%).

Context: Both models support a 200K context window.

Pricing Comparison

Claude Opus 4.5 and Claude Opus 4.6 are priced identically: $5.0/1M input tokens and $25.0/1M output tokens for both models.
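Because the two models share identical rates, the cost of any request depends only on token counts. A minimal sketch of the arithmetic (the helper function and the example token counts are illustrative, not from the comparison):

```python
# Per-1M-token rates shared by Claude Opus 4.5 and 4.6, from the comparison above.
INPUT_RATE = 5.0    # USD per 1M input tokens
OUTPUT_RATE = 25.0  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at these rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 10K-token prompt producing a 2K-token response.
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.1000 for either model
```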

Speed Comparison

Claude Opus 4.6 is faster on both metrics: it generates output at 68 tok/s versus Claude Opus 4.5's 50 tok/s, and its time to first token is 1680 ms versus 1700 ms.
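These two numbers combine into a rough end-to-end latency estimate: time to first token plus generation time. A sketch using the figures above (the 500-token response length is an illustrative assumption):

```python
# Throughput (tok/s) and time to first token (ms), from the comparison above.
MODELS = {
    "Claude Opus 4.5": {"tok_per_s": 50, "ttft_ms": 1700},
    "Claude Opus 4.6": {"tok_per_s": 68, "ttft_ms": 1680},
}

def latency_seconds(model: str, output_tokens: int) -> float:
    """Rough end-to-end latency: time to first token + generation time."""
    m = MODELS[model]
    return m["ttft_ms"] / 1000 + output_tokens / m["tok_per_s"]

# Example: a 500-token response.
for name in MODELS:
    print(f"{name}: {latency_seconds(name, 500):.1f} s")
# Claude Opus 4.5: 11.7 s
# Claude Opus 4.6: 9.0 s
```

Most of the gap comes from throughput rather than time to first token, so the advantage grows with longer responses.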

Verdict

For developers prioritizing SWE-bench Verified performance, Claude Opus 4.5 holds a narrow edge, though Claude Opus 4.6 leads the other four coding benchmarks. For those who value general intelligence, math, and speed, Claude Opus 4.6 is the stronger choice.

Claude Opus 4.5 vs Claude Opus 4.6 — FAQ

Which is better, Claude Opus 4.5 or Claude Opus 4.6?

Claude Opus 4.6 wins on more benchmarks overall (13 vs 2). However, the best choice depends on your specific needs — each model excels in different areas.

How does Claude Opus 4.5 compare to Claude Opus 4.6 for coding?

Claude Opus 4.5 narrowly leads on SWE-bench Verified, scoring 80.9% compared to 80.8%, though Claude Opus 4.6 wins the remaining coding benchmarks. SWE-bench tests real-world software engineering by resolving actual GitHub issues.

Is Claude Opus 4.5 cheaper than Claude Opus 4.6?

No. Both models are priced identically at $5.0/1M input tokens and $25.0/1M output tokens.

Which is faster, Claude Opus 4.5 or Claude Opus 4.6?

Claude Opus 4.6 is faster, generating output at 68 tok/s compared to 50 tok/s. Faster output speed means shorter wait times for API responses.

What benchmarks does the Claude Opus 4.5 vs Claude Opus 4.6 comparison cover?

This comparison covers 18 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, IFEval, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, and more. Metrics span general intelligence, coding, math, reasoning, speed, and cost categories.