Claude Opus 4.6 vs GPT-5
Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.
GPT-5 by OpenAI wins on 9 of 17 benchmarks against Claude Opus 4.6 by Anthropic, which leads on 8. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.
Category-by-Category Breakdown
General Intelligence: Claude Opus 4.6 leads on Chatbot Arena ELO (1496 vs 1390) and IFEval (91.0% vs 90.0%), while GPT-5 leads on MMLU-Pro (86.0% vs 82.0%).
Coding: Claude Opus 4.6 leads on HumanEval+ (93.5% vs 92.0%), SWE-bench Verified (80.8% vs 74.9%), and BFCL (70.4% vs 59.2%), while GPT-5 leads on LiveCodeBench (78.0% vs 72.0%).
Math: GPT-5 leads on MATH (94.6% vs 92.0%) and GSM8K (98.0% vs 97.0%), while Claude Opus 4.6 leads on AIME 2025 (100.0% vs 94.6%).
Reasoning: Claude Opus 4.6 leads on both benchmarks, scoring 91.3% on GPQA Diamond (vs 87.3%) and 60.0% on ARC-AGI (vs 50.0%).
Context: GPT-5 offers a 400K-token context window compared to Claude Opus 4.6's 200K.
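Taken together, those numbers reproduce the headline win count. The short Python sketch below tallies the thirteen quality benchmarks listed above; the dictionary layout is ours for illustration, not the AI Value Index's data format. GPT-5's remaining four wins come from the two speed and two pricing metrics covered next, which yields the 9-to-8 headline.

```python
# Tally per-benchmark wins from the scores quoted in the breakdown above.
# Only the 13 quality benchmarks are included here; the page's 17-metric
# total also counts the two speed and two pricing metrics.

SCORES = {  # (Claude Opus 4.6, GPT-5); higher is better for every entry
    "Chatbot Arena ELO":  (1496, 1390),
    "MMLU-Pro":           (82.0, 86.0),
    "IFEval":             (91.0, 90.0),
    "HumanEval+":         (93.5, 92.0),
    "SWE-bench Verified": (80.8, 74.9),
    "LiveCodeBench":      (72.0, 78.0),
    "BFCL":               (70.4, 59.2),
    "MATH":               (92.0, 94.6),
    "GSM8K":              (97.0, 98.0),
    "AIME 2025":          (100.0, 94.6),
    "GPQA Diamond":       (91.3, 87.3),
    "ARC-AGI":            (60.0, 50.0),
    "Context Length":     (200, 400),  # in K tokens
}

claude_wins = sum(c > g for c, g in SCORES.values())
gpt5_wins = sum(g > c for c, g in SCORES.values())
print(claude_wins, gpt5_wins)  # 8 5
```

Adding GPT-5's four wins on output speed, time to first token, input price, and output price brings the totals to 8 for Claude Opus 4.6 and 9 for GPT-5.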
Pricing Comparison
Claude Opus 4.6 costs $5.00 per 1M input tokens and $25.00 per 1M output tokens, while GPT-5 costs $1.30 per 1M input and $10.00 per 1M output. GPT-5 is the more affordable option for API usage.
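As a rough guide to what those rates mean in practice, here is a minimal Python sketch that prices a single request under each model's published rates. The token counts and the request_cost helper are illustrative assumptions, not any vendor's SDK.

```python
# Estimate the API cost of one request from per-million-token rates.
# Rates are the ones quoted above; token counts are illustrative.

PRICING = {
    "Claude Opus 4.6": {"input": 5.00, "output": 25.00},  # $ per 1M tokens
    "GPT-5":           {"input": 1.30, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request for the given model."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a 2,000-token prompt that produces a 500-token answer.
for model in PRICING:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f}")
# Claude Opus 4.6: $0.0225
# GPT-5: $0.0076
```

At these example token counts, GPT-5 comes out roughly 3x cheaper per request, consistent with the rate gap above.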
Speed Comparison
Claude Opus 4.6 generates output at 68 tok/s compared to GPT-5's 100 tok/s, and the time to first token is 1680 ms for Claude Opus 4.6 versus 320 ms for GPT-5. GPT-5 delivers both faster throughput and much lower first-token latency.
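To see how those two figures combine, the sketch below uses a simplified latency model: time to first token plus the remaining tokens divided by output speed. The response_time_s helper and the 500-token example are our own illustrative assumptions; real streaming latency varies with load and prompt size.

```python
# Rough end-to-end latency: TTFT plus token streaming time at the
# measured output speed. Figures are the ones quoted above.

SPEED = {
    "Claude Opus 4.6": {"ttft_ms": 1680, "tok_per_s": 68},
    "GPT-5":           {"ttft_ms": 320,  "tok_per_s": 100},
}

def response_time_s(model: str, output_tokens: int) -> float:
    """Estimate seconds until a response of output_tokens finishes streaming."""
    s = SPEED[model]
    return s["ttft_ms"] / 1000 + output_tokens / s["tok_per_s"]

# Example: a 500-token response.
for model in SPEED:
    print(f"{model}: {response_time_s(model, 500):.1f} s")
# Claude Opus 4.6: 9.0 s
# GPT-5: 5.3 s
```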
Verdict
For developers prioritizing coding, reasoning, and general intelligence, Claude Opus 4.6 has the edge. For those who value math performance, affordability, and speed, GPT-5 is the stronger choice.
Claude Opus 4.6 vs GPT-5 — FAQ
Which is better, Claude Opus 4.6 or GPT-5?
GPT-5 wins on more benchmarks overall (9 vs 8). However, the best choice depends on your specific needs — each model excels in different areas.
How does Claude Opus 4.6 compare to GPT-5 for coding?
Claude Opus 4.6 is better for coding, scoring 80.8% on SWE-bench Verified compared to 74.9%. SWE-bench tests real-world software engineering by resolving actual GitHub issues.
Is Claude Opus 4.6 cheaper than GPT-5?
No, GPT-5 is cheaper. Claude Opus 4.6 costs $5.00/1M input and $25.00/1M output tokens. GPT-5 costs $1.30/1M input and $10.00/1M output tokens.
Which is faster, Claude Opus 4.6 or GPT-5?
GPT-5 is faster, generating output at 100 tok/s compared to 68 tok/s. Faster output speed means shorter wait times for API responses.
What benchmarks does the Claude Opus 4.6 vs GPT-5 comparison cover?
This comparison covers 17 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, IFEval, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, and more. Metrics span general intelligence, coding, math, reasoning, context, speed, and cost categories.