Claude Sonnet 4.6 vs GPT-5
Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.
GPT-5 by OpenAI wins on 13 of 16 benchmarks against Claude Sonnet 4.6 by Anthropic, which leads on 3. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.
Category-by-Category Breakdown
General Intelligence: Claude Sonnet 4.6 leads on Chatbot Arena ELO (1395 vs 1390), while GPT-5 leads on MMLU-Pro (86.0% vs 84.0%) and IFEval (90.0% vs 89.0%).
Coding: The category is split. GPT-5 leads on HumanEval+ (92.0% vs 91.0%) and LiveCodeBench (78.0% vs 65.0%), while Claude Sonnet 4.6 leads on SWE-bench Verified (79.6% vs 74.9%) and BFCL tool calling (70.3% vs 59.2%).
Math: GPT-5 leads on both math benchmarks, scoring 94.6% on MATH (vs 88.0%) and 98.0% on GSM8K (vs 96.0%).
Reasoning: GPT-5 leads on GPQA Diamond (87.3% vs 74.1%) and ARC-AGI (50.0% vs 48.0%).
Context: GPT-5 supports a 400K-token context window, double Claude Sonnet 4.6's 200K.
Pricing Comparison
Claude Sonnet 4.6 costs $3.0 per 1M input tokens and $15.0 per 1M output tokens; GPT-5 costs $1.3 and $10.0 respectively. GPT-5 is the more affordable option for API usage.
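As a rough cost sketch using the prices above (the 2,000-input / 500-output token counts are illustrative assumptions, not measured workloads):

```python
def request_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost in USD for one API request, given per-million-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Prices from the comparison above (USD per 1M tokens); token counts are hypothetical.
claude_cost = request_cost(2000, 500, 3.0, 15.0)   # Claude Sonnet 4.6 -> $0.0135
gpt5_cost = request_cost(2000, 500, 1.3, 10.0)     # GPT-5 -> $0.0076
```

At these example volumes GPT-5 comes out roughly 40% cheaper per request; the gap widens for output-heavy workloads, where the $15.0 vs $10.0 output rate dominates.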
Speed Comparison
Claude Sonnet 4.6 generates output at 57 tok/s compared to GPT-5's 100 tok/s, and the time to first token is 790 ms for Claude Sonnet 4.6 versus 320 ms for GPT-5. GPT-5 delivers both faster throughput and lower latency.
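A back-of-the-envelope latency estimate combining the two figures above (time to first token plus streaming time; the 500-token response length is an assumption for illustration):

```python
def response_time_s(ttft_ms, tokens_per_s, output_tokens):
    """Approximate wall-clock seconds for a streamed response:
    time-to-first-token plus generation time at the sustained rate."""
    return ttft_ms / 1000 + output_tokens / tokens_per_s

# Figures from the comparison above; 500 output tokens is a hypothetical response.
claude_s = response_time_s(790, 57, 500)   # ~9.6 s
gpt5_s = response_time_s(320, 100, 500)    # ~5.3 s
```

By this estimate GPT-5 finishes a medium-length response in a bit more than half the time; for very short responses the TTFT gap (790 ms vs 320 ms) is the dominant factor.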
Verdict
For developers prioritizing real-world software engineering (SWE-bench Verified), tool calling (BFCL), and human preference (Chatbot Arena ELO), Claude Sonnet 4.6 has the edge. For those who value math, reasoning, context length, affordability, and speed, GPT-5 is the stronger choice.
Claude Sonnet 4.6 vs GPT-5 — FAQ
Which is better, Claude Sonnet 4.6 or GPT-5?
GPT-5 wins on more benchmarks overall (13 vs 3). However, the best choice depends on your specific needs — each model excels in different areas.
How does Claude Sonnet 4.6 compare to GPT-5 for coding?
Coding is split: Claude Sonnet 4.6 leads on SWE-bench Verified (79.6% vs 74.9%) and BFCL tool calling, while GPT-5 leads on HumanEval+ and LiveCodeBench. SWE-bench tests real-world software engineering by resolving actual GitHub issues, so it is often weighted most heavily for agentic coding work.
Is Claude Sonnet 4.6 cheaper than GPT-5?
No, GPT-5 is cheaper. Claude Sonnet 4.6 costs $3.0/1M input and $15.0/1M output tokens; GPT-5 costs $1.3/1M input and $10.0/1M output tokens.
Which is faster, Claude Sonnet 4.6 or GPT-5?
GPT-5 is faster, generating output at 100 tok/s compared to 57 tok/s. Faster output speed means shorter wait times for API responses.
What benchmarks does the Claude Sonnet 4.6 vs GPT-5 comparison cover?
This comparison covers 16 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, IFEval, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, and more. Metrics span general intelligence, coding, math, reasoning, speed, and cost categories.