Claude 3.5 Sonnet vs GPT-5.2
Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.
OpenAI's GPT-5.2 wins all 16 of 16 benchmarks against Anthropic's Claude 3.5 Sonnet. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.
Category-by-Category Breakdown
General Intelligence: GPT-5.2 outscores Claude 3.5 Sonnet 1475 to 1268 on Chatbot Arena ELO, 83.0% to 73.0% on MMLU-Pro, and 95.0% to 86.0% on IFEval.
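Chatbot Arena ratings follow the Elo convention, so a rating gap can be read as an expected head-to-head win probability. A minimal sketch of that conversion (the 207-point gap above is an assumption only in the sense that Arena ratings shift over time):

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A beats model B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# GPT-5.2 (1475) vs Claude 3.5 Sonnet (1268): roughly a 77% expected win rate.
print(f"{elo_win_prob(1475, 1268):.2f}")
```

A 207-point Elo gap is substantial: it predicts the higher-rated model wins about three out of four head-to-head votes.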
Coding: GPT-5.2 outscores Claude 3.5 Sonnet 95.0% to 81.7% on HumanEval+, 80.0% to 49.0% on SWE-bench Verified, and 80.0% to 38.0% on LiveCodeBench.
Math: GPT-5.2 leads 97.0% to 78.3% on MATH, 99.0% to 91.0% on GSM8K, and 100.0% to 23.0% on AIME 2025.
Reasoning: GPT-5.2 scores 92.4% on GPQA Diamond versus Claude 3.5 Sonnet's 59.4%, and 54.2% on ARC-AGI versus 15.0%.
Context: GPT-5.2 supports a 400K-token context window versus Claude 3.5 Sonnet's 200K.
Pricing Comparison
Claude 3.5 Sonnet costs $3.00 per 1M input tokens and $15.00 per 1M output tokens, while GPT-5.2 costs $1.80 per 1M input and $14.00 per 1M output, making GPT-5.2 the more affordable option for API usage.
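Per-token pricing is easiest to compare on a concrete workload. A quick sketch using the rates listed above (the model keys and the 2,000-in / 500-out example request are illustrative assumptions, not official API identifiers):

```python
# Per-million-token API rates from the comparison above (USD).
RATES = {
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
    "gpt-5.2": {"input": 1.80, "output": 14.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single API call in USD."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
for model in RATES:
    print(f"{model}: ${request_cost(model, 2000, 500):.4f}")
```

On this example request, Claude 3.5 Sonnet costs about $0.0135 versus roughly $0.0106 for GPT-5.2; the gap widens for prompt-heavy workloads, since the input-rate difference ($3.00 vs $1.80) is proportionally larger than the output-rate difference.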
Speed Comparison
Claude 3.5 Sonnet generates output at 70 tok/s compared to GPT-5.2's 90 tok/s, and the time to first token is 400 ms for Claude 3.5 Sonnet versus 380 ms for GPT-5.2. GPT-5.2 delivers faster throughput.
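Throughput and time to first token combine into end-to-end response latency. A rough estimate under the simplifying assumption of a constant streaming rate (real APIs vary with load):

```python
# Throughput (tok/s) and time-to-first-token (ms) from the comparison above.
SPEED = {
    "claude-3.5-sonnet": {"tok_per_s": 70, "ttft_ms": 400},
    "gpt-5.2": {"tok_per_s": 90, "ttft_ms": 380},
}

def response_latency_s(model: str, output_tokens: int) -> float:
    """Rough end-to-end latency: TTFT plus streaming time for the output."""
    s = SPEED[model]
    return s["ttft_ms"] / 1000 + output_tokens / s["tok_per_s"]

# Example: a 500-token response.
for model in SPEED:
    print(f"{model}: {response_latency_s(model, 500):.1f} s")
```

For a 500-token response this works out to roughly 7.5 s for Claude 3.5 Sonnet versus about 5.9 s for GPT-5.2; for long generations, throughput dominates the small TTFT difference.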
Verdict
GPT-5.2 leads across the board in coding, general intelligence, math, affordability, and speed, making it the stronger overall choice in this comparison.
Claude 3.5 Sonnet vs GPT-5.2 — FAQ
Which is better, Claude 3.5 Sonnet or GPT-5.2?
GPT-5.2 wins every benchmark in this comparison (16 to 0). The best choice can still depend on your specific needs, such as workload, ecosystem, or pricing constraints.
How does Claude 3.5 Sonnet compare to GPT-5.2 for coding?
GPT-5.2 is better for coding, scoring 80.0% on SWE-bench Verified compared to 49.0%. SWE-bench tests real-world software engineering by resolving actual GitHub issues.
Is Claude 3.5 Sonnet cheaper than GPT-5.2?
No, GPT-5.2 is cheaper. Claude 3.5 Sonnet costs $3.00 per 1M input and $15.00 per 1M output tokens, while GPT-5.2 costs $1.80 per 1M input and $14.00 per 1M output tokens.
Which is faster, Claude 3.5 Sonnet or GPT-5.2?
GPT-5.2 is faster, generating output at 90 tok/s compared to 70 tok/s. Faster output speed means shorter wait times for API responses.
What benchmarks does the Claude 3.5 Sonnet vs GPT-5.2 comparison cover?
This comparison covers 16 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, IFEval, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, and more. Metrics span general intelligence, coding, math, reasoning, speed, and cost categories.