Claude Opus 4.6 vs GPT-4.1

Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.

Anthropic's Claude Opus 4.6 wins 10 of 15 benchmarks against OpenAI's GPT-4.1, which leads on the remaining 5. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.

Category-by-Category Breakdown

General Intelligence: Claude Opus 4.6 leads on all three general-intelligence benchmarks: 1496 vs 1340 on Chatbot Arena ELO, 82.0% vs 78.0% on MMLU-Pro, and 91.0% vs 88.0% on IFEval.

Coding: Claude Opus 4.6 sweeps the coding benchmarks: 93.5% vs 89.0% on HumanEval+, 80.8% vs 50.0% on SWE-bench Verified, and 72.0% vs 55.0% on LiveCodeBench.

Math: Claude Opus 4.6 scores 92.0% vs 83.0% on MATH and 97.0% vs 94.0% on GSM8K.

Reasoning: Claude Opus 4.6 scores 91.3% vs 58.0% on GPQA Diamond and 60.0% vs 40.0% on ARC-AGI.

Context: GPT-4.1 offers a 1.0M-token context window, five times Claude Opus 4.6's 200K.

Pricing Comparison

Claude Opus 4.6 costs $5.00 per 1M input tokens and $25.00 per 1M output tokens, while GPT-4.1 costs $2.00 per 1M input and $8.00 per 1M output. At 2.5× less on input and roughly 3× less on output, GPT-4.1 is the more affordable option for API usage.
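To make the price gap concrete, here is a minimal Python sketch of per-request cost using the per-1M-token rates quoted above. The request_cost helper and the example token counts are hypothetical, chosen purely for illustration.

```python
# Minimal sketch of per-request API cost from the per-1M-token prices quoted
# above. The request_cost helper and example token counts are hypothetical.

PRICES = {  # USD per 1M tokens: (input, output)
    "Claude Opus 4.6": (5.00, 25.00),
    "GPT-4.1": (2.00, 8.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request: tokens times per-token price."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f}")
# Claude Opus 4.6: $0.0225
# GPT-4.1: $0.0080
```

Under these assumptions, the same request costs well under half as much on GPT-4.1, consistent with its cheaper input and output rates.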

Speed Comparison

Claude Opus 4.6 generates output at 68 tok/s versus GPT-4.1's 70 tok/s, a near tie on throughput, but time to first token is 1680 ms for Claude Opus 4.6 versus 450 ms for GPT-4.1. GPT-4.1's real advantage is latency: it starts responding almost four times sooner.
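A back-of-the-envelope latency model shows why time to first token matters: total streaming time is roughly TTFT plus output tokens divided by throughput. The response_time helper below is hypothetical; the figures are the ones quoted above, and real-world latency varies with load and prompt length.

```python
# Rough latency model: total time = time to first token + generation time.
# The response_time helper is hypothetical; figures are the ones quoted above.

def response_time(ttft_ms: float, tokens_per_s: float, output_tokens: int) -> float:
    """Approximate seconds until a streamed response finishes."""
    return ttft_ms / 1000 + output_tokens / tokens_per_s

for name, ttft_ms, speed in [("Claude Opus 4.6", 1680, 68), ("GPT-4.1", 450, 70)]:
    print(f"{name}: {response_time(ttft_ms, speed, 500):.1f} s for 500 tokens")
# Claude Opus 4.6: 9.0 s
# GPT-4.1: 7.6 s
```

For short responses the TTFT gap dominates, which is why GPT-4.1 feels snappier in interactive use despite the nearly identical throughput.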

Verdict

For developers prioritizing coding, general intelligence, and math, Claude Opus 4.6 has the edge. For those who value affordability and speed, GPT-4.1 is the stronger choice.


Claude Opus 4.6 vs GPT-4.1 — FAQ

Which is better, Claude Opus 4.6 or GPT-4.1?

Claude Opus 4.6 wins on more benchmarks overall (10 vs 5). However, the best choice depends on your specific needs — each model excels in different areas.

How does Claude Opus 4.6 compare to GPT-4.1 for coding?

Claude Opus 4.6 is better for coding, scoring 80.8% on SWE-bench Verified compared to GPT-4.1's 50.0%. SWE-bench tests real-world software engineering by resolving actual GitHub issues.

Is Claude Opus 4.6 cheaper than GPT-4.1?

No. GPT-4.1 is the cheaper model: it costs $2.00/1M input and $8.00/1M output tokens, while Claude Opus 4.6 costs $5.00/1M input and $25.00/1M output tokens.

Which is faster, Claude Opus 4.6 or GPT-4.1?

GPT-4.1 is faster. Its 70 tok/s output speed slightly exceeds Claude Opus 4.6's 68 tok/s, and its 450 ms time to first token is far below Claude Opus 4.6's 1680 ms. Lower latency and faster output mean shorter wait times for API responses.

What benchmarks does the Claude Opus 4.6 vs GPT-4.1 comparison cover?

This comparison covers 15 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, IFEval, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, and more. Metrics span general intelligence, coding, math, reasoning, speed, and cost categories.