Claude Sonnet 4 by Anthropic demonstrates strong general intelligence and excellent coding ability. View detailed benchmark data, including scores across coding, math, reasoning, speed, and cost metrics.
General Benchmarks
Coding Benchmarks
Reasoning Benchmarks
Speed Benchmarks
Cost Benchmarks
Context Benchmarks
Claude Sonnet 4 — Benchmark Scores Overview
Scores are normalized to a percentage scale for visual comparison. ELO scores are mapped from the 1100-1500 range onto a 0-100 scale.
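The ELO-to-percentage mapping described above can be sketched as a simple linear rescale. This is an illustrative implementation under the stated assumption (1100-1500 mapped to 0-100); the function name and clamping behavior are this sketch's own choices, not the site's actual code.

```python
def normalize_elo(elo, lo=1100, hi=1500):
    """Map an ELO score linearly onto a 0-100 scale, clamped to the endpoints."""
    pct = (elo - lo) / (hi - lo) * 100
    return max(0.0, min(100.0, pct))

# Claude Sonnet 4's Chatbot Arena score of 1365 lands at:
print(normalize_elo(1365))  # 66.25
```

With this mapping, a score of 1365 plots at 66.25 on the chart's 0-100 axis.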
Compare Claude Sonnet 4 With
Claude Sonnet 4 — Frequently Asked Questions
How intelligent is Claude Sonnet 4?
Claude Sonnet 4 has a Chatbot Arena ELO rating of 1365, making it a high-performing AI model. This score is based on blind head-to-head human preference voting.
How much does Claude Sonnet 4 cost?
Claude Sonnet 4 costs $3.00 per 1M input tokens and $15.00 per 1M output tokens. This is mid-range pricing for its capability level.
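Given per-1M-token pricing, the cost of a single request is straightforward arithmetic. A minimal sketch, assuming the prices quoted above (the function name and example token counts are illustrative):

```python
def request_cost(input_tokens, output_tokens,
                 input_price=3.00, output_price=15.00):
    """Estimate request cost in USD; prices are per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token response:
print(request_cost(10_000, 2_000))  # ~$0.06
```

Note that output tokens dominate the bill at this price ratio: here 2K output tokens cost as much as the entire 10K-token prompt.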
How fast is Claude Sonnet 4?
Claude Sonnet 4 generates output at 75 tokens per second, which is slower than many comparable models, a trade-off that prioritizes quality over raw speed. The time to first token is 400 ms.
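The two numbers above combine into a rough wall-clock estimate for a streamed response: time to first token plus tokens divided by throughput. A sketch under those assumed figures (400 ms TTFT, 75 tokens/s); real latency varies with load and prompt size:

```python
def response_latency(output_tokens, ttft_ms=400, tokens_per_sec=75):
    """Rough wall-clock time in seconds for a streamed response:
    time to first token plus generation time at steady throughput."""
    return ttft_ms / 1000 + output_tokens / tokens_per_sec

# A 500-token response would take roughly:
print(round(response_latency(500), 2))  # ~7.07 seconds
```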
How good is Claude Sonnet 4 at coding?
Claude Sonnet 4 achieves 72.7% on SWE-bench Verified, demonstrating excellent real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.
How good is Claude Sonnet 4 at math and reasoning?
Claude Sonnet 4 scores 85.0% on the MATH benchmark (competition-level mathematics). It also achieves 75.4% on GPQA Diamond, a graduate-level science reasoning benchmark.
What is the context window of Claude Sonnet 4?
Claude Sonnet 4 has a context window of 200K tokens. This determines how much text, conversation history, and code the model can process in a single request.
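To gauge whether a document fits in the 200K-token window before sending it, a common rule of thumb for English text is roughly 4 characters per token. This heuristic and the function below are illustrative only; an actual tokenizer gives exact counts.

```python
def estimated_tokens(text, chars_per_token=4):
    """Rough token estimate using the ~4-characters-per-token heuristic
    for English text (an approximation, not a real tokenizer)."""
    return len(text) / chars_per_token

def fits_in_context(text, context_tokens=200_000):
    """Rough check against a 200K-token context window."""
    return estimated_tokens(text) <= context_tokens

# 200K tokens is on the order of 800K characters of English text.
print(fits_in_context("x" * 500_000))  # True
```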
Who created Claude Sonnet 4?
Claude Sonnet 4 was created by Anthropic. It is classified as a mid-tier model in the AI Value Index.
Is Claude Sonnet 4 open source?
No, Claude Sonnet 4 is a proprietary model. It is available through Anthropic's API and compatible providers.