Claude Haiku 4.5 by Anthropic demonstrates fast output speed. View detailed benchmark data including scores across coding, math, reasoning, speed, and cost metrics.
General Benchmarks
Coding Benchmarks
Reasoning Benchmarks
Speed Benchmarks
Cost Benchmarks
Context Benchmarks
Claude Haiku 4.5 — Benchmark Scores Overview
Scores are normalized to a 0-100 percentage scale for visual comparison; Elo scores are mapped from the 1100-1500 range onto 0-100.
Compare Claude Haiku 4.5 With
Claude Haiku 4.5 — Frequently Asked Questions
How intelligent is Claude Haiku 4.5?
Claude Haiku 4.5 scores 1220 on the Chatbot Arena Elo rating, placing it in the entry-level tier of AI models. This score is based on blind, head-to-head human preference voting.
How much does Claude Haiku 4.5 cost?
Claude Haiku 4.5 costs $1.00 per 1M input tokens and $5.00 per 1M output tokens. This is mid-range pricing for its capability level.
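A quick sketch of what those rates mean per request, using only the listed prices (the prompt and response sizes below are illustrative, not from the source):

```python
# Per-token rates derived from the listed pricing:
# $1.00 per 1M input tokens, $5.00 per 1M output tokens.
INPUT_RATE = 1.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 5.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10K-token prompt with a 2K-token response.
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0200
```

Note that output tokens dominate cost quickly at a 5:1 rate ratio, so long responses matter more than long prompts here.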
How fast is Claude Haiku 4.5?
Claude Haiku 4.5 generates output at 180 tokens per second, which is very fast compared to other models. The time to first token is 120 ms.
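The two figures above combine into a rough end-to-end latency estimate; this is a simple sketch assuming steady streaming at the quoted rate, ignoring network and queueing overhead:

```python
# Rough latency model from the listed figures:
# 120 ms time to first token, then 180 output tokens per second.
TTFT_S = 0.120           # time to first token, in seconds
TOKENS_PER_SECOND = 180  # sustained output throughput

def estimated_latency_s(output_tokens: int) -> float:
    """Approximate wall-clock time to stream a full response."""
    return TTFT_S + output_tokens / TOKENS_PER_SECOND

# Example: a 900-token response.
print(f"{estimated_latency_s(900):.2f} s")  # → 5.12 s
```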
How good is Claude Haiku 4.5 at coding?
Claude Haiku 4.5 achieves 30.0% on SWE-bench Verified, demonstrating moderate real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.
How good is Claude Haiku 4.5 at math and reasoning?
Claude Haiku 4.5 scores 70.0% on the MATH benchmark (competition-level mathematics). It also achieves 40.0% on GPQA Diamond, a graduate-level science reasoning benchmark.
What is the context window of Claude Haiku 4.5?
Claude Haiku 4.5 has a context window of 200K tokens. This determines how much text, conversation history, and code the model can process in a single request.
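One practical use of that number is budgeting whether a document fits in a single request. The sketch below uses the common ~4 characters-per-token heuristic, which is only an approximation; real counts depend on the tokenizer:

```python
# Will a document fit in the 200K-token context window?
# Uses the rough ~4 chars/token heuristic (an approximation only;
# actual token counts depend on the model's tokenizer).
CONTEXT_WINDOW = 200_000
CHARS_PER_TOKEN = 4  # heuristic, not exact

def fits_in_context(text: str, reserved_for_output: int = 8_000) -> bool:
    """Estimate whether `text` plus an output budget fits the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# A 400K-character document is roughly 100K tokens:
print(fits_in_context("x" * 400_000))  # → True
```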
Who created Claude Haiku 4.5?
Claude Haiku 4.5 was created by Anthropic. It is classified as a budget model in the AI Value Index.
Is Claude Haiku 4.5 open source?
No, Claude Haiku 4.5 is a proprietary model. It is available through Anthropic's API and compatible providers.