Claude Opus 4.6 vs Qwen 3.5 397B

Side-by-side benchmark comparison across coding, math, reasoning, speed, and pricing.

Claude Opus 4.6 by Anthropic wins 11 of the 14 benchmarks against Qwen's Qwen 3.5 397B, which leads on the remaining 3. This head-to-head comparison covers coding, math, reasoning, speed, and pricing metrics from the AI Value Index.

Category-by-Category Breakdown

General Intelligence: Claude Opus 4.6 leads on both measures: 1496 vs 1350 on Chatbot Arena ELO, and 82.0% vs 81.0% on MMLU-Pro.

Coding: Claude Opus 4.6 sweeps the category: 93.5% vs 88.0% on HumanEval+, 80.8% vs 76.4% on SWE-bench Verified, and 72.0% vs 54.0% on LiveCodeBench.

Math: Claude Opus 4.6 scores 92.0% vs 87.0% on MATH, and 97.0% vs 95.0% on GSM8K.

Reasoning: Claude Opus 4.6 posts 91.3% vs 62.0% on GPQA Diamond, and 60.0% vs 45.0% on ARC-AGI.

Context: Claude Opus 4.6 supports a 200K-token context window versus Qwen 3.5 397B's 131K.

Pricing Comparison

Claude Opus 4.6 costs $5.00/1M input tokens and $25.00/1M output tokens, while Qwen 3.5 397B costs $0.60/1M input and $3.60/1M output. That makes Qwen 3.5 397B roughly 8× cheaper on input and 7× cheaper on output, and the clearly more affordable option for API usage.
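At these rates, per-request cost is simple arithmetic. A minimal sketch (the 2,000-input / 800-output token workload is an illustrative assumption, not a figure from the comparison):

```python
# USD per 1M tokens, (input, output), as listed in the pricing comparison.
PRICES = {
    "Claude Opus 4.6": (5.00, 25.00),
    "Qwen 3.5 397B": (0.60, 3.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the per-million-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical workload: 2,000 input tokens and 800 output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 800):.5f} per request")
```

At this workload the gap compounds quickly at scale: output tokens dominate Claude Opus 4.6's cost because its output rate is 5× its input rate.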

Speed Comparison

Claude Opus 4.6 generates output at 68 tok/s compared to Qwen 3.5 397B's 45 tok/s, giving it the higher throughput. Qwen 3.5 397B responds sooner, however, with a time to first token of 600 ms versus Claude Opus 4.6's 1680 ms.
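The two speed metrics pull in opposite directions, so a rough end-to-end latency estimate (time to first token plus generation time) shows which model feels faster for a given response length. A sketch using the figures above; the 500- and 50-token response lengths are illustrative assumptions:

```python
# (output speed in tok/s, time to first token in ms), from the speed comparison.
SPEED = {
    "Claude Opus 4.6": (68, 1680),
    "Qwen 3.5 397B": (45, 600),
}

def total_latency_s(model: str, output_tokens: int) -> float:
    """Approximate wall-clock seconds: TTFT plus steady-state generation time."""
    tok_per_s, ttft_ms = SPEED[model]
    return ttft_ms / 1000 + output_tokens / tok_per_s

for n in (500, 50):
    for model in SPEED:
        print(f"{model}, {n}-token response: {total_latency_s(model, n):.2f} s")
```

For long responses Claude Opus 4.6's higher throughput wins; for short responses Qwen 3.5 397B's lower time to first token makes it finish sooner.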

Verdict

For developers prioritizing coding, general intelligence, math, and throughput, Claude Opus 4.6 has the edge. For those who value affordability, Qwen 3.5 397B is the stronger choice.

Claude Opus 4.6 vs Qwen 3.5 397B — FAQ

Which is better, Claude Opus 4.6 or Qwen 3.5 397B?

Claude Opus 4.6 wins on more benchmarks overall (11 vs 3). However, the best choice depends on your specific needs — each model excels in different areas.

How does Claude Opus 4.6 compare to Qwen 3.5 397B for coding?

Claude Opus 4.6 is better for coding, scoring 80.8% on SWE-bench Verified compared to 76.4%. SWE-bench tests real-world software engineering by resolving actual GitHub issues.

Is Claude Opus 4.6 cheaper than Qwen 3.5 397B?

No, Qwen 3.5 397B is the cheaper model. Claude Opus 4.6 costs $5.00/1M input and $25.00/1M output tokens, while Qwen 3.5 397B costs $0.60/1M input and $3.60/1M output tokens.

Which is faster, Claude Opus 4.6 or Qwen 3.5 397B?

Claude Opus 4.6 generates output faster, at 68 tok/s compared to 45 tok/s, though Qwen 3.5 397B has the lower time to first token (600 ms vs 1680 ms). Higher output speed means shorter waits on long API responses.

What benchmarks does the Claude Opus 4.6 vs Qwen 3.5 397B comparison cover?

This comparison covers 14 benchmarks including Chatbot Arena ELO, MMLU-Pro, HumanEval+, MATH, SWE-bench Verified, GPQA Diamond, Output Speed, Time to First Token, and more. Metrics span general intelligence, coding, math, reasoning, speed, and cost categories.