
GPT-4.1 Mini — Benchmark Scores, Pricing & Performance Analysis

Budget · OpenAI

Chatbot Arena ELO: 1250
Output Speed: 160 tok/s
Input Cost: $0.40/1M tokens
Output Cost: $1.60/1M tokens
Context Window: 1.0M tokens

GPT-4.1 Mini by OpenAI combines fast output speed with competitive pricing. The sections below present detailed benchmark data, including scores across coding, math, reasoning, speed, and cost metrics.

GPT-4.1 Mini — Benchmark Scores Overview

Scores are normalized to a percentage scale for visual comparison; ELO scores are mapped linearly from the 1100-1500 range onto 0-100.
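The mapping above can be sketched as a simple linear rescale. This is a minimal illustration of the stated 1100-1500 → 0-100 mapping; the clamping of out-of-range scores is an assumption, not something the page specifies.

```python
def normalize_elo(elo: float, lo: float = 1100.0, hi: float = 1500.0) -> float:
    """Linearly map a Chatbot Arena ELO score onto a 0-100 scale.

    Clamping to [0, 100] is an assumption for scores outside the range.
    """
    pct = (elo - lo) / (hi - lo) * 100.0
    return max(0.0, min(100.0, pct))

# GPT-4.1 Mini's ELO of 1250 lands at 37.5 on this scale.
print(normalize_elo(1250))
```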

GPT-4.1 Mini — Frequently Asked Questions

How intelligent is GPT-4.1 Mini?

GPT-4.1 Mini scores 1250 on the Chatbot Arena ELO rating, making it a mid-tier AI model. This score is based on blind head-to-head human preference voting.

How much does GPT-4.1 Mini cost?

GPT-4.1 Mini costs $0.40 per 1M input tokens and $1.60 per 1M output tokens. This makes it one of the more affordable models.
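A per-request cost estimate follows directly from those rates. A minimal sketch, using the listed prices; the token counts in the example are illustrative, not from the source.

```python
# Listed rates: $0.40 per 1M input tokens, $1.60 per 1M output tokens.
INPUT_RATE = 0.40 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.60 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10k-token prompt with a 2k-token reply costs about $0.0072.
print(round(request_cost(10_000, 2_000), 4))
```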

How fast is GPT-4.1 Mini?

GPT-4.1 Mini generates output at 160 tokens per second, which is very fast compared to other models. The time to first token is 130 ms.
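Those two figures give a rough end-to-end latency estimate: time to first token plus generation time for the remaining tokens. A hedged sketch assuming a constant streaming rate, which real-world throughput will vary from.

```python
TTFT_S = 0.130        # time to first token, seconds (listed: 130 ms)
TOKENS_PER_SEC = 160  # listed output speed

def generation_time(output_tokens: int) -> float:
    """Rough wall-clock estimate in seconds, assuming a constant rate."""
    return TTFT_S + output_tokens / TOKENS_PER_SEC

# A 1000-token reply would take roughly 6.4 seconds at these figures.
print(round(generation_time(1000), 2))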

How good is GPT-4.1 Mini at coding?

GPT-4.1 Mini achieves 35.0% on SWE-bench Verified, demonstrating moderate real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.

How good is GPT-4.1 Mini at math and reasoning?

GPT-4.1 Mini scores 72.0% on the MATH benchmark (competition-level mathematics). It also achieves 42.0% on GPQA Diamond, a graduate-level science reasoning benchmark.

What is the context window of GPT-4.1 Mini?

GPT-4.1 Mini has a context window of 1.0M tokens. This determines how much text, conversation history, and code the model can process in a single request.

Who created GPT-4.1 Mini?

GPT-4.1 Mini was created by OpenAI. It is classified as a budget model in the AI Value Index.

Is GPT-4.1 Mini open source?

No, GPT-4.1 Mini is a proprietary model. It is available through OpenAI's API and compatible providers.