GPT-4.1 Nano — Benchmark Scores, Pricing & Performance Analysis

Budget tier (OpenAI)

Chatbot Arena ELO: 1120
Output Speed: 200 tok/s
Input Cost: $0.10/1M tokens
Output Cost: $0.40/1M tokens
Context Window: 1.0M tokens

GPT-4.1 Nano by OpenAI combines fast output speed with competitive pricing. View detailed benchmark data below, including scores across coding, math, reasoning, speed, and cost metrics.

GPT-4.1 Nano — Benchmark Scores Overview

Scores are normalized to a percentage scale for visual comparison; ELO scores are mapped linearly from the 1100-1500 range onto 0-100.
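The linear ELO mapping described above can be sketched as follows (a minimal illustration, assuming scores outside the 1100-1500 range are clamped; the page does not specify out-of-range behavior):

```python
def normalize_elo(elo: float, lo: float = 1100, hi: float = 1500) -> float:
    """Clamp an ELO score to [lo, hi] and rescale it linearly to 0-100."""
    clamped = max(lo, min(hi, elo))
    return (clamped - lo) / (hi - lo) * 100

# GPT-4.1 Nano's ELO of 1120 lands near the bottom of the scale:
print(normalize_elo(1120))  # 5.0
```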

GPT-4.1 Nano — Frequently Asked Questions

How intelligent is GPT-4.1 Nano?

GPT-4.1 Nano scores 1120 on the Chatbot Arena ELO rating, making it an entry-level AI model. This score is based on blind head-to-head human preference voting.

How much does GPT-4.1 Nano cost?

GPT-4.1 Nano costs $0.10 per 1M input tokens and $0.40 per 1M output tokens. This makes it one of the more affordable models.
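At these rates, the cost of a single request is straightforward to estimate from its token counts. A minimal sketch (token counts in the example are hypothetical):

```python
INPUT_COST_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_COST_PER_M = 0.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_COST_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_COST_PER_M

# Example: 10,000 input tokens and 1,000 output tokens
print(f"${request_cost(10_000, 1_000):.6f}")  # $0.001400
```

Even a million such requests would cost about $1,400, which is what makes the model attractive for high-volume workloads.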

How fast is GPT-4.1 Nano?

GPT-4.1 Nano generates output at 200 tokens per second, which is very fast compared to other models. The time to first token is 90 ms.
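A rough end-to-end latency estimate follows from combining the two figures above: time to first token, plus generation time at the sustained rate. This is a back-of-the-envelope sketch, assuming a constant output rate:

```python
def estimate_latency_s(output_tokens: int,
                       ttft_ms: float = 90,
                       tok_per_s: float = 200) -> float:
    """Estimated seconds until a response completes: TTFT plus generation time."""
    return ttft_ms / 1000 + output_tokens / tok_per_s

# A 500-token response would take roughly 2.6 seconds end to end:
print(round(estimate_latency_s(500), 2))  # 2.59
```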

How good is GPT-4.1 Nano at coding?

GPT-4.1 Nano achieves 18.0% on SWE-bench Verified, demonstrating basic real-world software engineering capability. This benchmark tests the model's ability to resolve actual GitHub issues.

How good is GPT-4.1 Nano at math and reasoning?

GPT-4.1 Nano scores 55.0% on the MATH benchmark (competition-level mathematics). It also achieves 28.0% on GPQA Diamond, a graduate-level science reasoning benchmark.

What is the context window of GPT-4.1 Nano?

GPT-4.1 Nano has a context window of 1.0M tokens. This determines how much text, conversation history, and code the model can process in a single request.
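To see how the window constrains a request, here is a sketch of a pre-flight budget check. The ~4 characters-per-token ratio is a crude heuristic (an assumption, not an exact tokenizer), and the output reservation is a hypothetical parameter:

```python
CONTEXT_WINDOW_TOKENS = 1_000_000  # 1.0M-token context window

def fits_in_context(prompt: str, reserved_for_output: int = 4_096) -> bool:
    """Heuristic check: does the prompt plus an output budget fit the window?

    Uses a rough ~4 chars/token estimate; a real check should use the
    provider's tokenizer.
    """
    approx_prompt_tokens = len(prompt) / 4
    return approx_prompt_tokens + reserved_for_output <= CONTEXT_WINDOW_TOKENS

print(fits_in_context("hello world " * 10_000))  # True
```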

Who created GPT-4.1 Nano?

GPT-4.1 Nano was created by OpenAI. It is classified as a budget model in the AI Value Index.

Is GPT-4.1 Nano open source?

No, GPT-4.1 Nano is a proprietary model. It is available through OpenAI's API and compatible providers.