Output Speed
339 tok/s
Context Window
131K
GPT-OSS 120B by OpenAI delivers fast output speed. View detailed benchmark data, including scores across coding, math, reasoning, speed, and cost metrics.
GPT-OSS 120B — Frequently Asked Questions
How fast is GPT-OSS 120B?
GPT-OSS 120B generates output at 339 tokens per second, which is very fast compared to other models. The time to first token is 440 ms.
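The two figures above combine into a simple latency estimate: total time is roughly the time to first token plus the output length divided by the generation speed. A minimal sketch, using only the numbers quoted here:

```python
# Rough end-to-end latency estimate for GPT-OSS 120B,
# from the figures above: 440 ms time to first token
# and 339 tokens/second output speed.

TTFT_S = 0.440        # time to first token, in seconds
TOKENS_PER_S = 339    # output speed

def estimated_latency_s(output_tokens: int) -> float:
    """Approximate wall-clock time to generate `output_tokens` tokens."""
    return TTFT_S + output_tokens / TOKENS_PER_S

# A 500-token response takes roughly 1.9 seconds.
print(round(estimated_latency_s(500), 2))  # → 1.91
```

Real-world latency also depends on the provider, batching, and network overhead, so treat this as a lower-bound estimate.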
What is the context window of GPT-OSS 120B?
GPT-OSS 120B has a context window of 131K tokens. This determines how much text, conversation history, and code the model can process in a single request.
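A quick way to sanity-check whether a prompt fits in that window is a character-based token estimate. The sketch below uses a rough 4-characters-per-token heuristic (an assumption, not the model's actual tokenizer) and reserves some of the window for the response:

```python
# Back-of-envelope check of whether text fits in GPT-OSS 120B's
# 131K-token context window. CHARS_PER_TOKEN = 4 is a common rough
# heuristic for English text, not the model's real tokenizer.

CONTEXT_WINDOW = 131_000
CHARS_PER_TOKEN = 4  # heuristic; exact counts require the tokenizer

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate whether `text` plus an output budget fits in the window."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("hello " * 10_000))  # ~60K chars ≈ 15K tokens → True
```

For precise counts, run the model's own tokenizer; heuristics can be off by a large margin for code or non-English text.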
Who created GPT-OSS 120B?
GPT-OSS 120B was created by OpenAI. It is classified as an open-source model in the AI Value Index.
Is GPT-OSS 120B open source?
Yes, GPT-OSS 120B is an open-source model. The model weights are publicly available for download and self-hosting.