
Inception: Mercury

Context: 128k tokens
Lowest price: $0.25 per 1M tokens
Providers: 1 available

Price Comparison

| Provider | Input / Output (per 1M tokens) | Latency | Status |
|---|---|---|---|
| OpenRouter (lowest) | $0.25 / $0.75 | ... | Verified |
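The listed rates imply a simple per-request cost calculation. A minimal sketch in Python, assuming the OpenRouter prices shown above ($0.25 per 1M input tokens, $0.75 per 1M output tokens):

```python
# Estimate per-request cost from token counts at the listed OpenRouter rates.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.75  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt producing a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")  # → $0.000875
```

Output tokens are priced at 3x input tokens here, so long completions dominate the bill even for prompt-heavy workloads.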

About This Model

Mercury is the first diffusion large language model (dLLM). Applying a discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed enables developers to provide respons...

Quick Start