Mixtral-8x22B-Instruct-v0.1

Vendor: Mistral
Lowest price: $0.65 per 1M tokens
Providers: 1 available
Context: N/A tokens

Price Comparison

Provider | Input / Output | Latency | Status
DeepInfra (lowest) | $0.65 / $0.65 | ... | Verified
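At a flat per-token rate, estimating the cost of a request is a one-line calculation. A minimal sketch using the listed $0.65 per 1M tokens rate (the token counts in the example are arbitrary, not from this page):

```python
PRICE_PER_M = 0.65  # USD per 1,000,000 tokens (same rate for input and output here)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at a flat input/output rate."""
    return (input_tokens + output_tokens) * PRICE_PER_M / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # → $0.001625
```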

About This Model

This is the instruction fine-tuned version of Mixtral-8x22B, the latest and largest mixture-of-experts (MoE) large language model (LLM) from Mistral AI. This state-of-the-art model combines 8 expert models of 22B parameters each; during inference, 2 experts are selected per token. This architecture al...
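The routing idea described above — a gating network scores all 8 experts, only the top 2 run, and their outputs are combined by the softmax-normalized gate weights — can be sketched as follows. This is a toy illustration with tiny stand-in "experts" and random router weights, not Mixtral's actual architecture or parameters:

```python
import math
import random

random.seed(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 4  # Mixtral-8x22B uses 8 experts, top-2 routing

# Toy router weights: one score vector per expert (illustrative only).
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def expert(i: int, x: list[float]) -> list[float]:
    # Stand-in "expert": scales the input. In Mixtral each expert is a large FFN.
    return [(i + 1) * v for v in x]

def moe_forward(x: list[float]) -> tuple[list[float], list[int]]:
    # 1. Router scores every expert for this token.
    logits = [sum(w * v for w, v in zip(W[i], x)) for i in range(N_EXPERTS)]
    # 2. Keep only the TOP_K highest-scoring experts (sparse activation).
    chosen = sorted(range(N_EXPERTS), key=lambda i: logits[i])[-TOP_K:]
    # 3. Softmax over just the chosen experts gives the mixing weights.
    exps = [math.exp(logits[i]) for i in chosen]
    total = sum(exps)
    weights = [e / total for e in exps]
    # 4. Output is the weighted sum of the selected experts' outputs.
    out = [0.0] * DIM
    for i, w in zip(chosen, weights):
        for d, v in enumerate(expert(i, x)):
            out[d] += w * v
    return out, chosen

y, chosen = moe_forward([0.5, -1.0, 0.25, 2.0])
print(f"experts active: {len(chosen)} of {N_EXPERTS}")  # 2 of 8
```

Because only 2 of the 8 experts run per token, the per-token compute is a fraction of what a dense model with the same total parameter count would need.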

Quick Start