Mistral has introduced Mixtral 8x22B, a sparse Mixture-of-Experts (SMoE) model.
It activates only 39 billion of its 141 billion total parameters per token, which makes it unusually cost-efficient for its size.
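To make the active-versus-total parameter distinction concrete, here is a minimal, illustrative sparse-MoE routing sketch in Python with NumPy. This is not Mixtral's actual implementation; the shapes and the top-2-of-8 routing are assumptions chosen to mirror the general Mixtral design, where each token only touches the parameters of its selected experts.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy sparse MoE layer: 8 experts, top-2 routing per token.
# Only the 2 selected experts run for a given token, so the active
# parameter count is a fraction of the total (analogous to 39B of 141B).
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

router_w = rng.normal(size=(d_model, n_experts))                 # gating weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    gate_logits = x @ router_w                  # score every expert
    top = np.argsort(gate_logits)[-top_k:]      # keep only the top-2 experts
    weights = softmax(gate_logits[top])         # renormalize over the chosen two
    # Only the selected experts' parameters are used for this token.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,)
```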
The model is fluent in five languages (English, French, Italian, German, and Spanish), has strong mathematics and coding capabilities, supports native function calling, and offers a 64K-token context window.
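Native function calling means the model can emit structured calls to user-defined tools. The sketch below is a hedged illustration using the `mistralai` Python client: the `get_weather` tool is hypothetical, and the exact client surface and model identifier should be checked against Mistral's current documentation.

```python
import os
from mistralai import Mistral  # assumes the official mistralai client (v1.x)

# Hypothetical tool definition; Mistral's function calling accepts
# JSON-schema tool descriptions in this general shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # illustrative tool name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
response = client.chat.complete(
    model="open-mixtral-8x22b",   # assumed platform ID for Mixtral 8x22B
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",           # let the model decide whether to call a tool
)

# If the model chose to call a tool, the call arrives as structured JSON
# rather than free text, ready to dispatch to the real function.
print(response.choices[0].message.tool_calls)
```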
Mixtral 8x22B is being released under the Apache 2.0 license.