Mistral 7B

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed
We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.
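The sliding window attention described above restricts each token to attend only to the previous W tokens, so per-token attention cost stays constant as the sequence grows. A minimal sketch of such a mask is below; the function name and the small window size are illustrative only (the paper's model uses a much larger window, reported as 4096 tokens), and this is not Mistral's implementation.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean attention mask: position i may attend to positions j
    satisfying i - window < j <= i (causal, limited to the last
    `window` tokens). Illustrative sketch, not the paper's code."""
    i = np.arange(seq_len)[:, None]  # query positions (rows)
    j = np.arange(seq_len)[None, :]  # key positions (columns)
    return (j <= i) & (j > i - window)

# Each row has at most `window` allowed positions, regardless of seq_len.
mask = sliding_window_mask(6, 3)
```

Because each layer only sees the previous W tokens directly, information still propagates further through depth: after k layers the effective receptive field is roughly k * W, which is how the model handles sequences longer than the window itself.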
2023-10-10 arXiv Inference Optimization Instruction-following Model Language Model Engineering