Mistral 7B

Language model

Mistral 7B is a 7.3B-parameter language model that uses grouped-query attention (GQA) for faster inference and sliding window attention (SWA) to handle longer sequences at lower cost.
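To make the SWA idea concrete, here is a minimal sketch (not Mistral's actual implementation) of the attention mask it implies: each query position attends causally, but only to the most recent `window` key positions, so per-token attention cost stays bounded as the sequence grows. The function name and window size are illustrative.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: entry [i, j] is True if query i may attend to key j.

    Causal (j <= i) and limited to the last `window` positions
    (i - j < window), as in sliding window attention.
    """
    idx = np.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]          # no attending to the future
    within = idx[:, None] - idx[None, :] < window  # only the recent window
    return causal & within

# With window=3, position 5 can see positions 3, 4, 5 but not 0, 1, 2.
mask = sliding_window_mask(seq_len=6, window=3)
```

Stacking such layers still lets information flow beyond the window, since each layer's outputs already summarize their own local window.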
