Parameters: 7.3B
Context Length: 8,192 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 27 September 2023
Knowledge Cutoff: August 2021
Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 4096
Number of Layers: 32
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE (Rotary Position Embedding)
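The architecture values above correspond to the fields of the model's Hugging Face configuration. The following is a minimal sketch, assuming the `transformers` and `torch` packages are installed and the `mistralai/Mistral-7B-v0.1` checkpoint is accessible; it inspects the configuration and loads the weights in half precision for inference.

```python
# Sketch: inspect the Mistral-7B-v0.1 configuration and run a short generation.
# Assumes `transformers` and `torch` are installed and the checkpoint is accessible.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"

# Check the architecture without downloading the weights.
cfg = AutoConfig.from_pretrained(model_id)
print(cfg.hidden_size)          # 4096
print(cfg.num_hidden_layers)    # 32
print(cfg.num_attention_heads)  # 32
print(cfg.num_key_value_heads)  # 8 -> grouped-query attention (4 query heads share each KV head)
print(cfg.hidden_act)           # "silu" (the gated half of SwiGLU)

# Load the weights in half precision (~2 bytes per parameter) for inference.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Mistral 7B is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```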
VRAM requirements for different quantization methods and context sizes
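A rough estimate can be derived directly from the table above: weight memory is roughly the parameter count times the bytes per weight for the chosen quantization, and the KV cache adds 2 × layers × KV heads × head dimension × context length × bytes per element. The sketch below uses approximate byte widths for each quantization method and ignores runtime overhead such as activations, quantization scales, and framework buffers, so treat the numbers as lower bounds.

```python
# Rough VRAM estimate for Mistral-7B-v0.1 under different quantization methods
# and context sizes. Byte-per-weight figures are approximations; real quantized
# formats store extra scales/zero-points and runtimes add workspace overhead.

PARAMS = 7.3e9          # total parameters
N_LAYERS = 32
N_KV_HEADS = 8
HEAD_DIM = 4096 // 32   # hidden size / attention heads = 128

BYTES_PER_WEIGHT = {
    "fp16": 2.0,
    "int8": 1.0,
    "q4": 0.5,          # ~4-bit quantization
}

def estimate_vram_gib(quant: str, context: int, kv_bytes: float = 2.0) -> float:
    """Weights + KV cache in GiB; KV cache assumes fp16 keys/values by default."""
    weight_bytes = PARAMS * BYTES_PER_WEIGHT[quant]
    # Two tensors (K and V) per layer, one HEAD_DIM vector per KV head per token.
    kv_cache_bytes = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 1024**3

for quant in BYTES_PER_WEIGHT:
    for context in (1024, 8192):
        print(f"{quant:>4} @ {context:>5} tokens: ~{estimate_vram_gib(quant, context):.1f} GiB")
```

Because only 8 key-value heads are cached (rather than 32), the KV cache stays close to 1 GiB even at the full 8,192-token context, so the choice of weight quantization dominates total memory.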
Mistral-7B-v0.1 is a 7.3 billion parameter large language model developed by Mistral AI, engineered for strong performance and computational efficiency in natural language processing tasks. It is built on a decoder-only transformer architecture and combines Grouped-Query Attention with Sliding Window Attention for efficient processing of long sequences, while a Rolling Buffer Cache bounds key-value memory during inference. These design choices prioritize efficient inference and make the model practical to deploy across a range of applications; a small illustration of the sliding-window mechanism follows below.
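To make the sliding-window idea concrete, the illustrative sketch below builds the banded causal mask the mechanism implies: each token attends only to itself and the previous W−1 tokens instead of the full prefix. Mistral-7B-v0.1 uses a 4,096-token window; a tiny sequence and window are used here so the mask is easy to print. This is a conceptual sketch, not the model's actual attention implementation.

```python
# Illustrative sliding-window causal mask: query position i may attend to key
# positions j with i - W < j <= i. Mistral-7B-v0.1 uses W = 4096.
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (rows)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (columns)
    return (j <= i) & (j > i - window)      # causal AND within the window

mask = sliding_window_causal_mask(seq_len=8, window=3)
print(mask.int())
# Row 5 is True only for columns 3, 4, 5: the token sees itself and the two
# preceding tokens, so attention cost scales with the window, not the full
# sequence. The rolling buffer cache exploits the same bound by keeping only
# the most recent W keys and values.
```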
Rankings apply to local LLMs. No evaluation benchmarks are available for Mistral-7B-v0.1, so no overall or coding rank is listed.