Mistral-7B-v0.1

Parameters: 7.3B
Context Length: 8,192 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 27 Sept 2023
Knowledge Cutoff: Aug 2021

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 4096
Number of Layers: 32
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: Rotary Position Embedding (RoPE)
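
These figures correspond to the fields exposed by the model's published configuration. A minimal sketch for inspecting them, assuming the `transformers` library is installed and the `mistralai/Mistral-7B-v0.1` checkpoint is reachable on the Hugging Face Hub:

```python
from transformers import AutoConfig

# Fetch the published configuration (assumes network access to the Hugging Face Hub).
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")

print(config.hidden_size)          # expected: 4096
print(config.num_hidden_layers)    # expected: 32
print(config.num_attention_heads)  # expected: 32 (query heads)
print(config.num_key_value_heads)  # expected: 8 (grouped-query attention)
print(config.hidden_act)           # expected: "silu", the SwiGLU gate activation
```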

System Requirements

VRAM requirements depend on the quantization method applied to the model weights and on the context size used at inference; a rough estimate is sketched below.
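
As a back-of-the-envelope estimate, weight memory is roughly the parameter count multiplied by the bytes per parameter for the chosen quantization, plus a key-value cache that grows linearly with context length. The sketch below is a minimal illustration using the architecture figures from the table above; the per-parameter byte counts and the 20% overhead factor are assumptions, and actual usage varies by inference framework and batch size.

```python
# Back-of-the-envelope VRAM estimate for Mistral-7B-v0.1.
# Byte counts per parameter and the 20% overhead factor are assumptions;
# real usage depends on the inference framework, batch size, and kernels.
PARAMS = 7.3e9
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

N_LAYERS, N_KV_HEADS = 32, 8
HEAD_DIM = 4096 // 32  # hidden dimension / attention heads = 128

def kv_cache_bytes(context_tokens: int, bytes_per_value: float = 2.0) -> float:
    # Two tensors (K and V) per layer, each of shape [n_kv_heads, context, head_dim].
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_tokens * bytes_per_value

def vram_gb(quant: str, context_tokens: int, overhead: float = 1.2) -> float:
    weights = PARAMS * BYTES_PER_PARAM[quant]
    return (weights + kv_cache_bytes(context_tokens)) * overhead / 1024**3

for quant in BYTES_PER_PARAM:
    print(f"{quant}: ~{vram_gb(quant, 8192):.1f} GB at 8k context")
```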

Mistral-7B-v0.1

Mistral-7B-v0.1 is a 7.3 billion parameter large language model developed by Mistral AI, designed to balance strong performance on natural language processing tasks with computational efficiency. The model uses a decoder-only transformer architecture and prioritizes efficient inference, making it practical to deploy across a range of applications.
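
For context, loading the checkpoint and generating text typically looks like the following. This is a minimal sketch, assuming the `transformers` and `accelerate` libraries are installed, a GPU with enough VRAM for half-precision weights is available, and the `mistralai/Mistral-7B-v0.1` checkpoint is reachable on the Hugging Face Hub.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half-precision weights, roughly 15 GB of VRAM
    device_map="auto",          # requires the accelerate package
)

# v0.1 is a base model, so the prompt is plain completion text.
inputs = tokenizer("Mistral 7B is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since v0.1 is a base (non-instruct) model, it is prompted with plain text completions rather than a chat template.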

About Mistral 7B

Mistral 7B, a 7.3 billion parameter model, uses a decoder-only transformer architecture. It combines Sliding Window Attention and Grouped-Query Attention to process long sequences efficiently, and a Rolling Buffer Cache bounds the memory used by the key-value cache during generation.
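
To make the sliding-window idea concrete: each token attends only to itself and a fixed number of preceding tokens rather than the entire prefix. The snippet below builds the corresponding attention mask with a toy window of 4 for readability (the model's published window is 4096); it is illustrative only, not the model's actual implementation.

```python
import torch

# Toy sliding-window causal mask (window of 4 for readability;
# Mistral 7B's published window is 4096).
seq_len, window = 8, 4
pos_q = torch.arange(seq_len).unsqueeze(1)  # query positions (rows)
pos_k = torch.arange(seq_len).unsqueeze(0)  # key positions (columns)

# A query may attend to keys at or before its own position,
# and no more than `window - 1` positions back.
mask = (pos_k <= pos_q) & (pos_k > pos_q - window)
print(mask.int())
```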


Evaluation Benchmarks

No evaluation benchmarks are available for Mistral-7B-v0.1.
