
Mistral-7B-Instruct-v0.1

Parameters: 7.3B
Context Length: 8,192 tokens (8K)
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 27 Sept 2023
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 4096
Layers: 32
Attention Heads: 32
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm
Position Embedding: RoPE (Rotary Position Embedding)
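For reference, the specification above maps directly onto a Hugging Face `MistralConfig`. This is only a sketch: fields not listed in the table (vocabulary size, MLP width, RoPE theta, and so on) are left at library defaults, which may not match the released checkpoint exactly.

```python
from transformers import MistralConfig

# Spec table expressed as a transformers MistralConfig (unlisted fields
# stay at library defaults and may differ from the released checkpoint).
config = MistralConfig(
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
    num_key_value_heads=8,   # grouped-query attention: 4 query heads per KV head
    hidden_act="silu",       # the SiLU gate inside the SwiGLU MLP
    sliding_window=4096,     # sliding-window attention span
)
print(config.hidden_size // config.num_attention_heads)  # head_dim = 128
```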


Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 model is an instruction-tuned variant of the Mistral-7B-v0.1 generative text model, developed by Mistral AI. Its primary purpose is to facilitate conversational AI and assistant tasks by precisely interpreting and responding to instructional prompts. This model is designed for efficiency, providing a compact yet performant solution for language processing applications.
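A minimal usage sketch with Hugging Face transformers, assuming the `mistralai/Mistral-7B-Instruct-v0.1` checkpoint and sufficient GPU memory are available. The instruct variant expects the `[INST] ... [/INST]` chat format, which `apply_chat_template` builds from a message list:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package to be installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The tokenizer's chat template wraps the user turn in [INST] ... [/INST].
messages = [{"role": "user", "content": "Explain grouped-query attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```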

Architecturally, Mistral-7B-Instruct-v0.1 is a decoder-only transformer model. It incorporates several advancements to enhance computational efficiency and context management. These include Grouped-Query Attention (GQA) for accelerated inference and Sliding-Window Attention (SWA), which enables processing of longer input sequences more effectively by attending to a fixed window of prior hidden states. The model utilizes Rotary Position Embedding (RoPE) for positional encoding and employs RMS Normalization. Its tokenization is handled by a Byte-fallback BPE tokenizer.
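To make the GQA mechanics concrete, the sketch below shows the tensor shapes implied by the specification: 32 query heads share 8 key-value heads (4 queries per KV head, head dimension 4096 / 32 = 128). Each KV head is simply broadcast across its group of query heads before standard scaled dot-product attention:

```python
import torch

# Shapes from the spec table: 32 query heads, 8 KV heads, head_dim = 128.
batch, seq, n_heads, n_kv_heads, head_dim = 1, 16, 32, 8, 128
group = n_heads // n_kv_heads  # 4 query heads per KV head

q = torch.randn(batch, n_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand each KV head across its group, then run ordinary attention.
k = k.repeat_interleave(group, dim=1)  # (batch, 32, seq, head_dim)
v = v.repeat_interleave(group, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim**0.5
out = torch.softmax(scores, dim=-1) @ v  # (batch, 32, seq, head_dim)
```

The KV projections and cache are a quarter of their multi-head-attention size, which is where GQA's inference speedup and memory savings come from.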

Regarding its capabilities, Mistral-7B-Instruct-v0.1 is applicable across various text-based scenarios. It is adept at generating coherent text, answering questions, and performing general natural language processing tasks. Specific applications include conversational AI systems, educational tools, customer support interfaces, and knowledge retrieval agents. Its design also supports real-time content generation and energy-efficient AI deployments due to its optimized architecture.

About Mistral 7B

Mistral 7B, a 7.3 billion parameter model, uses a decoder-only transformer architecture. It combines Sliding-Window Attention and Grouped-Query Attention for efficient processing of long sequences, and a rolling buffer cache keeps key-value memory bounded at the attention window size during generation.
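A toy illustration of the rolling buffer idea, with a small window `W` standing in for the model's 4096-token attention window: the cache slot for position i is i mod W, so memory stays constant no matter how long generation runs (real implementations also track which slots are valid and their positions):

```python
# Rolling buffer KV cache, reduced to its core: fixed-size storage,
# position i writes to slot i % W, overwriting the oldest entry.
W = 4
cache = [None] * W
for i, token_kv in enumerate(["kv0", "kv1", "kv2", "kv3", "kv4", "kv5"]):
    cache[i % W] = token_kv
print(cache)  # ['kv4', 'kv5', 'kv2', 'kv3'] -- oldest entries overwritten
```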



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for Mistral-7B-Instruct-v0.1.

Rank: -
Coding Rank: -

GPU Requirements

VRAM requirements depend on the quantization method applied to the model weights and on the context size (1k, 4k, or 8k tokens); see the estimate sketch below.
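As a rough guide, weight memory scales with bytes per parameter, while the KV cache scales with context length; thanks to GQA (8 KV heads rather than 32), the cache is a quarter of what full multi-head attention would need. A back-of-envelope sketch that ignores activations, quantization metadata, and framework overhead:

```python
# Rough VRAM estimate for Mistral-7B-Instruct-v0.1 (weights + fp16 KV cache).
PARAMS = 7.3e9
N_LAYERS, N_KV_HEADS, HEAD_DIM = 32, 8, 128

def kv_cache_gib(context_len: int, bytes_per_elem: int = 2) -> float:
    """K and V tensors (2x) per layer, per KV head, per token, in fp16."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * context_len * bytes_per_elem / 1024**3

for name, bytes_per_weight in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    weights_gib = PARAMS * bytes_per_weight / 1024**3
    print(f"{name}: weights ~ {weights_gib:.1f} GiB, "
          f"KV cache @ 8k ctx ~ {kv_cache_gib(8192):.1f} GiB")
```

Under these assumptions, fp16 weights alone need roughly 13.6 GiB, int8 about 6.8 GiB, and int4 about 3.4 GiB, with about 1 GiB more for the KV cache at the full 8K context.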
