
Mistral-Small-2501

Parameters: 24B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 13 Jan 2025
Knowledge Cutoff: Oct 2023

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 32768
Number of Layers: 40
Attention Heads: 24
Key-Value Heads: 6
Activation Function: SwiGLU
Normalization: -
Position Embedding: RoPE


Mistral-Small-2501

Mistral Small 3, specifically the Mistral-Small-2501 variant, is a 24-billion-parameter language model developed by Mistral AI, engineered for efficiency and low-latency performance in generative AI tasks. The model ships as both a pre-trained base model and an instruction-tuned checkpoint, making it suitable for a range of language-centric applications. Its release under the Apache 2.0 license underscores Mistral AI's commitment to an open ecosystem, enabling widespread adoption and modification.

The architectural foundation of Mistral-Small-2501 is a dense transformer network with fewer layers than competing models of comparable size, which reduces the time per forward pass. The model uses Grouped-Query Attention (GQA) to improve inference efficiency and Rotary Position Embeddings (RoPE) for positional encoding, with the SwiGLU activation function in its layers. A context window of 32,768 tokens lets the model process and generate extended sequences of text, and multilingual support broadens its applicability across global contexts.
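The efficiency idea behind GQA is that several query heads share a single key-value head, which shrinks the KV cache and speeds up decoding. The sketch below illustrates this with the head counts from the table above (24 query heads, 6 key-value heads); the head dimension of 128 is an assumption for illustration, and the real implementation adds RoPE, KV caching, and learned projections.

```python
# Minimal sketch of Grouped-Query Attention (GQA), using the head counts
# listed in the specification table above. head_dim=128 is an assumption.
import torch
import torch.nn.functional as F

n_heads, n_kv_heads, head_dim = 24, 6, 128
group_size = n_heads // n_kv_heads  # 4 query heads share each KV head

batch, seq = 1, 16
q = torch.randn(batch, n_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand each KV head so it is shared by its group of query heads.
k = k.repeat_interleave(group_size, dim=1)  # -> (batch, 24, seq, head_dim)
v = v.repeat_interleave(group_size, dim=1)

# Standard scaled dot-product attention with a causal mask.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([1, 24, 16, 128])
```

Because only the 6 KV heads are cached during generation, the KV cache is a quarter the size it would be with full multi-head attention at the same query-head count.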

Mistral Small 3 (Mistral-Small-2501) is designed for practical deployment with an emphasis on rapid response times. It is well suited to scenarios that demand quick, accurate language processing, such as conversational agents, automated function calling, and, through fine-tuning, specialized domain-specific applications. Its efficient architecture allows deployment on a range of computational platforms, including consumer-grade hardware, making it a fit for local inference and applications with strict latency requirements.
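For local experimentation, a minimal Hugging Face transformers script might look like the following. The repository id is an assumption based on Mistral AI's published checkpoints; note that bfloat16 weights for a 24B model need roughly 48 GB of GPU memory, so quantized builds are common on consumer hardware.

```python
# Hedged example: local inference with Hugging Face transformers.
# The repo id below is assumed; check Mistral AI's Hugging Face page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what GQA does."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```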

About Mistral Small 3

Mistral Small 3, a 24-billion-parameter model, was designed for efficient, low-latency generative AI tasks. Its optimized architecture supports local deployment and includes multilingual capabilities and a 32,768-token context window.


Other Mistral Small 3 Models
  • No related models

Evaluation Benchmarks

Rankings are relative to other local LLMs.

Overall Rank: #37

Benchmark Scores and Ranks

Category                  Benchmark            Score   Rank
-                         -                    0.75    #6
-                         -                    0.35    #8
-                         -                    0.91    #9
-                         -                    0.81    #11
Agentic Coding            LiveBench Agentic    0.08    #12
-                         -                    0.38    #12
General Knowledge         MMLU                 0.68    #13
-                         -                    0.38    #15
Professional Knowledge    MMLU Pro             0.66    #16
-                         -                    0.50    #17
-                         -                    0.37    #18
-                         -                    0.52    #18
Graduate-Level QA         GPQA                 0.45    #20
-                         -                    0.38    #25

Rankings

Overall Rank: #37

Coding Rank: #34

GPU Requirements

VRAM needs vary with the quantization method chosen for the model weights and with the context size (1k to 32k tokens); the full calculator reports the required VRAM and a recommended GPU for each combination.
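As a rough rule of thumb (a back-of-the-envelope sketch, not the page's calculator), required VRAM is approximately the weight footprint, parameter count times bytes per parameter, plus a KV cache that grows linearly with context length. The snippet below uses the layer and KV-head counts from the specification table; the head dimension of 128 is an assumption.

```python
# Rough VRAM estimate: weights + KV cache. An approximation only; it
# ignores activations and framework overhead.
PARAMS = 24e9                            # 24B parameters
LAYERS, KV_HEADS, HEAD_DIM = 40, 6, 128  # head_dim is an assumption

def vram_gb(bytes_per_weight: float, context: int, kv_bytes: float = 2.0) -> float:
    weights = PARAMS * bytes_per_weight
    # Per token, each layer caches one key and one value vector per KV head.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * kv_bytes * context
    return (weights + kv_cache) / 1e9

for name, bpw in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
    print(f"{name}: ~{vram_gb(bpw, 32_768):.1f} GB at 32k context")
```

By this estimate, 4-bit quantization fits a full 32k context in roughly 16 GB, which is why GPUs in the 16 to 24 GB class are a common recommendation for running this model locally.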