
Ministral-8B-2410

Parameters: 8B
Context Length: 128K
Modality: Text
Architecture: Dense
License: Mistral Research License
Release Date: 10 Oct 2024
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention
Hidden Dimension Size: 12288
Number of Layers: 36
Attention Heads: 32
Key-Value Heads: 8
Activation Function: -
Normalization: -
Position Embedding: RoPE

System Requirements

VRAM requirements for different quantization methods and context sizes

Ministral-8B-2410

The Ministral-8B-2410 is a state-of-the-art large language model developed by Mistral AI, comprising approximately 8.0 billion parameters. It is part of the "les Ministraux" model family, introduced alongside Ministral 3B, specifically optimized for local intelligence, on-device computing, and edge computing use cases. The primary objective behind this model family is to deliver compute-efficient and low-latency inference solutions for applications that operate in resource-constrained environments or require privacy-first local data processing. This model is also provided in an instruct-tuned variant, Ministral-8B-Instruct-2410.
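For local experimentation, the instruct-tuned variant can typically be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch, assuming the weights are published under the Hugging Face repo ID mistralai/Ministral-8B-Instruct-2410 and that the Mistral Research License terms have been accepted for the account used to download them.

```python
# Minimal sketch: loading the instruct variant for local inference.
# Assumes a recent `transformers` install and access to the (possibly gated)
# repository "mistralai/Ministral-8B-Instruct-2410".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Ministral-8B-Instruct-2410"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # roughly 16 GB of weights at bf16 for an 8B model
    device_map="auto",            # place layers on available GPU(s)/CPU
)

messages = [{"role": "user", "content": "Summarize why edge inference matters."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```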

The technical architecture of Ministral-8B-2410 is based on a dense Transformer network, featuring 36 layers with 32 attention heads and an embedding dimension of 4096, which projects to a hidden dimension of 12288. A key innovation in its design is the integration of a 128,000-token context window, facilitated by an interleaved sliding-window attention mechanism. This is complemented by Grouped Query Attention (GQA) with 8 key-value heads, enhancing inference speed and memory efficiency. The model utilizes the V3-Tekken tokenizer, supporting a vocabulary size of 131,072 tokens, optimizing its ability to process diverse linguistic inputs.
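To make the effect of Grouped Query Attention concrete, the short calculation below estimates KV-cache memory at the full 128K context from the figures above (36 layers, 32 query heads, 8 key-value heads, embedding dimension 4096, hence a head dimension of 128). It is a back-of-the-envelope sketch, not an official sizing formula, and it ignores the additional savings from the interleaved sliding-window attention.

```python
# Back-of-the-envelope KV-cache estimate for Ministral-8B-2410 (figures from above).
layers = 36
n_heads = 32          # query heads
n_kv_heads = 8        # key-value heads under GQA
embed_dim = 4096
head_dim = embed_dim // n_heads   # 128
bytes_per_value = 2               # fp16/bf16 cache

def kv_cache_bytes(context_len: int, kv_heads: int) -> int:
    # 2 tensors (K and V) per layer, each of shape [kv_heads, context_len, head_dim]
    return 2 * layers * kv_heads * context_len * head_dim * bytes_per_value

ctx = 128_000
gqa = kv_cache_bytes(ctx, n_kv_heads)
mha = kv_cache_bytes(ctx, n_heads)   # hypothetical full multi-head baseline
print(f"GQA KV cache @128K ctx: {gqa / 1024**3:.1f} GiB")   # ~17.6 GiB
print(f"MHA KV cache @128K ctx: {mha / 1024**3:.1f} GiB")   # ~70.3 GiB (4x larger)
```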

Ministral-8B-2410 demonstrates capabilities across a range of natural language processing tasks, including content generation, question answering, and code generation or assistance. It is noted for its strong performance in multilingual contexts, supporting 10 major languages, and its built-in support for function calling, enabling advanced API interactions. Its design makes it particularly suitable for practical applications such as on-device translation, internet-independent smart assistants, local data analytics, and autonomous robotics, where its low-latency and efficient processing characteristics are advantageous. The model can also function as an efficient intermediary for handling function calls within complex, multi-step agentic workflows.
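The paragraph above mentions built-in function calling; the sketch below illustrates the general pattern with a hypothetical get_weather tool. The schema layout and the dispatch helper are illustrative assumptions, not the exact interface exposed by any particular Mistral client or serving stack.

```python
import json

# Hypothetical tool exposed to the model (JSON-schema style definition).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"Sunny, 21 C in {city}"  # stand-in for a real API call

# Suppose the model replied with a tool call; dispatch it locally and feed
# the result back as a tool message for the next generation step.
model_tool_call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
result = {"get_weather": get_weather}[model_tool_call["name"]](
    **json.loads(model_tool_call["arguments"])
)
followup_message = {"role": "tool", "name": "get_weather", "content": result}
print(followup_message)
```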

About Ministral

The Ministral model family, developed by Mistral AI, includes 3B and 8B parameter versions for on-device and edge computing. Designed for compute efficiency and low latency, these models support up to 128K context length. The 8B version incorporates an interleaved sliding-window attention pattern for efficient inference.
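The interleaved sliding-window attention mentioned above limits how far back each token can attend in the windowed layers, which keeps attention cost and cache size bounded at long contexts. The snippet below builds such a mask with NumPy as a minimal illustration; the window length of 4 is arbitrary, and the model's actual window size and interleaving pattern are not specified here.

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """True where query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i                    # no attention to future tokens
    within_window = (i - j) < window   # only the last `window` tokens are visible
    return causal & within_window

# Toy example: 8 tokens, window of 4 (illustrative values only).
print(sliding_window_causal_mask(8, 4).astype(int))
```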


Evaluation Benchmarks

Rankings apply to local LLMs.

Overall Rank: #23
Coding Rank: -

Benchmark Scores and Rankings

General Knowledge (MMLU): 0.65 (rank #16)

GPU Requirements

Required VRAM and a recommended GPU depend on the chosen weight quantization method and the context size (roughly 1k to 125k tokens in the original calculator).
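As a rough guide, the weight-memory portion of the VRAM budget can be approximated from the parameter count and the bytes per weight of each quantization level. The sketch below uses common rule-of-thumb figures and an assumed 20% overhead; it is not the calculator's exact formula, and the long-context KV cache (see the GQA estimate earlier on this page) must be added on top.

```python
# Rough VRAM estimate for an 8B-parameter model under common quantizations.
# Rule-of-thumb figures only; actual usage depends on the runtime, activations,
# and the KV cache, which grows with context length.
PARAMS = 8.0e9
BYTES_PER_WEIGHT = {"fp16/bf16": 2.0, "int8": 1.0, "q4 (4-bit)": 0.5}

def weight_vram_gib(quant: str, overhead: float = 1.2) -> float:
    """Model-weight memory in GiB, with ~20% overhead for buffers and activations."""
    return PARAMS * BYTES_PER_WEIGHT[quant] * overhead / 1024**3

for quant in BYTES_PER_WEIGHT:
    print(f"{quant:>10}: ~{weight_vram_gib(quant):.1f} GiB")
# fp16/bf16: ~17.9 GiB, int8: ~8.9 GiB, q4: ~4.5 GiB (approximate)
```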