Attention structure: Grouped-Query Attention (GQA)
Hidden dimension size: -
Number of layers: -
Attention heads: -
Key-value heads: -
Activation function: SwiGLU
Normalization: RMSNorm
Position embedding: Rotary Position Embedding (RoPE)
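The unspecified values above can be read directly from the published checkpoint's configuration. A minimal sketch, assuming the `01-ai/Yi-6B` repository on Hugging Face and the `transformers` library:

```python
# Read Yi-6B's architecture hyperparameters from its published config.
# Requires `pip install transformers` and network access to Hugging Face.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("01-ai/Yi-6B")

print(config.hidden_size)          # hidden dimension size
print(config.num_hidden_layers)    # number of layers
print(config.num_attention_heads)  # attention (query) heads
print(config.num_key_value_heads)  # key-value heads (< query heads under GQA)
```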
VRAM requirements for different quantization methods and context sizes
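A back-of-the-envelope estimate can stand in here: weight memory scales with bits per weight, while KV-cache memory scales with context length (and is kept small by GQA's reduced key-value head count). The sketch below is illustrative only; the layer, head, and dimension defaults are assumptions taken from the public Yi-6B checkpoint config, not from this page:

```python
# Rough VRAM estimate for Yi-6B: quantized weight memory plus KV-cache
# memory for a given context length. Defaults (32 layers, 4 KV heads,
# head dim 128) are assumed from the public checkpoint config.

def estimate_vram_gb(
    n_params: float = 6e9,      # parameter count
    bits_per_weight: int = 4,   # e.g. 16 (fp16), 8 (int8), 4 (int4)
    context_len: int = 4096,    # tokens held in the KV cache
    n_layers: int = 32,
    n_kv_heads: int = 4,        # GQA keeps this far below the query-head count
    head_dim: int = 128,
    kv_bytes: int = 2,          # fp16 cache entries
) -> float:
    weights = n_params * bits_per_weight / 8
    # K and V caches: 2 tensors per layer, each [context_len, n_kv_heads, head_dim]
    kv_cache = 2 * n_layers * context_len * n_kv_heads * head_dim * kv_bytes
    return (weights + kv_cache) / 1024**3

for bits in (16, 8, 4):
    print(f"{bits}-bit weights, 4k context: ~{estimate_vram_gb(bits_per_weight=bits):.1f} GiB")
```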
The Yi-6B model, developed by 01.AI, is a 6-billion-parameter large language model designed for efficient, accessible language processing. As part of the Yi model family, it is engineered to deliver strong performance at moderate resource cost, making it suitable for both personal and academic use. The model is distinguished by its bilingual capability: it was trained on a 3-trillion-token multilingual corpus and is proficient at both English and Chinese language understanding and generation.
Architecturally, Yi-6B is a dense transformer rather than a Mixture-of-Experts (MoE) design. Its attention mechanism uses Grouped-Query Attention (GQA), applied to both the 6B and 34B Yi models; GQA reduces training and inference costs relative to traditional Multi-Head Attention (MHA) without compromising performance, even at the smaller model size. The model uses SwiGLU as its activation function and RMSNorm for normalization, mirroring the architectural choices of Llama-family models, with which the Yi series shares its foundational structure. Positional information is encoded with Rotary Position Embeddings (RoPE), which support effective handling of long contexts.
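To make the GQA mechanism concrete, the sketch below shares a small set of key/value heads across a larger set of query heads, which is what shrinks the KV projections and cache relative to full MHA. This is a simplified stand-in, not Yi's actual implementation, and the head counts are illustrative assumptions:

```python
# Minimal Grouped-Query Attention: many query heads attend over a smaller
# set of key/value heads, broadcast to match. Requires PyTorch >= 2.0.
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    # q: [batch, n_q_heads, seq, head_dim]; k, v: [batch, n_kv_heads, seq, head_dim]
    groups = q.shape[1] // k.shape[1]       # query heads per KV head
    k = k.repeat_interleave(groups, dim=1)  # expand KV heads to match queries
    v = v.repeat_interleave(groups, dim=1)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

q = torch.randn(1, 32, 16, 128)  # 32 query heads (illustrative)
k = torch.randn(1, 4, 16, 128)   # 4 KV heads -> group size 8
v = torch.randn(1, 4, 16, 128)
out = grouped_query_attention(q, k, v)
print(out.shape)  # torch.Size([1, 32, 16, 128])
```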
Yi-6B is engineered for robust performance across a range of natural language processing tasks, including language understanding, commonsense reasoning, and reading comprehension. Its efficient design and open-weight release under the Apache 2.0 license make it applicable to scenarios ranging from rapid prototyping in real-time applications to domain-specific fine-tuning. The model has a default context window of 4,096 tokens, with variants extending the context length to 200,000 tokens for longer inputs.
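A minimal usage sketch for the open-weight release, assuming the `01-ai/Yi-6B` base checkpoint on Hugging Face and a GPU with sufficient memory (see the VRAM estimate above); the generation parameters are illustrative:

```python
# Load the base checkpoint with transformers and sample a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B")
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-6B", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("There's a place where time stands still.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this is the base model, not a chat-tuned variant, so it continues text rather than following instructions.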
No evaluation benchmarks are available for Yi-6B.