| Attribute | Value |
|---|---|
| Attention structure | Grouped-Query Attention |
| Hidden dimension size | - |
| Number of layers | 40 |
| Attention heads | 40 |
| Key/value heads | 8 |
| Activation function | - |
| Normalization | RMSNorm |
| Position embedding | RoPE |
Qwen3-14B is a causal language model developed by the Qwen team at Alibaba Cloud as part of the Qwen3 series. It uses a dense architecture with 14.8 billion parameters. A key design element is dynamic mode switching: the model can operate in a "thinking" mode for complex analytical tasks or a "non-thinking" mode for general-purpose dialogue. This dual capability optimizes utility across a broad range of natural language processing applications, providing enhanced reasoning for mathematics, code generation, and logical inference in thinking mode, and efficient responses for general dialogue and content generation in non-thinking mode.
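As a minimal sketch, assuming the Hugging Face `transformers` chat-template interface documented for Qwen3, the mode switch is exposed through an `enable_thinking` flag:

```python
# Minimal sketch: toggling Qwen3's thinking mode via the chat template.
# Assumes the Hugging Face `transformers` library and the Qwen/Qwen3-14B
# checkpoint; the `enable_thinking` flag follows the Qwen3 model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is the integral of x^2?"}]

# Thinking mode (the default): the model emits its reasoning first.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for fast, non-thinking responses
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

With `enable_thinking=True`, the output begins with a `<think>...</think>` block containing the reasoning trace, which can be stripped before display.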
Architecturally, Qwen3-14B incorporates a Grouped-Query Attention (GQA) mechanism, configured with 40 query heads and 8 key/value heads, which contributes to its computational efficiency. The model is structured with 40 layers. It supports a native context length of 32,768 tokens, expandable to 131,072 tokens by applying YaRN (Yet another RoPE extensioN) to its Rotary Position Embeddings. Further refinements include QK normalization of the query and key projections, applied across all Qwen3 models to enhance training stability and performance.
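To make the head-grouping arithmetic concrete, here is an illustrative PyTorch sketch (not Qwen3's actual implementation; the head dimension of 128 is an assumption) showing how 8 key/value heads are broadcast to serve 40 query heads, i.e. 5 query heads per KV head:

```python
import torch

# Illustrative GQA shapes only. 40 query heads share 8 key/value heads,
# so each KV head serves 40 / 8 = 5 query heads. head_dim = 128 assumed.
batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 16, 40, 8, 128
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand each KV head consecutively to cover its group of 5 query heads.
group = n_q_heads // n_kv_heads
k = k.repeat_interleave(group, dim=1)  # -> (1, 40, 16, 128)
v = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1) @ v
print(attn.shape)  # torch.Size([1, 40, 16, 128])
```

For the YaRN context extension, the Qwen3 model card documents adding a `rope_scaling` block (`rope_type: "yarn"`, `factor: 4.0`, `original_max_position_embeddings: 32768`) to the model's `config.json`, which scales the 32,768-token native window by 4x to 131,072 tokens.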
The model supports over 100 languages and dialects, providing multilingual processing capabilities. Its design also enables integration with external tools, facilitating agentic functionalities for addressing multi-step problems. These characteristics position Qwen3-14B as an adaptable asset for applications requiring analytical depth, such as advanced AI assistants, as well as interactive conversational systems.
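As a sketch of the tool-integration path, assuming the generic Hugging Face chat-template tool-use API (the `get_current_temperature` function is a hypothetical example tool, not part of Qwen3):

```python
# Hedged sketch: serializing a tool definition into a Qwen3 prompt via the
# Hugging Face chat-template tool-use API. The tool below is hypothetical.
from transformers import AutoTokenizer

def get_current_temperature(location: str) -> float:
    """Get the current temperature at a location.

    Args:
        location: The city and country, e.g. "Paris, France"
    """
    return 22.0  # stub value for illustration

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
messages = [{"role": "user", "content": "What is the temperature in Paris right now?"}]

# The chat template serializes the tool's signature and docstring into the
# prompt; the model can then respond with a structured tool call that an
# agent loop parses and executes before returning a final answer.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)
```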
The Alibaba Qwen 3 model family comprises dense and Mixture-of-Experts (MoE) architectures, with parameter counts from 0.6B to 235B. Key innovations include a hybrid reasoning system, offering 'thinking' and 'non-thinking' modes for adaptive processing, and support for extensive context windows, enhancing efficiency and scalability.
Rankings apply to local LLMs.

Rank: #9
| Benchmark | Score | Rank |
|---|---|---|
| LiveBench Reasoning | 0.74 | 🥉 3 |
| LiveBench Data Analysis | 0.68 | 6 |
| LiveBench Mathematics | 0.73 | 12 |
| LiveBench Coding | 0.58 | 13 |