
Qwen3-0.6B

Parameters: 600M
Context Length: 32,768 tokens (32K)
Modality: Text
Architecture: Dense
License: Apache 2.0
Release Date: 29 Apr 2025
Knowledge Cutoff: -

Technical Specifications

Attention Structure: Grouped-Query Attention (GQA)
Hidden Dimension: 1024
Layers: 24
Attention Heads: 16
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMSNorm (pre-normalization)
Positional Embeddings: RoPE (Rotary Positional Embeddings)

System Requirements

VRAM requirements across quantization methods and context sizes.
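As a rough offline substitute for the page's interactive calculator, the sketch below estimates weight and KV-cache memory from the figures in the specification table above (hidden size 1024, 16 attention heads, 8 KV heads, 24 layers). It assumes fp16 storage and ignores activation memory, framework overhead, and quantization, so treat the numbers as approximate lower bounds.

```python
# Back-of-the-envelope VRAM estimate for Qwen3-0.6B, using the spec
# table above. Illustrative only: real usage varies with the runtime,
# attention kernels, and quantization scheme.

def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Memory for the model weights alone."""
    return n_params * bytes_per_param / 2**30

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_value: float) -> float:
    """KV cache: two tensors (K and V) per layer, per KV head."""
    return (2 * layers * kv_heads * head_dim
            * context_len * bytes_per_value) / 2**30

# Values taken from the specification table above.
layers, kv_heads = 24, 8
head_dim = 1024 // 16          # hidden size / attention heads = 64

print(f"fp16 weights: {weight_memory_gib(0.6e9, 2):.2f} GiB")
print(f"KV cache @ 32,768 tokens (fp16): "
      f"{kv_cache_gib(layers, kv_heads, head_dim, 32768, 2):.2f} GiB")
```

Under these assumptions the full 32,768-token context costs roughly 1.1 GiB for weights plus about 1.5 GiB of KV cache; quantizing the weights (e.g., to 4-bit) shrinks the first term by roughly 4x but leaves the cache term unchanged unless the cache is quantized as well.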

Qwen3-0.6B

Qwen3-0.6B is a foundational large language model developed by Alibaba Cloud and one of the dense variants in the Qwen3 model family. It is engineered for efficient understanding and generation of natural language across a broad range of tasks. Its compact size makes it well suited to deployments where computational efficiency is a primary constraint, while retaining capability for applications such as logical reasoning, mathematical problem solving, code synthesis, creative writing, and natural dialogue.

The Qwen3 series introduces a hybrid reasoning system that integrates both a 'thinking' mode for complex, multi-step reasoning and a 'non-thinking' mode for rapid, context-driven responses within a unified framework. This allows for dynamic mode switching based on user queries or chat templates, enabling a balance between latency and performance adaptable to task complexity. The architecture of the Qwen3 dense models, including Qwen3-0.6B, is built upon refinements observed in previous iterations, incorporating features such as Grouped Query Attention (GQA), SwiGLU activation, Rotary Positional Embeddings (RoPE), and RMSNorm with pre-normalization.
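The mode switch is exposed through the chat template. The snippet below is a minimal sketch using the Hugging Face transformers API, following the usage documented for the Qwen3 family: the enable_thinking flag toggles between the two modes, and prompt-level soft switches (/think, /no_think) serve the same purpose inside an ongoing conversation.

```python
# Minimal sketch of switching Qwen3's thinking mode via the
# Hugging Face transformers chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 23?"}]

# With enable_thinking=True the model emits intermediate reasoning
# before its final answer; set it to False for fast, direct replies.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:],
                       skip_special_tokens=True))
```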

Qwen3-0.6B has been trained on an expansive corpus of approximately 36 trillion tokens, covering 119 languages and dialects. This extensive multilingual capability supports a wide range of international applications, including translation and cross-lingual information retrieval. The training regimen involves a three-stage pretraining process: an initial stage for general language competence, a second stage focused on knowledge-intensive data (e.g., STEM, coding, reasoning), and a third stage for enhancing long-context comprehension by extending training sequence lengths up to 32,768 tokens. This model also demonstrates strong agent capabilities, facilitating integration with external tools for automation and complex workflow orchestration.
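As an illustration of the tool-integration point, the sketch below registers a hypothetical get_weather function with the chat template. The tools argument of apply_chat_template is a transformers feature, and the function schema here is invented for the example; in a real agent loop, the model's emitted tool call would be parsed, executed, and fed back as a tool-role message (the Qwen team also publishes the Qwen-Agent framework for this purpose).

```python
# Hedged sketch of exposing an external tool to Qwen3 through the
# transformers chat template. get_weather is a hypothetical tool
# defined only for this example, not part of any real API.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Hangzhou?"}]

# The rendered prompt instructs the model to emit a structured tool
# call, which the surrounding agent loop executes and returns.
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
print(prompt)
```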

About Qwen 3

The Alibaba Qwen 3 model family comprises dense and Mixture-of-Experts (MoE) architectures, with parameter counts from 0.6B to 235B. Key innovations include a hybrid reasoning system, offering 'thinking' and 'non-thinking' modes for adaptive processing, and support for extensive context windows, enhancing efficiency and scalability.



Evaluation Benchmarks

Rankings apply to local LLMs.

No evaluation benchmarks are available for Qwen3-0.6B.

Rankings

Rank: -
Coding Rank: -

GPU Requirements

[Interactive VRAM calculator: choose a quantization method for the model weights and a context size (1k / 16k / 32k tokens) to see the required VRAM and recommended GPUs.]