
Qwen3-1.7B

Parameters

1.7B

Context Length

32,768 tokens

Modality

Text

Architecture

Dense

License

Apache 2.0

Release Date

29 Apr 2025

Knowledge Cutoff

-

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

-

Number of Layers

28

Attention Heads

16

Key-Value Heads

8

Activation Function

SwiGLU

Normalization

RMS Normalization

Position Embedding

RoPE

System Requirements

VRAM requirements for different quantization methods and context sizes

Qwen3-1.7B

Qwen3-1.7B is a dense causal language model developed by Alibaba's Qwen Team, introduced as part of the Qwen3 series on April 29, 2025. This model is designed for general-purpose language tasks and is characterized by its compact 1.7 billion parameter count. Its architecture is optimized for efficient operation across various hardware configurations, including environments with limited resources and edge devices. The model supports a context length of 32,768 tokens, enabling it to process extensive documents and conversations.

A distinguishing architectural feature within the Qwen3 series, including the 1.7B variant, is its dual operational modes: "Thinking Mode" and "Non-Thinking Mode." The Thinking Mode facilitates complex logical reasoning, such as mathematical problem-solving and code generation, through a step-by-step reasoning process. In contrast, the Non-Thinking Mode provides rapid, direct responses suitable for general conversational applications. This hybrid approach enables dynamic switching between modes, optimizing performance based on task complexity and efficiency requirements.
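The mode switch is typically exposed through the chat template. The snippet below is a minimal sketch using Hugging Face transformers; the repository id Qwen/Qwen3-1.7B and the enable_thinking flag follow the Qwen3 series' published usage, but treat the exact arguments as assumptions to verify against the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-1.7B"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Solve 17 * 23 step by step."}]

# enable_thinking=True  -> step-by-step "Thinking Mode"
# enable_thinking=False -> fast, direct "Non-Thinking Mode"
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```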

The model's architecture consists of 28 transformer layers, employing Grouped Query Attention (GQA) with 16 query heads and 8 key-value heads. It integrates Rotary Positional Embeddings (RoPE) with an adjusted base frequency (ABF) to maintain positional accuracy across its extended context length. Further architectural refinements include QK-Norm (layer normalization applied to the query and key projections) and pre-normalization with RMSNorm for stable training. Qwen3-1.7B offers robust multilingual support, covering over 100 languages and dialects, and provides agent capabilities for tool integration.
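Because GQA caches keys and values for only 8 heads rather than all 16 query heads, the KV-cache footprint at long context is roughly halved compared with standard multi-head attention. The sketch below estimates that footprint from the figures above; the head dimension of 128 is an assumption (the hidden size is not listed here) and should be checked against the released configuration.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: int = 2) -> int:
    """Approximate KV-cache size: two tensors (K and V) per layer,
    each of shape [num_kv_heads, context_len, head_dim]."""
    return 2 * num_layers * num_kv_heads * head_dim * context_len * bytes_per_value

# Qwen3-1.7B figures from above; head_dim=128 is an assumed value.
cache = kv_cache_bytes(num_layers=28, num_kv_heads=8, head_dim=128,
                       context_len=32_768)
print(f"KV cache at the full 32,768-token context (FP16): {cache / 1024**3:.2f} GiB")
```

Under these assumptions the cache comes to roughly 3.5 GiB at FP16 for a full-length context, on top of the memory required for the weights themselves.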

About Qwen 3

The Alibaba Qwen 3 model family comprises dense and Mixture-of-Experts (MoE) architectures, with parameter counts from 0.6B to 235B. Key innovations include a hybrid reasoning system, offering 'thinking' and 'non-thinking' modes for adaptive processing, and support for extensive context windows, enhancing efficiency and scalability.



Evaluation Benchmarks


No evaluation benchmarks for Qwen3-1.7B available.


GPU Requirements

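As a rough rule of thumb, weight memory is approximately the parameter count times the bytes per parameter for the chosen quantization, with KV cache and runtime overhead added on top. The sketch below is an approximation only; real usage varies by framework, quantization metadata (scales, zero-points), and activation memory.

```python
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_vram_gib(num_params: float, quant: str) -> float:
    """Approximate VRAM for the model weights alone, ignoring
    quantization metadata and runtime/activation overhead."""
    return num_params * BYTES_PER_PARAM[quant] / 1024**3

for quant in ("fp16", "int8", "int4"):
    print(f"{quant:>5}: ~{weight_vram_gib(1.7e9, quant):.2f} GiB for weights")
```

Under these assumptions the 1.7B weights occupy roughly 3.2 GiB at FP16, 1.6 GiB at INT8, and 0.8 GiB at INT4, before KV cache and overhead.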
