
Phi-4-Mini

Parameters

3.8B

Context Length

128K

Modality

Text

Architecture

Dense

License

MIT

Release Date

27 Feb 2025

Knowledge Cutoff

Jun 2024

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

3072

Number of Layers

32

Attention Heads

24

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

RoPE

System Requirements

VRAM requirements for different quantization methods and context sizes

Phi-4-Mini

Microsoft Phi-4-Mini is a lightweight, open model from the Phi-4 family, engineered to operate efficiently in resource-constrained environments. This model is constructed from a combination of high-quality synthetic data and filtered public web content, with a particular emphasis on data dense in reasoning. Its core architecture is a dense, decoder-only Transformer, optimized with techniques such as grouped-query attention (GQA) and LongRoPE positional encoding to enhance inference speed and manage extended context lengths. The model incorporates an expanded vocabulary of 200,064 tokens, facilitating broad multilingual support.
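To make the grouped-query attention (GQA) design concrete: the 24 query heads share only 8 key-value heads, so each KV head serves a group of 3 query heads and the KV cache shrinks by 3x relative to full multi-head attention. A minimal NumPy sketch using the head counts from the table above (the head dimension of 128 is inferred from 3072 / 24; the code is illustrative, not the model's implementation):

```python
import numpy as np

def gqa(q, k, v):
    """Grouped-query attention: each KV head serves several query heads.

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d)
    """
    n_q, n_kv = q.shape[0], k.shape[0]
    group = n_q // n_kv                        # query heads per KV head (3 here)
    k = np.repeat(k, group, axis=0)            # broadcast each KV head to its group
    v = np.repeat(v, group, axis=0)
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax over key positions
    return weights @ v                         # (n_q_heads, seq, d)

# Phi-4-Mini-sized toy tensors: 24 query heads, 8 KV heads, head dim 128
q = np.random.randn(24, 4, 128)
k = np.random.randn(8, 4, 128)
v = np.random.randn(8, 4, 128)
out = gqa(q, k, v)  # shape (24, 4, 128)
```

Only the 8 K and V head tensors must be cached per layer during decoding, which is what makes the 128K context tractable in memory.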

Key advancements in Phi-4-Mini include an enhancement process that integrates supervised fine-tuning (SFT) and direct preference optimization (DPO), along with Reinforcement Learning from Human Feedback (RLHF), for robust instruction adherence and safety. This training methodology gives the model strong reasoning capabilities, particularly in mathematical and logical tasks, and supports capabilities such as function calling. The design prioritizes computational efficiency and low-latency performance, making it suitable for deployment where memory and processing power are limited.
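The DPO objective mentioned above can be sketched numerically. It needs only per-sequence log-probabilities of a preferred and a rejected response under the policy and under a frozen reference model; the function and variable names below are illustrative, not taken from the Phi-4 report:

```python
import math

def dpo_loss(policy_chosen, policy_rejected,
             ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # -log sigmoid(beta * margin): shrinks as the policy's preference
    # for the chosen response grows.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When policy and reference agree exactly, the margin is 0
# and the loss is -log(0.5) = log 2.
loss = dpo_loss(-10.0, -12.0, -10.0, -12.0)
```

Minimizing this loss pushes the policy to prefer the chosen response more strongly than the reference does, without a separate reward model.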

The intended use cases for Phi-4-Mini span general-purpose AI systems and applications that require strong reasoning in memory or compute-constrained environments, or those with latency-bound requirements. It is designed to accelerate research in language models and serve as a foundational building block for generative AI features. The model's compact size and optimized architecture allow for deployment on edge devices, including various mobile operating systems, by leveraging tools such as Microsoft Olive and the ONNX GenAI Runtime.

About Phi-4

The Microsoft Phi-4 model family comprises small language models prioritizing efficient, high-capability reasoning. Its development emphasizes robust data quality and sophisticated synthetic data integration. This approach enables enhanced performance and on-device deployment capabilities.


Other Phi-4 Models

Evaluation Benchmarks

Rankings are computed among local LLMs.

Rank

#40

Benchmark                            Score    Rank
Graduate-Level QA (GPQA)             0.52     #12
Professional Knowledge (MMLU Pro)    0.53     #21
General Knowledge (MMLU)             0.25     #36

Rankings

Overall Rank

#40

Coding Rank

-

GPU Requirements

Interactive VRAM calculator (choice of quantization method for model weights and context size from 1K to 125K tokens) and recommended-GPU widget omitted.
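A back-of-the-envelope version of what such a calculator computes, using the specifications above (3.8B parameters, 32 layers, 8 KV heads, head dimension 3072 / 24 = 128). The bytes-per-weight values are the usual conventions for each quantization, the KV cache is assumed to be held in FP16, and real runtimes add overhead on top, so treat the numbers as lower bounds:

```python
def vram_estimate_gib(params=3.8e9, bytes_per_weight=2.0,
                      n_layers=32, n_kv_heads=8, head_dim=128,
                      context=1024, kv_bytes=2):
    """Rough VRAM estimate: model weights plus the KV cache, in GiB.

    bytes_per_weight: 2.0 for FP16, 1.0 for INT8, 0.5 for INT4.
    The KV cache stores keys and values (factor 2) for every layer.
    """
    weight_bytes = params * bytes_per_weight
    kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * context * kv_bytes
    return (weight_bytes + kv_cache_bytes) / 2**30

# FP16 weights at 1,024 tokens of context: roughly 7.2 GiB
# INT4 weights at the full 128K context: the 16 GiB FP16 KV cache
# dominates the ~1.8 GiB of weights (~17.8 GiB before overhead)
```

The 128K figure shows why long-context serving of even a 3.8B model is KV-cache-bound, and why the GQA design above (8 KV heads rather than 24) matters so much for memory.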