Phi-3-small

Parameters

7B

Context Length

8K (8,192 tokens)

Modality

Text

Architecture

Dense

License

MIT License

Release Date

22 Apr 2024

Knowledge Cutoff

Oct 2023

Technical Specifications

Attention Structure

Grouped-Query Attention

Hidden Dimension Size

4096

Number of Layers

32

Attention Heads

32

Key-Value Heads

8

Activation Function

-

Normalization

-

Position Embedding

RoPE

System Requirements

VRAM requirements for different quantization methods and context sizes

Phi-3-small

Microsoft's Phi-3-small is a member of the Phi family of small language models (SLMs), engineered to deliver strong performance within a compact computational footprint. With 7 billion parameters, it is positioned for broad commercial and research applications where resource efficiency and responsiveness are critical, targeting scenarios that demand robust language understanding, logical reasoning, and efficient processing in constrained hardware environments, including on-device deployments.

The underlying architecture of Phi-3-small is a dense, decoder-only Transformer. It incorporates several design choices aimed at performance and memory efficiency, notably Grouped-Query Attention (GQA), in which four query heads share a single key-value head, reducing the KV cache footprint. The model also alternates dense and blocksparse attention layers, which further reduces memory use while preserving long-context retrieval capability. Post-training combines Supervised Fine-tuning (SFT) with Direct Preference Optimization (DPO) to align the model with human preferences and safety guidelines.
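
To make the GQA layout concrete, the sketch below (a minimal NumPy illustration, not the model's actual implementation) repeats each of the 8 key-value heads across its group of 4 query heads before applying standard scaled dot-product attention. The dimensions follow the specification table above; the function and variable names are hypothetical.

```python
import numpy as np

# Phi-3-small attention dimensions from the spec table above.
HIDDEN = 4096
N_Q_HEADS = 32                    # query heads
N_KV_HEADS = 8                    # key-value heads (GQA)
HEAD_DIM = HIDDEN // N_Q_HEADS    # 128
GROUP = N_Q_HEADS // N_KV_HEADS   # 4 query heads share one KV head

def grouped_query_attention(q, k, v):
    """Toy GQA over one sequence (no causal mask, no batching).

    q: (seq, N_Q_HEADS, HEAD_DIM)
    k, v: (seq, N_KV_HEADS, HEAD_DIM)
    """
    # Broadcast each KV head to its group of 4 query heads.
    k = np.repeat(k, GROUP, axis=1)   # (seq, N_Q_HEADS, HEAD_DIM)
    v = np.repeat(v, GROUP, axis=1)

    # Scaled dot-product attention per head.
    scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(HEAD_DIM)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return np.einsum("hqk,khd->qhd", weights, v)

seq = 16
q = np.random.randn(seq, N_Q_HEADS, HEAD_DIM)
k = np.random.randn(seq, N_KV_HEADS, HEAD_DIM)
v = np.random.randn(seq, N_KV_HEADS, HEAD_DIM)
out = grouped_query_attention(q, k, v)   # shape (16, 32, 128)
```

Only the 8 KV heads need to be cached during generation, which is where the memory saving over 32 full key-value heads comes from.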

Phi-3-small operates with a default context length of 8,192 tokens (8K); an extended variant supports up to 128,000 tokens via LongRoPE. The model was trained on roughly 4.8 trillion tokens drawn from rigorously filtered public documents, high-quality educational content, and synthetically generated data, with an emphasis on data quality and reasoning density. This enables the model to excel at complex language understanding, mathematical problem-solving, and code generation, and makes it suitable for deployment across hardware platforms ranging from cloud-based inference to edge devices and mobile platforms.
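
To quantify what GQA means at these context lengths, the rough calculation below estimates the KV-cache footprint implied by the spec table (32 layers, 8 KV heads, head dimension 128). The fp16 assumption and the formula are illustrative, not published figures.

```python
# Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_value
LAYERS = 32
KV_HEADS = 8
HEAD_DIM = 128
BYTES_FP16 = 2

def kv_cache_bytes(seq_len: int) -> int:
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * seq_len * BYTES_FP16

for ctx in (8_192, 128_000):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.2f} GiB")
# Roughly 1 GiB at 8K and about 16 GiB at 128K under these assumptions,
# one quarter of what 32 full KV heads would require.
```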

About Phi-3

Microsoft's Phi-3 models are small language models designed for efficient operation on resource-constrained devices. They use a Transformer decoder architecture and are trained on extensively filtered, high-quality data, including synthetically generated data. This approach yields a compact yet capable model family.



Evaluation Benchmarks

Rankings are computed among local LLMs.

Rank

#26

Benchmark | Score | Rank

General Knowledge

MMLU | 0.56 | #18

Rankings

Overall Rank

#26

Coding Rank

-

GPU Requirements

VRAM requirements vary with the weight quantization method and the context size (1K, 4K, or 8K tokens).
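
As a back-of-the-envelope sketch (an assumed formula, not the page's calculator), total VRAM can be approximated as quantized weight size plus the KV cache plus a fixed overhead. The bits-per-weight figures and the 1 GiB overhead below are assumptions, not measured values.

```python
PARAMS = 7e9                          # 7B parameters
LAYERS, KV_HEADS, HEAD_DIM = 32, 8, 128

def vram_estimate_gib(bits_per_weight: float, context_tokens: int,
                      overhead_gib: float = 1.0) -> float:
    """Very rough VRAM estimate: quantized weights + fp16 KV cache + overhead."""
    weights = PARAMS * bits_per_weight / 8
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * context_tokens * 2  # fp16 K and V
    return (weights + kv_cache) / 2**30 + overhead_gib

for name, bits in (("fp16", 16), ("int8", 8), ("4-bit", 4.5)):
    print(name, round(vram_estimate_gib(bits, 8_192), 1), "GiB at 8K context")
# Under these assumptions: roughly 15 GiB for fp16, 8.5 GiB for int8,
# and about 6 GiB for a 4-bit quantization at the full 8K context.
```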