Parameters: 7.1B
Context Length: 2,048 tokens
Modality: Text
Architecture: Dense
License: Apache-2.0
Release Date: 1 Dec 2023
Knowledge Cutoff: -
Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding
VRAM requirements depend on the quantization method applied to the weights and on the context size used at inference.
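As a rough guide, the sketch below estimates VRAM from the parameter count and context length listed on this card. The layer count and hidden dimension are illustrative placeholders (the card lists them as unknown), and the 15% overhead factor is an assumption, so treat the output as a back-of-envelope figure, not a measurement.

```python
# Back-of-envelope VRAM estimate: quantized weights + KV cache.
# n_layers and hidden_dim are placeholders, NOT values from this card.

def estimate_vram_gb(
    n_params: float = 7.1e9,    # parameter count from the card
    bits_per_weight: int = 16,  # 16 = fp16/bf16, 8 = int8, 4 = int4
    context_len: int = 2048,    # max context length from the card
    n_layers: int = 32,         # placeholder: not listed on the card
    hidden_dim: int = 4096,     # placeholder: not listed on the card
    kv_bytes: int = 2,          # fp16 KV cache entries
) -> float:
    weights = n_params * bits_per_weight / 8
    # KV cache: two tensors (K and V) per layer, hidden_dim values per token
    kv_cache = 2 * n_layers * hidden_dim * context_len * kv_bytes
    overhead = 1.15  # assumed ~15% for activations and runtime buffers
    return (weights + kv_cache) * overhead / 1024**3

print(f"fp16, 2,048-token context: ~{estimate_vram_gb():.1f} GB")
print(f"int4, 2,048-token context: ~{estimate_vram_gb(bits_per_weight=4):.1f} GB")
```

Under these assumptions, full-precision (fp16) inference lands around 16 GB, while 4-bit quantization brings the weights down to roughly a quarter of that; the KV cache term is small here because the context window is only 2,048 tokens.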
SEA-LION-7B is a 7.1-billion-parameter base model designed for Southeast Asian languages. It was trained on a one-trillion-token corpus that includes regional news and web content. The model uses the MPT architecture and is released under the Apache 2.0 license.
Southeast Asian Languages In One Network (SEA-LION) is a family of language models developed by AI Singapore for Southeast Asian languages. The models support English, Indonesian, Malay, Thai, Vietnamese, Tagalog, Burmese, Khmer, Lao, Tamil, and Chinese. The family focuses on regional linguistic patterns and is available in base and instruction-tuned variants.
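A minimal loading sketch with Hugging Face transformers is shown below. The repo id "aisingapore/sea-lion-7b" and the need for trust_remote_code=True (the model uses a custom MPT implementation) are assumptions not stated on this card; verify them on the model's Hugging Face page before running.

```python
# Minimal sketch: load SEA-LION-7B and generate a continuation.
# The repo id and trust_remote_code requirement are assumptions, not
# taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/sea-lion-7b"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves weight memory vs. fp32
    device_map="auto",
    trust_remote_code=True,
)

# SEA-LION-7B is a base model, so prompt it for continuation, not chat.
inputs = tokenizer("Ibu kota Indonesia adalah", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a base (non-instruction-tuned) variant, it is best suited to completion-style prompts; the instruction-tuned variants in the family are the better fit for chat-style use.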
No evaluation benchmarks are available for SEA-LION-7B.