| Specification | Value |
|---|---|
| Parameters | 35B |
| Context Length | 128K |
| Modality | Text |
| Architecture | Dense |
| License | CC-BY-NC |
| Release Date | 11 Mar 2024 |
| Knowledge Cutoff | - |
| Attention Structure | Multi-Head Attention |
| Hidden Dimension Size | - |
| Number of Layers | - |
| Attention Heads | - |
| Key-Value Heads | - |
| Activation Function | - |
| Normalization | Layer Normalization |
| Position Embedding | Absolute Position Embedding |
VRAM requirements for different quantization methods and context sizes
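As a rough guide to what such a calculator computes, the sketch below estimates VRAM from weight quantization and context size. The architecture constants (layer count, KV heads, head dimension) are assumed placeholder values, since this card does not publish them, and the formula ignores activation memory and framework overhead.

```python
# Rough VRAM estimate for a 35B dense model under different weight
# quantizations and context sizes. LAYERS, KV_HEADS, and HEAD_DIM are
# assumptions for illustration only; the model card lists them as "-".

N_PARAMS = 35e9
LAYERS, KV_HEADS, HEAD_DIM = 40, 64, 128  # assumed values

BITS = {"fp16": 16, "int8": 8, "int4": 4}

def vram_gb(quant, context_tokens, kv_bytes=2):
    """Weights plus KV cache, in GB; omits activations and overhead."""
    weights = N_PARAMS * BITS[quant] / 8
    # KV cache: 2 tensors (K and V) per layer, per token.
    kv_cache = 2 * LAYERS * KV_HEADS * HEAD_DIM * context_tokens * kv_bytes
    return (weights + kv_cache) / 1e9

for q in BITS:
    print(f"{q}: {vram_gb(q, 1024):.1f} GB at 1,024 tokens")
```

Under these assumptions, quantizing from fp16 to int4 cuts the weight footprint by 4x, while the KV cache term grows linearly with context length and dominates at very long contexts.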
Cohere Command R is a generative language model optimized for enterprise-scale applications, with a particular focus on long-context tasks, retrieval-augmented generation (RAG), and multi-step tool use. It is designed to help companies move beyond proof-of-concept AI into production deployments by balancing efficiency with accuracy. The model offers strong capabilities across 10 major languages of global business: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese. Its pre-training data also includes many other languages to improve global versatility.
The architecture of Command R is based on an optimized Transformer design, allowing it to handle an extended context window of 128,000 tokens. This long context capability is crucial for processing extensive documents or multi-document conversations, ensuring coherent and contextually grounded responses. The model has been rigorously fine-tuned through supervised fine-tuning (SFT) on instruction-following data and preference tuning, similar to reinforcement learning from human feedback, to align its behavior with user expectations and enhance helpfulness and safety. Command R also features specialized training for grounded generation, allowing it to generate responses with citations from provided document snippets, a key component for robust RAG implementations.
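To illustrate the grounded-generation behavior described above, the sketch below renders inline citations over an answer. The citation shape is loosely modeled on the citation objects returned by Cohere's Chat API (character offsets plus document IDs), but the document payload and the helper function are illustrative, not the official SDK.

```python
# Hedged sketch of rendering grounded citations: each citation gives a
# character span of the answer plus the IDs of the supporting document
# snippets. Field names here are assumptions, not the official API.

def apply_citations(text, citations):
    """Insert [doc ids] markers after each cited span of the answer."""
    out, last = [], 0
    for c in sorted(citations, key=lambda c: c["start"]):
        out.append(text[last:c["end"]])                       # text up to span end
        out.append("[" + ",".join(c["document_ids"]) + "]")   # citation marker
        last = c["end"]
    out.append(text[last:])
    return "".join(out)

docs = [  # snippet payload shape is illustrative
    {"id": "doc_0", "title": "Spec sheet",
     "snippet": "Command R supports a 128K context window."},
]
answer = "Command R supports a 128K context window."
citations = [{"start": 21, "end": 40, "document_ids": ["doc_0"]}]
print(apply_citations(answer, citations))
```

Mapping citations back to the exact snippets that support each claim is what makes the RAG output auditable: every grounded span can be traced to a source document.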
Command R is engineered for practical enterprise use cases, excelling in tasks such as document summarization, question answering, and complex workflow automation. It supports both single-step and multi-step tool use, enabling interaction with external APIs, databases, or search engines. This functionality allows the model to perform actions and integrate with various internal and external systems. Furthermore, the model has demonstrated improved decision-making regarding tool utilization and the ability to follow instructions provided in system messages, along with enhanced structured data analysis.
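The single-step loop below sketches how such tool use fits together: the model proposes a tool call, the application executes it, and the serialized result is handed back as grounding for the next generation. The tool registry and the `plan_call()` stand-in for the model are hypothetical; in a real deployment the call would come from the model's structured output.

```python
# Hedged sketch of a single-step tool-use loop. TOOLS and plan_call()
# are hypothetical stand-ins; a real deployment would obtain the tool
# name and parameters from the model's structured response.

import json

TOOLS = {
    # Toy "database search" tool standing in for a real API or DB query.
    "search_db": lambda query: [{"row": 1, "match": query}],
}

def plan_call(user_message):
    # Stand-in for the model choosing a tool and its parameters.
    return {"name": "search_db", "parameters": {"query": user_message}}

def run_tool_step(user_message):
    call = plan_call(user_message)
    result = TOOLS[call["name"]](**call["parameters"])
    # Tool results are serialized and passed back to the model as grounding.
    return json.dumps(result)

print(run_tool_step("Q3 revenue"))
```

Multi-step tool use repeats this loop, feeding each tool result back to the model until it decides no further calls are needed and produces a final answer.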
Ranking is for Local LLMs.
| Benchmark | Score | Rank |
|---|---|---|
| Aider Refactoring | 0.38 | 12 |
| Aider Coding | 0.38 | 15 |
| LiveBench Agentic Coding | 0.02 | 19 |
| LiveBench Coding | 0.26 | 27 |
| LiveBench Reasoning | 0.21 | 28 |
| LiveBench Data Analysis | 0.40 | 28 |
| LiveBench Mathematics | 0.18 | 31 |
Overall Rank: #49
Coding Rank: #40