Command R Plus

Parameters: 104B
Context Length: 128K
Modality: Text
Architecture: Dense
License: CC-BY-NC
Release Date: 4 Apr 2024
Knowledge Cutoff: Feb 2023

Technical Specifications

Attention Structure: Multi-Head Attention
Hidden Dimension Size: -
Number of Layers: -
Attention Heads: -
Key-Value Heads: -
Activation Function: -
Normalization: -
Position Embedding: Absolute Position Embedding

System Requirements

VRAM requirements for different quantization methods and context sizes

Command R Plus

Cohere Command R Plus is a large language model developed by Cohere for demanding enterprise applications. The model is optimized for conversational interactions and tasks requiring extensive context, such as advanced Retrieval Augmented Generation (RAG) and multi-step tool use. It is designed to let organizations deploy sophisticated AI capabilities in production by balancing computational efficiency with high accuracy. The model offers broad multilingual support: it was trained and evaluated across ten key global business languages, with additional pre-training data from thirteen others, making it applicable across diverse linguistic contexts.
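Grounded RAG responses pair generated text with citation spans that point back to the source documents. The sketch below shows one way such spans can be rendered as inline citations; the field names (`start`, `end`, `doc_ids`) are illustrative, not the exact schema of Cohere's API.

```python
# Sketch: rendering inline citations from a grounded RAG response.
# The response shape (text plus citation spans with document ids) is
# illustrative; real API field names may differ.

def render_citations(text, citations):
    """Insert [doc_id] markers after each cited span.

    citations: list of dicts with 'start' and 'end' (character offsets
    into `text`) and 'doc_ids' (documents grounding that span).
    """
    out = []
    pos = 0
    for c in sorted(citations, key=lambda c: c["start"]):
        out.append(text[pos:c["end"]])          # text up to end of span
        out.append("[" + ",".join(c["doc_ids"]) + "]")  # citation marker
        pos = c["end"]
    out.append(text[pos:])                      # trailing text
    return "".join(out)

response_text = "Command R Plus supports a 128K context window."
citations = [{"start": 24, "end": 45, "doc_ids": ["doc_1"]}]
print(render_citations(response_text, citations))
```

Sorting by span start keeps markers in reading order even if the citations arrive unsorted.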

From an architectural standpoint, Command R Plus utilizes an optimized transformer architecture. Following its pretraining phase, the model undergoes supervised fine-tuning and preference training processes. These alignment procedures are crucial for refining the model's behavior to meet human preferences for helpfulness and safety in its generative outputs. A notable technical characteristic is its support for an expansive context window, capable of processing up to 128,000 tokens. This significant context length is achieved through specialized design, incorporating innovations in positional encodings to manage long dependencies effectively.
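The specification table above lists absolute position embeddings, but the exact scheme behind the 128K window is not published. As a minimal illustration of absolute positional encoding in general, here is the classic sinusoidal embedding from the original transformer; this is an assumption for exposition, not Command R Plus's actual design.

```python
import math

def sinusoidal_position_embedding(position, dim):
    """Classic sinusoidal absolute position embedding (illustrative only;
    Command R Plus's actual positional scheme is not published)."""
    emb = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))   # geometric frequency schedule
        emb.append(math.sin(position * freq))
        emb.append(math.cos(position * freq))
    return emb[:dim]

# Every position gets a distinct vector, even deep into a long window.
e_start = sinusoidal_position_embedding(0, 8)
e_far = sinusoidal_position_embedding(127_999, 8)
```

The geometric frequency schedule is what lets a fixed-dimension vector distinguish positions across very long ranges.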

In terms of operational performance and applications, Command R Plus is engineered for large-scale production workloads. It handles complex RAG workflows, delivering grounded responses with inline citations to improve reliability and mitigate hallucination. Its multi-step tool use capabilities enable it to automate intricate business processes, including structured data analysis and dynamic updates within systems like customer relationship management (CRM). It can also self-correct when a tool call fails, increasing the overall success rate of automated tasks. An August 2024 update delivered approximately 50% higher throughput and 25% lower latency on the same hardware footprint.
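The self-correcting tool-use loop described above can be sketched as follows. This is a hypothetical toy agent with a mocked planner and a mocked CRM tool (all names are illustrative): when a tool call raises an error, the error is appended to the history so the next planning step can retry with corrected arguments.

```python
# Hypothetical sketch of multi-step tool use with self-correction.
# The planner stands in for the model; tool and argument names are invented.

def flaky_crm_lookup(customer_id):
    """Mock CRM tool: fails unless the id is normalized to upper case."""
    if not customer_id.isupper():
        raise ValueError("unknown customer id: " + customer_id)
    return {"id": customer_id, "status": "active"}

def plan_next_step(history):
    """Stand-in for the model: after seeing a tool error in the history,
    retry with a corrected (upper-cased) argument."""
    if history and "error" in history[-1]:
        bad = history[-1]["args"]["customer_id"]
        return {"tool": flaky_crm_lookup, "args": {"customer_id": bad.upper()}}
    return {"tool": flaky_crm_lookup, "args": {"customer_id": "acme-42"}}

def run_agent(max_steps=3):
    history = []
    for _ in range(max_steps):
        step = plan_next_step(history)
        try:
            return step["tool"](**step["args"]), history
        except ValueError as err:
            # Feed the failure back so the next step can self-correct.
            history.append({"args": step["args"], "error": str(err)})
    return None, history

result, history = run_agent()
```

The first call fails on the lower-case id; the recorded error lets the second step succeed, which is the essence of the self-correction behavior the text describes.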


Evaluation Benchmarks

Ranking is for Local LLMs.

Rank: #46

General Knowledge
MMLU: 0.69 (rank 10)

Agentic Coding
LiveBench Agentic: 0.02 (rank 19)

Rankings

Overall Rank: #46
Coding Rank: #38

GPU Requirements

VRAM requirements depend on the weight quantization method and the context size (from 1k up to 125k tokens); see the full calculator for per-configuration figures and recommended GPUs.
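As a back-of-envelope sketch (not the site's calculator), weight memory alone scales as parameter count times bytes per parameter. The byte counts below are the standard values for fp16, int8, and int4; KV-cache and activation memory, which grow with context length and matter greatly at 128K, are ignored here as a simplifying assumption.

```python
# Rough VRAM needed for the 104B weights alone, by quantization level.
# Excludes KV cache and activations (simplifying assumption).
PARAMS = 104e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for name, b in bytes_per_param.items():
    gib = PARAMS * b / 1024**3
    print(f"{name}: ~{gib:.0f} GiB")
```

Even at 4-bit quantization the weights alone need roughly 48 GiB, which is why multi-GPU or high-memory accelerators are typically required for this model.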

Command R Plus: Specifications and GPU VRAM Requirements