
GreenMind-14B-R1

Parameters: 14B
Context Length: 32,768 tokens
Modality: Text
Architecture: Dense
License: Apache-2.0
Release Date: 23 Sept 2024
Knowledge Cutoff: Sep 2024

Technical Specifications

Attention Structure: Grouped-Query Attention (GQA)
Hidden Dimension Size: 5120
Number of Layers: 48
Attention Heads: 40
Key-Value Heads: 8
Activation Function: SwiGLU
Normalization: RMS Normalization
Position Embedding: Rotary Position Embedding (RoPE)

GreenMind-14B-R1

GreenMind-14B-R1 is a 14.7-billion-parameter Vietnamese reasoning model developed by GreenNode. It is a dense, decoder-only transformer derived from the Qwen2.5-14B-Instruct base architecture. The model is engineered for multi-step logical reasoning and high-fidelity text generation in Vietnamese, addressing common failure modes such as language mixing and factual drift in long-form reasoning chains. By applying Chain-of-Thought (CoT) methodologies, GreenMind decomposes complex queries into intermediate logical steps before producing a final response.
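
The snippet below is a minimal sketch of eliciting this step-by-step behavior through the Hugging Face transformers library; the repository identifier and the prompt are illustrative assumptions, not an official quickstart.

```python
# Minimal sketch: eliciting Chain-of-Thought reasoning from the model with
# Hugging Face transformers. The repo id below is an assumption -- check
# GreenNode's official page for the exact identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GreenNode/GreenMind-Medium-14B-R1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# "Reason step by step: 3 workers build a wall in 12 hours.
#  How long would 4 workers need?"
messages = [{
    "role": "user",
    "content": "Hãy suy luận từng bước: 3 người thợ xây một bức tường "
               "trong 12 giờ. Hỏi 4 người thợ cần bao lâu?",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```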

The model is fine-tuned with Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm that optimizes the reasoning policy without a separate value critic, keeping training computationally efficient. This training approach is grounded in a curated Vietnamese instruction dataset of over 55,000 samples spanning cultural, legal, and educational domains. To ensure linguistic consistency, the training pipeline incorporates dedicated reward functions and Sentence Transformer-based verification that penalize the intrusion of non-Vietnamese characters and preserve the factual integrity of the reasoning trajectories.
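
As a rough illustration of GRPO's core mechanic (not GreenNode's actual training code), the sketch below normalizes per-completion rewards within a sampled group to obtain advantages, which is what lets GRPO drop the separate value critic used by PPO:

```python
# Minimal sketch of GRPO's group-relative baseline: sample several
# completions per prompt, score each with reward functions, and normalize
# rewards within the group. Completions above the group mean get positive
# advantages; no learned value critic is required.
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    mean = statistics.mean(rewards)
    std = statistics.stdev(rewards) if len(rewards) > 1 else 1.0
    return [(r - mean) / (std + 1e-6) for r in rewards]

# Illustrative rewards for four sampled completions of one Vietnamese
# prompt, e.g. combining a language-purity check with answer correctness.
rewards = [0.9, 0.4, 0.7, 0.1]
print(group_relative_advantages(rewards))
```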

Optimized for deployment via NVIDIA NIM, GreenMind-14B-R1 is intended for enterprise-grade applications including legal and financial assistants, context-aware conversational agents, and complex document retrieval systems. The architecture natively handles a 32,768-token context window, extensible to 131,072 input tokens via RoPE scaling, with a maximum generation limit of 8,192 tokens. Its use of modern transformer techniques such as RoPE position embeddings and SwiGLU activations makes it a technically sophisticated tool for localized AI infrastructure in Vietnam.
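
A hedged sketch of querying a locally hosted NIM container through its OpenAI-compatible API follows; the base URL, port, and model identifier are assumptions for a typical local deployment, not official values.

```python
# Minimal sketch: calling a locally hosted NVIDIA NIM container through its
# OpenAI-compatible endpoint. Base URL, port, and model name are assumed
# values for a typical local deployment, not official identifiers.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="greennode/greenmind-14b-r1",  # assumed NIM model identifier
    # "Briefly explain the concept of compound interest."
    messages=[{"role": "user",
               "content": "Giải thích ngắn gọn khái niệm lãi suất kép."}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
```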

About GreenMind

GreenMind is an open-source Vietnamese reasoning language model family developed by GreenNode. It is optimized for multi-step reasoning tasks in Vietnamese, such as logic, mathematics, and scenario analysis. The model is designed to run efficiently on single-GPU hardware configurations.



Evaluation Benchmarks

No evaluation benchmarks for GreenMind-14B-R1 available.


Model Transparency

GreenMind-14B-R1 Transparency Report

Total Score: 72 / 100 (B+)

Audit Note

GreenMind-14B-R1 exhibits a strong transparency profile regarding its architectural origins and licensing, benefiting from its foundation on well-documented open-source components. While it provides good clarity on its reasoning methodology and hardware requirements, it lacks granular detail in its dataset composition and a formal versioning system to track long-term model drift. The model's integration into standardized deployment frameworks enhances its verifiability for enterprise use.

Upstream: 21.5 / 30

Architectural Provenance: 7.5 / 10

The model is explicitly identified as being derived from the Qwen2.5-14B-Instruct base architecture, and GreenNode documents the transition from the base model to a reasoning-focused variant using the Group Relative Policy Optimization (GRPO) methodology. The fine-tuning methodology is described in technical blog posts and the model card; however, the full pre-training procedure is inherited from Qwen, and the training choices that produce the 'R1' reasoning behavior (such as Chain-of-Thought elicitation) are documented only in the context of the fine-tuning pipeline.

Dataset Composition: 5.5 / 10

GreenNode discloses that the model was trained on a curated Vietnamese instruction dataset of approximately 55,000 samples. This dataset is described as covering cultural, legal, and educational domains. However, a precise percentage breakdown of the data sources (e.g., web vs. proprietary vs. synthetic) is not provided in a granular format. While the filtering methodology (using Sentence Transformers for linguistic verification) is mentioned, the lack of a comprehensive public breakdown or access to the full raw dataset limits the score to the moderate range.
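
As a hedged illustration of the Sentence Transformer-based filtering mentioned above, the sketch below scores semantic similarity between a candidate sample and a reference, keeping only samples above a threshold; the encoder model and threshold are assumptions, not GreenNode's disclosed choices.

```python
# Minimal sketch of Sentence Transformer-based verification: embed a
# reference statement and a candidate, and keep the candidate only if its
# cosine similarity clears a threshold. Encoder and threshold are
# illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

reference = "Hà Nội là thủ đô của Việt Nam."   # "Hanoi is the capital of Vietnam."
candidate = "Thủ đô của Việt Nam là Hà Nội."   # paraphrase of the same fact

emb = encoder.encode([reference, candidate], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()
print(f"similarity = {score:.2f}")  # e.g. keep the sample only if score > 0.85
```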

Tokenizer Integrity: 8.5 / 10

The model utilizes the standard Qwen2.5 tokenizer, which is publicly accessible and well-documented. The vocabulary size (151,936 tokens) and the BPE-based approach are verified through the configuration files on Hugging Face. The tokenizer's support for Vietnamese is explicitly addressed in the training documentation, where specific reward functions were used to ensure the model maintains linguistic integrity and avoids language mixing during long-form reasoning.
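
Since the tokenizer is inherited, it can be inspected directly; the sketch below loads the public base Qwen2.5 tokenizer for illustration (the fine-tuned repo should ship an identical one).

```python
# Minimal sketch: inspecting the inherited Qwen2.5 tokenizer via the public
# base repo. The fine-tuned model is expected to ship an identical tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
print(len(tok))  # compare with the vocabulary size cited above

# Vietnamese text tokenizes and round-trips through the byte-level BPE.
text = "Trí tuệ nhân tạo đang thay đổi thế giới."
ids = tok.encode(text)
print(len(ids), "tokens:", ids[:8])
print(tok.decode(ids))
```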

Model: 29.5 / 40

Parameter Density: 8.0 / 10

The model is clearly defined as a dense, decoder-only transformer with 14.7 billion total parameters. Unlike Mixture-of-Experts (MoE) models, all parameters are active during inference, which is explicitly stated. The architectural configuration (hidden size, number of layers, and attention heads) is fully transparent via the public config.json file on the Hugging Face repository.
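
Because the configuration is public, the stated total can be sanity-checked with quick arithmetic. In the sketch below, the intermediate (MLP) width and vocabulary size are assumptions taken from the Qwen2.5-14B config, since they are not listed on this page.

```python
# Rough parameter count from the configuration above. Intermediate (MLP)
# size 13,824 and vocabulary size 152,064 are assumed from the Qwen2.5-14B
# config; biases and norm weights are ignored as negligible.
hidden, layers, heads, kv_heads = 5120, 48, 40, 8
head_dim = hidden // heads                 # 128
intermediate, vocab = 13824, 152064        # assumed from Qwen2.5-14B

attn = hidden * hidden * 2 + hidden * (kv_heads * head_dim) * 2  # Q,O + K,V
mlp = 3 * hidden * intermediate            # SwiGLU: gate, up, down projections
per_layer = attn + mlp
embeddings = 2 * vocab * hidden            # input embeddings + untied LM head

total = layers * per_layer + embeddings
print(f"{total / 1e9:.1f}B parameters")    # ~14.8B, matching the stated 14.7B
```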

Training Compute: 6.0 / 10

GreenNode has publicly stated that the model was fine-tuned using a cluster of 8 NVIDIA H100 Tensor Core GPUs. While the hardware type and the duration of the refinement process (six months of development) are mentioned, the exact total GPU-hours and the specific carbon footprint or detailed cost breakdown are not provided. This represents moderate transparency regarding the compute resources used for the specific 'R1' fine-tuning phase.

Benchmark Reproducibility: 6.5 / 10

The model's performance is cited against benchmarks such as VMLU and VLSP, and it is integrated into the NVIDIA NIM framework, which requires technical validation. However, while the model card provides a 'Quickstart' code snippet for inference, a dedicated public evaluation repository containing the exact prompts and few-shot examples used for all reported benchmarks is not fully detailed. This makes independent third-party reproduction of the exact stated scores more difficult.

Identity Consistency: 9.0 / 10

The model demonstrates high identity consistency, correctly identifying itself as a Vietnamese reasoning model developed by GreenNode. It does not exhibit confusion with its base architecture (Qwen) in its system prompts or documentation. The versioning (14B-R1) is clearly maintained across the official page, Hugging Face, and NVIDIA NIM documentation.

Downstream: 21.0 / 30

License Clarity: 9.0 / 10

The model is released under the Apache-2.0 license, which is a standard, highly permissive open-source license. The terms for commercial use, modification, and distribution are clear and well-understood. There are no conflicting proprietary terms found in the official documentation that override this license.

Hardware Footprint: 7.0 / 10

Hardware requirements are documented through its integration with NVIDIA NIM and general 14B parameter model guidelines. VRAM requirements for standard deployment (approx. 28-30GB for FP16) are inferable, and the model supports quantization (e.g., 4-bit) to run on consumer hardware like the RTX 3090/4090. However, a specific 'accuracy vs. quantization' tradeoff table for this specific Vietnamese reasoning variant is not explicitly provided in the primary documentation.

Versioning Drift: 5.0 / 10

The model uses a clear naming convention (GreenMind-14B-R1), but there is no publicly accessible, detailed changelog or semantic versioning history (e.g., v1.1, v1.2) that tracks specific weight updates or behavior drift over time. While the release date is known, the lack of a formal version tracking system for future iterations limits the transparency of the model's evolution.

GPU Requirements
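
A weights-only rule of thumb, consistent with the approx. 28-30 GB FP16 figure noted in the Hardware Footprint section above, can be sketched with quick arithmetic; KV cache, activations, and framework overhead add several GB on top of weight memory.

```python
# Back-of-the-envelope VRAM estimate: weight memory only. KV cache,
# activations, and framework overhead add several GB on top of this.
params_b = 14.7  # billions of parameters

for name, bytes_per_param in [("FP16/BF16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{name:>9}: ~{params_b * bytes_per_param:.1f} GB of weights")

# FP16/BF16: ~29.4 GB -> a 40 GB+ GPU or two 24 GB cards
# INT8:      ~14.7 GB -> a single 16-24 GB card
# INT4:      ~7.3 GB  -> comfortable on an RTX 3090/4090 (24 GB)
```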
