This course provides a technical guide to customizing pre-trained Large Language Models. You will learn the complete workflow, from preparing custom datasets to applying fine-tuning techniques that range from full parameter updates to parameter-efficient methods such as LoRA. The material builds practical skills in adapting LLMs for specialized tasks, with a focus on implementation details and on evaluating model performance. By the end, you will be able to adapt existing foundation models to specific domains and task requirements.
Prerequisites: Python and ML basics
Level:
Data Preparation
Structure and preprocess custom datasets suitable for instruction-based or conversational fine-tuning.
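As a rough illustration of what this looks like in practice, here is a minimal sketch of converting raw question/answer pairs into instruction-style JSONL records; the field names and file paths are assumptions for the example, not a format prescribed by the course.

```python
import json

# Raw records as they might arrive from a source system (illustrative data).
raw_examples = [
    {"question": "What is LoRA?", "answer": "A parameter-efficient fine-tuning method."},
]

def to_instruction_record(example):
    # Wrap each raw pair in a simple instruction/response structure
    # suitable for instruction-based fine-tuning.
    return {
        "instruction": example["question"],
        "response": example["answer"],
    }

# Write one JSON object per line (JSONL), a common input format for trainers.
with open("train.jsonl", "w") as f:
    for ex in raw_examples:
        f.write(json.dumps(to_instruction_record(ex)) + "\n")
```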
Fine-Tuning Techniques
Implement both full parameter and parameter-efficient fine-tuning (PEFT) methods on a foundation model.
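To give a sense of the full-parameter case, the sketch below fine-tunes a small causal language model with the Hugging Face Trainer; the model name, data file, and hyperparameters are placeholders chosen only to keep the example lightweight.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"  # small base model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a plain-text training file and tokenize it (path is a placeholder).
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # full fine-tuning: every parameter of the model is updated
```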
LoRA Implementation
Apply Low-Rank Adaptation (LoRA) to efficiently fine-tune large models with reduced computational overhead.
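For orientation, a minimal sketch of wrapping a base model with LoRA adapters using the peft library follows; the rank, scaling factor, and target module names are illustrative and depend on the model architecture you actually fine-tune.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies by model
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Because only the adapter weights receive gradients, memory use and optimizer state are much smaller than in full fine-tuning, which is the source of the reduced computational overhead mentioned above.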
Model Evaluation
Assess the performance of a fine-tuned model using both quantitative metrics and qualitative analysis.
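As one example of a quantitative check, the sketch below computes perplexity of a fine-tuned checkpoint on a held-out passage; the checkpoint path and evaluation text are placeholders, and qualitative review of generated outputs would complement a metric like this.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "out"  # placeholder path to a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

text = "Example held-out passage from the target domain."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Using the input ids as labels yields the mean causal-LM cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {math.exp(loss.item()):.2f}")
```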