Prompt Engineering and LLM Application Development
Chapter 1: Foundations of Prompt Engineering
Introduction to Large Language Models
What is Prompt Engineering?
Components of a Prompt
Basic Prompting Techniques
Understanding LLM Temperature and Other Parameters
Hands-on practical: Simple Prompt Experiments
Quiz for Chapter 1
Chapter 2: Advanced Prompting Strategies
Zero-Shot Prompting
Few-Shot Prompting
Instruction Following Prompts
Role Prompting
Structuring Output Formats (JSON, Markdown)
Chain-of-Thought Prompting
Self-Consistency Prompting
Practice: Applying Advanced Techniques
Quiz for Chapter 2
Chapter 3: Prompt Design, Iteration, and Evaluation
Principles of Effective Prompt Design
Managing Prompt Length and Context Windows
Iterative Prompt Refinement
Evaluating Prompt Performance
Automated Prompt Testing Approaches
Version Control for Prompts
Hands-on practical: Prompt Optimization Challenge
Quiz for Chapter 3
Chapter 4: Interacting with LLM APIs
Overview of Common LLM APIs (OpenAI, Anthropic, etc.)
API Authentication and Security
Making API Requests with Python
Understanding API Request Parameters
Processing API Responses
Handling API Errors and Rate Limits
Streaming Responses
Hands-on practical: Build a Simple Q&A Bot
Quiz for Chapter 4
Chapter 5: Building Applications with LLM Frameworks
Introduction to LLM Frameworks (e.g., LangChain)
Core Components: Models, Prompts, Parsers
Understanding Chains
Managing Memory in LLM Applications
Introduction to Agents
Using Tools with Agents
Hands-on practical: Develop an Agentic Application
Quiz for Chapter 5
Chapter 6: Integrating LLMs with External Data (RAG)
Limitations of Standard LLM Knowledge
Introduction to Retrieval-Augmented Generation (RAG)
Document Loading and Splitting
Text Embedding Models
Introduction to Vector Stores
Implementing Semantic Search/Retrieval
Combining Retrieved Context with Prompts
Basic RAG Pipeline Implementation
Hands-on practical: Build a RAG Q&A System for Documents
Quiz for Chapter 6
Chapter 7: Output Parsing, Validation, and Application Reliability
Challenges with LLM Output Consistency
Prompting for Structured Data (Revisited)
Using Output Parsers
Data Validation Techniques (e.g., Pydantic)
Handling Parsing Errors
Implementing Retry Mechanisms
Moderation and Content Filtering APIs
Practice: Implementing Strong Output Handling
Quiz for Chapter 7
Chapter 8: Application Development Considerations
Structuring LLM Application Code
Managing API Keys and Secrets
Cost Estimation and Monitoring
Basic Caching Strategies
Testing LLM Applications
Simple Deployment Options (Serverless, Containers)
Hands-on practical: Containerizing a Simple LLM App
Quiz for Chapter 8
© 2025 ApX Machine Learning