Learn to build sophisticated applications powered by Large Language Models using the Kerb toolkit. The progression moves from fundamental text generation to complex, multi-step autonomous agents. Along the way, you will learn to manage prompts, handle external data for Retrieval-Augmented Generation (RAG), implement conversational memory, and optimize your applications for production environments. Practical implementation is the primary focus, equipping you with the skills to build, test, and deploy modern AI systems.
This documentation serves not only as a reference for the Kerb toolkit, but also as a foundation for building LLM applications in general. We begin with the core components for communicating with LLMs and gradually assemble them into complete applications. The following diagram illustrates the path ahead.
The course progresses from foundational skills to complete RAG systems, autonomous agents, and production-ready applications.
Before we begin, a few setup steps are necessary to ensure your environment is ready. This documentation assumes you have a working knowledge of Python and a general familiarity with what Large Language Models are and how APIs work.
The first step is to install the toolkit. You can install it directly from the Python Package Index (PyPI) using pip. Open your terminal and run the following command:
pip install "kerb[all]"
This command installs the core library along with all modules required for advanced topics like RAG, Agents, and Evaluation. Some specific third-party tools (like certain vector databases or document loaders) may require additional packages, which will be noted in the relevant chapters.
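To confirm the installation succeeded, you can import the package from a Python one-liner. Note that the `__version__` attribute shown here is an assumption about the package's metadata; if it is not exposed, a bare import completing without error is confirmation enough.

python -c "import kerb; print(kerb.__version__)"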
The toolkit provides a unified interface to interact with various LLM providers, such as OpenAI, Anthropic, and Google. To use these services, you will need to obtain API keys from their respective platforms.
Throughout this documentation, we will use models from these providers. It is recommended to have at least an OpenAI key to follow all examples.
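As a preview of what this unified interface looks like in practice, here is a minimal sketch of a generation call. The `generate` function, its parameters, and the model identifier below are illustrative assumptions, not the toolkit's confirmed API; the actual interface is introduced in the next chapter.

# Illustrative sketch only: the import, function signature, and model
# name are assumptions made for this preview, not the confirmed API.
from kerb import generate

response = generate(
    "Explain retrieval-augmented generation in one sentence.",
    model="gpt-4o-mini",  # hypothetical model identifier
)
print(response)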
For security and flexibility, you should never hardcode API keys directly into your source code. The best practice is to store them as environment variables. The toolkit is designed to load keys from these variables automatically, and the variable names below follow the standard conventions used by each provider's official SDK.
Set up the following environment variables in your system.
For Linux and macOS:
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
For Windows (Command Prompt):
set OPENAI_API_KEY=your-openai-api-key
set ANTHROPIC_API_KEY=your-anthropic-api-key
set GOOGLE_API_KEY=your-google-api-key
For Windows (PowerShell):
$Env:OPENAI_API_KEY="your-openai-api-key"
$Env:ANTHROPIC_API_KEY="your-anthropic-api-key"
$Env:GOOGLE_API_KEY="your-google-api-key"
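Before moving on, you can verify that the variables are visible to Python. This check uses only the standard library, so it works regardless of how the toolkit itself loads credentials:

import os

# Report which provider keys are present without revealing their values.
for name in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"):
    print(name, "is set" if os.environ.get(name) else "is MISSING")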
By setting these variables, the toolkit's configuration module can securely access your credentials without exposing them in your code. We will explore this configuration system in more detail later in this chapter. With your environment now configured, you are ready to start building.