Working with different Large Language Model providers can be complicated. Each service, from OpenAI to Anthropic to Google, has its own API structure, authentication methods, and model naming conventions. A unified configuration system solves this by creating a consistent interface, allowing you to define settings for multiple providers and models in one place. This makes your application more flexible and easier to maintain, as switching between models or adding new ones becomes a simple configuration change rather than a code overhaul.
The toolkit's configuration is managed through three main components: ConfigManager, ProviderConfig, and ModelConfig. The ConfigManager is the central registry that holds everything, a ProviderConfig stores provider-level settings such as which environment variable contains the API key, and a ModelConfig describes a specific model, such as gpt-4o-mini or claude-3-5-haiku, including default generation parameters and which provider it belongs to.
Let's start by creating a configuration manager. It's good practice to enable auto_load_env=True, which allows the manager to automatically discover API keys from your system's environment variables.
from kerb.config import ConfigManager, ModelConfig, ProviderConfig
from kerb.config.enums import ProviderType
# Create a configuration manager
config = ConfigManager(
    app_name="my_llm_app",
    auto_load_env=True
)
Next, we'll define configurations for the providers we want to use. The most important setting here is api_key_env_var, which specifies the name of the environment variable containing the API key. This is a security best practice that prevents you from hardcoding sensitive credentials in your source code.
# Configure the OpenAI provider
openai_provider = ProviderConfig(
    provider=ProviderType.OPENAI,
    api_key_env_var="OPENAI_API_KEY"
)
# Configure the Anthropic provider
anthropic_provider = ProviderConfig(
    provider=ProviderType.ANTHROPIC,
    api_key_env_var="ANTHROPIC_API_KEY"
)
# Add the providers to the manager
config.add_provider(openai_provider)
config.add_provider(anthropic_provider)
With the providers defined, you can now configure specific models. Each ModelConfig is linked to a ProviderConfig and can hold default parameters like max_tokens or temperature that will be used for generation calls with that model.
# Configure an OpenAI model
gpt4o_mini_config = ModelConfig(
    name="gpt-4o-mini",
    provider=ProviderType.OPENAI,
    max_tokens=4096,
    temperature=0.5
)
# Configure an Anthropic model
claude_haiku_config = ModelConfig(
    name="claude-3-5-haiku-20241022",
    provider=ProviderType.ANTHROPIC,
    max_tokens=1024,
    temperature=0.7
)
# Add the models to the manager
config.add_model(gpt4o_mini_config)
config.add_model(claude_haiku_config)
The diagram below illustrates how these components fit together. The ConfigManager holds everything, while each ModelConfig points to a ProviderConfig that handles the actual API communication.
The ConfigManager serves as a central registry for ProviderConfig and ModelConfig objects, creating a structured and manageable configuration system.
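As a small illustration of that relationship, each ModelConfig points to its provider through the shared ProviderType value. The snippet below only inspects the objects defined earlier; it assumes the constructor arguments (provider, api_key_env_var) are readable back as attributes, which is not guaranteed by anything shown so far.
# Illustrative only: see how a model config references its provider
# via the shared ProviderType value (attribute names assumed to
# mirror the constructor arguments used above).
print(claude_haiku_config.provider)      # ProviderType.ANTHROPIC
print(anthropic_provider.provider)       # ProviderType.ANTHROPIC
print(claude_haiku_config.provider == anthropic_provider.provider)  # True
print(anthropic_provider.api_key_env_var)  # "ANTHROPIC_API_KEY"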
Storing API keys directly in your code is a significant security risk. The recommended approach is to use environment variables, which keep your credentials separate from your application logic. The toolkit is designed to work with this pattern.
To make the previous code work, you would set the corresponding environment variables in your terminal before running your application:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
When you make an API call, the ConfigManager automatically finds the correct ProviderConfig, reads the API key from the specified environment variable, and authenticates the request. This setup not only improves security but also makes it easy to manage different keys for development, staging, and production environments.
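If you want to confirm that this resolution will succeed before making any calls, a quick sanity check is to read the variables yourself. This is a sketch, not part of the toolkit's API, and it assumes the api_key_env_var value passed to each ProviderConfig is accessible as an attribute:
import os

# Sanity check (illustrative, not a toolkit feature): verify that the
# environment variable named in each provider config is actually set.
for provider_cfg in (openai_provider, anthropic_provider):
    env_var = provider_cfg.api_key_env_var
    if not os.getenv(env_var):
        print(f"Warning: {env_var} is not set; calls to this provider will fail.")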
Once set up, you can easily retrieve configurations for any model you have defined.
# Retrieve the configuration for a specific model
retrieved_model = config.get_model("gpt-4o-mini")
if retrieved_model:
    print(f"Model: {retrieved_model.name}")
    print(f"Provider: {retrieved_model.provider.value}")
    print(f"Default Temperature: {retrieved_model.temperature}")
This configuration object is what the toolkit's generate function uses behind the scenes to direct your request to the correct LLM provider with the right settings. You now have a flexible foundation for making generation calls, which we will cover in the next section.