While using the requests library provides a fundamental way to interact with HTTP APIs, including those for Large Language Models, it often involves writing repetitive boilerplate code for handling authentication, structuring request bodies, and parsing responses. For many popular LLM providers, a more convenient and structured approach is to use their official Python client libraries (also known as SDKs, Software Development Kits).
These libraries act as wrappers around the underlying HTTP API calls. They offer several advantages over raw requests calls:

- Authentication is handled for you, typically by reading an API key from an environment variable.
- Request bodies are built from ordinary Python arguments instead of hand-assembled JSON payloads.
- Responses are parsed into structured objects with convenient attribute access.
- Errors are raised as library-specific exception types rather than left as raw HTTP status codes.

OpenAI provides a popular and well-maintained Python library for interacting with its models like GPT-4 and GPT-3.5 Turbo.
First, you need to install the library:
pip install openai
Recall from Chapter 2 the importance of securely managing your API keys. Assuming you have set your OpenAI API key as an environment variable (OPENAI_API_KEY), initializing the client and making a simple chat completion request looks like this:
import os
from openai import OpenAI, OpenAIError

# Client automatically picks up the OPENAI_API_KEY environment variable
# You can also pass it directly: client = OpenAI(api_key="your_api_key")
client = OpenAI()

try:
    # Make a request to the chat completions endpoint
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # Specify the model
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"}
        ],
        max_tokens=50,   # Limit the response length
        temperature=0.7  # Controls randomness (0=more deterministic, higher=more random)
    )

    # Extract the response content
    # The response object is structured, making access easier
    if response.choices:
        message_content = response.choices[0].message.content
        print(f"Assistant: {message_content.strip()}")
    else:
        print("No response choices received.")

    # Print usage information (useful for cost tracking)
    print(f"Usage: {response.usage}")

except OpenAIError as e:
    # Specific error handling provided by the library
    print(f"An API error occurred: {e}")
Compare this to constructing the equivalent request using requests. You would need to manually:

- Set the Authorization: Bearer <your_api_key> header.
- Assemble the messages list and other parameters into a JSON payload.
- Send a POST request to the correct endpoint URL (https://api.openai.com/v1/chat/completions).

The openai library handles these details, allowing you to focus on the core logic: defining the prompt (messages), selecting the model, and processing the structured response object.
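For comparison, here is a minimal sketch of the same call made directly with requests, assuming the same OPENAI_API_KEY environment variable (error handling kept to a bare raise_for_status for brevity):

import os
import requests

api_key = os.environ["OPENAI_API_KEY"]

# Manually set the authentication header
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

# Manually assemble the JSON payload
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 50,
}

# Manually POST to the correct endpoint and parse the JSON response
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers=headers,
    json=payload,
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data["choices"][0]["message"]["content"].strip())

Every line of headers, payload, and response parsing here is code the openai library writes for you.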
Anthropic, the provider of the Claude family of models, also offers a Python client library. The pattern is similar, highlighting how these libraries provide a consistent developer experience for their respective services.
Install the library:
pip install anthropic
Assuming your Anthropic API key is set as ANTHROPIC_API_KEY:
import os
from anthropic import Anthropic, AnthropicError

# Client automatically picks up the ANTHROPIC_API_KEY environment variable
# Or initialize with: client = Anthropic(api_key="your_api_key")
client = Anthropic()

try:
    message = client.messages.create(
        model="claude-3-opus-20240229",  # Specify the model
        max_tokens=100,
        temperature=0.7,
        system="You are a helpful assistant.",  # System prompt is a separate parameter
        messages=[
            {
                "role": "user",
                "content": "What is the main purpose of the Python requests library?"
            }
        ]
    )

    # Access the response content from the structured object
    if message.content:
        # Note the structure difference from OpenAI: a list of content blocks
        print(f"Assistant: {message.content[0].text.strip()}")
    else:
        print("No response content received.")

    # Print usage (structure differs from OpenAI)
    print(f"Usage: Input Tokens={message.usage.input_tokens}, Output Tokens={message.usage.output_tokens}")

except AnthropicError as e:
    # Specific error handling provided by the library
    print(f"An Anthropic API error occurred: {e}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
Notice the similarities: client initialization, a method call (messages.create), model and parameter specification, and accessing results via a structured response object (message). However, the specific method names, parameter names (the system prompt is a separate parameter), and response structure (message.content[0].text) differ, reflecting Anthropic's API design.
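These differences are small enough to hide behind a thin wrapper if your application needs to switch between providers. As a minimal sketch, using the same models as the examples above (the ask_llm helper and its provider argument are illustrative, not part of either SDK):

from openai import OpenAI
from anthropic import Anthropic

def ask_llm(provider: str, prompt: str,
            system: str = "You are a helpful assistant.") -> str:
    """Illustrative helper normalizing the two SDKs' calling conventions."""
    if provider == "openai":
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            max_tokens=100,
            messages=[
                {"role": "system", "content": system},  # system prompt in messages
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content
    elif provider == "anthropic":
        client = Anthropic()
        message = client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=100,
            system=system,  # system prompt is a separate parameter
            messages=[{"role": "user", "content": prompt}],
        )
        return message.content[0].text
    raise ValueError(f"Unknown provider: {provider}")

print(ask_llm("openai", "What is the capital of France?"))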
Generally, using the official client library is recommended when one is available and maintained. It leads to more maintainable, readable, and robust code by abstracting away the HTTP details and providing vendor-specific conveniences and error handling.
You might stick with direct requests calls if:

- The provider does not offer an official (or well-maintained) Python library.
- You want to keep your project's dependencies to a minimum.
- You need fine-grained control over the underlying HTTP behavior, such as custom sessions, proxies, or retry logic.
For most development within the Python ecosystem, leveraging these official libraries will significantly speed up your workflow and reduce the potential for errors when interacting with LLM APIs. They form a significant part of the tooling landscape we'll explore further in subsequent chapters.