Having covered the principles of structuring applications and the conceptual benefits of containerization, let's put theory into practice. This hands-on exercise guides you through packaging a simple LangChain application using Docker. Containerization provides a consistent, isolated environment, making deployments predictable across different machines and platforms, which is significant for production systems. Our goal here is to create a Docker image for a basic LangChain application and run it as a container locally.
Before starting, ensure you have Docker installed and running. You can verify the installation by running the following in your terminal:

docker --version

First, let's create a minimal LangChain application. We'll build a simple chain that takes a topic and generates a brief explanation using an LLM.
Create a directory for your project, for example, langchain_docker_app. Inside this directory, create two files: app.py and requirements.txt.
requirements.txt:
langchain>=0.1.0
langchain-openai>=0.1.0 # Or your preferred LLM provider library
python-dotenv>=1.0.0
fastapi>=0.100.0
uvicorn>=0.20.0
Note: Adjust the LLM provider library (e.g., langchain-google-genai, langchain-anthropic) based on the LLM you intend to use.
app.py:
import os

from fastapi import FastAPI, HTTPException
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from dotenv import load_dotenv

# Load environment variables (especially API keys)
load_dotenv()

# Ensure your OPENAI_API_KEY is set in your .env file or environment
if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY environment variable not set.")

# 1. Initialize FastAPI app
app = FastAPI(
    title="Simple LangChain API",
    description="A basic API demonstrating LangChain with Docker.",
)

# 2. Set up the LangChain components
prompt_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that explains technical concepts simply."),
        ("human", "Explain the concept of '{topic}' in one sentence."),
    ]
)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
output_parser = StrOutputParser()

# 3. Create the chain using LCEL
chain = (
    {"topic": RunnablePassthrough()}  # Pass input directly as 'topic'
    | prompt_template
    | llm
    | output_parser
)

# 4. Define the API endpoint
@app.post("/explain")
async def explain_topic(data: dict):
    """
    Accepts a JSON object with a 'topic' key and returns an explanation.
    Example: {"topic": "Docker"}
    """
    topic = data.get("topic")
    if not topic:
        # Raise HTTPException to set the status code; returning a
        # (body, status) tuple is Flask-style and does not work in FastAPI.
        raise HTTPException(status_code=400, detail="Missing 'topic' in request body")
    try:
        result = chain.invoke(topic)
        return {"explanation": result}
    except Exception as e:
        # Basic error handling
        raise HTTPException(status_code=500, detail=str(e))

# Add a root endpoint for basic checks
@app.get("/")
async def read_root():
    return {"message": "LangChain API is running."}

# Note: To run this locally (without Docker first), you would typically use:
#   uvicorn app:app --reload --port 8000
# Remember to create a .env file with your OPENAI_API_KEY.
Create a .env file in the same directory to store your API key securely:
.env:
OPENAI_API_KEY=your_openai_api_key_here
Replace your_openai_api_key_here with your actual key.
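Before building any images, it can be worth sanity-checking the chain itself. As a quick, optional test (assuming your local Python environment has the packages from requirements.txt installed), you can import the chain from app.py and invoke it directly from the project directory:

python -c "from app import chain; print(chain.invoke('Docker'))"

If this prints a one-sentence explanation, the LangChain side works, and any later problems are likely Docker-related.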
The Dockerfile contains instructions for Docker to build your application image. Create a file named Dockerfile (no extension) in the langchain_docker_app directory.

Dockerfile:
# 1. Use an official Python runtime as a parent image
# Using the slim variant reduces image size
FROM python:3.11-slim
# 2. Set the working directory in the container
WORKDIR /app
# 3. Copy the requirements file into the container at /app
# Copy requirements first to leverage Docker cache
COPY requirements.txt .
# 4. Install any needed packages specified in requirements.txt
# --no-cache-dir reduces layer size
# --upgrade pip ensures we have the latest pip
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# 5. Copy the rest of the application code into the container at /app
COPY . .
# 6. Make port 8000 available to the world outside this container
# This is the port Uvicorn will run on
EXPOSE 8000
# 7. Define environment variable (optional, can be overridden)
# ENV OPENAI_API_KEY=your_default_key_if_any # Better to pass at runtime
# 8. Run app.py when the container launches
# Use uvicorn to run the FastAPI application
# --host 0.0.0.0 makes it accessible from outside the container
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
Let's break down these instructions:
- FROM python:3.11-slim: Specifies the base image. Using a slim version helps keep the final image size smaller.
- WORKDIR /app: Sets the working directory inside the container. Subsequent commands (COPY, RUN, CMD) are executed relative to this directory.
- COPY requirements.txt .: Copies the requirements file into the /app directory. We copy this first to take advantage of Docker's layer caching. If requirements.txt doesn't change, Docker reuses the layer where dependencies are installed, speeding up subsequent builds.
- RUN pip install ...: Installs the Python dependencies. --no-cache-dir prevents pip from storing downloads, reducing image size.
- COPY . .: Copies the rest of your project files (like app.py and .env) into the /app directory.
- EXPOSE 8000: Informs Docker that the container listens on port 8000 at runtime. This is informational; you still need to map the port when running the container.
- CMD ["uvicorn", ...]: Specifies the command to run when the container starts. Here, we start the Uvicorn server to serve our FastAPI application, binding it to 0.0.0.0 so it's reachable from outside the container's network namespace.
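One caveat about COPY . .: it copies everything in the build context, including your .env file, which would bake your API key into the image. Since we pass the key at runtime with --env-file (shown below), a safer setup is to exclude .env from the build context with a .dockerignore file in the project directory. A minimal example:

.dockerignore:
.env
__pycache__/
*.pyc

This keeps secrets and local artifacts out of the image while leaving the rest of the build unchanged.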
Navigate to your langchain_docker_app directory in your terminal. Run the following command to build the Docker image:
docker build -t langchain-simple-api:latest .
- docker build: The command to build an image from a Dockerfile.
- -t langchain-simple-api:latest: Tags the image with a name (langchain-simple-api) and a tag (latest). This makes it easier to reference the image later.
- .: Specifies the build context (the current directory), indicating where Docker should look for the Dockerfile and application files.

Docker will execute the instructions in your Dockerfile step by step. You'll see output indicating the progress of each step.
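To confirm the image now exists locally, you can list it by repository name:

docker images langchain-simple-api

You should see the image listed with its latest tag and size.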
Once the image is built successfully, you can run it as a container:
docker run -p 8000:8000 --env-file .env --name my-langchain-container langchain-simple-api:latest
Let's analyze this command:
- docker run: The command to create and start a container from an image.
- -p 8000:8000: Maps port 8000 on your host machine to port 8000 inside the container. This allows you to access the FastAPI application running inside the container via http://localhost:8000 on your machine. The format is host_port:container_port.
- --env-file .env: Loads environment variables from the specified .env file into the container. This is a secure way to pass API keys and other configuration without hardcoding them in the Dockerfile or image.
- --name my-langchain-container: Assigns a name to the running container for easier management (e.g., stopping or viewing logs).
- langchain-simple-api:latest: Specifies the image to use for creating the container.

If you want the container to run in the background (detached mode), add the -d flag:
docker run -d -p 8000:8000 --env-file .env --name my-langchain-container langchain-simple-api:latest
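Note that the host and container ports do not have to match. If port 8000 is already taken on your machine, you can map a different host port, for example:

docker run -d -p 8080:8000 --env-file .env --name my-langchain-container langchain-simple-api:latest

The application would then be reachable at http://localhost:8080, while Uvicorn still listens on port 8000 inside the container.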
With the container running, you can verify the application:
Check Container Logs (especially if run in detached mode):
docker logs my-langchain-container
You should see output from Uvicorn indicating the server has started.
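To stream the logs continuously rather than printing a one-time snapshot, add the -f (follow) flag:

docker logs -f my-langchain-container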
Access the Root Endpoint: Open your web browser or use curl:
curl http://localhost:8000/
You should receive: {"message":"LangChain API is running."}
Test the /explain Endpoint: Use curl or a tool like Postman to send a POST request:
curl -X POST http://localhost:8000/explain \
-H "Content-Type: application/json" \
-d '{"topic": "Kubernetes"}'
You should receive a JSON response with an explanation of Kubernetes generated by the LangChain chain, for example:
{"explanation": "Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications."}
(The exact explanation will vary based on the LLM's response).
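If you prefer testing from Python instead of curl, here is a minimal client sketch (assuming the requests package is installed; it is not part of our requirements.txt):

import requests

# Send a topic to the /explain endpoint running in the container
response = requests.post(
    "http://localhost:8000/explain",
    json={"topic": "Kubernetes"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["explanation"])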
To stop the container running in the foreground, press Ctrl+C in the terminal where it's running. If running in detached mode, use:
docker stop my-langchain-container
To remove the stopped container (optional, frees up the name and disk space):
docker rm my-langchain-container
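If you lose track of which containers exist, docker ps lists the running ones, and the -a flag includes stopped containers as well:

docker ps -a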
You have successfully packaged a simple LangChain application using FastAPI into a Docker container and run it locally. This process demonstrates the core workflow of containerization: defining the environment and dependencies in a Dockerfile, building a portable image, and running it predictably.
This containerized application forms the foundation for deploying to various environments discussed earlier, such as virtual machines, Kubernetes clusters, or serverless platforms. The next logical step in a production workflow would be to push this built image to a container registry (like Docker Hub, AWS ECR, Google Artifact Registry, or Azure Container Registry) from where your deployment environment can pull and run it. This practical exercise provides the essential skills needed to prepare your LangChain applications for robust deployment.
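As a sketch of that next step, pushing to Docker Hub (assuming you have an account and have authenticated with docker login; your-dockerhub-username is a placeholder) involves tagging the image with your repository name and pushing it:

docker tag langchain-simple-api:latest your-dockerhub-username/langchain-simple-api:latest
docker push your-dockerhub-username/langchain-simple-api:latest

Other registries follow the same pattern, with a registry-specific prefix in the image name.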