Now that we've discussed the elements of a well-structured Python development environment for LLM workflows, let's put these ideas into practice. This section provides concrete steps to configure your local setup, building upon the concepts of virtual environments, dependency management, and secure API key handling covered earlier. Following these steps will give you a clean, reproducible foundation for the projects in this course.
We assume you have a suitable version of Python installed on your system, as discussed in the "Choosing Your Python Version" section.
First, create a dedicated folder for your LLM project. Open your terminal or command prompt and navigate to where you usually store your development projects. Then, create and enter the new directory:
mkdir python-llm-course-project
cd python-llm-course-project
Keeping projects in separate directories is standard practice and helps maintain organization.
Inside your project directory, create a virtual environment using Python's built-in venv module. We'll name the environment directory .venv (the leading dot marks it as hidden on Unix-like systems, a common convention for environment directories).
python -m venv .venv
This command creates the .venv subdirectory, which contains a copy of the Python interpreter and the tools needed to manage packages independently for this project.
Before installing packages, you need to activate the environment. The activation command differs slightly between operating systems:
macOS / Linux (bash/zsh):
source .venv/bin/activate
Windows (Command Prompt):
.venv\Scripts\activate.bat
Windows (PowerShell):
.venv\Scripts\Activate.ps1
(You might need to adjust your PowerShell execution policy: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser)
Once activated, your terminal prompt will typically change to show the environment name (e.g., (.venv) your-user@hostname:~/python-llm-course-project$), indicating that pip commands will now operate within this isolated environment.
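If you want to double-check which interpreter is active, the standard library can tell you directly. The sketch below is a hypothetical helper, not one of the course files: in a virtual environment, sys.prefix points at the environment directory while sys.base_prefix still points at the base installation.

```python
# check_venv.py -- quick confirmation of which interpreter is running
# (hypothetical helper script, not part of the course files)
import sys

def in_virtualenv() -> bool:
    """Return True when the running interpreter belongs to a venv.

    Inside a venv, sys.prefix differs from sys.base_prefix.
    """
    return sys.prefix != sys.base_prefix

print(f"Interpreter: {sys.executable}")
print(f"Virtual environment active: {in_virtualenv()}")
```

Running this with the environment active should report a path under .venv and print True.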
With the virtual environment active, let's use pip to install some foundational libraries we'll use throughout the course. We'll start with python-dotenv for managing environment variables (like API keys), the core LangChain package, and the OpenAI client.
python -m pip install python-dotenv langchain openai
Note: Depending on the specific LLM providers you plan to use later, you might need additional packages such as anthropic or google-generativeai. We'll install those as needed.
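To confirm the installation worked, you can query the installed versions from Python itself. This sketch uses the standard-library importlib.metadata (Python 3.8+); it reports "not installed" for any package that isn't present rather than raising:

```python
# Sketch: confirming installed package versions with importlib.metadata.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("python-dotenv", "langchain", "openai"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```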
Creating a requirements.txt File

To ensure your environment is reproducible and to share dependencies easily, generate a requirements.txt file. This file lists all the packages installed in your current environment along with their specific versions.
python -m pip freeze > requirements.txt
You can inspect the generated requirements.txt file. It will contain entries like langchain==X.Y.Z and openai==A.B.C, plus their dependencies. Anyone who wants to recreate the same environment can run pip install -r requirements.txt.
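The `pip freeze` format is simple enough to parse by hand, which is occasionally useful for auditing pins. A minimal sketch (the sample contents and version numbers below are illustrative, not real pins from this project):

```python
# Sketch: reading pinned versions back out of pip freeze output.
def parse_requirements(text: str) -> dict[str, str]:
    """Map package names to pinned versions from `pip freeze` output."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip blanks, comments, and non-pinned entries
        name, _, version = line.partition("==")
        pins[name] = version
    return pins

# Illustrative sample, not the actual contents of this project's file
sample = """\
langchain==0.1.0
python-dotenv==1.0.1
# a comment line
"""
print(parse_requirements(sample))
# {'langchain': '0.1.0', 'python-dotenv': '1.0.1'}
```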
Storing API Keys (.env file)

As discussed in "Setting Up API Keys Securely", we'll use a .env file to store sensitive credentials. Create a file named .env in the root of your project directory (python-llm-course-project). Add your API keys in the following format:
# .env file
OPENAI_API_KEY='your_actual_openai_api_key_here'
# ANTHROPIC_API_KEY='your_anthropic_key_if_needed'
# Add other keys as required
Replace 'your_actual_openai_api_key_here' with your real key. Important: never commit this .env file to a version control system like Git.
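Conceptually, all python-dotenv does is read KEY=VALUE lines and place them in os.environ. The simplified sketch below (a toy stand-in, not the real library, which also handles quoting rules, export prefixes, and multiline values) shows the idea; the key name used is a made-up example:

```python
# Sketch: a toy version of what load_dotenv() does conceptually.
import os

def load_env_text(text: str) -> None:
    """Parse KEY=VALUE lines and set them in os.environ if unset."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # Strip surrounding quotes, as a .env file commonly uses them
        os.environ.setdefault(key.strip(), value.strip().strip("'\""))

load_env_text("# demo\nMY_DEMO_KEY='demo-123'")
print(os.environ["MY_DEMO_KEY"])  # demo-123
```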
Creating a .gitignore File

To prevent accidental commits of your virtual environment directory and your sensitive .env file, create a .gitignore file in your project root. Add the following lines:
# .gitignore file
# Virtual Environment
.venv/
# Environment variables
.env
# Python cache files
__pycache__/
*.pyc
This tells Git to ignore these files and directories.
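As a rough sanity check of what those patterns cover, you can simulate the matching with the standard library's fnmatch. This is only an approximation (Git's ignore rules differ in details, e.g. directory handling and anchoring), and the paths listed are illustrative:

```python
# Sketch: approximating .gitignore matching with fnmatch.
# Note: Git's actual ignore semantics are more nuanced than fnmatch.
from fnmatch import fnmatch

patterns = [".venv/*", ".env", "__pycache__/*", "*.pyc"]
paths = [
    ".venv/bin/python",
    ".env",
    "app/__pycache__/cached.pyc",
    "verify_setup.py",
    "requirements.txt",
]

for path in paths:
    ignored = any(fnmatch(path, p) for p in patterns)
    print(f"{path}: {'ignored' if ignored else 'tracked'}")
```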
Let's create a small Python script (verify_setup.py) in the project root to confirm that the environment is active, packages are installed, and API keys can be loaded.
# verify_setup.py
import os
from dotenv import load_dotenv

print("Attempting to load environment variables...")

# Load variables from the .env file into environment variables
load_success = load_dotenv()

if load_success:
    print(".env file loaded successfully.")
else:
    print("Warning: .env file not found or failed to load.")

# Attempt to retrieve the API key from environment variables
api_key = os.getenv("OPENAI_API_KEY")

if api_key:
    # Basic check without printing the key itself
    print("OpenAI API Key found in environment variables.")
    print(f"Key Length: {len(api_key)}")
else:
    print("Error: OpenAI API Key not found in environment variables.")
    print("Please ensure it's set correctly in your .env file and the file is loaded.")

# Verify package import
try:
    from langchain_core.prompts import PromptTemplate
    print("Successfully imported PromptTemplate from langchain_core.")
except ImportError as e:
    print(f"Error importing from LangChain: {e}")
    print("Please ensure LangChain is installed correctly in your virtual environment.")
Run this script from your terminal (make sure your virtual environment is still active):
python verify_setup.py
If everything is configured correctly, you should see output confirming that the .env file was loaded, the API key was found (reported by its length, not its value), and the LangChain component was imported successfully. If you encounter errors, carefully review the preceding steps: ensure the .env file exists in the correct location, the virtual environment is active, and the libraries installed without issues.
At this point, your project directory should look similar to this structure:

python-llm-course-project/
├── .venv/              # virtual environment
├── .env                # API key storage (not committed)
├── .gitignore          # ignored files list
├── requirements.txt    # pinned dependencies
└── verify_setup.py     # verification script
You now have a functional and isolated Python environment ready for developing LLM applications. This setup provides a stable base for installing further libraries, writing code, and managing dependencies throughout the course without interfering with other projects or your global Python installation.
© 2025 ApX Machine Learning