Having prepared your system and gained an understanding of Large Language Models and where to find them, this chapter focuses on the practical steps to get an LLM running on your machine.
You will learn about software tools designed specifically to simplify the process of downloading, managing, and interacting with local LLMs. We will cover the setup and basic usage of two popular options: Ollama, a command-line tool, and LM Studio, a graphical application. You will see how to use these tools to download a model file (often in the .gguf format) and then load it for interaction. We will also briefly touch upon llama.cpp, a fundamental library that powers many of these tools.
The objective is to guide you through downloading and running your first LLM, enabling basic text generation directly on your computer.
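As a preview of the workflow covered in the sections below, the basic Ollama cycle of downloading and then running a model looks like this. This is a sketch, not something to run yet: it assumes Ollama is already installed (covered in section 4.2), and the model name `llama3.2` is just an example from the Ollama library; any model tag you choose works the same way.

```shell
# Download a model's weights (packaged in the .gguf-based format) to your machine.
# "llama3.2" is an example model tag, not a requirement of the tool.
ollama pull llama3.2

# Load the downloaded model and start an interactive chat session in the terminal.
ollama run llama3.2
```

Once `ollama run` starts, you type a prompt at the `>>>` prompt and the model's response is generated locally, with no network round trip to a cloud service.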
4.1 Introduction to Local LLM Runners
4.2 Setting up Ollama
4.3 Downloading a Model with Ollama
4.4 Running a Model with Ollama (Command Line)
4.5 Setting up LM Studio
4.6 Finding and Downloading Models in LM Studio
4.7 Loading and Chatting with a Model in LM Studio
4.8 Introduction to llama.cpp (Concept)
4.9 Hands-on Practical: Running a Model
© 2025 ApX Machine Learning