Searching through massive collections of high-dimensional vectors to find the exact nearest neighbors can be computationally expensive, often too slow for interactive applications. At the scale encountered in vector databases, performing an exhaustive search for every query is usually impractical.
This chapter addresses this challenge by introducing Approximate Nearest Neighbor (ANN) search. You will learn why approximation is often necessary and how ANN algorithms provide a practical solution by trading a small degree of accuracy for substantial improvements in search speed and resource usage.
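To make the cost concrete, here is a minimal sketch of exact, exhaustive search using NumPy (the collection size and dimensionality are illustrative only): every query must be compared against every stored vector, so the work per query grows linearly with both the number of vectors and their dimensionality. This is the cost that ANN indexes are designed to avoid.

```python
import numpy as np

# Illustrative sizes only: 100,000 stored vectors of dimension 128.
rng = np.random.default_rng(42)
vectors = rng.standard_normal((100_000, 128)).astype(np.float32)
query = rng.standard_normal(128).astype(np.float32)

# Exact (exhaustive) nearest-neighbor search: compute the distance from
# the query to every stored vector, then take the smallest one.
distances = np.linalg.norm(vectors - query, axis=1)
nearest_id = int(np.argmin(distances))
print(f"nearest vector id: {nearest_id}, distance: {distances[nearest_id]:.4f}")
```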
We will cover:
3.1 The Need for Approximation
3.2 Core Concepts of ANN
3.3 Algorithm Overview: HNSW
3.4 Algorithm Overview: IVF
3.5 Algorithm Overview: LSH
3.6 Indexing Parameters and Tuning
3.7 Evaluating ANN Performance
3.8 Hands-on Practical: Experimenting with Index Parameters
The chapter concludes with a hands-on practical session where you will experiment with building different ANN indexes and observe the resulting performance characteristics.