To perform vector operations in machine learning tasks, Python's NumPy library is frequently used. Practical examples are provided to help you become comfortable performing these operations in code.

First, make sure you have NumPy installed and import it. It's standard practice to import it under the alias `np`.

```python
import numpy as np
```

### Representing Vectors

As discussed earlier, we represent vectors in NumPy using one-dimensional arrays. Let's create a couple of vectors to work with.

```python
# Create two vectors
v = np.array([1, 2, 3])
w = np.array([4, 5, 6])

print("Vector v:", v)
print("Vector w:", w)
```

This creates two vectors, $v = [1, 2, 3]$ and $w = [4, 5, 6]$.

### Vector Addition and Subtraction

Adding or subtracting vectors in NumPy is straightforward and performed element-wise, just like the mathematical definition.

```python
# Vector Addition
vector_sum = v + w
print("v + w =", vector_sum)

# Vector Subtraction
vector_diff = v - w
print("v - w =", vector_diff)
```

The output shows the results of $[1+4, 2+5, 3+6]$ and $[1-4, 2-5, 3-6]$. NumPy handles the element-wise operations automatically.

### Scalar Multiplication

Multiplying a vector by a scalar (a single number) is also simple. Each element of the vector is multiplied by the scalar.

```python
# Define a scalar
s = 2

# Scalar Multiplication
scaled_v = s * v
print(f"{s} * v =", scaled_v)

scaled_w = w * 0.5
print(f"0.5 * w =", scaled_w)
```

Here, vector $v$ is multiplied by 2, resulting in $[2 \times 1, 2 \times 2, 2 \times 3] = [2, 4, 6]$. Vector $w$ is multiplied by 0.5.

### Vector Norms: Measuring Length

NumPy's `linalg` submodule provides functions to calculate vector norms. The most common norms are the $L_2$ norm (Euclidean distance) and the $L_1$ norm (Manhattan distance).

The $L_2$ norm of a vector $x = [x_1, x_2, \ldots, x_n]$ is calculated as:

$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$$

The $L_1$ norm is calculated as:

$$\|x\|_1 = \sum_{i=1}^{n} |x_i|$$

```python
# Calculate L2 norm (default)
norm_v_l2 = np.linalg.norm(v)
print(f"L2 norm of v: {norm_v_l2:.4f}")  # Format to 4 decimal places

# Calculate L1 norm
norm_v_l1 = np.linalg.norm(v, ord=1)
print(f"L1 norm of v: {norm_v_l1}")

# Calculate L2 norm for w
norm_w_l2 = np.linalg.norm(w)
print(f"L2 norm of w: {norm_w_l2:.4f}")

# Calculate L1 norm for w
norm_w_l1 = np.linalg.norm(w, ord=1)
print(f"L1 norm of w: {norm_w_l1}")
```

The `np.linalg.norm` function calculates the $L_2$ norm by default.
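
As a quick sanity check, here is a minimal sketch that recomputes the $L_2$ norm of `v` directly from the formula above using `np.sqrt` and `np.sum`, then compares it to NumPy's built-in result.

```python
# Sanity check: recompute the L2 norm of v from its definition
# (square the components, sum them, take the square root)
manual_l2 = np.sqrt(np.sum(v ** 2))
print(f"Manual L2 norm of v: {manual_l2:.4f}")

# Compare with NumPy's built-in calculation
print("Matches np.linalg.norm(v):", np.isclose(manual_l2, np.linalg.norm(v)))
```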
To calculate the $L_1$ norm, we specify `ord=1`.

For $v = [1, 2, 3]$:

$L_2$ norm $= \sqrt{1^2 + 2^2 + 3^2} = \sqrt{1 + 4 + 9} = \sqrt{14} \approx 3.7417$

$L_1$ norm $= |1| + |2| + |3| = 1 + 2 + 3 = 6$

The code output matches these calculations.

### The Dot Product

The dot product of two vectors $v = [v_1, v_2, \ldots, v_n]$ and $w = [w_1, w_2, \ldots, w_n]$ is calculated as:

$$v \cdot w = \sum_{i=1}^{n} v_i w_i$$

NumPy offers several ways to compute the dot product.

```python
# Method 1: Using np.dot()
dot_product_np_dot = np.dot(v, w)
print(f"Dot product using np.dot(v, w): {dot_product_np_dot}")

# Method 2: Using the @ operator (preferred for Python 3.5+)
# This operator is specifically designed for matrix/vector multiplication
dot_product_at = v @ w
print(f"Dot product using v @ w: {dot_product_at}")

# Method 3: Using the .dot() method of a NumPy array
dot_product_method = v.dot(w)
print(f"Dot product using v.dot(w): {dot_product_method}")
```

All three methods yield the same result for the dot product of $v = [1, 2, 3]$ and $w = [4, 5, 6]$: $v \cdot w = (1 \times 4) + (2 \times 5) + (3 \times 6) = 4 + 10 + 18 = 32$

The `@` operator is often preferred in modern Python code for its clarity, visually distinguishing dot products from element-wise multiplication (`*`); the short sketch at the end of this section makes that distinction concrete.

This hands-on section demonstrated how to translate the vector operations we learned into working NumPy code. You can now create vectors, add them, scale them, measure their lengths, and compute their dot products. These operations form the building blocks for many algorithms in machine learning. In the next chapters, we will extend these ideas to matrices.
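
As a final illustrative sketch, the snippet below contrasts element-wise multiplication (`*`) with the dot product (`@`): element-wise multiplication keeps a vector of products, while summing those products reproduces `v @ w`, exactly as the formula above describes.

```python
# Element-wise multiplication keeps one product per component
elementwise = v * w
print("v * w =", elementwise)                 # [ 4 10 18]

# Summing those products gives the dot product, matching v @ w
print("sum(v * w) =", np.sum(elementwise))    # 32
print("v @ w      =", v @ w)                  # 32
```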