Now that we understand how vectors represent data, let's look at the fundamental operations we can perform on them. These operations form the building blocks for many machine learning algorithms. We'll cover addition, subtraction, and scalar multiplication, exploring both their mathematical definitions and geometric interpretations.
Adding two vectors is straightforward: you add their corresponding elements. If you have two vectors v and w of the same dimension (say, in R^n), their sum v + w is a new vector where each element is the sum of the elements at the same position in v and w.
Mathematically, if v = [v_1, v_2, ..., v_n] and w = [w_1, w_2, ..., w_n], then: v + w = [v_1 + w_1, v_2 + w_2, ..., v_n + w_n]
Geometric Interpretation: Geometrically, vector addition can be visualized using the "tip-to-tail" method or the parallelogram rule. Imagine placing the tail of vector w at the tip of vector v. The resulting vector v+w starts at the origin (tail of v) and ends at the new tip of w. Alternatively, if you form a parallelogram with v and w starting from the origin, the diagonal of the parallelogram starting from the origin represents the sum v+w.
Let's visualize this with two 2D vectors, v = [2, 1] and w = [1, 3]. Their sum is v + w = [2+1, 1+3] = [3, 4].
The vector v (blue) is added to w (green, solid line starting from v's tip) using the tip-to-tail method. The resultant vector v+w (orange) goes from the origin to the final tip. The dashed green vector shows w starting from the origin.
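Although we cover NumPy in detail later, here is a minimal preview of how this addition looks in code, assuming NumPy is installed:

```python
import numpy as np

# The two example vectors from above
v = np.array([2, 1])
w = np.array([1, 3])

# The + operator adds corresponding elements
s = v + w
print(s)  # [3 4]
```

Note that `+` on NumPy arrays performs element-wise addition, exactly matching the mathematical definition, whereas `+` on plain Python lists would concatenate them instead.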
Similar to addition, vector subtraction involves subtracting corresponding elements. For vectors v and w of the same dimension: v − w = [v_1 − w_1, v_2 − w_2, ..., v_n − w_n]
Geometric Interpretation: Vector subtraction v−w can be thought of as adding the negative of vector w to v, i.e., v+(−w). The vector −w has the same magnitude as w but points in the opposite direction. Geometrically, the vector v−w is the vector that starts at the tip of w and ends at the tip of v, when both v and w start from the origin. This vector represents the displacement from w to v.
Consider v = [4, 2] and w = [1, 3]. Then v − w = [4−1, 2−3] = [3, −1].
The vectors v (blue) and w (green) start from the origin. The vector v−w (orange) represents the displacement from the tip of w to the tip of v.
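The same element-wise behavior applies to subtraction. A quick sketch with the example vectors above:

```python
import numpy as np

v = np.array([4, 2])
w = np.array([1, 3])

# Element-wise subtraction: the displacement from the tip of w to the tip of v
d = v - w
print(d)  # [ 3 -1]
```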
Scalar multiplication involves multiplying a vector by a single number (a scalar). This operation scales the vector's magnitude and potentially reverses its direction if the scalar is negative. If c is a scalar and v = [v_1, v_2, ..., v_n] is a vector: cv = [cv_1, cv_2, ..., cv_n]
Geometric Interpretation: Multiplying a vector v by a scalar c results in a new vector pointing in the same direction as v if c > 0, and in the opposite direction if c < 0. The magnitude (length) of the resulting vector is |c| times the magnitude of the original vector v. If c = 0, the result is the zero vector.
For example, if v = [2, 1] and c = 2.5: 2.5v = [2.5×2, 2.5×1] = [5, 2.5]. This new vector points in the same direction as v but is 2.5 times longer.
If c = −1: −1v = [−1×2, −1×1] = [−2, −1]. This vector has the same length as v but points in the opposite direction.
The original vector v (blue), scaled by 2.5 (orange), and scaled by -1 (purple). Scaling stretches or shrinks the vector and can reverse its direction.
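In code, scalar multiplication is just `*` between a number and an array. The sketch below also checks the magnitude claim using `np.linalg.norm`, which computes a vector's length:

```python
import numpy as np

v = np.array([2, 1])

# Scaling by a positive scalar stretches the vector
stretched = 2.5 * v    # [5.  2.5]

# Scaling by -1 keeps the length but reverses the direction
reversed_v = -1 * v    # [-2 -1]

# The resulting magnitude is |c| times the original (up to floating-point rounding)
ratio = np.linalg.norm(stretched) / np.linalg.norm(v)
print(ratio)  # approximately 2.5
```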
These fundamental operations (addition, subtraction, and scalar multiplication) are essential tools. They allow us to combine and manipulate vector representations of data, forming the basis for more complex algorithms and transformations we'll encounter later, such as calculating centroids, updating model parameters, or projecting data. In the upcoming sections, we'll see how to perform these efficiently using Python's NumPy library.
© 2025 ApX Machine Learning