When matrices represent linear transformations, they act on vectors by rotating, scaling, or shearing them. For certain special vectors, however, a given matrix does not change the vector's direction: such a vector may get longer, shorter, or even reverse direction, but it stays on its original line through the origin. These special vectors and their associated scaling factors are fundamental, characteristic properties of the matrix itself.
Let's formalize this. For a given $n \times n$ square matrix $A$, we look for non-zero vectors $x$ and scalars $\lambda$ that satisfy the following equation:
$$Ax = \lambda x$$
This equation is the heart of the matter. Let's break down its components:

- $A$ is the given $n \times n$ matrix representing the transformation.
- $x$ is a non-zero vector, called an eigenvector of $A$.
- $\lambda$ is a scalar, called the eigenvalue associated with $x$.
In simple terms, applying the matrix $A$ to its eigenvector $x$ has the same effect as simply scaling $x$ by the eigenvalue $\lambda$: the product $Ax$ is a vector parallel to the original vector $x$.
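To make the definition concrete, the short NumPy sketch below checks $Ax = \lambda x$ for each eigenpair returned by `np.linalg.eig`; the matrix $A$ here is an arbitrary choice for illustration, not one used elsewhere in this section:

```python
import numpy as np

# An arbitrary 2x2 matrix chosen for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    # Applying A to an eigenvector has the same effect as
    # scaling that eigenvector by its eigenvalue.
    print(np.allclose(A @ x, lam * x))  # True for every eigenpair
```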
The word "eigen" comes from German and means "own" or "characteristic". Eigenvalues and eigenvectors are indeed characteristic properties intrinsic to the matrix A, revealing fundamental aspects of the transformation it represents.
An eigenvalue $\lambda$ and its corresponding eigenvector $x$ form an eigenpair $(\lambda, x)$. A matrix can have multiple distinct eigenvalues, and each eigenvalue can be associated with one or more linearly independent eigenvectors. However, a single non-zero eigenvector $x$ corresponds to exactly one eigenvalue.
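A direct consequence of the definition is that rescaling an eigenvector does not change its eigenvalue: $A(cx) = c(Ax) = c(\lambda x) = \lambda(cx)$ for any non-zero scalar $c$. Here is a minimal sketch of this fact, reusing the same arbitrary illustrative matrix as above:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # same arbitrary illustrative matrix

eigenvalues, eigenvectors = np.linalg.eig(A)
lam, x = eigenvalues[0], eigenvectors[:, 0]

# Any non-zero multiple of an eigenvector is still an eigenvector
# for the same eigenvalue lambda.
for c in (2.0, -1.0, 0.5):
    print(np.allclose(A @ (c * x), lam * (c * x)))  # True each time
```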
Consider a simple example. Let $A$ be the identity matrix $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Applying $A$ to any vector $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$ gives:
$$Ax = Ix = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 1 \cdot x$$
Here, $Ax = 1 \cdot x$. This fits the definition $Ax = \lambda x$ with $\lambda = 1$. This means that for the identity matrix, every non-zero vector is an eigenvector with eigenvalue $1$. This makes sense geometrically: the identity transformation doesn't change any vector.
Now consider the scaling matrix $B = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$. Applying $B$ to any vector $x$:
$$Bx = \begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3x_1 \\ 3x_2 \end{pmatrix} = 3 \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 3x$$
In this case, $Bx = 3x$. Again, every non-zero vector is an eigenvector, but this time the eigenvalue is $\lambda = 3$. The transformation uniformly scales every vector by a factor of $3$, preserving all directions.
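Both examples are easy to verify numerically. The sketch below uses NumPy's `np.linalg.eigvals` to confirm the eigenvalues of $I$ and $B$, and checks the eigenvector equation for one arbitrarily chosen test vector:

```python
import numpy as np

# The identity matrix: every direction is preserved, so lambda = 1.
I = np.eye(2)
print(np.linalg.eigvals(I))  # [1. 1.]

# The uniform scaling matrix B = 3I: every direction scales by 3.
B = 3.0 * np.eye(2)
print(np.linalg.eigvals(B))  # [3. 3.]

# Any non-zero vector serves as an eigenvector of both matrices.
x = np.array([2.0, -1.0])
print(np.allclose(I @ x, 1.0 * x))  # True: Ix = 1x
print(np.allclose(B @ x, 3.0 * x))  # True: Bx = 3x
```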
These examples are straightforward. For most matrices, only specific directions (the eigenvectors) are preserved, and finding them requires more systematic methods. The next sections will explore the geometric meaning of eigenvectors more deeply and introduce the standard techniques for calculating eigenvalues and eigenvectors, starting with the characteristic equation.