Okay, let's put theory into practice. Having defined eigenvalues (λ) and eigenvectors (v) through the relationship Av = λv, and understanding that eigen-decomposition allows representing a matrix A as A = PDP⁻¹ (where P's columns are eigenvectors and D is a diagonal matrix of eigenvalues), we can now use Python's NumPy library to perform these calculations efficiently. This is a fundamental skill for applying techniques like Principal Component Analysis (PCA).
First, ensure you have NumPy installed and import it:
import numpy as np
Let's work with a simple 2x2 symmetric matrix. Symmetric matrices have useful properties, such as always having real eigenvalues and being diagonalizable.
# Define a symmetric matrix
A = np.array([[4, 2],
              [2, 1]])
print("Our matrix A:")
print(A)
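Before asking NumPy for the answer, it helps to know what to expect. The eigenvalues are the roots of the characteristic equation det(A − λI) = 0. For this matrix, det(A − λI) = (4 − λ)(1 − λ) − 2·2 = λ² − 5λ = λ(λ − 5), so the eigenvalues should be λ = 5 and λ = 0.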
NumPy's linear algebra module, linalg, provides the eig function specifically for computing eigenvalues and eigenvectors.
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("\nEigenvalues:")
print(eigenvalues)
print("\nEigenvectors (each column is an eigenvector):")
print(eigenvectors)
Output Explanation:

- eigenvalues: This is a 1D NumPy array containing the eigenvalues (λ) of the matrix A. In this case, we get [5. 0.].
- eigenvectors: This is a 2D NumPy array where each column represents an eigenvector corresponding to the eigenvalue at the same index in the eigenvalues array.
  - eigenvectors[:, 0] corresponds to eigenvalues[0].
  - eigenvectors[:, 1] corresponds to eigenvalues[1].
- NumPy typically returns normalized eigenvectors (vectors with a length of 1).
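If you want to confirm the normalization yourself, a one-line check of the column lengths looks like this:

# Each column of eigenvectors should have (approximately) unit length
print(np.linalg.norm(eigenvectors, axis=0))  # expect values close to [1. 1.]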
Let's verify the fundamental relationship Av=λv for the first eigenvalue-eigenvector pair.
# Extract the first eigenvalue and corresponding eigenvector
lambda_1 = eigenvalues[0]
v_1 = eigenvectors[:, 0] # First column
# Calculate A * v_1
Av1 = A @ v_1 # Using the @ operator for matrix multiplication
# Calculate lambda_1 * v_1
lambda1_v1 = lambda_1 * v_1
print("\nVerifying for the first eigenvalue/eigenvector:")
print(f"lambda_1: {lambda_1:.4f}")
print(f"v_1: {v_1}")
print(f"A @ v_1: {Av1}")
print(f"lambda_1 * v_1: {lambda1_v1}")
# Check if Av1 and lambda1_v1 are close (due to floating-point arithmetic)
print(f"Are Av1 and lambda1*v1 numerically close? {np.allclose(Av1, lambda1_v1)}")
You should see that the results for A @ v_1 and lambda_1 * v_1 are indeed very close, confirming the relationship. You can perform a similar check for the second eigenvalue and eigenvector. Notice that for λ=0, Av results in the zero vector, as expected.
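For instance, the same check for the second pair might look like this (the names lambda_2 and v_2 are just illustrative):

# Extract the second eigenvalue and corresponding eigenvector
lambda_2 = eigenvalues[1]
v_2 = eigenvectors[:, 1]

# Since lambda_2 is 0, A @ v_2 should be (numerically) the zero vector
print(A @ v_2)
print(f"Second pair satisfies Av = lambda*v? {np.allclose(A @ v_2, lambda_2 * v_2)}")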
Now, let's reconstruct the original matrix A using its eigenvalues and eigenvectors. Recall the formula A = PDP⁻¹, where P is the matrix whose columns are the eigenvectors, D is the diagonal matrix with the eigenvalues on its diagonal, and P⁻¹ is the inverse of P.
# Construct the matrix P from eigenvectors
P = eigenvectors

# Construct the diagonal matrix D from eigenvalues
D = np.diag(eigenvalues)

# Calculate the inverse of P
# For orthogonal matrices (like eigenvectors of symmetric matrices), P_inv = P.T,
# but we'll use np.linalg.inv for the general case.
try:
    P_inv = np.linalg.inv(P)

    # Reconstruct the original matrix A
    A_reconstructed = P @ D @ P_inv

    print("\nReconstructing A using P D P_inv:")
    print("Matrix P (Eigenvectors):")
    print(P)
    print("\nMatrix D (Diagonal Eigenvalues):")
    print(D)
    print("\nMatrix P_inv (Inverse of P):")
    print(P_inv)
    print("\nReconstructed Matrix (P @ D @ P_inv):")
    print(A_reconstructed)

    # Verify the reconstruction
    print(f"\nIs the reconstructed matrix close to the original A? {np.allclose(A, A_reconstructed)}")
except np.linalg.LinAlgError:
    print("\nMatrix P is singular and cannot be inverted. A might not be diagonalizable with this method.")
The output should show that the reconstructed matrix is numerically very close to the original matrix A. This confirms the eigen-decomposition.
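Because our A is symmetric, the eigenvector matrix P is orthogonal, so its transpose can stand in for its inverse; NumPy also provides np.linalg.eigh, a routine specialized for symmetric (Hermitian) matrices. A small optional check along these lines:

# For a symmetric A, P is orthogonal, so P.T plays the role of P_inv
print(np.allclose(P.T @ P, np.eye(2)))   # expect True
print(np.allclose(P @ D @ P.T, A))       # reconstruction using the transpose

# np.linalg.eigh is tailored to symmetric matrices and returns
# eigenvalues in ascending order
w, V = np.linalg.eigh(A)
print(w)  # e.g. [0. 5.]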
Eigenvectors represent directions that remain unchanged (except for scaling) when the transformation represented by matrix A is applied. Let's visualize this for our matrix A. We'll transform the standard basis vector e1 = [1, 0] and compare its transformation with that of the first eigenvector v_1.
# Define standard basis vector e1 (v_1 is already defined from previous steps)
e1 = np.array([1, 0])

# Apply the transformation A
Ae1 = A @ e1
Av1 = A @ v_1  # This should equal lambda_1 * v_1

# Prepare data for plotting
origin = np.array([[0, 0], [0, 0], [0, 0], [0, 0]])  # origins for the arrows
vectors = np.array([e1, Ae1, v_1, Av1])              # vectors to draw
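One possible way to draw these arrows is with Matplotlib's quiver function; the snippet below is a sketch (it assumes matplotlib is installed, and the exact styling of the figure described next may differ):

import matplotlib.pyplot as plt

# Draw all four arrows from the origin: original vectors in stronger colors,
# their transformed versions in lighter shades
plt.quiver(np.zeros(4), np.zeros(4),
           vectors[:, 0], vectors[:, 1],
           color=['tab:blue', 'lightblue', 'tab:red', 'lightcoral'],
           angles='xy', scale_units='xy', scale=1)
plt.xlim(-1, 5)
plt.ylim(-1, 3)
plt.gca().set_aspect('equal')
plt.grid(True)
plt.show()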
The plot shows the original vectors (solid lines) and their transformations by matrix A (dashed lines). The blue vector e1 is rotated and scaled. The red vector v1 (an eigenvector) is only scaled along its original direction by a factor equal to its eigenvalue (λ1≈5). Its transformed version Av1 lies on the same line.
A few points are worth keeping in mind:

- If A is not diagonalizable, np.linalg.eig might still compute eigenvalues and vectors, but P would be singular (non-invertible). Symmetric matrices, however, are always diagonalizable.
- If A is not symmetric, it might have complex eigenvalues and eigenvectors. np.linalg.eig handles this correctly, returning arrays with complex data types if necessary (see the short example below).
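As a small illustration of that last point (the rotation matrix here is just an example, not part of the exercise above):

# A 90-degree rotation matrix is not symmetric and has no real eigenvectors
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals, vecs = np.linalg.eig(R)
print(vals)        # [0.+1.j 0.-1.j]
print(vals.dtype)  # complex128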
This practical exercise demonstrates how NumPy simplifies the calculation of eigenvalues and eigenvectors and the verification of the eigen-decomposition. Understanding these computations is important for grasping how algorithms like PCA leverage the inherent structure of data revealed by these special vectors and scalars.