The relationship $Av = \lambda v$ defines eigenvalues ($\lambda$) and eigenvectors ($v$). Eigen-decomposition represents a matrix $A$ as $A = PDP^{-1}$, where the columns of $P$ are eigenvectors and $D$ is a diagonal matrix of eigenvalues. Python's NumPy library can perform these calculations efficiently, and doing so is a fundamental skill for applying techniques like Principal Component Analysis (PCA).

## Setting Up

First, ensure you have NumPy installed and import it:

```python
import numpy as np
```

## Defining a Matrix

Let's work with a simple 2x2 symmetric matrix. Symmetric matrices have useful properties, such as always having real eigenvalues and being diagonalizable.

```python
# Define a symmetric matrix
A = np.array([[4, 2],
              [2, 1]])

print("Our matrix A:")
print(A)
```

## Calculating Eigenvalues and Eigenvectors with NumPy

NumPy's linear algebra module, `linalg`, provides the `eig` function specifically for computing eigenvalues and eigenvectors.

```python
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

print("\nEigenvalues:")
print(eigenvalues)
print("\nEigenvectors (each column is an eigenvector):")
print(eigenvectors)
```

**Output Explanation:**

- `eigenvalues`: a 1D NumPy array containing the eigenvalues ($\lambda$) of the matrix `A`. In this case, we get `[5. 0.]`.
- `eigenvectors`: a 2D NumPy array where each column is an eigenvector corresponding to the eigenvalue at the same index in the `eigenvalues` array. The first column `eigenvectors[:, 0]` corresponds to `eigenvalues[0]`, and the second column `eigenvectors[:, 1]` corresponds to `eigenvalues[1]`.
- NumPy typically returns normalized eigenvectors (vectors with a length of 1).

## Verifying the Eigenvalue Equation ($Av = \lambda v$)

Let's verify the fundamental relationship $Av = \lambda v$ for the first eigenvalue-eigenvector pair.

```python
# Extract the first eigenvalue and corresponding eigenvector
lambda_1 = eigenvalues[0]
v_1 = eigenvectors[:, 0]  # First column

# Calculate A * v_1
Av1 = A @ v_1  # Using the @ operator for matrix multiplication

# Calculate lambda_1 * v_1
lambda1_v1 = lambda_1 * v_1

print("\nVerifying for the first eigenvalue/eigenvector:")
print(f"lambda_1: {lambda_1:.4f}")
print(f"v_1: {v_1}")
print(f"A @ v_1: {Av1}")
print(f"lambda_1 * v_1: {lambda1_v1}")

# Check if Av1 and lambda1_v1 are close (due to floating-point arithmetic)
print(f"Are Av1 and lambda1*v1 numerically close? {np.allclose(Av1, lambda1_v1)}")
```

You should see that the results for `A @ v_1` and `lambda_1 * v_1` are indeed very close, confirming the relationship. You can perform a similar check for the second eigenvalue and eigenvector, as sketched below. Notice that for $\lambda = 0$, $Av$ results in the zero vector, as expected.
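As a quick follow-up, here is a minimal sketch of that second check (assuming `A`, `eigenvalues`, and `eigenvectors` from the steps above are still in scope):

```python
# Second eigenvalue/eigenvector pair (lambda_2 is 0 for this matrix)
lambda_2 = eigenvalues[1]
v_2 = eigenvectors[:, 1]

# For lambda = 0, A @ v_2 should be (numerically) the zero vector
print(f"A @ v_2:        {A @ v_2}")
print(f"lambda_2 * v_2: {lambda_2 * v_2}")
print(f"Numerically close? {np.allclose(A @ v_2, lambda_2 * v_2)}")
```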
## Reconstructing the Matrix using Eigen-decomposition ($A = PDP^{-1}$)

Now, let's reconstruct the original matrix $A$ using its eigenvalues and eigenvectors. Recall the formula $A = PDP^{-1}$, where:

- $P$ is the matrix whose columns are the eigenvectors.
- $D$ is the diagonal matrix containing the eigenvalues on its diagonal.
- $P^{-1}$ is the inverse of the matrix $P$.

```python
# Construct the matrix P from eigenvectors
P = eigenvectors

# Construct the diagonal matrix D from eigenvalues
D = np.diag(eigenvalues)

# Calculate the inverse of P.
# For orthogonal matrices (such as the eigenvector matrix of a symmetric matrix),
# P_inv = P.T, but we'll use np.linalg.inv for the general case.
try:
    P_inv = np.linalg.inv(P)

    # Reconstruct the original matrix A
    A_reconstructed = P @ D @ P_inv

    print("\nReconstructing A using P D P_inv:")
    print("Matrix P (Eigenvectors):")
    print(P)
    print("\nMatrix D (Diagonal Eigenvalues):")
    print(D)
    print("\nMatrix P_inv (Inverse of P):")
    print(P_inv)
    print("\nReconstructed Matrix (P @ D @ P_inv):")
    print(A_reconstructed)

    # Verify the reconstruction
    print(f"\nIs the reconstructed matrix close to the original A? {np.allclose(A, A_reconstructed)}")
except np.linalg.LinAlgError:
    print("\nMatrix P is singular and cannot be inverted. A might not be diagonalizable with this method.")
```

The output should show that the reconstructed matrix is numerically very close to the original matrix `A`, confirming the eigen-decomposition.

## Visualizing the Transformation

Eigenvectors represent directions that remain unchanged (except for scaling) when the transformation represented by matrix $A$ is applied. Let's visualize this for our matrix `A`. We'll transform the standard basis vector $e_1 = [1, 0]$ and compare its transformation with that of the first eigenvector $v_1$.

```python
# Define the standard basis vector e1 (v_1 is already defined from previous steps)
e1 = np.array([1, 0])

# Apply the transformation A
Ae1 = A @ e1
Av1 = A @ v_1  # This should equal lambda_1 * v_1

# Prepare data for plotting: arrow origins and the vectors to draw
origin = np.array([[0, 0], [0, 0], [0, 0], [0, 0]])
vectors = np.array([e1, Ae1, v_1, Av1])
```

*Figure: "Effect of Transformation A on Vectors" — $e_1$ and $A e_1$ in blue, $v_1$ and $A v_1$ in red, drawn in the $x_1$–$x_2$ plane; original vectors are solid lines, transformed vectors are dashed lines.*

The plot shows the original vectors (solid lines) and their transformations by matrix `A` (dashed lines). The blue vector $e_1$ is rotated and scaled. The red vector $v_1$ (an eigenvector) is only scaled along its original direction by a factor equal to its eigenvalue ($\lambda_1 \approx 5$). Its transformed version $A v_1$ lies on the same line.
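If you want to reproduce a similar figure yourself, here is a minimal sketch using matplotlib (assuming matplotlib is installed; the styling choices are illustrative and not from the original figure):

```python
import numpy as np
import matplotlib.pyplot as plt

# Same matrix and vectors as in the walkthrough
A = np.array([[4, 2], [2, 1]])
eigenvalues, eigenvectors = np.linalg.eig(A)
v_1 = eigenvectors[:, 0]
e1 = np.array([1, 0])

vectors_to_plot = [
    ("e1", e1, "tab:blue", "-"),
    ("A @ e1", A @ e1, "tab:blue", "--"),
    ("v1", v_1, "tab:red", "-"),
    ("A @ v1", A @ v_1, "tab:red", "--"),
]

fig, ax = plt.subplots()
for label, vec, color, style in vectors_to_plot:
    # Draw each vector as a line segment from the origin
    ax.plot([0, vec[0]], [0, vec[1]], color=color, linestyle=style, marker="o", label=label)

ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_title("Effect of Transformation A on Vectors")
ax.set_aspect("equal")
ax.legend()
plt.show()
```

Any plotting library that draws line segments from the origin will show the same effect: the eigenvector stays on its own line, while the basis vector does not.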
## Important Notes

- **Numerical Stability:** Direct calculation of $P^{-1}$ can be numerically unstable, especially for large matrices or matrices that are close to singular (non-invertible). Methods like SVD (covered in the next chapter) are often preferred in practice for related tasks.
- **Non-Diagonalizable Matrices:** Not all square matrices can be decomposed as $PDP^{-1}$. This occurs when a matrix does not have a full set of linearly independent eigenvectors. NumPy's `np.linalg.eig` might still compute eigenvalues and vectors, but $P$ would be singular (non-invertible). Symmetric matrices, however, are always diagonalizable.
- **Complex Eigenvalues/Eigenvectors:** If the matrix `A` is not symmetric, it might have complex eigenvalues and eigenvectors. `np.linalg.eig` handles this correctly, returning arrays with complex data types if necessary. A small example appears below.

This practical exercise demonstrates how NumPy simplifies the calculation of eigenvalues and eigenvectors and the verification of the eigen-decomposition. Understanding these computations is important for grasping how algorithms like PCA leverage the inherent structure of data revealed by these special vectors and scalars.
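To illustrate the complex case, here is a brief sketch using a 90-degree rotation matrix (an example chosen for illustration, not part of the exercise above):

```python
import numpy as np

# A 90-degree rotation matrix: no real vector keeps its direction,
# so its eigenvalues and eigenvectors are complex.
R = np.array([[0, -1],
              [1,  0]])

vals, vecs = np.linalg.eig(R)

print("Eigenvalues:", vals)             # +1j and -1j (order may vary)
print("dtype:", vals.dtype)             # a complex dtype, e.g. complex128
print("First eigenvector:", vecs[:, 0])
print("Check R v = lambda v:", np.allclose(R @ vecs[:, 0], vals[0] * vecs[:, 0]))
```

For real symmetric (or complex Hermitian) matrices, `np.linalg.eigh` is a useful alternative to `eig`: it guarantees real eigenvalues and orthonormal eigenvectors.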