Matrices act as functions that transform vectors. When you multiply a vector by a matrix, the resulting vector usually changes both its magnitude (length) and its direction. However, for every square matrix, there exist specific vectors that behave differently. When the matrix operates on these vectors, the output stays on the same line through the origin: the vector may stretch, shrink, or flip, but it never leaves its span.

These special vectors are called eigenvectors. The factor by which they stretch or shrink is called the eigenvalue.

This concept is the mathematical engine behind quantum measurement. In the previous section, we discussed unitary matrices as operations. When we measure a quantum system, we are essentially asking the system to collapse into one of the eigenvectors of the measurement operator. The value we read out from that measurement is the corresponding eigenvalue.

## The Eigenvalue Equation

The relationship between a matrix $A$, an eigenvector $v$, and an eigenvalue $\lambda$ (lambda) is defined by the following equation:

$$ Av = \lambda v $$

Here is what this equation tells us:

- $A$ is the transformation matrix (the operator).
- $v$ is a non-zero vector (the eigenvector).
- $\lambda$ is a scalar value (the eigenvalue).

On the left side, we have matrix-vector multiplication. On the right side, we have scalar-vector multiplication. The equation says that the action of matrix $A$ on vector $v$ is equivalent to simply scaling $v$ by the factor $\lambda$.

```dot
digraph G {
  rankdir=LR;
  node [style=filled, shape=circle, fixedsize=true, width=0.8, fontname="Helvetica"];
  edge [penwidth=2];
  subgraph cluster_0 {
    style=invis;
    v [label="v", fillcolor="#a5d8ff", color="#1c7ed6"];
    Av [label="Av", fillcolor="#b2f2bb", color="#37b24d"];
    v -> Av [label="Matrix A Applied", color="#adb5bd", fontname="Helvetica", fontsize=10];
  }
  subgraph cluster_1 {
    style=invis;
    desc [shape=plaintext, label="Direction preserved\nMagnitude scaled by λ", fontname="Helvetica"];
  }
}
```

*The transformation of vector $v$ by matrix $A$ results in a vector pointing along the same direction, scaled by $\lambda$.*

## Physical Interpretation in Quantum Systems

In classical machine learning, eigenvectors help reduce dimensionality (as in Principal Component Analysis). In quantum computing, their role is distinct and physical.

Every measurable property of a quantum system (such as energy, spin, or position) is associated with an operator (a matrix). We call these operators *observables*. When you perform a measurement:

1. The system collapses to one of the eigenvectors of that operator.
2. The result you see on your measuring device is the eigenvalue corresponding to that eigenvector.

This explains why quantum outcomes are quantized. If an operator only has eigenvalues of $+1$ and $-1$, you will never measure a value of $0.5$. You will only ever observe $+1$ or $-1$.

## A Concrete Example: The Pauli-Z Operator

Let us examine the Pauli-Z matrix, which is often used to measure the state of a qubit in the computational basis ($|0\rangle$ and $|1\rangle$). The matrix is defined as:

$$ Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

We want to see how this matrix acts on the standard basis vectors. First, let us look at the state $|0\rangle$, represented by the vector $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

$$ Z|0\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \cdot 1 + 0 \cdot 0 \\ 0 \cdot 1 + (-1) \cdot 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} $$

The output is identical to the input:

$$ Z|0\rangle = 1|0\rangle $$

The vector $|0\rangle$ is an eigenvector of the $Z$ matrix with an eigenvalue of $1$.
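If you want to check this relation in code, a quick sketch like the one below works: it multiplies the Pauli-Z matrix by the $|0\rangle$ vector and confirms that the result equals $1 \cdot |0\rangle$ (the variable names `Z` and `ket0` are just illustrative choices).

```python
import numpy as np

# Pauli-Z matrix and the |0> state as a vector
Z = np.array([[1, 0],
              [0, -1]])
ket0 = np.array([1, 0])

# Applying Z returns the same vector: Z|0> = 1 * |0>
print(Z @ ket0)                          # [1 0]
print(np.allclose(Z @ ket0, 1 * ket0))   # True
```

Swapping in the $|1\rangle$ vector reproduces the $-1$ case worked out next.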
Now consider the state $|1\rangle$, represented by $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

$$ Z|1\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} = -1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} $$

Here, the vector remains on the same axis but points in the opposite direction:

$$ Z|1\rangle = -1|1\rangle $$

The vector $|1\rangle$ is an eigenvector of the $Z$ matrix with an eigenvalue of $-1$.

If we use the $Z$ matrix to measure a qubit, the only possible outcomes are $1$ and $-1$. The state $|0\rangle$ corresponds to the outcome $1$, and the state $|1\rangle$ corresponds to the outcome $-1$.

## Calculating with NumPy

While solving the characteristic equation by hand is manageable for simple $2 \times 2$ matrices, we rely on computational tools for larger systems. Python's NumPy library has a built-in linear algebra module, `linalg`, that handles this efficiently. We can use `np.linalg.eig()` to compute the eigenvalues and eigenvectors of a matrix.

```python
import numpy as np

# Define the Pauli-Z matrix
Z = np.array([[1, 0],
              [0, -1]])

# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(Z)

print("Eigenvalues:")
print(eigenvalues)
print("\nEigenvectors:")
print(eigenvectors)
```

Running this code yields the following output:

```
Eigenvalues:
[ 1. -1.]

Eigenvectors:
[[1. 0.]
 [0. 1.]]
```

The `eigenvalues` array contains `1` and `-1`. The `eigenvectors` array contains the corresponding vectors as columns: the first column, `[1, 0]`, corresponds to the first eigenvalue, `1`, and the second column, `[0, 1]`, corresponds to the second eigenvalue, `-1`.

## Why This Matters for Algorithms

Understanding eigenvalues is necessary for understanding how quantum algorithms extract information. Many quantum algorithms, such as Quantum Phase Estimation (a component of Shor's algorithm for factoring), rely on manipulating the system so that the answer to the problem is encoded in the eigenvalues of a unitary operator.

When a qubit is in a superposition (a mix of basis states), it is not an eigenvector of the measurement operator. In this scenario, the measurement outcome is probabilistic: the system randomly snaps to one of the eigenvectors, and the probability of each outcome depends on the overlap between the current state and that eigenvector. We will verify this behavior in the next chapter when we build circuits involving superposition and measurement.
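As a small preview of that behavior, the sketch below computes those probabilities with nothing more than the tools from this section. It assumes the standard Born rule, where each outcome's probability is the squared magnitude of the overlap between the state and the corresponding eigenvector; the equal superposition state and the variable names are illustrative choices, not part of any library API.

```python
import numpy as np

# Pauli-Z observable and an equal superposition (|0> + |1>) / sqrt(2)
Z = np.array([[1, 0],
              [0, -1]])
state = np.array([1, 1]) / np.sqrt(2)

eigenvalues, eigenvectors = np.linalg.eig(Z)

# Born rule: the probability of each outcome is the squared overlap
# between the state and the corresponding eigenvector (a column).
for value, vector in zip(eigenvalues, eigenvectors.T):
    probability = np.abs(np.vdot(vector, state)) ** 2
    print(f"Outcome {value:+.0f}: probability {probability:.2f}")
```

Both outcomes print with probability $0.50$, reflecting the fact that an equal superposition overlaps equally with $|0\rangle$ and $|1\rangle$.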