Matrices act as functions that transform vectors. When you multiply a vector by a matrix, the resulting vector usually changes both its magnitude (length) and its direction. However, for every square matrix, there exist specific vectors that behave differently. When the matrix operates on these vectors, the direction remains unchanged. The vector may stretch or shrink, but it stays on the same span.
These special vectors are called eigenvectors. The factor by which they stretch or shrink is called the eigenvalue.
This concept is the mathematical engine behind quantum measurement. In the previous section, we discussed unitary matrices as operations. When we measure a quantum system, we are essentially asking the system to collapse into one of its eigenvectors. The value we read out from that measurement is the corresponding eigenvalue.
The relationship between a matrix $A$, an eigenvector $v$, and an eigenvalue $\lambda$ (lambda) is defined by the following equation:

$$Av = \lambda v$$
Here is what this equation tells us:
On the left side, we have matrix-vector multiplication. On the right side, we have scalar-vector multiplication. This implies that the action of matrix $A$ on vector $v$ is equivalent to simply scaling $v$ by the factor $\lambda$.
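We can confirm this equivalence numerically. The sketch below uses a diagonal matrix chosen purely for illustration; its standard basis vectors are eigenvectors, so both sides of the equation are easy to compare:

```python
import numpy as np

# A simple matrix chosen for illustration; [0, 1] is an
# eigenvector of A with eigenvalue 3.
A = np.array([[2, 0],
              [0, 3]])
v = np.array([0, 1])

left = A @ v    # matrix-vector multiplication: Av
right = 3 * v   # scalar-vector multiplication: lambda * v

print(left)   # [0 3]
print(right)  # [0 3]
```

Both sides produce the same vector: multiplying by the matrix did nothing more than scale `v` by 3.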
The transformation of vector $v$ by matrix $A$ results in a vector pointing in the same direction.
In classical machine learning, eigenvectors help reduce dimensionality (like in Principal Component Analysis). In quantum computing, their role is distinct and physical.
Every measurable property in a quantum system (like energy, spin, or position) is associated with an operator (a matrix). We call these Observables. When you perform a measurement:

1. The value you obtain is always one of the eigenvalues of that operator.
2. The system collapses into the eigenvector corresponding to that eigenvalue.
This explains why quantum outcomes are quantized. If an operator only has eigenvalues of $+1$ and $-1$, you will never measure any value in between. You will only ever observe $+1$ or $-1$.
Let us examine the Pauli-Z matrix, which is often used to measure the state of a qubit in the computational basis ($|0\rangle$ and $|1\rangle$). The matrix is defined as:

$$Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
We want to see how this matrix acts on the standard basis vectors. First, let us look at the state $|0\rangle$, represented by the vector $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$:

$$Z|0\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
The output is identical to the input.
The vector $|0\rangle$ is an eigenvector of the $Z$ matrix with an eigenvalue of $+1$.
Now consider the state $|1\rangle$, represented by $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$:

$$Z|1\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} = -1 \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
Here, the vector remains on the same axis but points in the opposite direction.
The vector $|1\rangle$ is an eigenvector of the $Z$ matrix with an eigenvalue of $-1$.
If we use the $Z$ matrix to measure a qubit, the only possible outcomes are $+1$ and $-1$. The state $|0\rangle$ corresponds to the outcome $+1$, and the state $|1\rangle$ corresponds to the outcome $-1$.
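The two matrix-vector products above can be checked directly in NumPy before we hand the work over to `np.linalg.eig`:

```python
import numpy as np

# The Pauli-Z matrix
Z = np.array([[1, 0],
              [0, -1]])

ket0 = np.array([1, 0])  # the state |0>
ket1 = np.array([0, 1])  # the state |1>

print(Z @ ket0)  # [1 0]  -> unchanged, eigenvalue +1
print(Z @ ket1)  # [ 0 -1] -> sign flipped, eigenvalue -1
```

Applying $Z$ leaves `ket0` untouched and flips the sign of `ket1`, exactly as the hand calculation showed.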
While solving characteristic equations by hand is useful for simple matrices, we rely on computational tools for larger systems. Python's NumPy library has a built-in linear algebra module linalg that handles this efficiently.
We can use np.linalg.eig() to compute the eigenvalues and eigenvectors of a matrix.
import numpy as np
# Define the Pauli-Z matrix
Z = np.array([[1, 0],
              [0, -1]])
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(Z)
print("Eigenvalues:")
print(eigenvalues)
print("\nEigenvectors:")
print(eigenvectors)
Running this code yields the following output:
Eigenvalues:
[ 1. -1.]
Eigenvectors:
[[1. 0.]
[0. 1.]]
The eigenvalues array contains 1 and -1. The eigenvectors array contains the corresponding vectors as columns. The first column [1, 0] corresponds to the first eigenvalue 1. The second column [0, 1] corresponds to the second eigenvalue -1.
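A useful habit is to verify the pairing programmatically: each column $i$ of the eigenvector array should satisfy $Zv = \lambda_i v$. A minimal check might look like this:

```python
import numpy as np

Z = np.array([[1, 0],
              [0, -1]])
eigenvalues, eigenvectors = np.linalg.eig(Z)

# Column i of `eigenvectors` pairs with eigenvalues[i]
for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    # Check the defining equation Zv = lambda * v
    assert np.allclose(Z @ v, eigenvalues[i] * v)

print("All eigenpairs satisfy Zv = lambda v")
```

Remembering that `eig` returns eigenvectors as *columns* (not rows) avoids a common indexing mistake.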
Understanding eigenvalues is necessary for understanding how quantum algorithms extract information. Many quantum algorithms, such as Quantum Phase Estimation (a component of Shor's algorithm for factoring), rely on manipulating the system so that the answer to the problem is encoded in the eigenvalues of a unitary operator.
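One property that Quantum Phase Estimation exploits is that every eigenvalue of a unitary matrix has magnitude 1, so it can be written as a phase $e^{i\theta}$. As a quick sketch, using the S (phase) gate as an example unitary:

```python
import numpy as np

# The S (phase) gate, a standard single-qubit unitary
S = np.array([[1, 0],
              [0, 1j]])

eigenvalues, _ = np.linalg.eig(S)

# Every eigenvalue of a unitary lies on the unit circle: |lambda| = 1
print(np.abs(eigenvalues))  # [1. 1.]
```

Phase estimation recovers the angle $\theta$ of such an eigenvalue; for the S gate the eigenvalues are $1$ and $i$, i.e. phases $0$ and $\pi/2$.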
When a qubit is in a state of superposition (a mix of basis states), it is not an eigenvector of the measurement operator. In this scenario, the measurement outcome is probabilistic. The system will randomly snap to one of the eigenvectors, and the probability depends on the overlap between the current state and those eigenvectors. We will verify this behavior in the next chapter when we build circuits involving superposition and measurement.
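The "overlap" mentioned above can be computed directly: the probability of each outcome is the squared magnitude of the inner product between the current state and the corresponding eigenvector (the Born rule). A sketch for the equal superposition $(|0\rangle + |1\rangle)/\sqrt{2}$:

```python
import numpy as np

# An equal superposition of |0> and |1> (normalized)
psi = np.array([1, 1]) / np.sqrt(2)

ket0 = np.array([1, 0])  # eigenvector for outcome +1
ket1 = np.array([0, 1])  # eigenvector for outcome -1

# Born rule: P(outcome) = |<eigenvector, psi>|^2
p_plus1 = np.abs(ket0 @ psi) ** 2
p_minus1 = np.abs(ket1 @ psi) ** 2

print(p_plus1, p_minus1)  # approximately 0.5 each
```

For this state, each outcome occurs with probability $1/2$, which is the behavior we will observe experimentally in the next chapter.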