The matrix inverse, $A^{-1}$, plays a central theoretical role in solving systems of linear equations of the form $Ax = b$. Here, we demonstrate how to compute the inverse using NumPy. While directly calculating the inverse is not always the preferred method for solving systems in practice, knowing how to compute it is a fundamental skill.
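Concretely, if $A$ is invertible, multiplying both sides of $Ax = b$ on the left by $A^{-1}$ isolates the unknown vector:
$$x = A^{-1}b$$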
numpy.linalg Module
NumPy, the foundation library for numerical computing in Python, includes a submodule dedicated to linear algebra operations: numpy.linalg. This module contains functions for matrix decomposition, eigenvalue calculation, solving linear systems, and, importantly for us right now, calculating matrix inverses.
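As a quick illustration of what the module offers, the short sketch below calls a few of its functions on a small matrix. The matrix M and vector b are just illustrative values, not part of the example that follows.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Determinant of M
print(np.linalg.det(M))

# Eigenvalues and eigenvectors of M
eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)

# Solve M x = b for an illustrative right-hand side b
b = np.array([1.0, 2.0])
x = np.linalg.solve(M, b)
print(x)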
inv()
To compute the inverse of a square matrix, we use the numpy.linalg.inv() function. It takes a square matrix (represented as a NumPy array) as input and returns its inverse, also as a NumPy array.
Remember, only square matrices can have inverses, and even then, only if they are non-singular (invertible).
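If you want to guard against these cases up front, one rough option is to confirm that the matrix is square and that its determinant is not numerically zero. The helper below is a hypothetical sketch for illustration only (the function name and the tolerance value are arbitrary choices); in practice, catching the LinAlgError raised by inv(), as shown in the code below, is the more robust approach.
import numpy as np

def is_probably_invertible(M, tol=1e-12):
    """Rough check: square shape and determinant not numerically zero (illustrative only)."""
    if M.ndim != 2 or M.shape[0] != M.shape[1]:
        return False  # not a square matrix
    return abs(np.linalg.det(M)) > tol  # tol is an arbitrary illustrative threshold

print(is_probably_invertible(np.array([[4, 7], [2, 6]])))  # True
print(is_probably_invertible(np.array([[1, 2], [2, 4]])))  # False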
Let's try an example. Consider the following 2x2 matrix A:
$$A = \begin{pmatrix} 4 & 7 \\ 2 & 6 \end{pmatrix}$$
We can represent this in NumPy and calculate its inverse:
import numpy as np
# Define matrix A
A = np.array([[4, 7],
              [2, 6]])
print("Matrix A:")
print(A)
# Calculate the inverse of A
try:
    A_inv = np.linalg.inv(A)
    print("\nInverse of A (A_inv):")
    print(A_inv)
except np.linalg.LinAlgError as e:
    print(f"\nCould not compute inverse: {e}")
Executing this code will output the original matrix A and its calculated inverse A_inv.
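As a quick sanity check, the standard formula for the inverse of a 2x2 matrix gives the same values (up to floating-point rounding) that the code prints:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad A^{-1} = \frac{1}{4 \cdot 6 - 7 \cdot 2}\begin{pmatrix} 6 & -7 \\ -2 & 4 \end{pmatrix} = \begin{pmatrix} 0.6 & -0.7 \\ -0.2 & 0.4 \end{pmatrix}$$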
A defining property of the matrix inverse $A^{-1}$ is that when multiplied by the original matrix $A$, it yields the identity matrix $I$. That is:
$$AA^{-1} = A^{-1}A = I$$
We can verify this using NumPy's matrix multiplication capabilities. Recall that the @ operator performs matrix multiplication (or you can use np.dot()):
# Verify A * A_inv
identity_check_1 = A @ A_inv
print("\nVerification (A @ A_inv):")
print(identity_check_1)
# Verify A_inv * A
identity_check_2 = A_inv @ A
print("\nVerification (A_inv @ A):")
print(identity_check_2)
# Create the expected identity matrix
I = np.identity(A.shape[0]) # A.shape[0] gives the number of rows (which is 2 here)
print("\nIdentity matrix I:")
print(I)
You should see that both identity_check_1 and identity_check_2 produce a matrix very close to the 2x2 identity matrix.
You might notice that the resulting matrices aren't exactly the identity matrix. Instead of perfect zeros, you might see very small numbers like 2.22044605e-16 (which is $2.22 \times 10^{-16}$). This is a normal consequence of how computers handle calculations with non-integer numbers, known as floating-point arithmetic. These tiny differences are usually negligible.
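That particular value is no accident: it is on the order of machine epsilon for 64-bit floats, the smallest representable gap above 1.0, which NumPy can report directly:
# Machine epsilon for 64-bit floating point numbers
print(np.finfo(np.float64).eps)  # approximately 2.220446049250313e-16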
If you need to programmatically check if two matrices are equal within a certain tolerance, NumPy provides the np.allclose() function:
# Check if A @ A_inv is close to the identity matrix I
are_close = np.allclose(A @ A_inv, np.identity(A.shape[0]))
print(f"\nIs A @ A_inv numerically close to I? {are_close}") # Output should be True
What happens if we try to compute the inverse of a matrix that doesn't have one (a singular matrix)? Let's try with a matrix where one column is a multiple of another, making it singular:
$$B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$$
# Define a singular matrix B
B = np.array([[1, 2],
              [2, 4]])
print("\nMatrix B (Singular):")
print(B)
# Attempt to calculate the inverse of B
try:
    B_inv = np.linalg.inv(B)
    print("\nInverse of B:")
    print(B_inv)
except np.linalg.LinAlgError as e:
    print(f"\nCould not compute inverse of B: {e}")
When you run this code, NumPy will detect that the matrix is singular and raise a LinAlgError: Singular matrix. This is NumPy's way of telling you that the inverse does not exist for the given matrix B. This behavior correctly reflects the mathematical properties discussed in the "Conditions for Invertibility" section.
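You can also see the singularity numerically: the determinant of B is zero, since the second column (and row) is exactly twice the first. This small check reuses B from above:
# The determinant of a singular matrix is 0
print(np.linalg.det(B))  # prints 0.0 (or -0.0), since 1*4 - 2*2 = 0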
Calculating the inverse is a fundamental operation, and np.linalg.inv() provides a straightforward way to do it. However, keep in mind that for solving systems of equations $Ax = b$, using the inverse directly ($x = A^{-1}b$) can sometimes be less numerically stable and less efficient than using dedicated solvers, which we will explore next.
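As a brief preview, the sketch below contrasts the two approaches for the matrix A from earlier; the right-hand side b is an illustrative choice, not from the example above.
# Preview: solving Ax = b without forming A_inv explicitly
b = np.array([1.0, 2.0])                 # an illustrative right-hand side
x_via_inverse = np.linalg.inv(A) @ b     # explicit inverse, then multiply
x_via_solver = np.linalg.solve(A, b)     # dedicated solver
print(x_via_inverse)
print(x_via_solver)                      # same solution, typically more stable and efficient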