Systems of linear equations can be represented as $Ax = b$. The identity matrix and the matrix inverse ($A^{-1}$) are important concepts in solving these systems. Here, NumPy is used to find the solution vector $x$. We'll explore two main ways to do this:

1. **Calculating the inverse directly** ($A^{-1}$) and then computing $x = A^{-1}b$. This method closely follows the mathematical definition, but it is often less efficient and can be numerically less stable, especially for large systems. It's still useful to understand.
2. **Using NumPy's dedicated solver function** (`np.linalg.solve`). This is the recommended approach in practice. It's generally faster and more numerically reliable because it uses sophisticated algorithms (often based on matrix decompositions such as LU decomposition) that avoid explicitly forming the inverse.

Let's work through an example. Consider the following system of linear equations:

$$
2x_1 + x_2 - x_3 = 8 \\
-3x_1 - x_2 + 2x_3 = -11 \\
-2x_1 + x_2 + 2x_3 = -3
$$

Our goal is to find the values of $x_1$, $x_2$, and $x_3$ that satisfy all three equations simultaneously.

## Setting up the System in NumPy

First, we need to represent this system in the matrix form $Ax = b$.

The matrix $A$ contains the coefficients of the variables:

$$
A = \begin{bmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{bmatrix}
$$

The vector $b$ contains the constant terms on the right-hand side:

$$
b = \begin{bmatrix} 8 \\ -11 \\ -3 \end{bmatrix}
$$

And the vector $x$ contains the unknown variables we want to solve for:

$$
x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
$$

Let's create $A$ and $b$ as NumPy arrays:

```python
import numpy as np

# Define the coefficient matrix A
A = np.array([
    [2, 1, -1],
    [-3, -1, 2],
    [-2, 1, 2]
])

# Define the constant vector b
# We use reshape(-1, 1) to make it an explicit column vector (3 rows, 1 column).
# Although NumPy often handles broadcasting, being explicit is good practice.
b = np.array([8, -11, -3]).reshape(-1, 1)

print("Matrix A:\n", A)
print("\nVector b:\n", b)
```

## Method 1: Solving using the Matrix Inverse

Mathematically, if $A$ is invertible, we can find $x$ by calculating $x = A^{-1}b$. Let's try this using NumPy's `np.linalg.inv()` function to find the inverse of $A$, and then perform matrix multiplication using the `@` operator.

```python
# Calculate the inverse of A
try:
    A_inv = np.linalg.inv(A)
    print("Inverse of A (A_inv):\n", A_inv)

    # Calculate x = A_inv @ b
    # The @ operator performs matrix multiplication (dot product)
    x_inverse_method = A_inv @ b
    print("\nSolution vector x (using inverse):\n", x_inverse_method)
except np.linalg.LinAlgError:
    print("\nMatrix A is singular (not invertible). Cannot solve using inverse.")
```

Executing this code should give you the inverse matrix $A^{-1}$ and the resulting solution vector $x$. This approach directly mirrors the mathematical formula $x = A^{-1}b$. However, calculating the inverse can be computationally expensive and prone to floating-point inaccuracies.

## Method 2: Solving using `np.linalg.solve` (Recommended)

NumPy provides a more direct, efficient, and numerically stable function: `np.linalg.solve(A, b)`. It solves the system $Ax = b$ for $x$ without explicitly computing the inverse of $A$.

```python
# Solve the system Ax = b directly using np.linalg.solve
try:
    x_solve_method = np.linalg.solve(A, b)
    print("\nSolution vector x (using np.linalg.solve):\n", x_solve_method)
except np.linalg.LinAlgError:
    print("\nMatrix A is singular (not invertible). np.linalg.solve cannot find a unique solution.")
```

You should find that `x_solve_method` gives the same result as `x_inverse_method` (perhaps with tiny floating-point differences). For most practical purposes, `np.linalg.solve` is the preferred function.

## Verifying the Solution

It's always a good idea to check whether our solution is correct.
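As noted above, `np.linalg.solve` typically relies on matrix decompositions such as LU under the hood. To make that idea concrete, the sketch below uses SciPy (an extra dependency assumed here, not required by the rest of this section) to factor $A$ once and then solve with the stored factorization. This is a useful pattern when you need to solve $Ax = b$ for many different $b$ vectors with the same $A$:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

# Factor A once into its LU decomposition (returned in compact form
# together with the pivot indices from partial pivoting).
lu, piv = lu_factor(A)

# Reuse the factorization to solve for x cheaply; the expensive
# factorization step does not need to be repeated for a new b.
x = lu_solve((lu, piv), b)
print("Solution via LU factorization:", x)
```

Repeating the factor-once, solve-many pattern avoids redoing the $O(n^3)$ factorization for each new right-hand side; each additional solve costs only $O(n^2)$.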
We can do this by plugging the calculated $x$ back into the original equation $Ax = b$ and checking whether the result equals $b$. Let's use the solution obtained from `np.linalg.solve`:

```python
# Use the solution obtained from np.linalg.solve
x_solution = x_solve_method  # Or use x_inverse_method

# Calculate A @ x
b_calculated = A @ x_solution

print("\nOriginal vector b:\n", b)
print("\nCalculated A @ x:\n", b_calculated)

# Check whether the calculated b is close to the original b.
# np.allclose compares floating-point arrays element-wise
# within a given tolerance.
are_close = np.allclose(b_calculated, b)
print(f"\nIs the calculated Ax close to the original b? {are_close}")
```

If `are_close` is `True`, it confirms that our solution vector $x$ correctly satisfies the system of equations $Ax = b$, within a small tolerance for floating-point arithmetic.

## Handling Non-Invertible (Singular) Matrices

What happens if the matrix $A$ is singular (i.e., it doesn't have an inverse)? This corresponds to a system of equations that has either no solution or infinitely many solutions.
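Exact singularity is the extreme case; in practice, a matrix can also be *nearly* singular, in which case the solvers succeed but the answer may be inaccurate. A quick diagnostic (an optional check, not part of the main workflow above) is the condition number, available via `np.linalg.cond`:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])

# Condition number: ratio of the largest to smallest singular value of A.
# Values near 1 mean the system is well-conditioned; very large values
# mean small changes in b can produce large changes in x.
cond = np.linalg.cond(A)
print(f"Condition number of A: {cond:.2f}")

# Rough rule of thumb: with ~16 significant digits in float64, you can
# expect to lose on the order of log10(cond) digits of accuracy.
print(f"Approximate digits of accuracy lost: {np.log10(cond):.1f}")
```

A huge condition number is a warning sign that the matrix is close to singular, even when `np.linalg.solve` returns without error.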
Both `np.linalg.inv()` and `np.linalg.solve()` will raise a `LinAlgError` in this case. Let's try with a singular matrix:

```python
# Example of a singular matrix (column 3 is column 1 + column 2)
A_singular = np.array([
    [1, 2, 3],
    [4, 5, 9],
    [7, 8, 15]
])

# A dummy b vector
b_dummy = np.array([1, 1, 1]).reshape(-1, 1)

print("\nTesting with a singular matrix A_singular:\n", A_singular)

try:
    x_singular = np.linalg.solve(A_singular, b_dummy)
    print("\nSolution (Singular Case):\n", x_singular)  # This line won't be reached
except np.linalg.LinAlgError as e:
    print(f"\nError solving with singular matrix: {e}")

try:
    A_singular_inv = np.linalg.inv(A_singular)
    print("\nInverse (Singular Case):\n", A_singular_inv)  # This line won't be reached
except np.linalg.LinAlgError as e:
    print(f"\nError inverting singular matrix: {e}")
```

As expected, NumPy correctly identifies that the matrix is singular and raises an error, indicating that a unique solution cannot be found using these standard methods.

In this hands-on section, you learned how to take a system of linear equations, represent it using NumPy arrays $A$ and $b$, and solve for the unknown vector $x$. You practiced two methods: using the explicit inverse (`np.linalg.inv`) and using the direct solver (`np.linalg.solve`), understanding that the latter is generally preferred for efficiency and numerical stability. You also learned how to verify your solution and saw how NumPy handles cases where a unique solution doesn't exist due to a singular matrix.
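When no unique solution exists, NumPy still offers a fallback beyond the two methods covered above: `np.linalg.lstsq` finds a least-squares answer, minimizing $\lVert Ax - b \rVert$ instead of requiring an exact, unique solution. A short sketch using the same singular matrix as before:

```python
import numpy as np

# Same singular matrix as above: column 3 = column 1 + column 2
A_singular = np.array([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 9.0],
                       [7.0, 8.0, 15.0]])
b_dummy = np.array([1.0, 1.0, 1.0])

# lstsq minimizes ||Ax - b|| rather than raising LinAlgError.
# rcond=None uses machine precision to determine the effective rank.
x_ls, residuals, rank, sing_vals = np.linalg.lstsq(A_singular, b_dummy, rcond=None)

print("Least-squares solution:", x_ls)
print("Rank of A_singular:", rank)  # 2, not 3: confirms the matrix is rank-deficient
```

When the system is consistent but has infinitely many solutions (as here), `lstsq` returns one particular solution; the reported rank of 2 is itself a useful diagnostic that the matrix is singular.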