Alright, let's put the theory into practice. You've learned the concepts behind adding, subtracting, scaling, transposing, and multiplying matrices. Now, we'll see how straightforward it is to perform these essential matrix operations using NumPy. This hands-on approach will solidify your understanding and prepare you for using these operations in machine learning contexts.
First, make sure you have NumPy imported. We'll use the standard alias `np`. Let's also define a few matrices to work with throughout these examples.
```python
import numpy as np

# Define matrices A and B
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8, 9],
              [10, 11, 12]])

# Define matrix C for multiplication example
C = np.array([[1, 0],
              [0, 1],
              [1, 1]])

print("Matrix A:")
print(A)
print("\nMatrix B:")
print(B)
print("\nMatrix C:")
print(C)
```
You should see the definitions of our starting matrices printed out. Notice that `A` and `B` have the same dimensions (2x3), which is important for addition and subtraction. Matrix `C` has dimensions 3x2.
Adding or subtracting matrices in NumPy is as simple as using the standard `+` and `-` operators. NumPy handles the element-wise operations automatically.
```python
# Matrix Addition (A + B)
matrix_sum = A + B
print("Matrix Sum (A + B):")
print(matrix_sum)

# Matrix Subtraction (A - B)
matrix_diff = A - B
print("\nMatrix Difference (A - B):")
print(matrix_diff)
```
Result:

```
Matrix Sum (A + B):
[[ 8 10 12]
 [14 16 18]]

Matrix Difference (A - B):
[[-6 -6 -6]
 [-6 -6 -6]]
```
Remember, matrix addition and subtraction require the matrices to have exactly the same dimensions. If you try to add or subtract incompatible matrices, NumPy will raise a `ValueError`.
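As a quick sketch (reusing the matrices defined above), attempting to add `A` (2x3) and `C` (3x2) triggers exactly this error:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
C = np.array([[1, 0],
              [0, 1],
              [1, 1]])

# Adding a 2x3 matrix and a 3x2 matrix: the shapes are incompatible
# (and cannot be broadcast), so NumPy raises a ValueError.
try:
    bad_sum = A + C
except ValueError as err:
    print("Addition failed:", err)
```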
Multiplying a matrix by a scalar (a single number) is also straightforward. Use the standard `*` operator. NumPy multiplies every element in the matrix by the scalar.
```python
# Scalar Multiplication (3 * A)
scalar = 3
scaled_matrix = scalar * A
print(f"Scalar Multiplication ({scalar} * A):")
print(scaled_matrix)
```
Result:

```
Scalar Multiplication (3 * A):
[[ 3  6  9]
 [12 15 18]]
```
Each element in matrix `A` has been multiplied by 3.
To transpose a matrix (swap its rows and columns), you can use the `.T` attribute or the `np.transpose()` function.
```python
# Matrix Transpose using .T
transpose_A = A.T
print("Transpose of A (using .T):")
print(transpose_A)
print("\nShape of A:", A.shape)
print("Shape of A.T:", transpose_A.shape)

# Matrix Transpose using np.transpose()
transpose_B = np.transpose(B)
print("\nTranspose of B (using np.transpose()):")
print(transpose_B)
print("\nShape of B:", B.shape)
print("Shape of B.T:", transpose_B.shape)
```
Result:

```
Transpose of A (using .T):
[[1 4]
 [2 5]
 [3 6]]

Shape of A: (2, 3)
Shape of A.T: (3, 2)

Transpose of B (using np.transpose()):
[[ 7 10]
 [ 8 11]
 [ 9 12]]

Shape of B: (2, 3)
Shape of B.T: (3, 2)
```
As expected, the 2x3 matrices `A` and `B` become 3x2 matrices after transposition.
Matrix multiplication is a fundamental operation in linear algebra and machine learning. It's important to remember that matrix multiplication is not element-wise multiplication (which uses the `*` operator). For the true matrix product, use the `@` operator or the `np.dot()` function.

The `@` operator was introduced in Python 3.5 and is generally preferred for its clarity, since it is dedicated specifically to matrix multiplication.
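To make the distinction concrete, here is a short sketch contrasting the two operators on `A` and `B`, which share the same 2x3 shape:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8, 9],
              [10, 11, 12]])

# Element-wise product: pairs up matching entries, so it requires
# matrices of the same shape (or broadcastable shapes).
elementwise = A * B
print(elementwise)
# [[ 7 16 27]
#  [40 55 72]]

# The matrix product A @ B, by contrast, is NOT defined here,
# because the inner dimensions (3 and 2) do not match.
```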
Let's multiply matrix `A` (2x3) by matrix `C` (3x2). The inner dimensions match (3 and 3), so the multiplication is valid. The resulting matrix will have dimensions 2x2.
```python
# Matrix Multiplication using @
product_AC = A @ C
print("Matrix Product (A @ C):")
print(product_AC)
print("\nShape of A:", A.shape)
print("Shape of C:", C.shape)
print("Shape of A @ C:", product_AC.shape)

# Matrix Multiplication using np.dot()
product_AC_dot = np.dot(A, C)
print("\nMatrix Product (np.dot(A, C)):")
print(product_AC_dot)
```
Result:

```
Matrix Product (A @ C):
[[ 4  5]
 [10 11]]

Shape of A: (2, 3)
Shape of C: (3, 2)
Shape of A @ C: (2, 2)

Matrix Product (np.dot(A, C)):
[[ 4  5]
 [10 11]]
```
Both methods yield the same 2x2 result.
What happens if the dimensions are incompatible? Let's try multiplying `A` (2x3) by `B` (2x3). The inner dimensions (3 and 2) do not match.
```python
# Attempting incompatible multiplication (A @ B)
try:
    incompatible_product = A @ B
    print(incompatible_product)
except ValueError as e:
    print("\nError during incompatible multiplication (A @ B):")
    print(e)
```
Result:

```
Error during incompatible multiplication (A @ B):
matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 3)
```
NumPy correctly raises a `ValueError`, indicating that the dimension mismatch prevents the multiplication.
Remember that matrix multiplication is generally not commutative. Let's calculate C×A (dimensions 3x2 and 2x3 are compatible, result is 3x3) and compare it to A×C (which was 2x2).
```python
# Calculate C @ A (dimensions 3x2 @ 2x3 -> 3x3)
product_CA = C @ A
print("\nMatrix Product (C @ A):")
print(product_CA)
print("\nShape of C @ A:", product_CA.shape)
print("\nIs A @ C == C @ A?", "Not applicable due to different shapes.")
```
Result:

```
Matrix Product (C @ A):
[[1 2 3]
 [4 5 6]
 [5 7 9]]

Shape of C @ A: (3, 3)

Is A @ C == C @ A? Not applicable due to different shapes.
```
Clearly, A×C and C×A are different. Even when the shapes allow both multiplications and result in matrices of the same size, the results are usually different.
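A small sketch with two hypothetical 2x2 matrices `P` and `Q` (not defined earlier) makes this concrete: both products are defined and both are 2x2, yet they differ.

```python
import numpy as np

P = np.array([[1, 2],
              [3, 4]])
Q = np.array([[0, 1],
              [1, 0]])

# Q swaps columns when applied on the right, rows when applied on the left.
print(P @ Q)
# [[2 1]
#  [4 3]]
print(Q @ P)
# [[3 4]
#  [1 2]]
print(np.array_equal(P @ Q, Q @ P))  # False
```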
Here's a quick reference for the NumPy operations we've practiced:

- Addition: `matrix1 + matrix2`
- Subtraction: `matrix1 - matrix2`
- Scalar multiplication: `scalar * matrix`
- Transpose: `matrix.T` or `np.transpose(matrix)`
- Matrix multiplication: `matrix1 @ matrix2` or `np.dot(matrix1, matrix2)`
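As a final sketch (reusing the same `A`, `B`, and `C` defined earlier), these operations compose naturally in a single expression:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8, 9],
              [10, 11, 12]])
C = np.array([[1, 0],
              [0, 1],
              [1, 1]])

# Scale A, add B, multiply by C (2x3 @ 3x2 -> 2x2), then transpose.
result = ((2 * A + B) @ C).T
print(result.shape)  # (2, 2)
print(result)
```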
These fundamental operations form the building blocks for many calculations in machine learning algorithms. Practice them until they feel comfortable. You'll be using them frequently as you work with data represented in matrix form.
© 2025 ApX Machine Learning