After completing the installation steps outlined previously, it's important to confirm that TensorFlow is correctly installed and accessible within your Python environment. This verification step helps ensure that subsequent code examples will run as expected and allows you to check for specific hardware acceleration capabilities, like GPU support.
The simplest way to verify your installation is to try importing the TensorFlow library in a Python interpreter or script and printing its version.
Open your Python interpreter (or create a new Python script file, e.g., verify_tf.py):
import tensorflow as tf
print(f"TensorFlow Version: {tf.__version__}")
Execute this code. If TensorFlow is installed correctly, you should see output similar to this (the exact version number will depend on what you installed):
TensorFlow Version: 2.1x.x
If you encounter an ImportError or ModuleNotFoundError, it indicates that TensorFlow is either not installed in the current Python environment or the environment is not activated correctly. Double-check your installation steps and ensure you are running Python from the same environment (e.g., conda environment, virtualenv) where you installed TensorFlow.
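A quick way to diagnose an environment mix-up is to check which interpreter is actually running and whether it can find the tensorflow package. This is a minimal sketch using only the Python standard library, so it runs even when TensorFlow itself is missing:

```python
import sys
import importlib.util

# Print which interpreter is running; it should live inside the environment
# where you installed TensorFlow (conda env, virtualenv, etc.)
print(f"Interpreter: {sys.executable}")

# find_spec() checks importability without actually importing the package
if importlib.util.find_spec("tensorflow") is None:
    print("tensorflow is NOT installed in this environment")
else:
    print("tensorflow is installed in this environment")
```

If the interpreter path points outside your intended environment, activate the correct environment and try the import again.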
Let's perform a basic TensorFlow operation to further confirm functionality. Add the following lines to your script or type them into your interpreter:
# Create a constant tensor
hello = tf.constant("Hello, TensorFlow!")
print(hello.numpy())
# Perform a simple math operation
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a + b
print(f"Result of addition: {c.numpy()}")
Running this should produce output like:
b'Hello, TensorFlow!'
Result of addition: 5.0
The tf.constant() function creates a tensor, which is the fundamental data structure in TensorFlow. We use .numpy() here to convert the tensor's value into a NumPy-compatible format for easy printing. Seeing the correct string and the result of the addition (5.0) confirms that TensorFlow's core operations are working.
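Beyond scalars, tensors carry a shape and a dtype, and the same verification idea extends to multi-dimensional operations. A short optional check, assuming the import from the earlier snippet:

```python
import tensorflow as tf

# A 2x2 tensor; tf.constant infers float32 from the Python floats
m = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(m.shape)  # (2, 2)
print(m.dtype)  # <dtype: 'float32'>

# Reduction and matrix multiplication exercise more of the runtime
print(tf.reduce_sum(m).numpy())   # 10.0
print(tf.matmul(m, m).numpy())    # [[ 7. 10.] [15. 22.]]
```

If the shapes and results above match, TensorFlow's tensor machinery is working end to end.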
If you specifically installed the GPU-enabled version of TensorFlow and have a compatible NVIDIA GPU with the necessary drivers and CUDA toolkit installed, you should verify that TensorFlow can detect and utilize the GPU. TensorFlow performs computationally intensive operations significantly faster on a suitable GPU compared to a CPU.
Use the following code to list the physical devices TensorFlow can detect, specifically filtering for GPUs:
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.list_logical_devices('GPU')
        print(f"Detected {len(gpus)} Physical GPUs, Configured {len(logical_gpus)} Logical GPUs")
        print("GPU Details:")
        for i, gpu in enumerate(gpus):
            print(f"  GPU {i}: Name={gpu.name}, Type={gpu.device_type}")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(f"Error during GPU configuration: {e}")
else:
    print("No GPU detected by TensorFlow. Running on CPU.")

# Verify device placement for an operation (optional)
with tf.device('/CPU:0'):
    cpu_tensor = tf.random.normal([10, 10])
    print("Tensor created on CPU")

if gpus:
    try:
        with tf.device('/GPU:0'):
            gpu_tensor = tf.random.normal([10, 10])
            print("Tensor created on GPU:0")
    except RuntimeError as e:
        print(f"Could not create tensor on GPU: {e}")
Expected Output (with GPU):
If a compatible GPU is found and configured correctly, the output will look something like this:
Detected 1 Physical GPUs, Configured 1 Logical GPUs
GPU Details:
GPU 0: Name=/physical_device:GPU:0, Type=GPU
Tensor created on CPU
Tensor created on GPU:0
The number of GPUs and their names might differ based on your system configuration. The tf.config.experimental.set_memory_growth(gpu, True) line is often important to prevent TensorFlow from allocating all GPU memory at once, allowing other processes (or even multiple TensorFlow processes) to use the GPU. Seeing the "Tensor created on GPU" message confirms successful GPU operation placement.
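If you want TensorFlow to report exactly where each operation executes, you can enable device-placement logging. A minimal sketch; note that the flag must be set at program start, before any operations run, and the placement messages go to the log (typically stderr) rather than standard output:

```python
import tensorflow as tf

# Must be called before any ops execute; logs each op's assigned device
tf.debugging.set_log_device_placement(True)

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)
print(c.numpy())
```

On a GPU machine the log lines will mention a device such as /job:localhost/.../GPU:0; on a CPU-only build they will mention CPU:0 instead.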
Expected Output (without GPU or with configuration issues):
If you installed the CPU-only version, don't have a compatible NVIDIA GPU, or if drivers/CUDA are misconfigured, you will likely see:
No GPU detected by TensorFlow. Running on CPU.
Tensor created on CPU
If you expected GPU support but see this message, revisit the "CPU vs GPU Considerations" section and ensure your NVIDIA drivers, CUDA Toolkit, and cuDNN library versions are compatible with your installed TensorFlow version. Consulting the official TensorFlow installation guides for specific version compatibility matrices is highly recommended.
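When chasing a version mismatch, it can help to ask the installed binary itself which CUDA and cuDNN versions it was built against. A hedged sketch using tf.sysconfig.get_build_info() (available in recent TensorFlow 2.x releases); the exact keys present in the build-info dictionary vary between releases and between CPU and GPU builds:

```python
import tensorflow as tf

# True only if this TensorFlow binary was compiled with CUDA support
print("Built with CUDA:", tf.test.is_built_with_cuda())

# GPU builds typically expose the toolkit versions they were compiled against
info = tf.sysconfig.get_build_info()
for key in ("cuda_version", "cudnn_version"):
    print(f"{key}: {info.get(key, 'not available (likely a CPU-only build)')}")
```

Compare these versions against your installed NVIDIA driver and CUDA Toolkit when diagnosing why a GPU is not detected.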
Successfully running these verification steps confirms your TensorFlow environment is ready. You can now proceed to explore core TensorFlow concepts like tensors and automatic differentiation in the next chapter.
© 2025 ApX Machine Learning