Now that you have followed the steps to set up your development environment, it's time to confirm that Keras and its backend (preferably PyTorch for this course, though TensorFlow is also compatible with Keras 3) are installed correctly and accessible from your Python interpreter or notebook. This verification step is simple but important: it ensures you won't run into installation-related issues when we start building models.

We'll write a short Python script that imports Keras and prints its version, along with the version of the configured backend.

Verification Script

Open your preferred Python environment (a terminal, command prompt, Jupyter Notebook, or IDE) and run the following code:

```python
import keras

# Check the Keras version
print(f"Keras version: {keras.__version__}")

# Keras 3 allows multiple backends (TensorFlow, PyTorch, JAX).
# Check which backend is configured.
print(f"Keras backend: {keras.backend.backend()}")

# Optionally, check the backend's version directly.
# Note: this requires the backend library (e.g., torch or tensorflow) to be importable.
try:
    if keras.backend.backend() == "torch":
        import torch
        print(f"PyTorch version: {torch.__version__}")
    elif keras.backend.backend() == "tensorflow":
        import tensorflow as tf
        print(f"TensorFlow version: {tf.__version__}")
    # Add checks for other backends like JAX if necessary:
    # elif keras.backend.backend() == "jax":
    #     import jax
    #     print(f"JAX version: {jax.__version__}")
except ImportError:
    print(f"Could not import the configured backend library: {keras.backend.backend()}")
except Exception as e:
    print(f"An error occurred while checking the backend version: {e}")

# Verify GPU availability (optional, but recommended for deep learning).
# This check differs slightly depending on the backend.
print("\nChecking for GPU availability...")
gpu_available = False

if keras.backend.backend() == "torch":
    try:
        import torch
        gpu_available = torch.cuda.is_available()
        if gpu_available:
            print(f"PyTorch backend: GPU detected. Device name: {torch.cuda.get_device_name(0)}")
        else:
            print("PyTorch backend: No GPU detected, will use CPU.")
    except Exception as e:
        print(f"PyTorch backend: Error checking GPU: {e}")
elif keras.backend.backend() == "tensorflow":
    try:
        import tensorflow as tf
        gpu_devices = tf.config.list_physical_devices('GPU')
        if gpu_devices:
            gpu_available = True
            print(f"TensorFlow backend: GPU detected. Devices: {gpu_devices}")
        else:
            print("TensorFlow backend: No GPU detected, will use CPU.")
    except Exception as e:
        print(f"TensorFlow backend: Error checking GPU: {e}")
else:
    print(f"GPU check not implemented for backend: {keras.backend.backend()}")
```

Understanding the Output

When you run this script, you should see output similar to the following (versions may vary):

```
Keras version: 3.x.x
Keras backend: torch
PyTorch version: 2.x.x

Checking for GPU availability...
PyTorch backend: GPU detected. Device name: NVIDIA GeForce RTX 3080
```

Or, if you configured TensorFlow as the backend:

```
Keras version: 3.x.x
Keras backend: tensorflow
TensorFlow version: 2.xx.x

Checking for GPU availability...
TensorFlow backend: GPU detected. Devices: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```

What to look for:

Keras Version: Confirms that Keras is installed and accessible. Keras 3 is expected.

Keras Backend: Shows which backend library (PyTorch, TensorFlow, or JAX) Keras is currently configured to use. For this course, torch is the preferred backend, but tensorflow will also work. You can configure the backend using an environment variable (e.g., KERAS_BACKEND=torch) or the configuration file ~/.keras/keras.json; refer to the Keras documentation for details if you need to switch backends.
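If you do need to switch, the following is a minimal sketch of both options. It assumes the Keras 3 convention that the backend is selected before the first import of keras and cannot be changed afterwards in the same process; the keras.json contents shown in the comment are illustrative, and the exact keys may vary between Keras versions.

```python
# Option 1: select the backend with an environment variable.
# This must be set before Keras is imported; once imported, the backend
# cannot be changed for the current Python process.
import os
os.environ["KERAS_BACKEND"] = "torch"  # or "tensorflow" / "jax"

import keras
print(keras.backend.backend())  # should now print: torch

# Option 2: persist the choice in the configuration file ~/.keras/keras.json.
# A typical file looks roughly like this (keys assumed from the Keras documentation):
#
# {
#     "backend": "torch",
#     "floatx": "float32",
#     "epsilon": 1e-07,
#     "image_data_format": "channels_last"
# }
#
# If the KERAS_BACKEND environment variable is set, it takes precedence
# over the value in keras.json.
```

Setting the variable in your shell (for example, export KERAS_BACKEND=torch on Linux/macOS) has the same effect as setting os.environ before the import: the value is read when Keras is first imported.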
Backend Version: Displays the version of the active backend library (PyTorch or TensorFlow), confirming that the underlying library is also installed correctly.

GPU Availability: Checks whether a compatible GPU is detected and accessible to the backend. A GPU is not strictly necessary for the initial examples, but it significantly accelerates training, especially for the CNNs and larger RNNs covered later. If a GPU is detected, you'll see confirmation; otherwise, the script indicates that computations will run on the CPU.

If the script runs without ImportError messages and prints the versions, your environment is ready. If you encounter errors, revisit the installation steps in the previous section ("Setting Up Your Keras Environment"): make sure every command completed successfully and that you are running the script in the same Python environment where you installed the packages (e.g., the activated conda environment or virtual environment). Common issues include forgetting to activate the environment and path problems; a quick way to check which interpreter is active is sketched at the end of this section.

With the environment verified, you are now prepared to start defining and building neural networks using Keras in the next chapter.
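As a quick aid for the environment issues mentioned above, the snippet below uses only the standard library (nothing Keras-specific) to show which interpreter is actually running your code. For an activated conda or virtual environment, both paths should point inside that environment's directory.

```python
import sys

# Path of the Python executable currently running this script.
# If it does not point into the environment where you installed Keras,
# activate that environment (or select it in your IDE/notebook) and try again.
print(f"Interpreter: {sys.executable}")

# Root directory of the active environment.
print(f"Environment prefix: {sys.prefix}")
```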