To check if TensorFlow is using GPU acceleration within the Python shell, you can use the following methods:
- Import the TensorFlow library: First, ensure TensorFlow is installed, then import it in the Python shell.

```python
import tensorflow as tf
```
- Check available devices: Use the `tf.config.list_physical_devices()` function to list all available physical devices and verify that a GPU is present.

```python
print(tf.config.list_physical_devices())
```
This will output a list similar to the following, allowing you to confirm the presence of GPU devices:
```shell
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
If the list includes entries with `device_type='GPU'`, TensorFlow can access the GPU and may use it for acceleration.
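Rather than scanning the full device list by eye, you can pass a device type to `tf.config.list_physical_devices()` to filter for GPUs directly:

```python
import tensorflow as tf

# Passing 'GPU' returns only GPU devices; an empty list means
# TensorFlow cannot see any GPU on this machine.
gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs available:", len(gpus))
```

On a CPU-only machine this prints `Num GPUs available: 0`, which makes the check easy to use in scripts as well as interactively.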
- Verify default GPU usage: TensorFlow automatically places operations on the GPU when one is available. You can confirm which device an operation runs on by enabling device-placement logging and executing a simple operation.
```python
tf.debugging.set_log_device_placement(True)

a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
```
When `set_log_device_placement` is set to `True`, TensorFlow prints the device used for each operation. If the output references the GPU (e.g., `Executing op MatMul in device .../device:GPU:0`), the matmul was executed on the GPU.
By following these steps, you can determine within the Python shell whether TensorFlow is leveraging GPU acceleration. If no GPU is detected, you may need to install a GPU-enabled TensorFlow build, or verify that the NVIDIA driver and CUDA libraries are properly installed.
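When troubleshooting a missing GPU, it can help to separate "was TensorFlow built with CUDA support?" from "can TensorFlow see a GPU right now?". A minimal diagnostic sketch using TensorFlow's built-in checks:

```python
import tensorflow as tf

# False here means the installed TensorFlow package itself lacks CUDA
# support, so no driver fix will help until a GPU build is installed.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# True above but an empty list here points at the driver/CUDA runtime
# or at the GPU not being visible to the process.
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
```

Interpreting the two outputs together narrows the problem down to either the TensorFlow package or the local driver/CUDA setup.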