In TensorFlow, you can use the `tf.config.experimental.list_physical_devices` method to check which devices are available, including GPUs. (In TensorFlow 2.x the non-experimental alias `tf.config.list_physical_devices` is also available and preferred.) The method returns a list of `PhysicalDevice` objects, which you can inspect to identify GPUs.
Here is a step-by-step guide to retrieving the currently available GPUs in TensorFlow:
- Import the necessary libraries: First, import the TensorFlow library. If you haven't installed TensorFlow, you can install it via pip.

```python
import tensorflow as tf
```

- List all physical devices: Use the `tf.config.experimental.list_physical_devices` method to list all physical devices.

```python
devices = tf.config.experimental.list_physical_devices()
print("All devices:", devices)
```

- Filter for GPU devices: Pass `'GPU'` as the device type to return only the GPU devices.

```python
gpus = tf.config.experimental.list_physical_devices('GPU')
print("Available GPUs:", gpus)
```
If you run the above code and there are available GPUs in the system, it will print the list of GPU devices. If no GPUs are available, the list will be empty.
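Because the returned list may be empty, it is good practice to guard any GPU-specific configuration behind a check. The snippet below is a minimal sketch; the memory-growth setting is just one common example of such per-GPU configuration, not something the steps above require:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Enable memory growth so TensorFlow allocates GPU memory on demand
    # instead of grabbing all of it upfront.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    print(f"Configured {len(gpus)} GPU(s)")
else:
    print("No GPUs found; falling back to CPU.")
```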
For example, in my own development environment, the output of the above code looks like this:

```shell
All devices: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Available GPUs: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
This indicates that my system has one CPU and one GPU device, and the GPU is available.
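Each entry in these lists is a `PhysicalDevice` named tuple, so you can read its `name` and `device_type` fields directly rather than parsing the printed representation. A short sketch:

```python
import tensorflow as tf

for device in tf.config.experimental.list_physical_devices():
    # PhysicalDevice is a named tuple with 'name' and 'device_type' fields.
    print(f"{device.device_type}: {device.name}")
```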
This feature is very useful for distributed training on machines with multiple GPUs, as it allows a program to discover and utilize available GPUs at runtime rather than hard-coding device names.
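As a sketch of how this discovery feeds into distributed training, `tf.distribute.MirroredStrategy` picks up all visible GPUs by default and falls back to the CPU when none are found. The tiny Keras model here is an arbitrary placeholder, not part of the original steps:

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all GPUs it discovers;
# with no GPUs it runs on the CPU with a single replica.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored across the
    # discovered devices. A trivial placeholder model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```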