In TensorFlow 2, you can control whether operations run on the GPU or CPU by setting the device context. This is done with the `tf.device` context manager.
Example Steps:
1. **Initialize TensorFlow and Detect Devices**

   First, verify the available GPUs and CPUs on your system.

   ```python
   import tensorflow as tf

   gpus = tf.config.list_physical_devices('GPU')
   if gpus:
       try:
           # Allocate GPU memory on demand rather than reserving it all up front
           for gpu in gpus:
               tf.config.experimental.set_memory_growth(gpu, True)
       except RuntimeError as e:
           # Memory growth must be set before the GPUs are initialized
           print(e)

   cpus = tf.config.list_physical_devices('CPU')
   ```
2. **Define TensorFlow Operations**

   Create the TensorFlow operations you want to place, such as model training or data processing.

   ```python
   def compute_on_device(device_name, size=10000):
       with tf.device(device_name):
           random_matrix = tf.random.normal((size, size), mean=0, stddev=1)
           dot_product = tf.linalg.matmul(random_matrix, tf.transpose(random_matrix))
           sum_result = tf.reduce_sum(dot_product)
       return sum_result
   ```
3. **Execute on CPU**

   Use `/CPU:0` as the device identifier to specify execution on the CPU.

   ```python
   result_cpu = compute_on_device('/CPU:0')
   print("Computed on CPU:", result_cpu)
   ```
4. **Execute on GPU**

   If GPUs are available, use `/GPU:0` as the device identifier to run on the first GPU. On multi-GPU systems, adjust the index (e.g., `/GPU:1`) to target a different GPU.

   ```python
   if gpus:
       result_gpu = compute_on_device('/GPU:0')
       print("Computed on GPU:", result_gpu)
   ```
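Note that requesting a device that does not exist normally raises an error. If you want the same script to run on both GPU and CPU-only machines, TensorFlow's soft device placement lets ops fall back to an available device. The `safe_compute` helper below is an illustrative sketch, not part of the steps above:

```python
import tensorflow as tf

# With soft placement enabled, ops requested on a missing device
# fall back to an available one instead of raising an error.
tf.config.set_soft_device_placement(True)

def safe_compute(device_name, size=100):
    # Hypothetical helper: runs a small matmul under the requested device,
    # relying on soft placement if that device does not exist.
    with tf.device(device_name):
        x = tf.random.normal((size, size))
        return tf.reduce_sum(tf.matmul(x, tf.transpose(x)))

# On a CPU-only machine this no longer raises; TensorFlow
# silently places the ops on the CPU instead.
result = safe_compute('/GPU:0')
print("Result:", float(result))
```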
5. **Switch Back to CPU**

   If needed, reuse `/CPU:0` to run the same or different operations.

   ```python
   result_cpu_again = compute_on_device('/CPU:0')
   print("Computed again on CPU:", result_cpu_again)
   ```
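To confirm that a placement actually took effect, you can inspect a tensor's `.device` attribute after running an op under `tf.device`. A minimal check might look like:

```python
import tensorflow as tf

with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# The device string names the job, replica, task, and device,
# e.g. '/job:localhost/replica:0/task:0/device:CPU:0'
print(b.device)
```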
Summary:
This approach lets you switch TensorFlow's computation between devices explicitly. It is useful for benchmarking performance, managing GPU memory, and testing code against different hardware configurations. In practice, explicit device placement gives developers finer control over the training and inference environments of their models.
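As a rough way to compare devices, you could time the same operation on each. The `time_on_device` helper below is a hypothetical sketch, not part of TensorFlow itself; it includes a warm-up run so one-time kernel and compilation costs are not counted:

```python
import time
import tensorflow as tf

def time_on_device(device_name, size=2000, repeats=3):
    # Hypothetical benchmark helper: times a matmul on the given device.
    with tf.device(device_name):
        x = tf.random.normal((size, size))
        _ = tf.matmul(x, tf.transpose(x))  # warm-up run
        start = time.perf_counter()
        for _ in range(repeats):
            y = tf.matmul(x, tf.transpose(x))
        _ = y.numpy()  # force execution to finish before stopping the clock
        return (time.perf_counter() - start) / repeats

print("CPU seconds per run:", time_on_device('/CPU:0'))
if tf.config.list_physical_devices('GPU'):
    print("GPU seconds per run:", time_on_device('/GPU:0'))
```

Because TensorFlow dispatches ops asynchronously on GPUs, calling `.numpy()` on the final result is what forces the work to complete before the timer stops.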