TensorFlow

Table of Contents

Datasets

GPU Configuration

Distributed Training (TF2)

Instantiate TF's Mirrored Strategy

# Use all available GPUs
mirrored_strategy = tf.distribute.MirroredStrategy()

# Specified GPUs
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])

Strategy Scope

Set up the model and optimizer within the strategy's scope so that their variables are created as mirrored variables.

with mirrored_strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
  optimizer = tf.keras.optimizers.SGD()
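A minimal end-to-end sketch of the pattern above, assuming TensorFlow 2 is installed. The data here is synthetic stand-in data for illustration only:

```python
import numpy as np
import tensorflow as tf

# Falls back to a single replica on CPU-only machines.
strategy = tf.distribute.MirroredStrategy()

# Build the model and optimizer inside the scope so their
# variables are created as mirrored variables.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer=tf.keras.optimizers.SGD(), loss="mse")

# Synthetic data: y ~ 2x (illustrative only, not from the source).
x = np.random.rand(64, 1).astype("float32")
y = 2 * x + 0.1 * np.random.rand(64, 1).astype("float32")

# Keras distributes the training step across replicas automatically.
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

With `model.fit`, no per-replica loop is needed; Keras shards the batches across the strategy's replicas for you.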

Resource: https://www.tensorflow.org/guide/distributed_training

Distributed Training with MNIST Dataset:

MNIST Dataset
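A hedged sketch of distributed Keras training on MNIST-shaped data. To stay self-contained it uses random stand-in arrays; swap in `tf.keras.datasets.mnist.load_data()` for the real dataset:

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Scale the global batch size by the replica count so each
# replica still sees a per-replica batch of 32.
per_replica_batch = 32
global_batch = per_replica_batch * strategy.num_replicas_in_sync

# Synthetic stand-in for MNIST: 28x28 grayscale images, 10 classes.
x = np.random.rand(256, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(256,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(global_batch)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(dataset, epochs=1, verbose=0)
```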

Distributed Training (TF1)

Allocate GPU Memory

# Limit this process to 85% of GPU memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.85)

# Pass the options in through the session config
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
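In TensorFlow 2, the same TF1-style session configuration is still reachable through the `tf.compat.v1` namespace. A sketch, assuming TF2 with the v1 compatibility layer (on a CPU-only machine the GPU options are simply ignored):

```python
import tensorflow as tf

# Sessions require graph mode, so disable TF2's eager execution.
tf.compat.v1.disable_eager_execution()

# Cap this process at 85% of each visible GPU's memory.
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.85)
config = tf.compat.v1.ConfigProto(gpu_options=gpu_options)

with tf.compat.v1.Session(config=config) as sess:
    result = sess.run(tf.constant(3) + tf.constant(4))
```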

GPU Acceleration

For an NVIDIA GPU configured for such applications, install the NVIDIA CUDA Toolkit. Using Anaconda Navigator, install the following packages in support of the tensorflow-gpu package:

tflow_select

cudatoolkit

cudnn

tensorflow

tensorflow-base

Once installed, activate the environment containing the above packages and confirm that TensorFlow recognizes the GPU with the following:

$ python
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.test.gpu_device_name()

If TensorFlow detects the GPU, the call returns its device name (for example, '/device:GPU:0'); otherwise it returns an empty string.
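In TensorFlow 2, `tf.config.list_physical_devices` is the more current way to check for GPUs (`tf.test.gpu_device_name` still works but predates it). A sketch:

```python
import tensorflow as tf

# Returns a list of PhysicalDevice objects; empty on CPU-only machines.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s):", [g.name for g in gpus])
else:
    print("No GPU detected; TensorFlow will run on the CPU.")
```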

Additional Hardware Resource:

Hardware & Driver Setup

Resources

Installation

Python - TensorFlow 2