I am looking for a simple way of verifying that my TensorFlow
graphs are actually running on the GPU.
P.S. It would also be nice to verify that the cuDNN
library is being used.
There are several ways to check op placement:
Pass RunOptions and RunMetadata to the session call and inspect the placement of ops and computations in TensorBoard. See the code here: https://www.tensorflow.org/get_started/graph_viz
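A minimal sketch of the RunMetadata approach, assuming the session-style API (`tf.compat.v1` here, so it also works under TF 2.x); the log directory `/tmp/logdir` is an arbitrary choice:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
c = tf.add(a, b, name="sum")

# Request a full trace so device placement and timings are recorded.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/logdir", sess.graph)
    result = sess.run(c, options=run_options, run_metadata=run_metadata)
    # Attach the collected metadata to the summary so TensorBoard's
    # graph view can color each node by the device it ran on.
    writer.add_run_metadata(run_metadata, "step0")
    writer.close()
```

After running this, point TensorBoard at `/tmp/logdir` and use the graph tab's "Device" coloring to see which ops landed on the GPU.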
Set the log_device_placement option in the session's ConfigProto. This logs to the console which device each operation is placed on. https://www.tensorflow.org/api_docs/python/tf/ConfigProto
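A sketch of the log_device_placement option, again using the `tf.compat.v1` session API:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

a = tf.constant(1.0, name="a")
b = tf.constant(2.0, name="b")
c = tf.add(a, b, name="add")

# log_device_placement=True prints, for every op, the device it was
# assigned to (e.g. a line ending in device:GPU:0 when the GPU is used).
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    total = sess.run(c)
```

If the ops print with a CPU device instead, the graph is not actually running on the GPU.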
Watch GPU utilization in a terminal using nvidia-smi while the graph runs.
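For the last option, one common pattern is to refresh nvidia-smi in a loop while the training script runs in another terminal:

```shell
# Refresh the nvidia-smi report every second; while the graph is
# running on the GPU, the GPU-Util column should be nonzero and the
# python process should appear in the process list.
watch -n 1 nvidia-smi
```

This confirms the GPU is being exercised, though it does not by itself show which individual ops were placed there.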