When I run a Keras script, I get the following output:
Using TensorFlow backend.
2017-06-14 17:40:44.621761: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.1 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621783: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621791: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621795: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.721911: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:40:44.722288: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.69GiB
2017-06-14 17:40:44.722302: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:40:44.722307: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:40:44.722312: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
What does this mean? Am I using the GPU or the CPU version of TensorFlow?
Before installing Keras, I was working with the GPU version of TensorFlow.
Also, sudo pip3 list shows tensorflow-gpu (1.1.0) and nothing like tensorflow-cpu.
Running the command mentioned in [this stackoverflow question] gives the following:
The TensorFlow library wasn't compiled to use SSE4.1 instructions,
but these are available on your machine and could speed up CPU
computations.
2017-06-14 17:53:31.424793: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424803: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424812: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424820: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.540959: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:53:31.541359: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 128.12MiB
2017-06-14 17:53:31.541407: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:53:31.541420: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:53:31.541441: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
2017-06-14 17:53:31.547902: E
tensorflow/stream_executor/cuda/cuda_driver.cc:893] failed to
allocate 128.12M (134348800 bytes) from device:
CUDA_ERROR_OUT_OF_MEMORY
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
2017-06-14 17:53:31.549482: I
tensorflow/core/common_runtime/direct_session.cc:257] Device
mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
You are using the GPU version. You can list the available tensorflow devices with the following snippet (also check this question):
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) # list of DeviceAttributes
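If you only need the GPU entries, you can filter the returned DeviceAttributes. A minimal sketch (the names local_devices and gpu_names are just illustrative):
from tensorflow.python.client import device_lib
# Keep only the GPU entries from the local device list
local_devices = device_lib.list_local_devices()
gpu_names = [d.name for d in local_devices if d.device_type == 'GPU']
print(gpu_names)  # e.g. ['/gpu:0'] when the GPU build detects your card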
EDIT:
With tensorflow >= 1.4 you can run the following function:
import tensorflow as tf
tf.test.is_gpu_available() # True/False
# Or only check for GPUs with CUDA support
tf.test.is_gpu_available(cuda_only=True)
EDIT 2:
The above function is deprecated in tensorflow > 2.1. Instead, you should use the following function:
import tensorflow as tf
tf.config.list_physical_devices('GPU')
NOTE:
In your case both the CPU and the GPU are available; if you were using the CPU version of tensorflow, the GPU would not be listed. Without explicitly setting a device (with tf.device(".."), see the sketch below), tensorflow will automatically pick your GPU!
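Here is a minimal sketch of explicit device placement using the TF 1.x graph API, which matches the tensorflow-gpu 1.1.0 in your pip list; the tensors a, b, c are purely illustrative:
import tensorflow as tf
# Pin this matmul to the CPU instead of letting tensorflow choose a device
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)
# log_device_placement prints which device each op actually ran on
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))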
In addition, your sudo pip3 list output clearly shows you are using tensorflow-gpu. If you had the CPU version of tensorflow, the package name would be something like tensorflow (1.1.0).
Check this issue for information about the warnings.
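As a side note, the SSE/AVX/FMA build warnings can usually be silenced with the TF_CPP_MIN_LOG_LEVEL environment variable, set before tensorflow is imported (an optional sketch, not required for GPU usage):
import os
# 2 hides INFO and WARNING messages from the C++ backend; it must be set
# before the tensorflow import to take effect
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf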