My machine has a GeForce 940MX GDDR5 GPU.
I have installed all the requirements to run GPU-accelerated dlib:
CUDA 9.0 toolkit with all 3 patch updates from https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal
cuDNN 7.1.4
Then I executed the commands below, after cloning the davisking/dlib repository from GitHub, to compile dlib with GPU support:
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA
Now how can I check/confirm whether dlib (or libraries that depend on dlib, like Adam Geitgey's face_recognition) is using the GPU from a Python shell / Anaconda (Jupyter Notebook)?
In addition to the previous answer, you can check the build-time flag
dlib.DLIB_USE_CUDA
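In a Python session that check is a one-liner (a minimal sketch; the flag is a boolean fixed when dlib was compiled):
import dlib

# True only if this dlib build was compiled with DLIB_USE_CUDA=1
print(dlib.DLIB_USE_CUDA)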
There are also some alternative ways to make sure that dlib is actually using your GPU.
The easiest is to check whether dlib recognizes your GPU.
import dlib.cuda as cuda
print(cuda.get_num_devices())
If the number of devices is >= 1 then dlib can use your device.
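If you want a script to fail fast when the GPU build is not in use, you can combine both checks into a small guard before your dlib / face_recognition code runs (a sketch only; the error messages are illustrative):
import dlib
import dlib.cuda as cuda

# Abort early if this dlib build has no CUDA support
# or if no CUDA-capable device is visible to it.
if not dlib.DLIB_USE_CUDA:
    raise RuntimeError("this dlib build was compiled without CUDA support")
if cuda.get_num_devices() < 1:
    raise RuntimeError("dlib does not see any CUDA device")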
Another useful trick is to run your dlib code and at the same time run
$ nvidia-smi
This should give you full GPU utilization information, where you can see the total utilization together with the memory usage of each process.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48 Driver Version: 410.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 0% 52C P2 36W / 151W | 763MiB / 8117MiB | 5% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1042 G /usr/lib/xorg/Xorg 18MiB |
| 0 1073 G /usr/bin/gnome-shell 51MiB |
| 0 1428 G /usr/lib/xorg/Xorg 167MiB |
| 0 1558 G /usr/bin/gnome-shell 102MiB |
| 0 2113 G ...-token=24AA922604256065B682BE6D9A74C3E1 33MiB |
| 0 3878 C python 385MiB |
+-----------------------------------------------------------------------------+
In some cases the Processes box might say something like "processes are not supported"; this does not mean your GPU cannot run code, it just means that it does not support this kind of logging.
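If you want something concrete to run while watching nvidia-smi, a short face_recognition script that forces the CNN detector (the GPU code path) works well; this is only a sketch, and "my_image.jpg" is a placeholder you need to replace with a real image:
import face_recognition

# Any test image will do; replace the placeholder path.
image = face_recognition.load_image_file("my_image.jpg")

# model="cnn" uses dlib's CUDA-enabled CNN face detector, so while this
# runs you should see a "python" process in the nvidia-smi process list.
locations = face_recognition.face_locations(image, model="cnn")
print(locations)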