Get the total and free GPU memory available using PyTorch

Hari Prasad · Oct 3, 2019 · Viewed 9.3k times

I'm using Google Colab's free GPUs for experimentation and want to know how much GPU memory is available to play with. torch.cuda.memory_allocated() returns the GPU memory currently occupied, but how do we determine the total available memory using PyTorch?

Answer

prosti · Oct 3, 2019

PyTorch can give you the total, reserved (cached), and allocated memory:

import torch

t = torch.cuda.get_device_properties(0).total_memory
r = torch.cuda.memory_reserved(0)   # memory_cached(0) in PyTorch < 1.4
a = torch.cuda.memory_allocated(0)
f = r - a  # free inside the reserved (cached) region
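
All of these counters are in bytes. A minimal sketch of converting them to MiB; the byte values below are made-up stand-ins for what the calls above would return on a real GPU:

```python
def bytes_to_mib(n_bytes: int) -> float:
    """Convert a byte count to mebibytes (MiB)."""
    return n_bytes / (1024 ** 2)

# Stand-in values in bytes, as torch.cuda would report them:
reserved = 2 * 1024 ** 3   # e.g. torch.cuda.memory_reserved(0) (memory_cached(0) on older PyTorch)
allocated = 1 * 1024 ** 3  # e.g. torch.cuda.memory_allocated(0)
free_in_cache = reserved - allocated

print(f"free inside cache: {bytes_to_mib(free_in_cache):.0f} MiB")
```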

Python bindings to NVIDIA's NVML (pip install pynvml) can give you the info for the whole GPU (0 here means the first GPU device):

from pynvml import *
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'total    : {info.total}')
print(f'free     : {info.free}')
print(f'used     : {info.used}')
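
The NVML memory-info object exposes total/free/used fields in bytes. A hedged sketch of formatting it for humans; MemInfo here is a stand-in namedtuple mimicking what nvmlDeviceGetMemoryInfo returns, so this runs without a GPU:

```python
from collections import namedtuple

# Stand-in for the object nvmlDeviceGetMemoryInfo() returns (fields in bytes).
MemInfo = namedtuple("MemInfo", "total free used")

def summarize(info) -> str:
    """One-line summary of an NVML-style memory-info object."""
    mib = 1024 ** 2
    return (f"{info.used // mib} MiB used / "
            f"{info.total // mib} MiB total "
            f"({info.free // mib} MiB free)")

print(summarize(MemInfo(total=8 * 1024 ** 3, free=6 * 1024 ** 3, used=2 * 1024 ** 3)))
```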

You can also check nvidia-smi for memory info, e.g. nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv. Another option is nvtop, but that tool needs to be built from source (at the time of writing). You can also check memory with gpustat (pip3 install gpustat).

If you would like to do this in C++ with CUDA (compile with nvcc, e.g. nvcc gpu_mem.cu -o gpu_mem):

#include <iostream>
#include "cuda.h"
#include "cuda_runtime_api.h"

using namespace std;

int main( void ) {
    int num_gpus;
    size_t free, total;
    cudaGetDeviceCount( &num_gpus );
    for ( int gpu_id = 0; gpu_id < num_gpus; gpu_id++ ) {
        cudaSetDevice( gpu_id );
        int id;
        cudaGetDevice( &id );
        cudaMemGetInfo( &free, &total );  // both values are in bytes, for the currently selected device
        cout << "GPU " << id << " memory: free=" << free << ", total=" << total << endl;
    }
    return 0;
}