In Colaboratory, CUDA cannot be used with PyTorch

biao_biao · Mar 27, 2019 · Viewed 14.6k times

The error message is as follows:

RuntimeError   Traceback (most recent call last)
<ipython-input-24-06e96beb03a5> in <module>()
     11
     12 x_test = np.array(test_features)
---> 13 x_test_cuda = torch.tensor(x_test, dtype=torch.float).cuda()
     14 test = torch.utils.data.TensorDataset(x_test_cuda)
     15 test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
    160 class CudaError(RuntimeError):
    161     def __init__(self, code):
--> 162         msg = cudart().cudaGetErrorString(code).decode('utf-8')
    163         super(CudaError, self).__init__('{0} ({1})'.format(msg, code))
    164

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51

Answer

BlankSpace · Apr 17, 2019

Click on Runtime and select Change runtime type.

Under Hardware accelerator, select GPU and hit Save. The notebook runtime will restart, so re-run your cells afterwards.
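
Once the GPU runtime is active, the original snippet should run. As a rough sketch (reusing `test_features` and `batch_size` from the question), you can also check for a GPU explicitly and fall back to the CPU, so the cell does not crash if no CUDA device is attached:

import numpy as np
import torch

# Verify that a CUDA device is visible; should print True after switching the runtime
print(torch.cuda.is_available())

# Pick the GPU if present, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x_test = np.array(test_features)
x_test_tensor = torch.tensor(x_test, dtype=torch.float).to(device)

test = torch.utils.data.TensorDataset(x_test_tensor)
test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)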