I attempted to build PyTorch from source as recommended.
Specs: GPU: GT 710, Driver 460.32.03, CUDA 11.2, Python 3.8.5.
Test:

```python
import torch
print(torch.__version__)             # 1.7.1
print(torch.cuda.is_available())     # True
print(torch.backends.cudnn.enabled)  # True
device = torch.device('cuda')
print(torch.cuda.get_device_properties(device))
# _CudaDeviceProperties(name='GeForce GT 710', major=3, minor=5,
#                       total_memory=1998MB, multi_processor_count=1)
print(torch.tensor([1.0, 2.0]).cuda())
# RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable
```
I think your problem is related to the CUDA compute capability (cc) of your card. From what I know, you should compile PyTorch from source and make sure the compute capability the build targets is compatible with your PyTorch version.
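As a rough sketch of the kind of check meant here (the arch lists and capability tuples below are made-up placeholders; on a real machine you would read them from the build and from `torch.cuda.get_device_capability()`):

```python
# Sketch: decide whether a device's compute capability is covered by the
# architectures a PyTorch build was compiled for. The sample values below
# are assumptions, not read from a real GPU.

def build_supports_device(arch_list, capability):
    """arch_list: e.g. ['sm_37', 'sm_50']; capability: (major, minor)."""
    major, minor = capability
    cc = major * 10 + minor
    # A binary compiled for sm_XY generally runs on devices of the same
    # major family with capability >= X.Y (ignoring PTX JIT fallback),
    # so require such an arch to exist in the build.
    compiled = [int(a.split('_')[1]) for a in arch_list if a.startswith('sm_')]
    return any(c <= cc and c // 10 == cc // 10 for c in compiled)

# GT 710 is capability (3, 5); a typical CUDA 11 binary build starts at sm_37.
print(build_supports_device(['sm_37', 'sm_50', 'sm_60'], (3, 5)))  # False
print(build_supports_device(['sm_35', 'sm_50'], (3, 5)))           # True
```

If the first case applies to your install, a source build that explicitly targets your card's capability is the usual workaround.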
What seems to be fine:
• CUDA 11.2 requires Linux driver >= 460.27.03; you have 460.32.03.
• Major + minor values give compute capability 3.5; you need at least 3.0 to use CUDA 11.2.
• Your Python version is 3.8.5; recommended is >= 3.6, and you used a built-from-source install.

I do not think this is a compatibility-driven issue for the most part (although some GT 710 cards were reported to have a compute capability under 3.0, which is not the case here). Compatibility details can be found in NVIDIA's CUDA documentation. There is a query command (deviceQuery) to double-check CUDA on your GPU; it should be located in the extras section of your CUDA installation folder.
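A sketch of locating and running that query tool, assuming the default CUDA 11.2 install prefix (a prebuilt deviceQuery ships under extras/demo_suite in many installs; adjust the path to yours):

```shell
# Default install prefix is an assumption; override CUDA_HOME if yours differs.
CUDA_HOME="${CUDA_HOME:-/usr/local/cuda-11.2}"
DQ="$CUDA_HOME/extras/demo_suite/deviceQuery"
if [ -x "$DQ" ]; then
    "$DQ"   # reports compute capability, memory, driver/runtime versions
else
    echo "deviceQuery not found at $DQ"
fi
```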
Use the command nvidia-smi to see which processes are using the graphics card, and show the result. I would also recommend running the aforementioned deviceQuery and posting its output.
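nvidia-smi also has a machine-readable query mode that is easier to read off than the default table; the query flags below exist in the 460-series driver, but the sample line being parsed is made up:

```shell
# On the GPU machine (commented out here, since it needs a driver):
# nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader
# Each line comes back as "pid, name, memory"; pulling the PID out of a
# made-up sample line of that output:
echo '12345, python, 500 MiB' | awk -F', ' '{ print $1 }'   # prints 12345
```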
If there are already processes on the graphics card, you can kill them with

```shell
nvidia-smi | grep 'python' | awk '{ print $3 }' | xargs -n1 kill -9
```

where 'python' is the name of the process.
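One caveat on that pipeline: the column holding the PID in the default nvidia-smi table varies by driver version (field 3 on older drivers, field 5 on newer ones), so dry-run the extraction before piping it into kill. A check against a made-up line in the newer layout:

```shell
# Made-up process-table line in the layout newer drivers print
# (| GPU  GI  CI  PID  Type  Process name  GPU Memory |):
sample='|    0   N/A  N/A     12345      C   python          500MiB |'
# Here the PID is field 5, not field 3 -- adjust the awk index to match
# your own nvidia-smi output before appending "| xargs -n1 kill -9".
echo "$sample" | grep 'python' | awk '{ print $5 }'   # prints 12345
```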