How to get PyTorch to recognize GPUs when using Anaconda

Hi,

I recently received a Lambda Labs desktop with two GPUs. I’m using Anaconda, which, as I’ve seen mentioned in other posts, takes precedence over the Lambda Stack install. I’m having trouble getting PyTorch to use the desktop’s GPUs: specifically, torch.cuda.is_available() returns False.

I’m pretty sure the problem is that the machine shipped with CUDA 11.2, while PyTorch currently only supports CUDA up to version 11.1 (Start Locally | PyTorch). Out of curiosity, I pulled the latest NVIDIA PyTorch Docker container, and inside that container torch.cuda.is_available() returned True. Is there any way to get PyTorch to recognize my GPUs without downgrading my CUDA version? If not, what’s the cleanest way to downgrade? I’d rather not deal with the overhead of a container for now, and I couldn’t even get port forwarding on the container working properly, so that isn’t my preferred solution.
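For reference, this is roughly the check I’m running inside the conda environment (the torch.version.cuda line would print None if a CPU-only build got installed):

    import torch

    print(torch.__version__)          # installed PyTorch build
    print(torch.version.cuda)         # CUDA runtime the build was compiled against; None means CPU-only
    print(torch.cuda.is_available())  # currently prints False for me
    print(torch.cuda.device_count())  # currently prints 0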

Thanks!


There is no need to downgrade. The conda binaries bundle their own CUDA runtime via the cudatoolkit package, so the system-wide CUDA 11.2 install isn’t used; only the NVIDIA driver needs to be recent enough to support it. Steps below, with a quick verification check at the end:

  1. Create a new env:
    conda create -n pytorch python=3.8
  2. Enter the env:
    conda activate pytorch
  3. Install PyTorch as in the install docs:
    conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
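After step 3, a quick sanity check inside the new env should show both GPUs (the reported device names will depend on your hardware):

    import torch

    print(torch.cuda.is_available())  # should now print True
    print(torch.cuda.device_count())  # should print 2 on a dual-GPU machine
    for i in range(torch.cuda.device_count()):
        print(torch.cuda.get_device_name(i))  # model name of each GPU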