GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation

I am using this github link to understand a few things. The PyTorch version is different from the one in Lambda Stack, so I created a conda environment and installed all the dependencies. I am getting the error/warning above. What is the best way to deal with this, and what are the best practices for using virtual environments? I might need different versions of PyTorch/CUDA depending on the project.

The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

I encountered this error as well. The reason was that torch.version.cuda was 10.2, which is too old for the 3090. I upgraded to torch==1.8.1+cu111 and the error was solved.
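A quick way to check whether your install has this mismatch is to compare the CUDA version the wheel was built against with the GPU's compute capability. This is a small diagnostic sketch (the variable names are my own); an RTX 3090 reports sm_86, which needs a CUDA 11.x build of PyTorch:

```python
# Diagnostic sketch: report which CUDA toolkit the installed PyTorch
# wheel was built against and the compute capability of GPU 0.
lines = []
try:
    import torch
    lines.append(f"torch: {torch.__version__}")
    # torch.version.cuda is the CUDA toolkit the wheel was compiled for,
    # e.g. '10.2' (too old for sm_86) or '11.1'.
    lines.append(f"built for CUDA: {torch.version.cuda}")
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        lines.append(f"GPU 0 compute capability: sm_{major}{minor}")
    else:
        lines.append("no usable CUDA device for this build")
except ImportError:
    lines.append("PyTorch is not installed in this environment")

report = "\n".join(lines)
print(report)
```

If the report shows a 10.x CUDA build together with sm_86, the wheel cannot run kernels on that GPU and you need a cu11x build.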

You could create an environment with conda create, switch to it with conda activate your_env_name, and pip install there. You could also use conda install instead of pip install; both are fine. Use conda list or pip list to see what is installed.
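Putting the two answers together, a per-project environment lets each project pin its own PyTorch/CUDA combination. A minimal sketch (the env name and Python version here are just examples; the pip line is the cu111 install mentioned above):

```shell
# One environment per project, each with its own PyTorch/CUDA pin.
conda create -n torch18 python=3.8
conda activate torch18

# Install the CUDA 11.1 build of PyTorch 1.8.1, which supports sm_86:
pip install torch==1.8.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html

# Inspect what is installed in this environment:
conda list
pip list
```

Switching projects is then just conda activate with the other environment's name; the Lambda Stack system install stays untouched.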
