I am using this GitHub link to understand a few things. The PyTorch version it needs is different from what ships with Lambda Stack, so I created a conda environment and installed all the dependencies. I am now getting the error/warnings below. What is the best way to deal with this, and what are the best practices for using virtual environments? I might need different versions of PyTorch/CUDA depending on the project.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
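For context, the warning means the installed PyTorch wheel was built only for compute capabilities up to sm_75, while the RTX 3090 is sm_86, so a CUDA 11.1+ build of PyTorch is needed. A minimal sketch of the usual per-project workflow, assuming conda is installed (the environment name `proj-torch` and the Python/CUDA versions are just illustrative choices, not requirements):

```shell
# Create one isolated environment per project, so each project can pin
# its own PyTorch/CUDA combination without touching the system install.
conda create -n proj-torch python=3.10 -y
conda activate proj-torch

# Install a PyTorch build compiled for a recent CUDA toolkit; the RTX 3090
# (sm_86) needs a CUDA >= 11.1 build. The exact index URL to use is shown
# by the selector at https://pytorch.org/get-started/locally/
pip install torch --index-url https://download.pytorch.org/whl/cu118
```

After installing, you can sanity-check the build from Python with `torch.cuda.get_arch_list()`, which should now include `sm_86`. Deactivating the environment (`conda deactivate`) returns you to the system-wide Lambda Stack install, so the two never conflict.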