Lambda workstation GPU not recognized

After some Lambda software updates, PyTorch on my Lambda workstation no longer recognizes the GPUs in the machine.

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

/home/jupyter-ggilley/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at  /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0

My software seems to be all up-to-date. Any suggestions?

Greg

It looks like your torch install is not using Lambda Stack; it is using the pip-installed packages in your ~/.local directory. Move that directory aside and rerun a quick diagnostic:
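The reason a ~/.local install shadows Lambda Stack is Python's import path ordering: the per-user site-packages directory is searched before the system-wide dist-packages. A minimal sketch to see this on your machine (paths will differ per install; this only uses the standard library):

```python
import site
import sys

# The per-user site-packages directory
# (e.g. ~/.local/lib/pythonX.Y/site-packages) appears on sys.path
# ahead of the system dist-packages, so a `pip install --user torch`
# wins over the Lambda Stack apt-installed torch.
print("user site-packages:", site.getusersitepackages())

# Print the search path in order; entries earlier in this list
# take precedence on import.
for p in sys.path:
    print(p)
```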

$ mv ~/.local ~/.local.backup
$ cat torch-checks.py
import torch

print("\nPyTorch version: ", torch.__version__)
print(torch._C._cuda_getCompiledVersion(), "cuda compiled version")
print("\ntorch", torch.__file__)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("Device: ", device)
# Only query CUDA device details when a GPU is visible; these calls
# raise an error on a CPU-only device.
if device.type == 'cuda':
    print("Device name: ", torch.cuda.get_device_name(device))
    print("Device properties: ", torch.cuda.get_device_properties(device))
    print("Device count: ", torch.cuda.device_count())

$ python ./torch-checks.py
...
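After moving ~/.local aside, you can confirm where Python will resolve torch from without actually importing it, using importlib from the standard library (a sketch; the expected system location for a Lambda Stack install is an assumption about your setup):

```python
import importlib.util

# Locate the torch package on sys.path without importing it;
# spec is None if torch is not installed at all.
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch not found on sys.path")
else:
    # With ~/.local moved aside, this should point at a system
    # location such as /usr/lib/python3/dist-packages rather than
    # anything under ~/.local.
    print("torch resolves to:", spec.origin)
```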