I have received and love my Tensorbook. What is the correct way to set CUDA environment variables with Lambda Stack? I am getting the following error when I try to run TensorBoard:
$ tensorboard --logdir . --host localhost
2020-11-29 14:11:49.718836: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
I think it may be because PATH and LD_LIBRARY_PATH are not set up correctly on my Tensorbook, per the NVIDIA post-installation actions. However, I can't find the folder where the CUDA libraries are actually installed on my Tensorbook.
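For reference, this is what I understand the NVIDIA post-installation guide wants added to ~/.bashrc. The /usr/local/cuda-11.0 path is just my assumption from that guide; I can't find any such directory on this machine, which is why I'm asking what the Lambda Stack equivalent is:
$ export PATH=/usr/local/cuda-11.0/bin${PATH:+:${PATH}}                                        # add CUDA tools (e.g. nvcc) to PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda-11.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}     # let the loader find libcudart.so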
I have the following version of Lambda Stack installed:
$ apt search 'lambda-stack'
Sorting... Done
Full Text Search... Done
lambda-stack-cpu/unknown,unknown 0.1.12~20.04.3 all
Deep learning software stack from Lambda Labs (CPU)
lambda-stack-cuda/unknown,unknown 0.1.12~20.04.3 all [upgradable from: 0.1.12~20.04.1]
Deep learning software stack from Lambda Labs (CUDA)
I tried the suggestions in the "Path for labmda stack" thread on DeepTalk - Deep Learning Community and still could not find the path to the CUDA libraries.
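In case it helps, here is roughly how I have been searching for the library on the system (plain Ubuntu tooling; I may simply be looking in the wrong place):
$ ldconfig -p | grep libcudart                      # ask the runtime linker cache about libcudart
$ dpkg -S libcudart.so.11.0                         # ask which installed package (if any) owns the file
$ sudo find / -name 'libcudart.so*' 2>/dev/null     # brute-force filesystem search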
Please help. What is the path to the CUDA libraries under Lambda Stack? Is Lambda Stack supposed to already take care of the NVIDIA post-installation steps? Am I missing some simple way to activate the correct NVIDIA environment via Lambda Stack?