cuDNN Install w/ Lambda Stack

I have the latest version of Lambda Stack installed on a Lambda Vector workstation. I understand that cuDNN is installed as part of the stack, but it does not appear to be in a standard location. Specifically, I need the cudnn.h header file and I can't find it. I did find a very old post here that said to install it from NVIDIA, but it did not indicate whether that install would break the Lambda Stack configuration. Can anybody comment on this?

cuDNN is installed for PyTorch and TensorFlow. Both frameworks must be built against specific versions of CUDA and cuDNN, so the matching cuDNN libraries ship with the given PyTorch and TensorFlow builds.
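As a quick sanity check, you can ask the framework itself which cuDNN it ships with. A minimal sketch; the function name `bundled_cudnn_version` is mine, and it simply returns None when neither framework is importable (or when the install is CPU-only):

```python
def bundled_cudnn_version():
    """Return the cuDNN version bundled with PyTorch or TensorFlow, or None."""
    try:
        import torch
        # PyTorch reports an integer such as 8902 for cuDNN 8.9.2;
        # returns None on CPU-only builds.
        return torch.backends.cudnn.version()
    except ImportError:
        pass
    try:
        import tensorflow as tf
        # Present in CUDA-enabled TensorFlow builds, absent otherwise.
        return tf.sysconfig.get_build_info().get("cudnn_version")
    except ImportError:
        return None

print(bundled_cudnn_version())
```

This only tells you what the framework was linked against; it does not mean the header or library is anywhere your own C/C++ build can see it.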

cudnn.h is only needed for C/C++ development, and the C development headers are not installed with Lambda Stack.

NVIDIA does install it, but in a non-standard location (/usr/local, which is supposed to be reserved for user/site software).
You can install the cuDNN package from NVIDIA, but it requires registration and accepting a EULA.

If you are using Anaconda: it installs cuDNN, but it does not correctly set up LD_LIBRARY_PATH for the library:
export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib:${LD_LIBRARY_PATH}
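To verify that the export actually made the library visible, you can try to dlopen it, since dlopen honors LD_LIBRARY_PATH. A minimal sketch; `cudnn_loadable` is my name, and the soname may differ on your system (e.g. libcudnn.so.8 for cuDNN 8):

```python
import ctypes
import os

def cudnn_loadable(soname="libcudnn.so"):
    """True if the dynamic loader can resolve cuDNN right now.

    ctypes.CDLL calls dlopen under the hood, which searches
    LD_LIBRARY_PATH, so this reflects the export above.
    """
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# If you are inside a conda environment, its copy lives under $CONDA_PREFIX/lib.
prefix = os.environ.get("CONDA_PREFIX")
if prefix:
    print("conda prefix:", prefix)
print("libcudnn loadable:", cudnn_loadable())
```

If this prints False after the export, the library is either not installed in that environment or installed under a different soname.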

If you are using python venv, virtualenv, or similar, they do not install their own copy of cuDNN, so you would need to check which specific build you need and install it yourself.

If you are just doing C/C++ development, then you can install cuDNN in /usr/local and point your build to the include files there. Most CUDA C/C++ build setups hard-code their include and library paths to /usr/local. (NVIDIA has not fixed this in over 15 years of people pointing it out, so I do not expect it will get fixed.)
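To see where (if anywhere) the header actually landed, you can glob the usual install locations. A minimal sketch; the candidate paths are assumptions based on where distro packages and NVIDIA's installers typically put things:

```python
import glob

# Distro/NVIDIA repo packages tend to use /usr/include;
# the CUDA toolkit installs tend to use /usr/local/cuda*/include.
CANDIDATES = [
    "/usr/include/cudnn*.h",
    "/usr/local/cuda/include/cudnn*.h",
    "/usr/local/cuda-*/include/cudnn*.h",
]

def find_cudnn_headers():
    hits = []
    for pattern in CANDIDATES:
        hits.extend(glob.glob(pattern))
    return sorted(set(hits))

headers = find_cudnn_headers()
print(headers or "no cudnn headers found")
# Once located, point your compiler at the matching directories, e.g.:
#   gcc app.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudnn
```

An empty result on a stock Lambda Stack machine is expected, which is exactly the situation the original question describes.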
