cuDNN is installed for PyTorch and TensorFlow. Both are built against specific versions of CUDA and cuDNN, so the matching cuDNN library is installed alongside the given PyTorch or TensorFlow build.
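If you want to see which cuDNN version your framework was built against, a quick check is possible from the command line. This is a sketch assuming the standard PyTorch and TensorFlow Python APIs; either framework may simply not be installed:

```shell
# Print the cuDNN version each framework was built with (if installed).
python3 -c "import torch; print(torch.backends.cudnn.version())" 2>/dev/null \
  || echo "PyTorch not installed"
python3 -c "import tensorflow as tf; print(tf.sysconfig.get_build_info().get('cudnn_version'))" 2>/dev/null \
  || echo "TensorFlow not installed"
```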
The cudnn.h header is only needed for C/C++ development, and those C development headers are not installed with Lambda Stack.
NVIDIA's packages install it, but in a non-standard location: /usr/local, which is supposed to be reserved for user/site software.
You can install the cuDNN package from NVIDIA, but it requires registration and accepting a EULA.
If you are using Anaconda, it installs cuDNN but does not correctly set up LD_LIBRARY_PATH for the library.
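One way to work around that is to point the dynamic linker at the copy conda installed. This is a sketch for an activated conda environment; the $CONDA_PREFIX/lib location of the library is an assumption and may differ in your setup:

```shell
# Prepend the conda env's lib directory so the loader finds its libcudnn.
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Confirm the library is actually there:
ls "$CONDA_PREFIX"/lib/libcudnn* 2>/dev/null
```

Putting the export in the environment's activation script keeps it from leaking into other environments.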
If you are using Python venv, virtualenv, or similar, they do not install any flavor of cuDNN at all, so you would need to determine the specific build you need and install it yourself.
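Inside a virtual environment, one option is NVIDIA's pip wheels. This is a sketch assuming an activated venv; the package names below are NVIDIA's CUDA-version-specific wheels, and you must pick the one matching the CUDA version your framework was built against:

```shell
# Install the cuDNN wheel matching your CUDA major version (pick one).
pip install nvidia-cudnn-cu12    # for CUDA 12.x builds
# pip install nvidia-cudnn-cu11  # for CUDA 11.x builds
```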
If you are just doing C/C++ development, then you can install cuDNN in /usr/local and point to the include files there. Most CUDA C/C++ code uses various variables for hard-coded paths under /usr/local. (NVIDIA has not fixed this in over 15 years of it being pointed out, so I do not expect it will be fixed.)
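A compile line for that layout might look like the following. This is a sketch: /usr/local/cuda is the conventional NVIDIA symlink, my_app.c is a hypothetical source file, and the lib64 path may differ on your distribution:

```shell
# Compile and link a C program against cuDNN installed under /usr/local.
gcc my_app.c \
    -I/usr/local/cuda/include \
    -L/usr/local/cuda/lib64 \
    -lcudnn -lcudart \
    -o my_app
```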