Using Virtual Environments with Lambda Stack

I am using the Tensorbook, which comes with libraries preinstalled as part of Lambda Stack, but I don’t want to develop in the system (root) Python environment; I’d like to create a new virtual environment instead.

Are there best practices, commands, or a recommended process for using Lambda Stack from a virtual environment?

We’re still establishing the best practices. There are two main choices I see:

  1. Use Lambda Stack as an easy way to install your drivers, CUDA, cuDNN, etc.
  2. Use Lambda Stack’s version of TensorFlow / PyTorch.

If you decide to go for type 1, you’ll simply install Lambda Stack and then create your virtualenv like this:

virtualenv -p python3 your_venv

If you decide to go for type 2, you’ll install Lambda Stack and then create your virtualenv like this:

virtualenv --system-site-packages -p python3 your_venv

Essentially, with type 2 you’ll be using our TensorFlow / PyTorch packages, while with type 1 you’ll be installing them yourself.
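The two setups above can be sketched with the standard-library venv module, which accepts the same --system-site-packages flag as virtualenv (the venv names here are just illustrative):

```shell
# Type 1: isolated venv -- Lambda Stack only provides drivers/CUDA/cuDNN;
# you pip-install TensorFlow / PyTorch into the venv yourself.
python3 -m venv venv_type1

# Type 2: venv that can also see Lambda Stack's system-wide
# TensorFlow / PyTorch packages.
python3 -m venv --system-site-packages venv_type2

# Each venv records its mode in pyvenv.cfg:
grep include-system-site-packages venv_type1/pyvenv.cfg
grep include-system-site-packages venv_type2/pyvenv.cfg
```

The grep lines should print `include-system-site-packages = false` for the type 1 venv and `= true` for the type 2 venv, which is a quick way to verify which mode a venv was created in.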


Since this is from about a year ago, I wondered if you could say whether there are now established best practices for virtual environments? Thanks.


Are there any updates on this?

@sabalaba Thank you. If we use the 2nd option:
virtualenv --system-site-packages -p python3 your_venv
and then install additional packages from a requirements.txt file,
would the additional packages be installed in the venv, or where system site packages are installed?
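For reference, even with --system-site-packages, pip run from inside the venv installs new packages into the venv’s own site-packages, not the system location. A quick way to check where pip would install to, using a throwaway venv name:

```shell
# A --system-site-packages venv can *read* system packages, but pip invoked
# from inside it still installs new packages into the venv itself.
python3 -m venv --system-site-packages demo_venv

# Ask the venv's own interpreter where pip would place pure-Python packages:
demo_venv/bin/python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"
```

The printed path should sit inside `demo_venv/`, confirming that additional packages from a requirements.txt would land in the venv rather than alongside the system site-packages.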

I have a Lambda Labs Tensorbook.
We want to use Cookiecutter Data Science for our project (Home - Cookiecutter Data Science), which suggests, as a best practice, having a separate Python venv for each project.

Is there a way to do this (option 2) using an IDE like PyCharm?

@rajatalak

Here are the two ways that PyCharm seems to support virtualenv:

  1. It can use an existing virtualenv
  2. It can create a virtualenv

Documentation is at:
Configure a virtual environment | PyCharm Documentation


Is there any way to achieve the same with pipenv?

It should be pretty much the same; the write-ups below cover the setups we have documented so far.

We have documented this for:
- Docker - NVIDIA NGC Tutorial: Run a PyTorch Docker Container using nvidia-container-toolkit on Ubuntu
- python venv -
- virtualenv -
- Anaconda/miniconda - Setting up environments: Anaconda
  (I missed a step in that write-up: I installed cuDNN separately, though you could also use the Anaconda-provided copy.)
  Note: the Anaconda setup is missing a step of setting LD_LIBRARY_PATH for its cuDNN, so that needs to be done manually.
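The exact fix isn’t shown above, but a manual LD_LIBRARY_PATH adjustment for a conda environment typically looks like the following. This is a sketch, not the original author’s command: it assumes an active conda env and that conda placed libcudnn under $CONDA_PREFIX/lib, so verify the actual location first.

```shell
# Assumes an active conda env; $CONDA_PREFIX/lib is where conda usually
# places libcudnn -- verify with: ls "$CONDA_PREFIX/lib" | grep cudnn
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Adding this line to the env’s activation scripts (or your shell profile) makes the setting persistent across sessions.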

I hope that helps.