How to use GPUs when training a model?

I am sure this is a very simple question for most, so apologies for the dumb question.

I am starting work on a Lambda Vector GPU workstation.

My question is: given some code to train a model (in this case using TensorFlow), how do I make sure the GPUs are utilized while training?

Thank you for the help!

Hi,

You may find the tutorials on our blog helpful here, especially since we have a short series specifically for TensorFlow.

Also, here is a one-line Python command you can run to check that TensorFlow can see your GPUs:
python -c "import tensorflow as tf; print('Num GPUs Available: ', len(tf.config.experimental.list_physical_devices('GPU')))"

Otherwise you’ll need to edit your code/model to ensure the GPU(s) are being utilized.
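
For example, here is a short sketch (assuming TensorFlow 2.x; the matmul and the tensor shapes are just placeholders) that logs device placement so you can confirm ops actually land on a GPU:

import tensorflow as tf

# Print which device each op is placed on, so GPU usage shows up in the console.
tf.debugging.set_log_device_placement(True)

gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

if gpus:
    # TensorFlow 2.x places ops on the first visible GPU automatically;
    # tf.device() just makes the placement explicit.
    with tf.device('/GPU:0'):
        a = tf.random.normal([1000, 1000])
        b = tf.random.normal([1000, 1000])
        c = tf.matmul(a, b)
    print(c.device)  # e.g. .../device:GPU:0
else:
    print("No GPU visible to TensorFlow -- falling back to the CPU.")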
If this doesn’t answer your question, let us know here, or you can open a support ticket here:

Regards,
Calvin Wallace
Linux Support Engineer

Running the code snippet shows:

Num GPUs Available: 0

How do I go about ensuring that TensorFlow can see my GPUs?

Can you see the GPU(s) when you run “nvidia-smi”?

Also, have you by any chance overridden any of the Lambda Stack-installed Python packages?
Can you share the output of “pip -v list”?
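
In the meantime, a quick check like the sketch below (assuming a recent TensorFlow 2.x install; the exact keys in the build-info dict vary by version) will tell you whether the installed wheel was even built with CUDA support:

import tensorflow as tf

# Was this TensorFlow wheel compiled with CUDA support at all?
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Build metadata; on GPU builds this includes the CUDA/cuDNN versions it was compiled against.
print("Build info:", dict(tf.sysconfig.get_build_info()))

# GPUs the runtime can see right now.
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))

If this prints "Built with CUDA: False", a CPU-only tensorflow package has most likely shadowed the Lambda Stack build, which the pip output above should confirm.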