TensorFlow 1.13 and Theano 1.0.4 not using GPU

I created a virtualenv using Lambda Stack with TensorFlow 1.13. Running:
tf.test.is_gpu_available() returns False and prints a trace that includes:
"failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error", which is not terribly helpful.
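For reference, this is roughly the check I'm running (a minimal sketch; the device_lib call is just an extra diagnostic to see which devices TensorFlow can actually enumerate):

```python
# Minimal GPU visibility check for TensorFlow 1.13 (sketch).
import tensorflow as tf
from tensorflow.python.client import device_lib

print("TensorFlow version:", tf.__version__)

# Lists every device TF can see; a working setup should show a /device:GPU:0 entry.
print(device_lib.list_local_devices())

# Returns False here, and the cuInit error shows up in the log output.
print("GPU available:", tf.test.is_gpu_available())
```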

A similar check for Theano shows it's not using the GPU either.
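The Theano check is essentially the snippet from the "Using the GPU" tutorial: compile a small function and see whether any GPU ops end up in the graph (a sketch; it assumes device=cuda is requested via THEANO_FLAGS or .theanorc):

```python
# Quick Theano GPU check (sketch, adapted from the "Using the GPU" tutorial).
import numpy
import theano
from theano import function, shared, tensor

print("Theano version:", theano.__version__)
print("Configured device:", theano.config.device)  # 'cuda' if the GPU was requested

x = shared(numpy.random.rand(1000).astype(theano.config.floatX))
f = function([], tensor.exp(x))

# If the GPU is actually used, the compiled graph contains Gpu* ops.
ops = [type(node.op).__name__ for node in f.maker.fgraph.toposort()]
print("Used the GPU" if any(name.startswith("Gpu") for name in ops) else "Used the CPU")
```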

Any tips?

So apparently TF 1.13 does not work with CUDA 10.1.
I'm guessing 10.1 is not the current Lambda Stack version. Does this mean I have another Ubuntu auto-update somewhere upgrading CUDA? If so, how can I find it?

Update: I uninstalled and reinstalled Lambda Stack. It did install CUDA 10.1 and TensorFlow 1.13. tensorflow.test.is_gpu_available() still returns False, and the TensorFlow GPU requirements page lists support only for CUDA 10.0.
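One way to confirm the version mismatch without leaving Python is to ask the CUDA runtime library directly (a sketch using ctypes; the bare library name libcudart.so is an assumption, and you may need the full versioned filename on your system):

```python
# Ask the installed CUDA runtime which version it is (sketch).
# Assumes libcudart.so is on the loader path; you may need e.g. "libcudart.so.10.1".
import ctypes

cudart = ctypes.CDLL("libcudart.so")
version = ctypes.c_int()
cudart.cudaRuntimeGetVersion(ctypes.byref(version))

# CUDA encodes versions as major*1000 + minor*10, e.g. 10010 -> 10.1
major, minor = version.value // 1000, (version.value % 1000) // 10
print("CUDA runtime: %d.%d" % (major, minor))  # TF 1.13 wheels expect 10.0
```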

Anyone else run into this problem? Why is Lambda Stack shipping versions that don't seem compatible? Am I missing something?

What options do I have now?

I'm seeing this same issue in the latest Lambda Stack. Python 2 seems to have GPU support in TensorFlow, but Python 3 does not. I've been trying to set up Horovod for distributed training, and it only works with the latest Lambda Stack and Python 2.7. It would be awfully nice if Lambda would recompile and bump the minor version to fix this, since it looks like someone left out a config flag.

In the meantime, I've taken the route of building tensorflow-gpu from source for Python 3 and overwriting the stock package with a wheel of my own via pip3.

I am having a similar problem. I wrote TensorFlow code on an AWS instance with v1.12, and like the idea of using v1.13 on my in-house laptop, but cannot get it to engage the GPU. I'm using tf.test.gpu_device_name() to check whether the GPU is in use, and I can see that training times are roughly 100x normal. I have Python 2.7.15, CUDA 10.1, and NVIDIA driver 418.43 (per nvidia-smi), all on an Ubuntu 18.04 system. Is there a solution that can get me TensorFlow on a GPU with the above software?
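In case it helps anyone debugging the same thing, another way to see where ops actually land in TF 1.x is device-placement logging (a sketch; it uses the graph/session API, so it applies to v1.12/v1.13-style code):

```python
# Log where each op is placed (CPU vs GPU) in TensorFlow 1.x (sketch).
import tensorflow as tf

print("GPU device name:", tf.test.gpu_device_name() or "<none found>")

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)

# log_device_placement prints the device assigned to every op when the graph runs.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```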