PyTorch DataParallel two-GPU hang

Hi, I have two Lambda dual workstations, each with two TITAN RTX cards. One PC works fine, but the other freezes when using DataParallel.
I got the non-working one last month.
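For reference, even a minimal DataParallel forward pass like the sketch below hangs on the affected machine (the model and tensor sizes here are arbitrary placeholders, not my real workload):

```python
import torch
import torch.nn as nn

# Minimal DataParallel smoke test: a tiny linear model replicated
# across all visible GPUs (falls back to CPU if none are available).
model = nn.DataParallel(nn.Linear(10, 5))
x = torch.randn(8, 10)

if torch.cuda.is_available():
    model = model.cuda()
    x = x.cuda()

# On the broken machine, this forward call never returns.
out = model(x)
print(out.shape)  # torch.Size([8, 5])
```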

I tried the solution in this, but got an error message.

I also tried disabling the IOMMU using the methods described there, but found that the IOMMU is already disabled by default on this machine.
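One rough way to confirm the IOMMU state from userspace (assuming a Linux box; the exact flag names vary by platform and firmware) is to look for IOMMU-related options on the kernel command line:

```python
# Rough check: look for iommu-related flags on the kernel command line.
# Absence of a flag means the firmware/kernel default is in effect.
with open("/proc/cmdline") as f:
    tokens = f.read().split()

iommu_flags = [tok for tok in tokens if "iommu" in tok.lower()]
if iommu_flags:
    print("IOMMU flags:", " ".join(iommu_flags))
else:
    print("No explicit IOMMU flag on the kernel command line")
```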

Are there any other solutions?


Hi! Since you're a Lambda customer, could you please email us with your issue? We should be able to fix it quickly.