Is it possible to run Stable Diffusion Automatic 1111 on my own computer, establish an SSH connection to a Lambda server, and then use my local PC as usual while offloading the GPU computation to Lambda?
In other words, I'd keep working on my PC normally; the graphics card would essentially just be a bit further away.
There are no tutorials available online for this.
Installing Stable Diffusion on a Lambda server is nearly impossible.
Using the Lambda demos doesn't make sense either: when I attempt to download add-ons or the like, an error message appears stating, "You are not allowed to use the demo, these are not original files."
Yes, it is possible. Installing Stable Diffusion in the cloud is the easier way, though; instructions are available and many people have done it.
The Automatic 1111 implementation involves somewhat more finicky scripts; another option is running a Docker image in the cloud.
The important thing to consider is the speed of your desktop and your network connection when doing remote GPU processing, since the GPU needs both the data and constant communication. Distributed and parallel GPU processing has been around for two decades, ideally over a tightly coupled, low-latency network.
You would need to modify Automatic 1111 to support this.
PyTorch has added support for this via:
Getting Started with Distributed RPC Framework — PyTorch Tutorials 2.0.1+cu117 documentation
Combining Distributed DataParallel with Distributed RPC Framework — PyTorch Tutorials 1.8.1+cu102 documentation
Distributed RPC Framework — PyTorch master documentation
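To give a feel for what the RPC framework in those tutorials looks like, here is a minimal sketch: a "client" process ships work to a "gpu_server" process that could live on another machine. The names (remote_square, gpu_server) and the localhost address are illustrative assumptions; in a real remote setup MASTER_ADDR would be the Lambda instance's address and the worker function would move tensors to its GPU.

```python
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def remote_square(x):
    # Executes on the server process; a real GPU worker would call
    # x.cuda() here before computing and return the result to the client.
    return x * x

def run(rank, world_size):
    # Rendezvous address; on a real deployment this would be the remote
    # host's IP, not localhost (assumption for this self-contained demo).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    name = "client" if rank == 0 else "gpu_server"
    rpc.init_rpc(name, rank=rank, world_size=world_size)
    if rank == 0:
        # Synchronous remote call: the tensor is serialized, sent to
        # gpu_server, squared there, and the result is shipped back.
        result = rpc.rpc_sync("gpu_server", remote_square,
                              args=(torch.tensor([3.0]),))
        print(result)
    rpc.shutdown()  # blocks until all RPCs have completed

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2, join=True)
```

This is exactly the kind of plumbing Automatic 1111 lacks today: its pipeline assumes the GPU is local to the process, so every model call would need to be routed through an RPC boundary like the one above.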
I’m curious to know what you mean by this. I’m able to install and run AUTOMATIC1111 on Lambda GPU Cloud instances with no issues.
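For what it's worth, the usual workflow is to run the web UI on the instance and forward its port over SSH, so the browser on your local PC talks to a GPU that is "a bit further away", which is essentially what the original question asks for. The commands below are a sketch: the repository URL and default port 7860 are standard for AUTOMATIC1111, but the username and <instance-ip> placeholder depend on your Lambda setup.

```shell
# On the Lambda instance: clone and start the web UI
# (the Gradio UI listens on port 7860 by default).
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
./webui.sh

# On your local PC: forward local port 7860 to the instance.
# Replace <instance-ip> with the address from the Lambda dashboard.
ssh -L 7860:localhost:7860 ubuntu@<instance-ip>

# Then open http://localhost:7860 in your local browser; the UI renders
# locally while all GPU computation happens on the Lambda instance.
```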
Check out the dstack open-source project. It allows you to run any LLM in any cloud (incl. Lambda) either using the CLI or Python API.
There are many examples on the website.
Disclaimer: I’m the core developer at dstack.