Stable Diffusion Inference Server Deployment

Hi, do you have a template for deploying Stable Diffusion to a Lambda GPU instance as an inference server that can be scaled (i.e., will distribute the workload across multiple GPUs) if needed?

I put together a tutorial that you might find helpful → How do I get started generating images from prompts?

Dream Factory, used in the tutorial, works well on multi-GPU instances.


Thank you. I'm looking for something more like a ready-to-deploy code sample that creates an inference server with built-in load balancing, so I can call it as an API to generate images and have it handle concurrent requests.
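I'm not aware of an official template, but here is a minimal sketch of the dispatching idea: one worker per GPU, each pulling from its own queue, with incoming prompts assigned round-robin. The device count, the `submit` helper, and the stubbed model call are all assumptions; on a real instance you would load one Stable Diffusion pipeline per `cuda:N` device inside each worker.

```python
import itertools
import queue
import threading

NUM_GPUS = 2  # assumption: two GPUs on the instance


def gpu_worker(device_id, jobs):
    """Consume jobs from this GPU's queue and run inference on its device."""
    while True:
        prompt, result = jobs.get()
        if prompt is None:  # shutdown sentinel
            break
        # Stub for the real pipeline call, e.g.:
        #   image = pipelines[device_id](prompt).images[0]
        result["image"] = f"image for {prompt!r} on cuda:{device_id}"
        result["done"].set()


# One queue and one worker thread per GPU.
queues = [queue.Queue() for _ in range(NUM_GPUS)]
threads = [
    threading.Thread(target=gpu_worker, args=(i, q), daemon=True)
    for i, q in enumerate(queues)
]
for t in threads:
    t.start()

rr = itertools.cycle(range(NUM_GPUS))  # round-robin device selector


def submit(prompt):
    """Dispatch a prompt to the next GPU's queue; returns a result handle."""
    result = {"done": threading.Event()}
    queues[next(rr)].put((prompt, result))
    return result


# Concurrent requests: submit several prompts, then wait for all of them.
results = [submit(p) for p in ["a cat", "a dog", "a boat"]]
for r in results:
    r["done"].wait()

# Shut the workers down cleanly.
for q in queues:
    q.put((None, None))
```

In a real deployment `submit` would sit behind an HTTP endpoint (e.g. a FastAPI route) so clients can POST prompts and poll or await results; the queue-per-GPU structure is what gives you concurrency without two requests contending for the same device.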