Run Stable Diffusion models in the cloud

November 19, 2023

Run AUTOMATIC1111/stable-diffusion-webui on the cloud, for (almost) free

Running stable-diffusion-webui takes serious compute. My MacBook Pro 14 with 16GB of memory and the (now over 2 years old!) M1 Pro chip is extremely slow at anything demanding: detailing, higher step counts, and bigger models like SDXL. So here is how I've been running AUTOMATIC1111/stable-diffusion-webui on RunPod.io, renting a GPU of my choice by the hour.

HINT 💡: To save money, only rent a GPU for the computationally heavy tasks: detailers, face swappers, etc. I prefer to run most models on my M1 Pro, which takes at most a minute even for the most realistic models (e.g. EpiCRealism).

  1. Sign in to RunPod.io

  2. Add any amount you would like; I added $10, which is enough for A LOT.

  3. Go to Community Cloud (or Secure Cloud, up to you).

  4. Pick a GPU. For reference, a 3090 does most of my ADetailer/ReActor tasks in under a minute (compared to my laptop's hour).

  5. Click the "Search for a template" box.

  6. Start typing "Stable Diffusion" and click "RunPod Stable Diffusion".

  7. Customize storage to your preference, then click Continue, then Deploy. The defaults usually work for me when I use one model and some extras.

  8. Once deployed, you are ready to open the webui: click the first button, HTTP Service [Port 3000]. Click Jupyter Lab [Port 8888] to open the notebook instead.

  9. To get the output images, go to the outputs folder in Jupyter Lab.
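Since pods bill by the hour, a quick back-of-envelope calculation shows why $10 from step 2 lasts a while. The hourly rate below is an assumption, not a quoted RunPod price; check the current Community Cloud rate for your GPU before deploying.

```python
# Rough cost estimate for hourly GPU rental.
# rate_per_hour is an ASSUMED example figure, not a quoted RunPod price.
rate_per_hour = 0.45   # assumed 3090 Community Cloud rate, USD/hour
budget = 10.00         # the amount added in step 2

hours = budget / rate_per_hour
print(f"${budget:.2f} buys roughly {hours:.0f} hours of GPU time")
```

Even if the real rate is somewhat higher, a $10 top-up covers many sessions when you only spin the pod up for the heavy tasks.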
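The connect buttons in step 8 just open RunPod's HTTP proxy for your pod. As a sketch, assuming RunPod's usual `{pod-id}-{port}.proxy.runpod.net` URL pattern (verify it against the links your pod's buttons actually show), you can reconstruct those URLs yourself, which is handy for bookmarking or scripting:

```python
# Hypothetical helper: build the public URL RunPod's proxy exposes for a
# pod's HTTP service. The URL pattern is an assumption -- compare it with
# the links behind your pod's connect buttons.
def proxy_url(pod_id: str, port: int) -> str:
    return f"https://{pod_id}-{port}.proxy.runpod.net"

# "abc123" is a placeholder pod ID; use your own pod's ID.
print(proxy_url("abc123", 3000))  # the webui (first button in step 8)
print(proxy_url("abc123", 8888))  # Jupyter Lab
```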

Need help setting this up for your business? 📞Book me for a call.