Stable Diffusion in architecture - push your renders using AI
- Jonas Grumann
- Jan 15
- 5 min read
Things in AI are moving fast and it's hard to know where to start. In this tutorial, we will guide you through using Stable Diffusion's user interface (UI) and its inpainting feature to enhance visualization presentations by seamlessly integrating additional details.
Step 1: Installing and Starting Stable Diffusion
Prepare Your System:
Ensure your system meets the minimum requirements:
GPU: A modern NVIDIA GPU with at least 4GB VRAM (e.g., RTX 3070).
RAM: At least 8GB.
Operating System: Windows, macOS, or Linux.
Download Automatic1111’s Web UI (a popular Stable Diffusion frontend):
Visit the Automatic1111 GitHub repository.
Clone the repository or download the ZIP file:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
Install Python and Git:
Download and install Python 3.10.6.
During installation, enable the "Add Python to PATH" option.
Install Git from git-scm.com.
Set Up Stable Diffusion:
Locate the webui-user.bat file in the repository folder (on macOS or Linux, use the webui.sh script instead).
Double-click the BAT file (or run the shell script from a terminal) to start the Web UI.
The necessary dependencies, including Python packages, will be installed automatically during the first run.
Run the Web UI:
Once the setup is complete, the Web UI will launch.
Open your browser and navigate to http://127.0.0.1:7860/.
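If you later want to script the Web UI instead of clicking through it, you can add --api to COMMANDLINE_ARGS in webui-user.bat to expose a local HTTP API. The minimal Python sketch below just checks that the UI is reachable at its default address; it assumes you have the requests package installed.

import requests

# Default local address where the Web UI is served.
BASE_URL = "http://127.0.0.1:7860"

# Simple reachability check: if this succeeds, the Web UI is up and running.
response = requests.get(BASE_URL, timeout=10)
print("Web UI is reachable" if response.ok else f"Unexpected status: {response.status_code}")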

Step 2: Choosing and installing the right model
This part is subjective. Models are essentially different "brains" that you can install into Stable Diffusion. They have been trained on different data, so they yield different results. I work with RealisticVision and I'm pretty happy with it. You can get it for free from Civitai.
Once you have downloaded it, place the file in your Stable Diffusion install directory under models/Stable-diffusion.
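If you want to make sure the file ended up in the right place, here's a small sanity-check sketch; the install path and checkpoint filename are only examples, so swap in your own.

from pathlib import Path

# Illustrative paths - adjust to your own install directory and downloaded filename.
install_dir = Path("C:/stable-diffusion-webui")
checkpoint = install_dir / "models" / "Stable-diffusion" / "realisticVision.safetensors"

if checkpoint.exists():
    size_gb = checkpoint.stat().st_size / 1e9
    print(f"Found checkpoint: {checkpoint.name} ({size_gb:.1f} GB)")
else:
    print("Checkpoint not found - double-check the folder and filename.")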
Step 3: Preparing Your Architecture Rendering
Export the Rendering:
Save your architectural visualization as a high-resolution PNG or JPG file.
Ensure the areas you want to inpaint (add or modify people/objects) are clear and accessible.
You can see the render we will work with below. I decided to create an exterior render to showcase the AI's capabilities, since I think it works particularly well with nature assets, given their chaotic character.
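If you'd like to sanity-check the export before dragging it into the UI, the short Pillow sketch below (the filename is just an example) prints the resolution and re-saves the render as a lossless PNG.

from PIL import Image

# Example filename - replace with the render exported from your visualization tool.
render = Image.open("exterior_render.jpg")
print(f"Resolution: {render.width} x {render.height}")

# Re-save as lossless PNG so the inpainting pass doesn't inherit JPG compression artifacts.
render.save("exterior_render.png")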

Step 4: Using the Stable Diffusion UI for Inpainting
It's time to start having fun! Click on the img2img tab to access the inpainting page: img2img means that we start from an image and generate a new image from it.
On the page that pops up, click on the "Inpaint" button under the big white canvas.

Nice! Your workspace is all set up!
Now navigate to your image using your file explorer and drag it onto the canvas on the left.
Once you see your image in the canvas, click on the little marker icon on the top right of the canvas:

You're now in "paint" mode: start drawing over the parts of your render that you want to work on. AI generation is pretty hard on the GPU, so it can't handle huge areas at once; even if you select a big part of your image, the system will scale it down to the size you specify in the options under the image, do its calculations, and then stretch the result to match the original size. This is not a big issue at this stage, since we're just trying to get a feel for what we want the finished image to look like. In our example, I selected the whole thing just to see what the AI can imagine.

Scroll down a little to where all the options are. You could write a book about them, so we won't go into detail, but here's what I usually do:
Set "Resize mode" to "Crop and resize"
"Mask mode" to "inpaint masked" - this means we want the AI to change the painted part of the image
"Masked content" to "Original" - this means that the AI will see the part of the image we painted and will start from that
"Sampling method" - you can try different ones here, I like to use "Euler A" and Karras for the Scheduler Type
Set the width to something small like 768, and the height so that it respects your original image's aspect ratio.
The 4 settings that I have left out are the ones you're going to change the most:
The "Prompt" textbox at the top of the page. Here you should describe what you want the AI to generate, in my case I wrote: "modern cabin, conifers, warm sunlight, river, rocky beach, photorealistic, photorealistic, detailed". Optionally, you can add a negative prompt too in the "negative prompt" text field. These are things that the AI will try to avoid, for example you could write "3d, cgi, anime, cartoon" to exclude a certain type of style.
"Denoising strength" - this decides how creative or invasive the AI will be. Keep it at 0 and it'll yield the exact same image you gave it, put it to 1 and it will generate something completely different.
The CGF Scale decides how much the AI will listen to your prompt and how much it will invent. I find that keeping this around the default 7 is pretty good.
"Inpaint area": "Only masked" means that the AI will only see the part you painted so it doesn't know about what's around it. That's less heavy on the GPU but the results won't blend as well in the original image while "whole image" means the AI sees the whole image.
Denoising strength from 0 to 1, with 0.25 intervals. As you can see the first image is the exact one we had while the last is something completely different. Playing with the denoising strength is a good way to "brainstorm" and see where you'd like to take your render.
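If you'd rather drive these same settings from a script, the Web UI exposes them through its img2img endpoint when launched with the --api flag mentioned in Step 1. The sketch below is an outline rather than a definitive recipe: the filenames are placeholders and the payload field names reflect the Automatic1111 API as I understand it, so double-check them against your installation.

import base64
import requests

def encode_image(path):
    # Encode an image file as base64 for the API payload.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Placeholder filenames; the mask should be white where you "painted" and black elsewhere.
payload = {
    "init_images": [encode_image("exterior_render.png")],
    "mask": encode_image("mask.png"),
    "prompt": "modern cabin, conifers, warm sunlight, river, rocky beach, photorealistic, detailed",
    "negative_prompt": "3d, cgi, anime, cartoon",
    "denoising_strength": 0.4,   # how creative/invasive the AI gets
    "cfg_scale": 7,              # how closely it follows the prompt
    "sampler_name": "Euler a",
    "steps": 30,
    "width": 768,                # keep this small and match your render's aspect ratio
    "height": 512,
    "inpainting_fill": 1,        # 1 = start from the original masked content
    "inpaint_full_res": True,    # True = "only masked" inpaint area
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
result = response.json()

# The API returns base64-encoded images; save the first one next to the script.
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))

Looping the same request over denoising_strength values of 0.25, 0.5, 0.75 and 1 reproduces the kind of brainstorming sweep shown above.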
Now that we've got a general idea, we can start fine-tuning our image. From the images I created I can already tell that the smooth rocks look better than the ones I had in my render. Also, the trees seem to need quite a bit of work. Let's start with the rocks:
I'll remove the area I painted by clicking on the eraser icon on the canvas, then paint only a small portion of rocks and change the prompt to "Rocky beach, river, shrubs, photorealistic". I only select a small part because we can't keep a high resolution while generating and we'd lose quality.

I drag the Denoising strength down to 0.4 and click generate.
These rocks already look better than what I had before!


If you're not happy with what you're getting, click "generate" again or play with the settings until you're happy.
Once you think it looks good, you can drag the image from the right canvas (the generated one) onto the canvas on the left, effectively replacing it. By doing this we're saying "I'm happy with it and I want to keep working from this new image".
Remember that all generated images are stored in the "outputs" directory inside your Stable Diffusion install directory, so if you make a mistake you can always go back and start from there.
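Since those files pile up quickly, a small sketch like the one below (the path is illustrative) can help you find the most recent results; the Web UI normally sorts them into dated subfolders.

from pathlib import Path

# Illustrative path - img2img results usually live under the outputs folder, in dated subfolders.
outputs = Path("C:/stable-diffusion-webui/outputs/img2img-images")
latest = sorted(outputs.rglob("*.png"), key=lambda p: p.stat().st_mtime)[-5:]
for image_path in latest:
    print(image_path)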
I'll do that and paint over a new part of the rocky beach:
Below are the original image and the one with the AI-generated beach. As you can see, it looks much better!
From here on, all you have to do is rinse and repeat for other parts of the image. Here's what I came up with; I think it looks much better than the original.

Note: this tutorial covers only inpainting, but there's much more to Stable Diffusion and AI. Also, we approached this by masking and rendering small parts over and over. This works, but it's not ideal; there is a way to generate much bigger areas at once, but we'll discuss that in another post.
Thanks for reading!