Stable Diffusion
Purpose of this plugin
The plugin allows the user to create AI-generated textures, or any other image, using Stable Diffusion technology.
Like the Llama plugin that we created, generation does not need internet access to work: the computations are carried out directly on the user's computer, on the GPU.
It is based on a C++ implementation available here: https://github.com/leejet/stable-diffusion.cpp.
Requirements
Due to the nature of the plugin and how heavy the calculations are, there are some requirements. Before you can use the plugin, you should first install CUDA on your machine and then install a model. Please follow the instructions below.
CUDA Installation:
- For fast generation, it is strongly advised to have CUDA installed on your computer. Be careful! It is only compatible with certain Nvidia GPUs, because Stable Diffusion uses CUDA (cuBLAS) to perform its calculations.
- Check whether your GPU (graphics card) is compatible with CUDA: https://developer.nvidia.com/cuda-gpus
- If your GPU is supported, go to https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64 and choose your preferred version
- Run the installer and follow the instructions
- If CUDA cannot be installed on your hardware, the plugin will still work, but since generation will then run on the CPU, it will take a long time. (To verify your CUDA setup, see the sanity check below.)
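If you want to check that CUDA is set up correctly before launching Unreal, the small standalone program below (an optional sanity check, not part of the plugin) lists the CUDA devices the runtime can see. It only uses the CUDA runtime API; compile it with nvcc, or with any C++ compiler linked against cudart.

    // cuda_check.cpp - optional sanity check, not part of the plugin.
    // Build: nvcc cuda_check.cpp -o cuda_check
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            // No usable CUDA device: the plugin will fall back to the CPU.
            std::printf("No CUDA device found: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s (compute capability %d.%d, %.1f GiB VRAM)\n",
                        i, prop.name, prop.major, prop.minor,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

If the program lists your GPU, the plugin should be able to use it; the VRAM size also gives you an idea of how large a model you can load.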
Model Installation:
To be able to generate textures or images, you first have to download a model, which serves as the core of the Stable Diffusion AI. Think of the model as the AI's brain: it is what Stable Diffusion relies on to generate images.
Stable Diffusion supports models in the .ckpt or .safetensors file format. Before downloading a model, make sure that it is compatible with Stable Diffusion 1.X or 2.X. Stable Diffusion XL models are not supported.
You can download models from Civitai https://civitai.com/models. You can then apply filters to select Stable Diffusion 1.X or 2.X models.
To begin with, we recommend using a fine-tuned model specifically created to generate tileable textures. Download from: https://civitai.com/api/download/models/18736
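Downloads occasionally get corrupted or mislabeled, and a broken model file will only fail once you try to generate. As an optional, purely illustrative check (not part of the plugin), the sketch below verifies that a file at least looks like a .safetensors file: the format starts with an 8-byte little-endian header length, followed by that many bytes of JSON describing the tensors.

    // safetensors_check.cpp - optional, illustrative download check.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::cerr << "usage: safetensors_check <model.safetensors>\n";
            return 1;
        }
        std::ifstream file(argv[1], std::ios::binary);
        if (!file) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

        uint64_t header_len = 0; // assumes a little-endian machine (x86/x64)
        file.read(reinterpret_cast<char*>(&header_len), sizeof(header_len));
        if (!file || header_len == 0 || header_len > (100ull << 20)) {
            std::cerr << "implausible header length: not a .safetensors file?\n";
            return 1;
        }
        std::string header(header_len, '\0');
        file.read(&header[0], static_cast<std::streamsize>(header_len));
        if (!file || header.front() != '{') {
            std::cerr << "header is not JSON: the file looks corrupt\n";
            return 1;
        }
        std::cout << "looks like a valid .safetensors header ("
                  << header_len << " bytes of JSON)\n";
        return 0;
    }

Note that .ckpt files are pickle-based PyTorch checkpoints and cannot be checked this way; when in doubt, prefer .safetensors.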
Basic Usage
The plugin adds three commands to the Content Browser:
- Generate Texture: right-click a folder in the Content Browser to open the window that generates an image and exports it as a texture
- Generate Image: right-click a folder in the Content Browser to open the window that generates an image and exports it as a PNG file in the given directory
- Unload Model: once a texture or an image has been generated, you might not want to generate more for a while; clicking this command simply frees the VRAM/RAM allocated to Stable Diffusion
When you click Generate Texture or Generate Image, a window pops up. Let's go through the different fields you can fill in to customize the generation.
Prompt: What you want to draw
Negative Prompt: Everything that should NOT be drawn (not relevant when generating textures; use it when generating other types of images)
Texture Name: The name of the texture
Export as Image: Tick the box if you want to export the texture as a PNG image (then give the name and the directory where the PNG should be saved)
Steps: When a texture is being generated, the model starts from a blurry mix of random pixels (noise) and then iterates many times, modifying the image at each step until the final output. This parameter sets how many of these steps Stable Diffusion performs; increasing it can improve image quality at the cost of generation time (see the toy sketch after this list).
Width and Height: The width and the height of the picture. Be careful: many models are trained on pictures of a specific height and width, which is indicated when you download the model. Changing these parameters might cause Stable Diffusion to crash or produce weird pictures.
Unload After Generation: Executes the "Unload Model" command automatically after the generation.
Seed: You can specify a seed to generate the image. Two generations with the same prompt and the same seed will generate the exact same output. -1 means "random seed".
NThreads: You can specify the number of physical cores (CUDA or CPU) used during generation. -1 means "total number of cores available".
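To build intuition for Steps and Seed, here is a deliberately oversimplified C++ toy, not real diffusion: actual Stable Diffusion denoises latents with a neural network, while this stand-in merely nudges random pixels toward a constant. It only illustrates why more steps mean more refinement passes and why a fixed seed reproduces the exact same output.

    // steps_seed_toy.cpp - conceptual toy only; NOT how Stable Diffusion
    // works internally (it denoises latents with a U-Net).
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int64_t seed  = 42; // same seed => bit-identical output
        const int     steps = 20; // number of refinement passes

        // Seed -1 in the plugin means: pick a random seed instead.
        std::mt19937_64 rng(static_cast<uint64_t>(seed));

        // Start from the "blurry mix of random pixels" (pure noise).
        std::vector<float> image(512 * 512);
        std::uniform_real_distribution<float> noise(0.0f, 1.0f);
        for (float& px : image) px = noise(rng);

        // Each step moves the image a little closer to a target. In real
        // Stable Diffusion, the per-step direction comes from the model.
        const float target = 0.5f;
        for (int s = 0; s < steps; ++s)
            for (float& px : image)
                px += (target - px) / static_cast<float>(steps - s);

        std::printf("pixel[0] after %d steps: %f\n", steps, image[0]);
        return 0;
    }

Run it twice with the same seed and you get the same pixels; change the seed or the step count and the output changes, which is exactly the behavior described above.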
-----------------------------------------------------------------------------------------------------------
Most users won't need to change the following parameters; they are here to let experienced users tweak their generations. Read the following resources for more information.
CFG Scale: More details here: https://automationswitch.com/cfg-scale-in-stable-diffusion/
Scheduler and Sampling Method: More details here: https://stable-diffusion-art.com/samplers/
Advanced Usage
By using the plugin's Blueprints and Unreal Engine functionalities, it is possible to script texture generation from a Data Table to automatically generate multiple textures at once!
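The example project does this with Blueprints, but the same idea expressed in Unreal C++ looks roughly like the sketch below. The row struct FTextureGenRow and the call GenerateTextureFromPrompt are hypothetical stand-ins for your Data Table's columns and the plugin's generation node; only the UDataTable iteration itself is standard Unreal Engine API.

    // Hypothetical sketch: batch texture generation driven by a Data Table.
    // FTextureGenRow and GenerateTextureFromPrompt are stand-ins, not real
    // plugin API; adapt them to your table and the plugin's Blueprint nodes.

    // --- in a header (USTRUCTs must live in headers) ---
    #include "CoreMinimal.h"
    #include "Engine/DataTable.h"
    #include "TextureGenRow.generated.h"

    USTRUCT(BlueprintType)
    struct FTextureGenRow : public FTableRowBase {
        GENERATED_BODY()

        UPROPERTY(EditAnywhere) FString Prompt;
        UPROPERTY(EditAnywhere) FString TextureName;
        UPROPERTY(EditAnywhere) int32   Steps  = 20;
        UPROPERTY(EditAnywhere) int32   Width  = 512;
        UPROPERTY(EditAnywhere) int32   Height = 512;
    };

    // --- in a .cpp: loop over the rows, one generation per row ---
    void GenerateAllTextures(UDataTable* Table) {
        TArray<FTextureGenRow*> Rows;
        Table->GetAllRows<FTextureGenRow>(TEXT("BatchTextureGen"), Rows);
        for (const FTextureGenRow* Row : Rows) {
            UE_LOG(LogTemp, Display, TEXT("Generating '%s' from prompt: %s"),
                   *Row->TextureName, *Row->Prompt);
            // GenerateTextureFromPrompt(*Row); // hypothetical plugin call
        }
    }

In the shipped example project, the equivalent loop lives in the blueprints folder, next to the Data Table it reads.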
We include an example project containing the generation script and some examples of generations made using Stable Diffusion. Three models were used:
- https://civitai.com/models/15873/texture-diffusion (for textures)
- https://civitai.com/models/204704/animal-faces-hq-v2-afhq-debias-estimation-loss-comparison-test (for animals)
- https://civitai.com/models/85137/landscape-realistic-pro (for landscapes)
The project contains:
1) A demo map (First Person Map)
2) The folder StableDiffusion:
- blueprints --> contains the data table and the script to generate multiple textures at once
- textures --> the textures generated using AI prompts
- materials --> the materials displayed in the show room
Download project here:
https://mega.nz/file/rqAGiIrT#OCVxO7HYFOmiaolAkWkUOVK6_wgEIuqeWRbh-hKScX0