Stable Diffusion

Purpose of this plugin

The plugin lets you create AI-generated textures, or any other image, using Stable Diffusion technology.

As with the Llama plugin we created, generation does not require internet access: all computations run directly on the user's computer, relying on the GPU.

It is based on a C++ implementation available here:


Because the computations are heavy, the plugin has some prerequisites. Before you can use it on your machine, you must first install CUDA and then install a model. Please follow the instructions below.

CUDA Installation:
- For fast generation, it is strongly advised to have CUDA installed on your computer. Be careful! It is only compatible with certain Nvidia GPUs, because Stable Diffusion uses CUDA (cuBLAS) to perform its calculations.

If CUDA cannot be installed on your hardware, the plugin will still work, but generation falls back to the CPU and takes much longer.
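As a quick sanity check before launching the plugin, you can verify that the CUDA tooling is visible on your system. The small script below is purely illustrative (it is not part of the plugin); it only looks for the `nvidia-smi` and `nvcc` executables on the PATH, which does not guarantee the GPU itself is usable:

```python
import shutil

def check_cuda_tooling():
    """Report whether the Nvidia CUDA tools are visible on PATH.

    nvidia-smi ships with the Nvidia driver and nvcc with the CUDA
    toolkit; if both are missing, the plugin will fall back to CPU.
    """
    found = {tool: shutil.which(tool) is not None
             for tool in ("nvidia-smi", "nvcc")}
    for tool, present in found.items():
        print(f"{tool}: {'found' if present else 'not found'}")
    return found

check_cuda_tooling()
```

If either tool is reported as not found, revisit the CUDA installation steps above before expecting GPU generation.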

Model Installation:
To generate textures or images, you first have to download a model, which serves as the core of the Stable Diffusion AI. Think of the model as the AI's brain: it is what the AI relies on to generate images.

Stable Diffusion supports models in the .ckpt or .safetensors file format. Before downloading a model, make sure that it is compatible with Stable Diffusion 1.X or 2.X; Stable Diffusion XL models are not supported.
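The .safetensors format starts with an 8-byte little-endian length followed by a UTF-8 JSON header describing every tensor in the file. Reading that header is a cheap way to confirm a downloaded file really is a safetensors checkpoint before importing it. The sketch below is illustrative and not part of the plugin:

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file.

    The file begins with an 8-byte little-endian unsigned integer
    giving the length of the JSON header that follows; parsing it
    confirms the file is structurally valid without loading weights.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len).decode("utf-8"))
```

If this raises an error on your file, the download is likely corrupted or in a different format.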

You can download models from Civitai, where you can apply filters to select Stable Diffusion 1.X or 2.X models.

To begin with, we recommend using a fine-tuned model specifically created to generate tileable textures. Download from:

Basic Usage

The plugin adds three commands to the Content Browser:
- Generate Texture: right-click a folder in the Content Browser to open a window that generates an image and exports it as a texture.
- Generate Image: right-click a folder in the Content Browser to open a window that generates an image and exports it as a PNG file in the given directory.
- Unload Model: once you have finished generating textures or images for a while, this command simply frees the memory allocated for Stable Diffusion in VRAM/RAM.

When you click Generate Texture or Generate Image, a window pops up. Let's go over how to fill in the different fields to customize the generation.


Most users won't need to change the following parameters; they exist to let experienced users fine-tune their generations. Read the following resources for more information.

Advanced usage

By using the plugin's Blueprints and Unreal Engine functionality, it is possible to script texture generation from a Data Table and automatically generate multiple textures at once!
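The same batch idea can be sketched outside the engine: Unreal Data Tables can be exported as CSV, and a script could turn each row into one generation command. Everything below, the column names, the `sd` command and its flags, is an assumption for illustration, not the plugin's actual Blueprint API:

```python
import csv
import io

# Hypothetical CSV export of a Data Table: one row per texture.
# Column names are assumptions chosen for this example.
SAMPLE_TABLE = """Name,Prompt,Width,Height,Seed
BrickWall,"seamless brick wall texture",512,512,42
MossyStone,"mossy stone tiles, top-down",512,512,7
"""

def build_generation_commands(csv_text, model_path="model.safetensors"):
    """Turn each table row into a command line for a hypothetical
    stable-diffusion CLI (flags modeled on stable-diffusion.cpp's sd tool)."""
    commands = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        commands.append([
            "sd",
            "-m", model_path,          # model checkpoint
            "-p", row["Prompt"],       # text prompt
            "-W", row["Width"],        # output width
            "-H", row["Height"],       # output height
            "-s", row["Seed"],         # RNG seed for reproducibility
            "-o", f"{row['Name']}.png" # output file
        ])
    return commands

for cmd in build_generation_commands(SAMPLE_TABLE):
    print(" ".join(cmd))
```

Inside Unreal, the same loop is built visually: the Blueprint script iterates the Data Table rows and feeds each row's fields into the plugin's generation node.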

We include an example project containing the generation script and some examples of generations made with Stable Diffusion. Three models were used:

- (for textures)
- (for animals)
- (for landscapes)

The project contains:
1) The map above (First Person Map)
2) The folder StableDiffusion:
- blueprints --> contains the data table and the script to generate multiple textures at once
- textures --> the textures generated using AI prompts
- materials --> the materials displayed in the showroom