AI Hairstyles in Stable Diffusion!

Sebastian Torres
23 Jul 2023, 03:13

TL;DR: In this video, Sebastian demonstrates two methods for altering hairstyles and colors in Stable Diffusion. The first uses ControlNet, an extension that offers greater control over the final image: by sketching the desired style in white on a black layer in a photo editing app, viewers can guide the transformation. The second leverages a LoRA file trained on specific hairstyles. Sebastian also shares tips on creating color palettes and adjusting settings for optimal results, encouraging experimentation to achieve the best outcomes.


  • 🎨 Use the ControlNet extension in Stable Diffusion for more control over image output.
  • 🖌️ Start by setting up a photo editing application like Photoshop with a new black-filled layer for drawing the desired hairstyle.
  • 🌟 Use a thin brush tool in white to sketch the new hairstyle on the black layer.
  • 🖼️ In Stable Diffusion, use the Realistic Vision inpainting model for inpainting the hairstyle.
  • 📸 Load the original photo into the inpainting panel and keep the prompt simple, e.g. 'raw photo, blonde hair, high detail'.
  • 🎭 Set the inpainting area, sampling method (DPM++ SDE Karras), and sampling steps to 40.
  • 📏 Match the aspect ratio of the original image, with a maximum side length of 1024 pixels.
  • 🔄 Use a CFG scale of 3.5 and adjust denoising strength between 0.6 and 0.7.
  • 📄 For specific hairstyles, use a trained LoRA file from the Civitai website.
  • 🎨 Create a color palette in Photoshop to guide Stable Diffusion in replicating background colors.
  • 🔄 Experiment with different ControlNet settings to achieve the desired results.
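
The generation settings listed above can be gathered into one place. The following is a sketch in plain Python that simply mirrors the values quoted in the video (sampler name, steps, CFG scale, denoising range); it is not part of any Stable Diffusion API, and the helper name is made up for illustration.

```python
def inpaint_settings(denoising_strength: float = 0.65) -> dict:
    """Bundle the inpainting settings quoted in the video.

    The 0.6-0.7 denoising range is the video's recommendation; values
    outside it are still valid in Stable Diffusion, so we only note it.
    """
    if not 0.6 <= denoising_strength <= 0.7:
        print(f"note: {denoising_strength} is outside the suggested 0.6-0.7 range")
    return {
        "prompt": "raw photo, blonde hair, high detail",
        "sampler": "DPM++ SDE Karras",
        "steps": 40,
        "cfg_scale": 3.5,
        "denoising_strength": denoising_strength,
    }

settings = inpaint_settings(0.65)
```

These values are a starting point, not hard rules; the video's own bonus tip is to experiment.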

Q & A

  • What is the main topic of the video?

    -The main topic of the video is teaching how to change hairstyles and colors in Stable Diffusion using two quick and easy methods.

  • What is the first method mentioned for changing hairstyles and colors?

    -The first method uses ControlNet, an extension that can be installed in Stable Diffusion to gain more control over the final image.

  • Where can the ControlNet extension be installed from?

    -The ControlNet extension can be installed from the Extensions tab of Stable Diffusion.

  • What is the purpose of using a photo editing application like Photoshop in this process?

    -A photo editing application like Photoshop is used to create two new layers, one filled with black, and to draw the desired hairstyle on it with a thin brush tool set to white.
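
The white-on-black guide does not have to come from Photoshop; the same kind of image can be produced programmatically. A minimal stdlib sketch of the idea (the canvas size and stroke coordinates are made up for illustration):

```python
def make_scribble(width, height, strokes):
    """Create a black canvas (0) and set the given (x, y) stroke
    coordinates to white (255) -- the white-on-black guide image
    a ControlNet sketch expects."""
    canvas = [[0] * width for _ in range(height)]
    for x, y in strokes:
        canvas[y][x] = 255
    return canvas

def save_pgm(canvas, path):
    """Write the canvas as an ASCII PGM file, viewable in most image tools."""
    h, w = len(canvas), len(canvas[0])
    with open(path, "w") as f:
        f.write(f"P2\n{w} {h}\n255\n")
        for row in canvas:
            f.write(" ".join(map(str, row)) + "\n")

# A short diagonal stroke on a tiny 4x4 canvas.
guide = make_scribble(4, 4, [(0, 0), (1, 1), (2, 2)])
```

In practice the canvas should match the photo's resolution so the sketch lines up with the hair region.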

  • How does one load the photo into stable diffusion for the hairstyle change?

    -The photo is loaded into the inpainting panel of Stable Diffusion, with settings adjusted to match the aspect ratio and resolution of the original image.

  • What is the role of the ControlNet panel in this process?

    -The ControlNet panel is used to process the sketch, with settings such as the preprocessor selection, Enable, Pixel Perfect, and control weight, so that the desired changes are accurately applied.

  • What is a LoRA file and when might it be necessary to use one?

    -A LoRA (Low-Rank Adaptation) file is a small model trained on specific hairstyles. It might be necessary when the desired hairstyle or color cannot easily be generated with the first method.
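
In the AUTOMATIC1111 web UI, which the video appears to use, a LoRA is activated by adding a tag of the form `<lora:name:weight>` to the prompt. A minimal sketch of building that tag; the LoRA file name below is hypothetical:

```python
def lora_tag(name: str, weight: float = 1.0) -> str:
    """Build an AUTOMATIC1111-style LoRA activation tag for a prompt."""
    return f"<lora:{name}:{weight}>"

# "curly_hair_v1" is a made-up LoRA file name for illustration.
prompt = "raw photo, " + lora_tag("curly_hair_v1", 0.8) + ", high detail"
print(prompt)  # raw photo, <lora:curly_hair_v1:0.8>, high detail
```

Lowering the weight reduces how strongly the LoRA's trained style dominates the result.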

  • How can a color palette be created in Photoshop to assist stable diffusion?

    -A color palette can be created in Photoshop by sketching with a large brush in different colors, selecting colors from the background to paint around the hair colors so the result blends in.

  • What are the settings for the color reference image in the second tab of the controller?

    -The preprocessor is set to t2ia_color_grid, the model to the t2iadapter_color sd14v1 model, and the preprocessor resolution to 2048 pixels.
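
A color grid preprocessor works by reducing the reference image to coarse blocks of averaged color, which the color adapter model then follows. A minimal stdlib sketch of that idea (the real preprocessor's cell size and interpolation differ):

```python
def color_grid(pixels, cell):
    """Average cell x cell blocks of an image given as a 2D list of
    (r, g, b) tuples, mimicking what a color-grid preprocessor does."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [pixels[j][i]
                     for j in range(y, min(y + cell, h))
                     for i in range(x, min(x + cell, w))]
            n = len(block)
            row.append(tuple(sum(c[k] for c in block) // n for k in range(3)))
        out.append(row)
    return out

# A 2x2 red/blue image collapses to one averaged color cell.
img = [[(255, 0, 0), (0, 0, 255)],
       [(255, 0, 0), (0, 0, 255)]]
print(color_grid(img, 2))  # [[(127, 0, 127)]]
```

This also explains the bonus tip: setting the preprocessor to none skips this block reduction and lets the model see the full-resolution colors.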

  • What is the bonus tip provided at the end of the video?

    -The bonus tip is that none of the settings are set in stone, and one can experiment with turning the color grid preprocessor to none for a cleaner result, as it forces the model to use the full image without reducing it to hard pixels.

  • How can viewers engage with the content and get more information?

    -Viewers can like and subscribe to the video to support the channel, and they can ask questions or seek further clarification in the comments section.



🎨 Introduction to Changing Hairstyles and Colors in Stable Diffusion

This paragraph introduces Sebastian, the host, and sets the stage for a tutorial on altering hairstyles and colors using Stable Diffusion. Sebastian explains that there are two methods to achieve this, the first being the ControlNet extension. He guides viewers to find its installation in the Extensions tab of Stable Diffusion. ControlNet is described as an extension that offers more control over the final image produced by Stable Diffusion. The paragraph concludes with a brief mention of using a photo editing application like Photoshop to make changes to a photo's hairstyle.



💡Stable Diffusion

Stable Diffusion is an AI-based image generation and manipulation tool that allows users to modify images in various ways, such as changing hairstyles and colors. In the context of the video, it is the primary software used to demonstrate the process of altering a model's appearance, specifically focusing on hair changes.

💡ControlNet

ControlNet is an extension for Stable Diffusion that provides users with more control over the final output of the generated images. It allows for precise adjustments by using sketches or other reference images to guide the AI toward the desired result. In the video, ControlNet is used to ensure the hairstyle changes are accurately applied to the model.


💡Photoshop

Photoshop is a widely used photo editing application that allows users to manipulate images digitally. In the video, it is used to create a black and white sketch of the desired hairstyle, which is then used in conjunction with Stable Diffusion to change the hairstyle in the original photo.

💡Realistic Vision

Realistic Vision is a community-trained Stable Diffusion model (checkpoint) known for photorealistic output. The video uses its inpainting variant so that the edited photo keeps a high level of detail and authenticity after the hairstyle and color changes.

💡Pixel Perfect

Pixel Perfect is a ControlNet option that automatically matches the preprocessor resolution to the generation resolution, so the guide image is not rescaled in a way that distorts it or loses detail.

💡LoRA File

A LoRA (Low-Rank Adaptation) file is a small add-on model trained to reproduce particular concepts or styles, such as specific hairstyles. It is used when the desired hairstyle is not easily generated by the base model's standard settings.

💡Color Palette

A Color Palette refers to a set of colors chosen for a particular purpose, such as in image editing or design. In the video, a color palette is created in Photoshop to help guide Stable Diffusion in replicating the desired hair color and matching the background colors for a cohesive and natural-looking result.


💡Preprocessor

A preprocessor in the context of ControlNet is a tool or setting that prepares the input image for the model to generate the desired output. It can include operations like color grids or other transformations that influence how the AI interprets and processes the input.


💡Denoising Strength

In inpainting and img2img, denoising strength controls how far the output may depart from the original image: 0 leaves it unchanged and 1 regenerates it completely. In the video it is kept between 0.6 and 0.7 so the new hairstyle appears while the rest of the photo is preserved.

💡CFG Scale

CFG Scale (classifier-free guidance scale) is a parameter in Stable Diffusion that controls how closely the generated image follows the prompt. Lower values give the model more freedom; the video uses a relatively low value of 3.5 so the edited result stays natural.

💡Aspect Ratio

Aspect Ratio refers to the proportional relationship between the width and height of an image. Maintaining the aspect ratio is crucial in image editing to ensure that the image does not become distorted when resized. In the video, it is emphasized to match the aspect ratio of the original image when setting up the parameters in Stable Diffusion.
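
Matching the original aspect ratio under the video's 1024-pixel cap is a small calculation. A sketch in plain Python; rounding down to multiples of 8 is a common Stable Diffusion resolution requirement, not something stated in the video:

```python
def fit_dimensions(width: int, height: int, max_side: int = 1024) -> tuple:
    """Scale (width, height) so the longer side is at most max_side,
    preserving the aspect ratio and rounding down to multiples of 8."""
    longest = max(width, height)
    if longest > max_side:
        # Integer math keeps the result exact for the longer side.
        width = width * max_side // longest
        height = height * max_side // longest
    return (width // 8 * 8, height // 8 * 8)

print(fit_dimensions(3000, 2000))  # (1024, 680)
```

Images already within the cap are left at their original size (snapped to multiples of 8).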


Sebastian teaches how to change hairstyles and colors in Stable Diffusion using two methods.

ControlNet is an extension for Stable Diffusion that offers more control over the final image.

The ControlNet extension can be installed from the Extensions tab of Stable Diffusion.

Photo editing applications like Photoshop are used in conjunction with Stable Diffusion for editing.

Create two new layers in Photoshop, one filled with black, to draw the desired style on.

In Stable Diffusion, use the Realistic Vision inpainting model, available on the Civitai website, for the process.

Load the photo into the inpainting panel and keep the prompt simple for the initial setup.

Use specific settings like the DPM++ SDE Karras sampler with sampling steps set to 40 for the inpainting process.

Adjust the CFG scale and denoising strength based on the results.

ControlNet panel settings include the preprocessor selection, Enable, Pixel Perfect, and control weight.

Use a LoRA file trained on specific hairstyles for more complex style changes.

Create a color palette in Photoshop to help Stable Diffusion replicate background colors.

The second tab in the ControlNet panel is used to load color reference images and adjust settings for the color change.

Changing the prompt to specify hair color while leaving the rest as is can generate the desired result.

The video provides a bonus tip on using preprocessors set to none for cleaner results.

Experiment with different ControlNet models and preprocessor settings to achieve various results.