How to create consistent character with Stable Diffusion in ComfyUI

Stable Diffusion Art
24 May 2024 · 12:37

TL;DR: This tutorial demonstrates how to create a consistent character using Stable Diffusion in ComfyUI. It guides viewers through downloading the workflow, installing missing custom nodes, and using a model such as Proto XL. The process involves uploading a composition image to the ControlNet Canny preprocessor and a face reference image to the IP-Adapter to capture facial features. The workflow also includes a Face Detailer to enhance small faces in the image. Customization tips are provided, and the video concludes with a summary of the steps and a link to a similar workflow for Automatic1111.

Takeaways

  • 📥 Download the workflow and models from the provided links.
  • 🔧 Load the workflow into ComfyUI by dragging and dropping it.
  • ⚙️ Install any missing custom nodes using the manager.
  • 📦 Download the Proto XL model as it works well with the workflow.
  • 🖼️ Use ControlNet to fix the composition of the image.
  • 🎨 Upload a composition image to the ControlNet preprocessor.
  • 📸 Use the IP-Adapter to copy the face from another image.
  • 🔄 Run the workflow and use the Face Detailer to fix small faces.
  • 🔄 Disable the Face Detailer for faster customization of hair color or background.
  • 🖼️ Use both an image and a prompt to control the final output.
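
The steps above are normally done by hand in the browser, but for readers who want to script the same workflow, here is a minimal sketch of queuing it through ComfyUI's local HTTP API. It assumes ComfyUI is running on its default port and that the workflow was exported with "Save (API Format)"; the file name is hypothetical.

```python
# Queue a workflow against a locally running ComfyUI instance.
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # default ComfyUI endpoint

with open("consistent_character_workflow_api.json") as f:  # hypothetical file
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    COMFYUI_URL, data=payload, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # returns a prompt_id you can poll for results
```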

Q & A

  • What is the main topic of the video?

    -The video is about creating a consistent character using Stable Diffusion in ComfyUI.

  • What are the resources mentioned in the video that the viewer can download?

    -The viewer can download the workflow, models, and other necessary components for the Stable Diffusion process from the links provided in the description.

  • What is the first step to follow according to the video?

    -The first step is to visit the provided link in the description, download the workflow, and load it in ComfyUI.

  • What is meant by 'red boxes' in the video?

    -Red boxes indicate nodes in the workflow that are missing from the local ComfyUI installation and must be installed before the workflow will run.

  • How does one address the missing custom nodes in the workflow?

    -To address missing custom nodes, go to the Manager, install the missing custom nodes, and restart ComfyUI.

  • What is the recommended model to start with in the workflow?

    -The recommended model to start with is Proto XL, as it has been tested and confirmed to work with the workflow.

  • What is the purpose of the control net in the workflow?

    -ControlNet is used to fix the composition of the image: the Canny preprocessor extracts the outline of a reference image so the character's pose and layout stay consistent.

  • What is the role of the IP-Adapter in the workflow?

    -The IP-Adapter is used to copy the face and hair from an image, ensuring that only these elements are transferred without the rest of the image.

  • What is the function of the Face Detailer in ComfyUI?

    -The Face Detailer is a custom node that detects faces in the image and performs automatic inpainting to fix them at a higher resolution.

  • How can one customize the character in the workflow?

    -Customization can be done by changing the prompt and using the IP-Adapter to control the final image, as well as using a fixed seed for the KSampler to save time.

  • What is the final step after running the workflow?

    -The final step is to right-click and save the high-resolution, fixed character image to your local storage.

Outlines

00:00

🎨 Creating Consistent Characters with ComfyUI and Stable Diffusion

The speaker introduces a tutorial on crafting consistent characters using Stable Diffusion within ComfyUI. They guide viewers to download the necessary workflow and models from links in the description, emphasizing the need to install any missing custom nodes through the ComfyUI Manager. The process involves uploading a composition image to a ControlNet preprocessor to establish the character's outline, which is then used in conjunction with an IP-Adapter to transfer specific facial features. The workflow is tested with the Proto XL model, and the speaker promises a detailed explanation of its operation.

05:14

🖼️ Enhancing Image Details with the Face Detailer in ComfyUI

This paragraph delves into the use of the Face Detailer, a feature from ComfyUI's Impact Pack that serves a similar function to the face-fixing tool in Automatic1111. The Face Detailer detects and automatically inpaints faces in images, improving the resolution and detail of small faces that Stable Diffusion might not render accurately at first. The speaker demonstrates how to use this tool in conjunction with ControlNet and the IP-Adapter, and provides tips on customizing the workflow by adjusting prompts and muting the Face Detailer for faster iterations.
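
As a rough illustration of the detect-then-fix idea described above (not the Impact Pack's actual implementation), the sketch below finds faces with OpenCV's bundled Haar cascade and upscales each crop, which is the stage that gives Stable Diffusion enough pixels to repaint the face properly. File names are hypothetical.

```python
import cv2

img = cv2.imread("character.png")  # hypothetical generated image
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(
    cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    crop = img[y:y + h, x:x + w]
    # Upscale the small face so the model has enough pixels to add detail;
    # a diffusion inpainting pass would then refine this enlarged crop.
    big = cv2.resize(crop, (512, 512), interpolation=cv2.INTER_LANCZOS4)
    # ...inpaint `big`, downscale, and paste it back over (x, y, w, h)...
```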

10:16

🔧 Customizing and Finalizing the Workflow in ComfyUI

The final paragraph focuses on customizing the workflow by changing prompts and selectively using the Face Detailer. The speaker explains how to control the final image with both an IP-Adapter image and a custom prompt, highlighting the independence and complementarity of these two conditioning methods. They also describe a trick to save rendering time by using a fixed seed in ComfyUI, which reuses the cached result if the seed hasn't changed. The tutorial concludes with a summary of the workflow and an invitation for viewers to explore a similar workflow for Automatic1111, with links provided in the description.
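
To make the caching trick concrete, here is a minimal sketch, assuming an API-format export of the workflow: the KSampler seed stays fixed while only the prompt text changes, so ComfyUI can reuse cached outputs for every node whose inputs are identical to the previous run. The node ids and field names are illustrative; check your own export for the real ones.

```python
import json

with open("consistent_character_workflow_api.json") as f:  # hypothetical file
    wf = json.load(f)

wf["3"]["inputs"]["seed"] = 42  # fixed seed: unchanged between runs
wf["6"]["inputs"]["text"] = "a woman with red hair"  # only the prompt changes

# Re-queue `wf` via POST /prompt as in the earlier sketch; ComfyUI skips
# re-computing any node whose inputs are identical to the previous run.
```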

Keywords

💡Stable Diffusion

Stable Diffusion is a type of artificial intelligence model used for generating images from textual descriptions. In the context of the video, it is the core technology that the workflow is built upon, allowing for the creation of consistent character images. The script mentions using Stable Diffusion in conjunction with ComfyUI to achieve this.

💡ComfyUI

ComfyUI is a user interface for working with AI models like Stable Diffusion. It is highlighted in the script as the platform where the workflow is loaded and operated. The video's instructions are tailored to guide users on how to use ComfyUI effectively to create consistent character images.

💡Workflow

A workflow in this video refers to a sequence of steps or operations involved in creating an image with Stable Diffusion using ComfyUI. The script details the process of downloading, installing, and executing this workflow to achieve the desired outcome of a consistent character.

💡ControlNet

ControlNet is a feature within the workflow that helps fix the composition of an image. The script describes how an image is uploaded to the ControlNet Canny preprocessor to extract its outline as an edge map, which is then used to maintain consistency in the character's depiction.
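
For a concrete sense of what the Canny preprocessor produces, here is a short OpenCV sketch that reduces a composition image to the edge map ControlNet conditions on. The thresholds are common starting values, not the workflow's exact settings, and the file names are hypothetical.

```python
import cv2

img = cv2.imread("composition.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, threshold1=100, threshold2=200)  # extract the outline
cv2.imwrite("composition_canny.png", edges)  # this is what ControlNet sees
```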

💡IP-Adapter

IP-Adapter is a component of the workflow used to copy specific elements, such as a face, from a reference image. The script explains that it uses the FaceID Plus V2 model to extract and copy only the face and hair, ensuring that these features are accurately rendered in the final image.
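
The video wires IP-Adapter FaceID Plus V2 inside ComfyUI; as a rough code illustration of the same idea on a different stack, the sketch below attaches a generic IP-Adapter to a diffusers pipeline (the plain IP-Adapter, not the FaceID variant, which additionally requires face-embedding inputs). Model ids follow the diffusers documentation examples; the reference image name is hypothetical.

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.8)  # how strongly the reference steers the output

face = load_image("reference_face.png")  # hypothetical reference image
out = pipe(prompt="a woman in a park", ip_adapter_image=face).images[0]
out.save("character.png")
```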

💡Face Detailer

Face Detailer is a custom node in ComfyUI used to enhance the quality of faces in generated images. The script describes how it detects faces and performs automatic inpainting to fix any imperfections, which is crucial for rendering small faces accurately.

💡Custom Nodes

Custom nodes are additional functionalities for ComfyUI that are not part of the base software. The script mentions installing missing custom nodes as part of the workflow setup, which are necessary for certain operations like the Face Detailer.

💡Prompt

A prompt is the textual description used to guide the AI in generating an image. The script discusses how changing the prompt can alter aspects of the generated character, such as hair color.

💡Seed

A seed in AI image generation is a number that initializes the random noise, so the same seed with the same settings produces the same image. The script suggests using a fixed seed to save time on rendering, as it allows ComfyUI to reuse cached results instead of recalculating them.
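
A tiny sketch of why a fixed seed is deterministic: the seed fully determines the initial latent noise, so the same seed with the same settings yields the same image.

```python
import torch

# Two generators seeded identically produce identical starting noise,
# which is why a fixed seed reproduces the same image.
g1 = torch.Generator().manual_seed(42)
g2 = torch.Generator().manual_seed(42)
noise_a = torch.randn(1, 4, 64, 64, generator=g1)  # SD latent-sized tensor
noise_b = torch.randn(1, 4, 64, 64, generator=g2)
print(torch.equal(noise_a, noise_b))  # True: same seed, same starting noise
```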

💡Inpainting

Inpainting is a process where selected or damaged parts of an image are regenerated or restored. The script describes how the Face Detailer performs automatic inpainting to fix small faces in the image, which are too small for the model to render correctly initially.
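
As a minimal sketch of the masked-inpainting operation the Face Detailer automates, here is the idea expressed with a diffusers inpainting pipeline rather than ComfyUI; the mask is white where the image should be regenerated, and the file names are hypothetical.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16).to("cuda")

image = load_image("character.png")  # image with a blurry small face
mask = load_image("face_mask.png")   # white where the face should be redone
fixed = pipe(prompt="a detailed photo of a woman's face",
             image=image, mask_image=mask).images[0]
fixed.save("character_fixed.png")
```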

💡Resolution

Resolution refers to the clarity and detail of an image, measured by the number of pixels. The script mentions increasing the resolution of the faces in the image to ensure that they are rendered accurately by the Stable Diffusion model.

Highlights

Introduction to creating a consistent character with Stable Diffusion in ComfyUI.

Providing resource links in the description for downloading the workflow and models.

Instructions to download and load the workflow in ComfyUI.

Addressing missing custom nodes and how to install them.

The necessity of installing ComfyUI Manager for workflow management.

Recommendation to start with the Proto XL model for beginners.

Explanation of how the workflow uses ControlNet Canny to fix image composition.

Demonstration of uploading a composition image to the ControlNet preprocessor.

Utilization of the IP-Adapter with FaceID Plus V2 for face extraction.

Clarification on the limitations of certain SDXL models with the workflow.

Description of the automatic face-fixing process using the Face Detailer.

Importance of the Face Detailer for rendering high-resolution faces.

Customization tips, including muting the Face Detailer for global composition changes.

How to change the prompt for different hair colors and other features.

Advantages of using both an image and a prompt for independent and complementary image control.

Technique of using a fixed seed in ComfyUI for faster rendering.

Final steps to save the generated high-resolution character image.

Summary of the workflow for creating consistent characters in ComfyUI.

Invitation to like, subscribe, and comment for further video instructions.