How to create consistent character with Stable Diffusion in ComfyUI
TLDR: This tutorial demonstrates how to create a consistent character using Stable Diffusion in ComfyUI. It guides viewers through downloading the workflow, installing missing custom nodes, and using models like Proto XL. The process involves uploading a composition image to ControlNet Canny and an IP adapter image to capture facial features. The workflow also includes a FaceDetailer to enhance small faces in the image. Customization tips are provided, and the video concludes with a summary of the steps and a link to a similar workflow for Automatic1111.
Takeaways
- 📥 Download the workflow and models from the provided links.
- 🔧 Load the workflow into ComfyUI by dragging and dropping it.
- ⚙️ Install any missing custom nodes using the manager.
- 📦 Download the Proto XL model as it works well with the workflow.
- 🖼️ Use ControlNet to fix the composition of the image.
- 🎨 Upload a composition image to the ControlNet preprocessor.
- 📸 Use IP Adapter to copy the face from another image.
- 🔄 Run the workflow and use the FaceDetailer to fix small faces.
- 🔄 Disable the FaceDetailer for faster iteration when changing hair color or background.
- 🖼️ Use both an image and a prompt to control the final output.
Q & A
What is the main topic of the video?
-The video is about creating a consistent character using Stable Diffusion in ComfyUI.
What are the resources mentioned in the video that the viewer can download?
-The viewer can download the workflow, models, and other necessary components for the Stable Diffusion process from the links provided in the description.
What is the first step to follow according to the video?
-The first step is to visit the provided link in the description, download the workflow, and load it in ComfyUI.
What is meant by the 'red boxes' in the video?
-Red boxes indicate missing custom nodes in the workflow that must be installed for the setup in ComfyUI to work.
How does one address the missing custom nodes in the workflow?
-To address missing custom nodes, open the manager, install the missing custom nodes, and restart ComfyUI.
What is the recommended model to start with in the workflow?
-The recommended model to start with is Proto XL, as it has been tested and confirmed to work with the workflow.
What is the purpose of the control net in the workflow?
-The control net is used to fix the composition of the image, allowing the system to extract the outline for consistent character creation.
What is the role of the IP adapter in the workflow?
-The IP adapter is used to copy the face and hair from an image, ensuring that only these elements are transferred without the rest of the image.
What is the function of the FaceDetailer in ComfyUI?
-The FaceDetailer is a custom node that detects faces in the image and performs automatic inpainting to fix them at a higher resolution.
How can one customize the character in the workflow?
-Customization can be done by changing the prompt and using the IP adapter to control the final image, as well as using a fixed seed for the K sampler to save time.
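The interplay of the two conditioning signals can be sketched roughly: the IP adapter's image features are added to the text prompt's features at an adjustable weight. The additive form below is a deliberate simplification (the real adapter injects image features into the model's cross-attention layers), and the function name and weight values are illustrative.

```python
# Rough sketch of combining text and image conditioning. In the actual
# IP adapter, image features enter the model's cross-attention layers;
# here they are simply added per-dimension at a weight, mirroring the
# node's weight slider. Purely illustrative.

def blend_conditioning(text_feat, image_feat, ip_weight=0.8):
    """Add weighted image features to the text features."""
    return [t + ip_weight * i for t, i in zip(text_feat, image_feat)]

# A prompt-only feature plus an image-only feature at half weight:
cond = blend_conditioning([1.0, 0.0], [0.0, 1.0], ip_weight=0.5)
print(cond)  # [1.0, 0.5]
```

Lowering the weight lets the prompt dominate (e.g. to change hair color), while raising it pulls the result closer to the reference face.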
What is the final step after running the workflow?
-The final step is to right-click and save the high-resolution, fixed character image to your local storage.
Outlines
🎨 Creating Consistent Characters with ComfyUI and Stable Diffusion
The speaker introduces a tutorial on crafting consistent characters using Stable Diffusion within ComfyUI. They direct viewers to download the necessary workflow and models from links in the description, emphasizing the need to install any missing custom nodes through the ComfyUI Manager. The process involves uploading a composition image to a ControlNet preprocessor to establish the character's outline, which is then used in conjunction with an IP adapter to transfer specific facial features. The workflow is tested with the Proto XL model, and the speaker promises a detailed explanation of its operation.
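The "extract the outline" step can be illustrated in miniature. The sketch below is a bare gradient-magnitude pass with Sobel kernels, not the full Canny algorithm the ControlNet preprocessor runs (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding); the function name and threshold are placeholders.

```python
# Minimal sketch of the edge-extraction idea behind a Canny
# preprocessor: compute gradient magnitude with Sobel kernels and
# threshold it into a binary outline. ComfyUI's Canny node layers
# smoothing, non-maximum suppression, and hysteresis on top of this.

def sobel_outline(img, threshold=2.0):
    """img: 2D list of grayscale values; returns a same-sized 0/1 outline."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return out

# A 6x6 image: dark left half, bright right half -> one vertical edge.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
edges = sobel_outline(img)
```

Conditioning generation on this outline is what keeps the character's pose and composition fixed across runs while the prompt and IP adapter vary everything else.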
🖼️ Enhancing Image Details with FaceDetailer in ComfyUI
This paragraph covers the FaceDetailer, a node from ComfyUI's Impact Pack that serves a similar function to the face-fixing tool in Automatic1111. The FaceDetailer detects faces in an image and automatically inpaints them, improving the resolution and detail of small faces that Stable Diffusion often renders poorly on the first pass. The speaker demonstrates how to use this node together with the ControlNet and IP adapter, and offers tips on customizing the workflow by adjusting prompts and muting the FaceDetailer for faster iterations.
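The detect-crop-inpaint cycle described above can be sketched as follows. This is a hypothetical outline of the cropping arithmetic only — padding the detected bounding box, clamping it to the image, and computing the upscale factor for re-rendering — not the Impact Pack's actual implementation; the names, the 25% padding, and the 512px working size are assumptions.

```python
# Hypothetical sketch of the crop step a face detailer performs: pad a
# detected face box, clamp it to the image, and compute how much to
# upscale the crop before inpainting it at a usable resolution. The
# inpainted crop is then scaled back down and pasted over the original.

def detail_crop(box, img_w, img_h, pad=0.25, work_size=512):
    x0, y0, x1, y1 = box
    pw, ph = (x1 - x0) * pad, (y1 - y0) * pad   # padding around the face
    cx0, cy0 = max(0, int(x0 - pw)), max(0, int(y0 - ph))
    cx1, cy1 = min(img_w, int(x1 + pw)), min(img_h, int(y1 + ph))
    scale = work_size / max(cx1 - cx0, cy1 - cy0)
    return (cx0, cy0, cx1, cy1), scale

# A 64px-wide face in a 1024x1024 render is far too small for clean
# features, so the crop gets upscaled before inpainting.
crop, scale = detail_crop((480, 200, 544, 264), 1024, 1024)
```

Because the face is regenerated at the working size rather than its original few dozen pixels, the model has enough resolution to draw coherent eyes and mouth.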
🔧 Customizing and Finalizing the Workflow in ComfyUI
The final paragraph focuses on customizing the workflow by changing prompts and selectively enabling the FaceDetailer. The speaker explains how to control the final image with both an IP adapter image and a custom prompt, highlighting that these two conditioning methods are independent and complementary. They also share a time-saving trick: using a fixed seed in ComfyUI lets the KSampler reuse its cached result when the seed hasn't changed. The tutorial concludes with a summary of the workflow and an invitation to explore a similar workflow for Automatic1111, with links in the description.
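The fixed-seed trick works because ComfyUI caches each node's output and re-executes only the nodes whose inputs changed. The toy cache below illustrates the idea; the function and the `image(...)` string are stand-ins, not ComfyUI internals.

```python
# Toy illustration of input-keyed caching: a "sampler" whose result is
# cached on (prompt, seed). With a fixed seed, repeating a run with the
# same prompt skips the expensive step, just as ComfyUI skips
# re-executing unchanged nodes.

cache = {}
renders = []  # records which (prompt, seed) pairs actually rendered

def sample(prompt, seed):
    key = (prompt, seed)
    if key not in cache:
        renders.append(key)                      # expensive render runs here
        cache[key] = f"image({prompt}, {seed})"  # stand-in for the output
    return cache[key]

sample("red hair", 42)    # first run: renders
sample("red hair", 42)    # unchanged inputs: served from cache
sample("blue hair", 42)   # prompt changed: renders again
print(len(renders))  # 2
```

This is also why a randomized seed forces a full re-render on every queue: the sampler's inputs never match the cached key.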
Keywords
💡Stable Diffusion
💡ComfyUI
💡Workflow
💡ControlNet
💡IP Adapter
💡FaceDetailer
💡Custom Nodes
💡Prompt
💡Seed
💡Inpainting
💡Resolution
Highlights
Introduction to creating a consistent character with Stable Diffusion in ComfyUI.
Providing resource links in the description for downloading the workflow and models.
Instructions to download and load the workflow in ComfyUI.
Addressing missing custom nodes and how to install them.
The necessity of installing ComfyUI Manager for workflow management.
Recommendation to start with the Proto XL model for beginners.
Explanation of how the workflow uses ControlNet Canny to fix image composition.
Demonstration of uploading a composition image to the control net preprocessor.
Utilization of the IP adapter with FaceID Plus V2 for face extraction.
Clarification on the limitations of certain SDXL models with the workflow.
Description of the automatic face fixing process using the FaceDetailer.
Importance of the FaceDetailer for rendering high-resolution faces.
Customization tips, including muting the FaceDetailer for global composition changes.
How to change the prompt for different hair colors and other features.
Advantages of using both an image and a prompt for independent and complementary image control.
Technique of using a fixed seed in ComfyUI for faster rendering.
Final steps to save the generated high-resolution character image.
Summary of the workflow for creating consistent characters in ComfyUI.
Invitation to like, subscribe, and comment for further video instructions.