Stable Diffusion 3 via API in comfyUI with Stability AI official nodes - SD Experimental

Andrea Baioni
19 Apr 2024 · 20:14

TL;DR: In this video, Andrea Baioni shows how to use Stable Diffusion 3 (SD3) via API key in ComfyUI, a process that requires purchasing credits because SD3 is not freely available as a checkpoint. The video demonstrates installing the necessary nodes from Stability AI's GitHub, setting up API keys, and generating images with nodes such as stability image core, SD3, remove background, creative upscale, outpainting, inpainting, and search and replace. Baioni also discusses the credit pricing of the different models and shares his first impressions of the generated images, comparing the results of the Core model and SD3. The video concludes with a humorous note on replacing a person with a giant cat using the search and replace node and an invitation for viewers to suggest prompts for further testing.


  • πŸ“ˆ Stable Diffusion 3 (SD3) is available via API key but not as a free checkpoint, so generating images requires purchasing credits.
  • πŸ’΅ Generating an image with SD3 costs roughly 6 US cents, with the option to wait for a future free checkpoint release instead.
  • πŸ”‘ To use SD3 with ComfyUI, obtain an API key from your Stability AI account and enter it into each relevant node.
  • πŸ“š Stability AI provides a workflow and nodes for ComfyUI on their GitHub, recently updated and including a demonstration image.
  • πŸ“¦ Missing nodes in ComfyUI can be installed through the Manager, after which a restart of ComfyUI is required to apply the changes.
  • πŸ” Each Stability API node in ComfyUI has an 'API key override' field where the user's Stability AI API key must be entered.
  • 🎨 Nodes available for SD3 in ComfyUI include image core, SD3, remove background, creative upscale, outpainting, inpainting, and search and replace.
  • πŸ‘— A demonstration generating an image of a young woman in Miu Miu haute couture from a positive prompt showed varying accuracy between the Core and SD3 models.
  • πŸ”„ The creative upscale node added impressive detail and corrected anatomical features in the generated image.
  • πŸ–ΌοΈ Outpainting expanded the image while maintaining perspective and ambience, though an error occurred when the upscaled image was too large to process.
  • βœ… Inpainting allowed a quick change of clothes and model, though with some issues in the hair and background elements.
  • πŸ” The search and replace node successfully substituted a person with a giant cat, demonstrating the feature with a humorous result.
  • 🚫 The remove background node could not be tested because the provided workflow lacks an input field for its API key.

Q & A

  • What is the current status of Stable Diffusion 3 (SD3) in terms of availability and cost?

    -Stable Diffusion 3 has been released for use with API keys and is not available as a free checkpoint. Generating images requires purchasing credits and costs about 6 US cents per image.

  • How can one obtain Stability AI API keys and credits?

    -To obtain Stability AI API keys and credits, one needs to visit the Stability AI documentation page, create an account or log in, and then navigate to the account page to buy credits and reveal the API keys.
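As a sketch of what the ComfyUI nodes do under the hood, the snippet below assembles a direct text-to-image request against Stability AI's public REST endpoint, with the key sent as a Bearer token. The endpoint path and field names follow the public v2beta documentation at the time of writing and may change; treat this as an illustration, not a reference.

```python
# Sketch of a direct text-to-image call to the Stability AI SD3 endpoint.
# Endpoint path and field names follow the public v2beta docs at the time of
# writing; verify against the official documentation before relying on them.
API_URL = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

def build_sd3_request(prompt: str, api_key: str, aspect_ratio: str = "1:1"):
    """Assemble the headers and form fields for one SD3 call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the key from your account page
        "Accept": "image/*",                   # ask for raw image bytes back
    }
    data = {
        "prompt": prompt,
        "model": "sd3",            # "sd3-turbo" is the cheaper 4-credit call
        "output_format": "png",
        "aspect_ratio": aspect_ratio,
    }
    return headers, data

headers, data = build_sd3_request("a young woman in haute couture", "sk-MY-KEY")
# To actually send it (needs the third-party `requests` package):
#   import requests
#   resp = requests.post(API_URL, headers=headers, files={"none": ""}, data=data)
```

The multipart form (`files=` plus `data=`) mirrors how the public endpoint expects its payload; in ComfyUI all of this is hidden behind the node's input widgets.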

  • What is the cost of using SD3 and SD3 turbo via API calls?

    -Using SD3 via API calls costs 6.5 credits per call, while SD3 Turbo costs 4 credits per call. Credits are priced at $10 per 1,000.
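At the quoted rates, the per-image cost works out as follows (a trivial check, assuming the $10-per-1,000-credits pricing above):

```python
# Per-image cost at the quoted rates: $10 buys 1,000 credits,
# SD3 costs 6.5 credits per call and SD3 Turbo costs 4.
USD_PER_CREDIT = 10 / 1000  # $0.01 per credit

def call_cost_usd(credits_per_call: float) -> float:
    """Dollar cost of a single API call."""
    return credits_per_call * USD_PER_CREDIT

print(round(call_cost_usd(6.5), 3))  # SD3: 0.065, about 6.5 US cents
print(round(call_cost_usd(4.0), 3))  # SD3 Turbo: 0.04, 4 US cents
```

This matches the "around 6 cents per image" figure quoted earlier, and shows why SD3 Turbo is the more cost-effective option for experimentation.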

  • How does one install and use the Stability AI nodes in ComfyUI?

    -To install and use Stability AI nodes in ComfyUI, one should visit the Stability AI GitHub page for ComfyUI SAI API, drag and drop the provided image into the ComfyUI workspace, install the missing custom nodes from the manager, and restart ComfyUI. After restarting, input the API key into each node's API key override field.

  • What are the different nodes available for use with SD3 in ComfyUI?

    -The available nodes include stability image core, stability SD3, preview image nodes, stability remove background, stability creative upscale, stability outpainting, stability inpainting, and stability search and replace.

  • How does the output format in the SD3 node work, and what was the issue encountered?

    -The output format field in the SD3 node lets you select PNG or JPEG. In the video, the value 16:9 was mistakenly entered there instead of in the aspect ratio field; the fix was to set the strength to 0.8 and the aspect ratio to 16:9, leaving the output format as PNG or JPEG.
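The corrected settings can be read as the form fields a direct API call would carry. The field names here are assumptions based on the public v2beta docs; the widget labels in the ComfyUI node may differ slightly:

```python
# The corrected node settings from the video, expressed as API form fields.
# Field names are assumptions drawn from the public v2beta documentation.
sd3_settings = {
    "aspect_ratio": "16:9",  # this is where 16:9 belongs...
    "output_format": "png",  # ...while this field only takes png or jpeg
    "strength": 0.8,         # used for image-to-image calls
}
print(sd3_settings["aspect_ratio"])  # 16:9
```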

  • What was the result of the first image generation using the core and SD3 models with a positive prompt?

    -The first image generation resulted in two images. The core model translated the environment settings into the clothing and general mood, but the clothing was not quite Miu Miu. The SD3 model, however, produced clothing more similar to Miu Miu and included a stronger representation of the prompt elements, such as the earrings and skylight.

  • What was the outcome of using the stability creative upscale node?

    -The stability creative upscale node produced an image with amazing detail, corrected anatomy, and impressive lighting. The intricate details of the dress and environment were stunning, although there was a minor issue with the shadow.

  • What issues were encountered when trying to use the stability outpainting node?

    -Initially, the stability outpainting node changed the perspective significantly and may have been too aggressive due to the large outpainting value. Later, an error occurred because the upscaled image was too large for the outpainting node to handle. Adjusting the outpainting values and linking the SD3 output directly to the outpainting node resolved the issue.
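Besides rewiring the graph, another way to avoid the oversized-payload error would be to downscale the upscaled image before handing it to the outpainting node. The helper below only computes the target dimensions; the 1536-pixel cap is a guessed conservative limit, not a documented one:

```python
def shrink_dims(width: int, height: int, max_side: int = 1536) -> tuple[int, int]:
    """Scale (width, height) down so the longer side fits within max_side.

    Returns the dimensions unchanged if they already fit. The default cap is
    an assumption, not a documented API limit.
    """
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

# A 4x creative upscale of a 1024x576 image comes back around 4096x2304:
print(shrink_dims(4096, 2304))  # (1536, 864)
```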

  • How did the stability inpainting node perform when asked to change the model and clothes in an image?

    -The stability inpainting node successfully changed the clothes and model in the image. However, it left some remnants from the previous image, such as the hair transforming into a crown and some blonde hair in the background, indicating potential masking or model limitations.

  • What was the result of using the stability search and replace node to replace a person with a cat?

    -The stability search and replace node replaced the person with a giant cat, maintaining the perspective and lighting of the original image. Although not perfect, the result was humorous and demonstrated the node's capability.

  • Why was the remove background node not functional in this demonstration?

    -The remove background node was not functional because it kept asking for an API key, but there was no field provided to input it. A possible workaround might involve creating a text file within the folder holding the API keys, but this was not attempted to avoid potential issues.
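The workaround mentioned (but deliberately not attempted) in the video might look like the sketch below. Both the folder and file names are guesses; check the node pack's README before trying anything like this:

```python
# Hypothetical workaround: drop the API key into a text file inside the node
# pack's folder. The folder and file names here are assumptions, not documented.
from pathlib import Path

key_file = Path("ComfyUI/custom_nodes/ComfyUI-SAI_API/sai_api_key.txt")
key_file.parent.mkdir(parents=True, exist_ok=True)  # no-op on a real install
key_file.write_text("sk-YOUR-KEY-HERE\n")
print(key_file.read_text().strip())  # sk-YOUR-KEY-HERE
```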



πŸ“ Introduction to Stable Diffusion 3 and CompUI API Key Setup

The video begins with an introduction to Stable Diffusion 3 (SD3), noting that it is available for use via API keys but not yet as a free checkpoint. The host explains that using SD3 currently requires purchasing credits, costing approximately 6 cents per image generation. The video outlines the process of setting up a workflow in CompUI to use SD3 with API keys. It also mentions that Stability AI has released a workflow and nodes on their GitHub for ComfyUI, which the host prefers to use. The host guides viewers on how to install missing custom nodes and input their Stability AI API key for each node to generate images.


πŸ’³ Purchasing Credits and API Key Management

The host details how to purchase additional credits for SD3 on the Stability AI website. Users start with 25 free credits and can buy more at $10 per thousand credits. The host then demonstrates how to access and copy the API key from the Stability AI account page; each node in the ComfyUI workflow requires the API key to be entered manually. The host also compares the pricing of the available models, such as SD3, SD3 Turbo, and Core, with SD3 Turbo being the more cost-effective option.


πŸ–ΌοΈ Generating Images with Core and SD3 Models

The host proceeds to generate images using the Core and SD3 models within CompUI. They input a positive prompt for a fashion scene involving a young woman in Miu Miu haute couture and compare the results from both models. The SD3 model produces an image that more closely resembles the desired Miu Miu aesthetic. The host also discusses the credits used for image generation and provides a brief analysis of the generated images, noting the influence of the Baroque setting on the clothing and the presence of a skylight in the SD3-generated image.


🎨 Exploring Additional Features: Upscaling, Outpainting, and Inpainting

The host explores additional features of the Stability AI nodes in ComfyUI, including creative upscaling, outpainting, and inpainting. The upscaling node produces impressive results, with corrected anatomy and detailed lighting. Testing outpainting first triggers an error because the payload is too large, but after adjusting the settings the host successfully generates an outpainted image. The inpainting node is used to change the subject of an image; despite some issues with hair and background elements, the overall result is satisfactory. The host also attempts the search and replace feature but initially encounters an error caused by an incorrect output format setting in the prompt field.

πŸš€ Conclusion and Future Testing

The host concludes by summarizing the capabilities of the Stability AI nodes in ComfyUI. They mention that the remove background node could not be made to work because of an API key input issue, and that they did not attempt a workaround. The host offers to test prompts suggested by viewers, shares links to the generated images, provides their social media and web presence information, and signs off over a slideshow of the SD3 images generated during the video.



Keywords

πŸ’‘Stable Diffusion 3

Stable Diffusion 3 (SD3) is an advanced version of the AI image generation model by Stability AI. It is used to create images from textual descriptions and is a significant upgrade from its predecessors. In the video, SD3 is used to generate images with different settings and features, showcasing its capabilities in various modes such as core, SD3, and SD3 turbo.

πŸ’‘API key

An API key is a unique code that allows users to access and use the features of an API (Application Programming Interface). In the context of the video, an API key is required to use the SD3 model for image generation. The user must purchase credits to use the API key for generating images with SD3.


πŸ’‘ComfyUI

ComfyUI is a node-based graphical interface for building Stable Diffusion workflows. In the video, ComfyUI is the platform used to integrate and run the SD3 model, allowing the user to generate images without relying on external web interfaces.

πŸ’‘Image generation

Image generation refers to the process of creating images from textual descriptions using AI models. It is the core functionality demonstrated in the video, where the user inputs a description, and the AI generates corresponding images using the SD3 model.


πŸ’‘Credits

In the context of the video, credits are a form of virtual currency used within the Stability AI platform to pay for the API calls made to generate images with SD3. Each image generation request consumes a certain number of credits, which the user must purchase.

πŸ’‘Positive prompt

A positive prompt is a textual description that guides the AI to include specific elements or characteristics in the generated image. It is a key input in the image generation process, as demonstrated when the user inputs a description of a scene to generate an image.

πŸ’‘Negative prompt

A negative prompt is a textual instruction that tells the AI to avoid including certain elements or characteristics in the generated image. In the video, the user leaves the negative prompt field empty to see how the AI interprets just the positive prompt.


πŸ’‘Upscaling

Upscaling is the process of increasing the resolution or detail of an image. In the video, the user uses the upscaling feature to enhance the quality of the generated images, making them more detailed and higher resolution.


πŸ’‘Outpainting

Outpainting is a technique used in AI image generation where the AI extends the image beyond its original borders, creating new content that matches the style and theme of the original image. The user in the video experiments with outpainting to expand the scene depicted in the generated image.


πŸ’‘Inpainting

Inpainting is the process of filling in missing or damaged parts of an image with AI-generated content that seamlessly matches the surrounding areas. In the video, the user uses inpainting to change specific parts of an image, such as altering the clothing and model in a fashion scene.

πŸ’‘Search and Replace

Search and Replace is a feature that allows users to identify a specific element within an image and replace it with something else. In the video, the user attempts to use this feature to replace a person in an image with a cat, resulting in a humorous and unexpected outcome.


Highlights

Stable Diffusion 3 (SD3) can be used with API keys but is not free to use.

Using SD3 requires purchasing credits, at around 6 US cents per image generation.

Stability AI has released a workflow and nodes for ComfyUI, which can be used with SD3.

To use SD3 with ComfyUI, you need to install missing custom nodes and restart the application.

Each node in the workflow requires an API key override to be entered manually.

Stability AI provides a variety of nodes for image manipulation, including core, SD3, remove background, creative upscale, outpainting, and inpainting.

The API key is used for authentication and is necessary for generating images with SD3.

Credits for SD3 can be purchased on the Stability AI website, starting at $10 for 1,000 credits.

The core model and SD3 model offer different levels of detail and refinement in image generation.

Positive and negative prompts can be used to guide the image generation process.

The output format and aspect ratio can be adjusted within the nodes for customization.

SD3 turbo is a cost-effective alternative to SD3, using fewer credits per API call.

The images generated by SD3 closely resemble the Miu Miu fashion style and the Baroque setting.

Stability creative upscale enhances image quality and corrects anatomical details.

Outpainting can expand images to the sides, maintaining the original perspective and ambience.

Inpainting allows for changes in the image, such as altering clothing or models, with some limitations.

Search and replace functionality can substitute objects within an image, as demonstrated by replacing a person with a cat.

Users can test SD3 with the initial 25 free credits provided by Stability AI.

ComfyUI provides a seamless workflow for using SD3 without relying on external web interfaces.