Civitai AI Video & Animation // Updated Tiled IPAdapter w/ Tyler // 3.28.24

29 Mar 2024 · 68:46

TL;DR: In this lively and informative stream, Tyler, the host, dives into the world of AI video and animation, focusing on the recent update to the tiled IPAdapter. He discusses the challenges and solutions encountered after the update, which initially disrupted many users' workflows. Tyler shares his experience updating his systems and guides viewers through adapting their own workflows using the new IPAdapter versions available on his Civitai profile. He also highlights the importance of testing different weight types in the IPAdapter for varied outputs. The stream is interactive, with viewers submitting images and ideas for testing the AI's capabilities, leading to creative and sometimes humorous results. Tyler also previews an upcoming tutorial on integrating Blender with AI, promising a valuable learning experience for the audience.


  • 📚 The IP adapter received a major update that initially broke workflows for many users but has since been made more modular and customizable.
  • 🔄 Tyler updated all his workflows to be compatible with the new IP adapter and made them available on his Civitai profile for download.
  • 💻 Users experienced issues with the 'apply IP adapter from encoded' node, which has been restructured for better control and variable options.
  • 🎥 The new IP adapter allows for higher-resolution, more detailed outputs, especially when using square images.
  • 🚀 The tiled IP adapter feature, despite initial bugs, has shown promising results in video upscaling and background isolation.
  • 🤖 Different weight types in the IP adapter can drastically change the output, emphasizing the need to test various settings.
  • 📉 The tiled IP adapter is more VRAM-intensive, so users with lower VRAM may need to adjust their workflow settings.
  • 🔍 Tyler found that using the 'animate diff square root' motion model with the tiled IP adapter improved the quality of animations.
  • 🌐 The workflow Tyler demonstrated uses the Photon LCM model, a combination of the Photon 1.0 model and LCM, retrained for better performance.
  • ⏰ The next stream will feature a guest creator, Enigmatic E, who will showcase how to use Blender with Animate Diff and Comfy UI.
  • 📈 Tyler encourages viewers to follow Civitai on social media, create accounts, download resources, and share their creations.

Q & A

  • What is the main topic of discussion in the video?

    -The main topic of discussion is the recent update to the IP adapter and its impact on AI video and animation workflows.

  • Who is the host of the Civitai AI Video & Animation stream?

    -The host of the stream is Tyler.

  • What was the issue with the IP adapter update?

    -The IP adapter update caused significant disruptions to the workflows of users, effectively breaking them and requiring adjustments and updates to restore functionality.

  • What does Tyler demonstrate in the stream?

    -Tyler demonstrates the updated workflows on his Civitai profile, showcasing how to use the new IP adapter versions for various video creation processes.

  • What is the significance of the tiled IP adapter?

    -The tiled IP adapter is a new feature that allows for more modular control and additional variables in the IP adapter suite, which can enhance the quality and control of the video outputs.

  • How does the weight type in the IP adapter affect the output?

    -The weight type in the IP adapter significantly influences the output, altering the visual results and the way the images are processed and integrated into the videos.

  • What is the role of the 'apply IP adapter from encoded' node in the workflows?

    -The 'apply IP adapter from encoded' node plugs the reference images into the model, playing a crucial role in how the images are utilized within the workflows.

  • Why did Tyler update all his workflows on the first day of the IP adapter update?

    -Tyler updated all his workflows to ensure that everyone who was using them could continue making videos without disruption and to restore the workflows to their previous state.

  • What is the importance of testing different weight types in the IP adapter?

    -Testing different weight types is important because it allows users to find the best settings for their specific needs, leading to more satisfactory and desired video outputs.

  • What is the VRAM usage of the workflows discussed in the stream?

    -VRAM usage is a concern, especially with the tiled IP adapter, which is not as low-VRAM-friendly. The stream discusses strategies to manage VRAM usage, such as adjusting the batch size and resolution.

  • What is the significance of using square images with the IP adapter?

    -Using square images is preferred because the tile node in the IP adapter provides a more accurate preview and doesn't break up the image in a weird split, which can affect the output quality.
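Why square images behave better can be sketched with a little arithmetic. The 512-pixel tile size and the ceiling-division grid below are illustrative assumptions, not the tiled node's actual implementation:

```python
from math import ceil

def tile_grid(width: int, height: int, tile: int = 512) -> tuple[int, int]:
    """Rough sketch of how a tiled node might cut an image into a grid.

    Returns (cols, rows). A square image divides into an even grid;
    non-square images leave partial tiles at the edges, which is why
    square inputs preview more predictably.
    """
    return ceil(width / tile), ceil(height / tile)

# A 1024x1024 square image splits into a clean 2x2 grid:
print(tile_grid(1024, 1024))   # → (2, 2)
# A 1920x1080 image splits into an uneven 4x3 grid with partial edge tiles:
print(tile_grid(1920, 1080))   # → (4, 3)
```

With a non-square source, those partial edge tiles are what produce the "weird split" Tyler describes in the preview.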



😀 Introduction and IP Adapter Update Discussion

Tyler, the host, welcomes viewers to the AI video and animation stream. He discusses recent updates to the IP adapter that have caused disruptions in workflows. Despite the update breaking many processes, Tyler managed to update all his workflows and made them available on his Civitai profile. He also thanks Mid, journeyman, and Super Beast for their help and announces them as special guest creators for the next stream.


πŸ” Deep Dive into IP Adapter Weight Types

The stream focuses on the different weight types in the IP adapter and their significant impact on output results. Tyler demonstrates how varying weight types can drastically change the outcome of the AI's video generation. He advises viewers to test different weight types to achieve desired results and shares his favorites for consistent results.
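One way to picture why weight type matters: each option acts like a different curve scaling the adapter's influence across the model's blocks. The names below echo common IPAdapter options, but the exact formulas are illustrative assumptions, not the node's real code:

```python
def weight_curve(weight_type: str, t: float) -> float:
    """Illustrative influence curves; t in [0, 1] is the normalized
    block position (0 = earliest block, 1 = latest).

    These shapes are assumptions for intuition only, not the
    IPAdapter node's actual internals.
    """
    if weight_type == "linear":
        return 1.0                       # uniform influence everywhere
    if weight_type == "ease in":
        return t                         # ramps up toward later blocks
    if weight_type == "ease out":
        return 1.0 - t                   # strongest on early blocks
    if weight_type == "ease in-out":
        return 1.0 - abs(2.0 * t - 1.0)  # peaks in the middle blocks
    raise ValueError(f"unknown weight type: {weight_type}")

# Sample each curve at five block positions to see how differently
# the same reference image would be weighted:
for wt in ("linear", "ease in", "ease out", "ease in-out"):
    samples = [round(weight_curve(wt, t / 4), 2) for t in range(5)]
    print(f"{wt:12s} {samples}")
```

Because each curve emphasizes different parts of the model, the same image and weight value can yield visibly different results, which is why Tyler recommends sweeping the options.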


📈 Testing the IP Adapter Tiled Node

Tyler discusses a new workflow with the IP adapter tiled node and shares his positive preliminary results. He mentions the importance of testing and shares a workflow that uses less VRAM, which benefits lower-end hardware. The audience is encouraged to submit images and backgrounds for testing, and there's a brief discussion about the ideal image aspect ratios for the tiled node.


🎨 Customizing the AI Video Generation Process

The host talks about customizing the AI video generation process using different models and settings. He uses the Photon LCM model and discusses the importance of the depth preprocessor and ControlNet for smooth animations. There's also a conversation about reducing VRAM usage by adjusting the batch size and image resolution.


🤔 Strategies for Low VRAM Usage

Tyler and the team explore strategies to optimize the workflow for low VRAM usage. They discuss the use of batch size nodes and the potential for using different image aspect ratios. The conversation also touches on the possibility of disabling certain features to reduce processing time and VRAM usage.
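The batch-size idea boils down to processing the frame sequence in fixed-size chunks so only one chunk sits in VRAM at a time. A minimal sketch (the frame counts are arbitrary examples, not recommended settings):

```python
def chunk_frames(num_frames: int, batch_size: int) -> list[tuple[int, int]]:
    """Split a frame range into (start, end) chunks of at most batch_size
    frames, so each chunk can be processed and freed before the next."""
    return [(start, min(start + batch_size, num_frames))
            for start in range(0, num_frames, batch_size)]

# 96 frames at a batch size of 16 become six sequential chunks:
print(chunk_frames(96, 16))
# → [(0, 16), (16, 32), (32, 48), (48, 64), (64, 80), (80, 96)]
```

Halving the batch size roughly halves the peak frame memory per step, at the cost of more sequential passes, which is the trade-off the stream discusses for lower-end cards.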


😂 Creative Experimentation and Audience Interaction

The stream takes a fun turn as Tyler and the team experiment with various creative ideas, turning characters into different forms like a '90s anime waifu, a clay sculpture, and even Kermit the Frog. They also engage with the audience, taking their suggestions for backgrounds and character transformations, resulting in humorous and unexpected AI-generated outputs.


🌟 Showcasing Successful Transformations

Tyler showcases some successful transformations achieved using the AI video generation process. He highlights the importance of using clear and literal terms in the prompt to guide the AI towards the desired output. The host also discusses the process of creating loops in videos and the potential for using different motion models to achieve distinct animation effects.


🚀 Upcoming Guest and Future Streams

The host teases an upcoming stream featuring Enigmatic E, who will demonstrate the use of Blender with Animate Diff and Comfy UI. Tyler expresses excitement about integrating Blender into his workflow and encourages viewers to follow Enigmatic E on social media. The stream concludes with a reminder for viewers to join the next session and engage with the community.



💡IP Adapter

The IP Adapter is a technical tool used in the process of AI video and animation creation. It plays a crucial role in integrating user-provided images with the AI model to generate specific outputs. In the video, Tyler discusses an update to the IP Adapter that caused significant changes in the workflow for those using it, requiring adjustments and reconfigurations to maintain the desired video outputs.

💡AI Video and Animation

AI Video and Animation refers to the use of artificial intelligence to create or animate videos. In the context of the video, this involves using AI models and tools to transform images and videos into different styles or settings, often with the help of user inputs. The video demonstrates how these AI technologies can be used to create unique and engaging content.


💡Workflow

A workflow in the video script refers to the sequence of steps or processes that Tyler and his team follow to create AI video animations. The workflow includes uploading images, using specific AI models, and adjusting various settings to achieve the desired outcome. The script mentions that the IP Adapter update affected these workflows, requiring updates and reconfigurations.

💡Twitch Stream

A Twitch Stream is a live broadcasting platform where content creators can interact with their audience in real-time. Tyler uses Twitch to host his AI video and animation streams, where he demonstrates the process of creating AI animations and discusses technical aspects with his viewers. The script indicates that the stream has a significant viewership and is popular within the art category on Twitch.


💡VRAM

VRAM, or Video Random Access Memory, is the memory used by graphics cards to store image data for rendering images, videos, and animations. In the script, Tyler discusses the VRAM usage in relation to the IP Adapter and the AI video creation process, noting that certain configurations can be more VRAM-intensive and affect the performance of the system.

💡Control Net

The Control Net is a component in the AI video creation process that helps in smoothing out the animations and reducing artifacts or 'craziness' in the generated content. It contributes to the stability and quality of the final video output. Tyler mentions using the Control Net at a specific setting to capture movements effectively.

💡Machine Learning

Machine Learning is a type of artificial intelligence that allows systems to learn and improve from experience without being explicitly programmed. In the context of the video, Machine Learning is used to train models like the Photon LCM, which then generate the AI animations based on user inputs and learned patterns.

💡Batch Size Manager

The Batch Size Manager is a tool mentioned in the script that helps manage the number of frames processed at a time in the AI video creation workflow. Reducing the batch size can help in lowering VRAM usage, making the process more feasible for systems with limited graphics memory.

💡Stable Diffusion

Stable Diffusion is an underlying technology or framework that supports the creation of AI animations. It is used as the backbone for the Comfy UI, which Tyler and his team use to create their videos. The script implies that Stable Diffusion is a robust and flexible platform for generating a wide range of video outputs.


💡Discord

Discord is a communication platform used by the community in the video for real-time chat and collaboration. Tyler mentions that he and other users were working together on Discord to troubleshoot and optimize the new IP Adapter configurations, highlighting the collaborative nature of the process.

💡Video Loader

A Video Loader in the context of the video script is a component of the workflow that allows for the input of video files into the AI system. It is essential for the process of creating AI animations as it enables the user to load the source material that will be manipulated or transformed by the AI model.


Tyler discusses a major update to the IP adapter that has affected AI video and animation workflows.

The update initially broke many workflows, but Tyler has updated all his streams' workflows to adapt to the changes.

IP adapter updates now offer more modularity and control, with new variables introduced.

Tyler shares his Civitai profile where viewers can download the updated workflows for various models.

The new IP adapter versions are marked with 'Updated IP Adapter' for easy identification.

Special thanks to Mid, journeyman, and Super Beast for helping with the new IP adapter integration.

An image upscaling workflow will be showcased by the guest creators on the next Friday's stream.

Different weight types in the IP adapter can drastically change the outputs.

Tyler demonstrates the workflow for subject and background isolation using an invert mask.

The tiled IP adapter is shown to produce high-quality results, especially with square images.

VRAM usage is a concern with the new tiled IP adapter, and solutions to manage it are discussed.

The tiled IP adapter is not low-VRAM-friendly, unlike the earlier LCM workflow.

Batch size management is introduced to help with VRAM issues, allowing processing of smaller chunks of the video.

Tyler experiments with different prompts and images, showcasing the versatility of the new workflows.

The use of a 'pingpong' option in the video combine nodes is mentioned for creating perfect loops.
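The 'pingpong' trick can be sketched in a few lines: play the frames forward, then append them in reverse, dropping both endpoints so no frame repeats and the sequence loops seamlessly. This is an illustration of the general technique, not the video combine node's actual code:

```python
def pingpong(frames: list) -> list:
    """Forward pass plus a reversed pass without the duplicated
    endpoints, producing a sequence that loops back to its start."""
    if len(frames) < 3:
        return list(frames)  # too short to bounce meaningfully
    # frames[-2:0:-1] walks from the second-to-last frame back to the
    # second frame, skipping both endpoints.
    return frames + frames[-2:0:-1]

print(pingpong([0, 1, 2, 3]))  # → [0, 1, 2, 3, 2, 1]
```

Played on repeat, the last frame (1) steps naturally back to the first (0), which is why the option produces "perfect loops."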

An upcoming workflow for SDXL is teased, which is effective for texturing rather than people.

The success of Kermit the Frog in the AI animation is celebrated, highlighting the AI's ability to handle complex subjects.

Tyler emphasizes the importance of using the right prompt to guide the AI towards the desired output.

The potential integration of Blender with animate diff is discussed for future streams.