Civitai AI Video & Animation // Motion Brush Img2Vid Workflow! w/ Tyler
TLDR: In this live stream, Tyler from Civitai AI Video & Animation shares a workflow for animating still images with a motion brush in ComfyUI using AnimateDiff. The process involves masking specific parts of an image, such as the eyes, hair, and clothing, so that only those areas move and the picture comes to life. Tyler demonstrates the workflow on images submitted by the Discord community, showing the room for customization and creative expression, and stresses the importance of choosing the right motion LoRA and experimenting with settings to get the desired animation effects. The stream includes a shout-out to VK, the creator of the workflow, and encourages viewers to explore and share their creations using the provided hashtag. The session concludes with a teaser for the next guest creator stream featuring Noah Miller, discussing the evolution of AI animation and his sci-fi film project.
Takeaways
- 🎨 **Motion Brush Workflow**: Tyler demonstrates a workflow that uses a motion brush in ComfyUI with AnimateDiff to animate specific parts of images.
- 🖼️ **Image to Animation**: The process involves taking still images and bringing them to life through selective animation of certain areas.
- 🤖 **AI Models**: Tyler mentions using the SD 1.5 CLIP Vision model and the IP-Adapter Plus (SD 1.5) model for the workflow.
- 📈 **Iterative Process**: The workflow requires a lot of iteration and generation to find the motion LoRAs that give the best results.
- 🌟 **VK's Contribution**: The workflow was created by VK, who allowed Tyler to share it with the community. VK can be found on Instagram @v.AMV.
- 📱 **Discord Interaction**: Tyler invites viewers from the Discord community to submit images for animation during the stream.
- 🎓 **Learning from Others**: Tyler discusses learning from Spencer's guest Creator stream and experimenting with audio reactive workflows.
- 📉 **Low VRAM Friendly**: The workflow is designed to be friendly for systems with lower VRAM, making it accessible for a wider range of users.
- 🎥 **Animation Output**: The final output is an animated version of the still image, with animated elements such as blinking eyes, moving hair, and other selected parts.
- 📸 **High-Quality Images**: Tyler emphasizes the importance of starting with a high-quality image to achieve the best results in the animation.
- 🔄 **Upscaling and Artifacts**: Upscaling the low-resolution animation can help clean up artifacts and improve the final output's clarity.
Q & A
What is the main topic of the video?
-The main topic of the video is a demonstration of a workflow for animating images with a motion brush in ComfyUI using AnimateDiff, with a focus on anime-style animations.
Who is the presenter of the video?
-The presenter of the video is Tyler, who is hosting a live stream on Civitai AI Video and Animation.
What is the purpose of using a motion brush in the workflow?
-The purpose of using a motion brush in the workflow is to bring specific parts of images to life, particularly focusing on anime-style animations.
What is the significance of the 'ControlNet' in the workflow?
-The ControlNet is used to smooth out animations and temper saturation, which helps achieve more natural, less fragmented motion in the final output.
What is the role of the 'IP-Adapter' in the workflow?
-The IP-Adapter injects the reference image into the generation; lowering its weight keeps the image from being 'burned in' too hard, giving a cleaner input and preserving the desired look for the animation.
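As a rough conceptual sketch (not the actual IP-Adapter implementation, which injects image features through added cross-attention layers), the weight can be pictured as a scale factor on the image-conditioned stream; `apply_ip_adapter`, `text_attn`, and `image_attn` are illustrative names, not real API calls:

```python
import numpy as np

def apply_ip_adapter(text_attn: np.ndarray, image_attn: np.ndarray,
                     weight: float) -> np.ndarray:
    """Conceptually, IP-Adapter adds an image-conditioned attention
    stream on top of the usual text-conditioned one, scaled by `weight`.
    Lowering `weight` keeps the reference image from 'burning in'."""
    return text_attn + weight * image_attn

# Toy tensors standing in for attention outputs.
text_attn = np.ones((4, 8))
image_attn = np.full((4, 8), 2.0)

soft = apply_ip_adapter(text_attn, image_attn, weight=0.5)  # gentler influence
hard = apply_ip_adapter(text_attn, image_attn, weight=1.0)  # stronger influence
```

The takeaway is simply that the weight is a dial, not a switch: small reductions soften how strongly the source image dominates the result.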
How does the 'frame rate' affect the animation?
-The 'frame rate' determines the number of frames per second in the animation. A higher frame rate, achieved through interpolation, makes the animation smoother.
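The idea behind interpolation can be sketched as inserting blended in-between frames to raise the playback rate. Production interpolators (e.g. RIFE) estimate actual motion rather than cross-fading, so this is only a toy illustration of the concept:

```python
import numpy as np

def interpolate_frames(frames: list, factor: int = 2) -> list:
    """Insert (factor - 1) blended in-between frames between each pair,
    e.g. turning a 15 fps clip into a 30 fps one. A plain cross-fade is
    a conceptual stand-in for motion-aware interpolation."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Two toy 2x2 "frames"; doubling yields one blended frame in between.
frames = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 10.0)]
smooth = interpolate_frames(frames, factor=2)
```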
What is the importance of selecting the right 'motion LoRA'?
-Selecting the right motion LoRA is crucial, as it directly determines the type of motion applied to the image and therefore the final animation's style and fluidity.
Why is the 'VK motion brush' workflow considered low VRAM friendly?
-The 'VK motion brush' workflow is considered low VRAM friendly because it uses only about 9 GB of VRAM, making it suitable for systems with lower VRAM capacity.
How can viewers participate in the live stream?
-Viewers can participate by sending images they would like to see animated in the chat, and the presenter, Tyler, may attempt to animate those images during the stream.
What is the process for painting the key parts for animation?
-The process involves using a mask editor to paint the key parts of the image that the user wants to animate, such as eyes, hair, or other specific features.
What is the name of the person who created the workflow and how can viewers find them?
-The workflow was created by someone named VK, who can be found on Instagram at the handle v.AMV.
What is the recommended way to upscale the final animation to improve quality?
-The upscaling nodes are bypassed by default to keep generation fast; right-click the node in the workflow and toggle 'Bypass' to re-enable upscaling for the final output.
Outlines
🎨 Introduction to the AI Video and Animation Stream
Tyler, the host, welcomes viewers to the AI video and animation stream and shares his excitement about a new workflow that uses a motion brush in ComfyUI with AnimateDiff. He invites Discord users to submit images for animation. The session is intended to be relaxed, with no dance videos or complex tasks. Tyler recalls a recent guest stream with Spencer that provided an in-depth learning experience. The image on screen, a dripping head, is a transformed version of an initial image and demonstrates what the workflow can do. The stream will work at low resolution for speed, with the understanding that upscaling will improve the final output. Tyler credits VK, the creator of the workflow, and plans to share it on his profile after the stream once he has permission.
📝 Setting Up the Workflow and Community Contributions
The workflow setup begins with ensuring the correct CLIP Vision and IP-Adapter models are in place. Tyler explains the use of the IP-Adapter Advanced node, the LoRA loader, and the ControlNet, specifically the 'control GIF' AnimateDiff ControlNet, which helps smooth animations, and provides a download link for those who do not have it. The animation process involves choosing between two checkpoints: Boton LCM for general use and Every Journey LCM for anime-based animations. Tyler emphasizes the importance of the frame count and the standard EMA settings. He also discusses moving on from the Hello 2D model to other models to keep the content fresh and unique.
🖌️ Painting Key Animation Parts Using the Mask Editor
Tyler demonstrates how to use the mask editor to paint the key parts of the image that will be animated. He shows how to adjust the brush thickness and explains that motion will be most pronounced in the painted areas. The process is interactive, with Tyler asking the audience which parts to animate, and he shows how to erase mistakes by holding the Alt key. The segment includes a shout-out to pitp, a contributor to the stream, and a discussion of the workflow's efficiency and the choice of generation resolution.
🤖 Adjusting Motion Intensity and Smoothing Animations
This section covers how to control the intensity of motion in the animation. Tyler explains the 'grow mask with blur' node, which expands and blurs the mask for a smoother transition, and how inverting the mask produces different effects. He passes along tips from VK on adjusting brightness and handling noise, then showcases results, emphasizing the effect on the eyes and how different motion LoRAs change the outcome. He also covers technical details such as VRAM usage and the option to interpolate for smoother animation.
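A minimal sketch of what a grow-then-blur step does to a painted mask, assuming NumPy and SciPy are available (the node's actual internals in ComfyUI may differ):

```python
import numpy as np
from scipy import ndimage

def grow_mask_with_blur(mask: np.ndarray, grow_px: int,
                        blur_sigma: float) -> np.ndarray:
    """Expand a binary mask, then feather its edge with a Gaussian blur,
    so motion fades out smoothly instead of cutting off at a hard
    boundary around the painted area."""
    grown = ndimage.binary_dilation(mask > 0.5, iterations=grow_px)
    return ndimage.gaussian_filter(grown.astype(np.float32), sigma=blur_sigma)

mask = np.zeros((9, 9), dtype=np.float32)
mask[4, 4] = 1.0                   # a single painted pixel
soft = grow_mask_with_blur(mask, grow_px=2, blur_sigma=1.0)
inverted = 1.0 - soft              # invert to animate everything *except* the painted area
```

Growing before blurring means the feathered falloff sits outside the painted region, so the area the user actually marked stays fully animated.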
🔄 Iterating Through the Workflow with Different Images
Tyler iterates through the workflow with different images submitted by the audience, painting the areas to animate on each and adjusting settings for the best results. The segment includes attempts with various motion LoRAs and different checkpoints for anime-style animations. Tyler also stresses that a high-quality base image yields better output and floats the idea of creating a Civitai badge with the workflow.
🎭 Adding More Dynamic Motions and Experimenting with Different Effects
This section covers adding more dynamic motion to the animations. Tyler experiments with different motion LoRAs, describes the visual effect of each, and explains how AnimateDiff's motion scale controls the amount of motion in the animation. The segment runs through a variety of images, from a monster with dripping goo to a girl with an aura, demonstrating the workflow's versatility. Tyler also mentions the upcoming guest, Phil, and teases future streams.
🌊 Surfing Waves and Detailing the Workflow's Capabilities
Tyler focuses on a specific image of a girl surfing and uses motion LoRAs to create the illusion of waves, emphasizing clear prompts and high-quality source images for better results. He also discusses a motion LoRA trained by Palm, a friend of his, and how a LoRA's training shapes the final output, encouraging viewers to experiment with different elements to enhance their animations.
🔧 Finalizing the Workflow and Preparing for Upload
The final section covers the steps Tyler takes to finalize the workflow and prepare it for upload: exporting the workflow, removing unnecessary elements, and saving it as a JSON file. Tyler also mentions his equipment preference for pen input, the XP-Pen Pro 24. He intends to upload the workflow during the stream and share the link with viewers, and closes with thoughts on the open-source community and its members' willingness to share their work.
📚 Sharing the Workflow and Upcoming Streams
Tyler shares the workflow with the audience, providing instructions on how to download and use it, and discusses the importance of tagging and categorizing it properly for easy access. The section includes a shout-out to VK, the original creator of the workflow, and encourages viewers to follow him on Instagram. Tyler also mentions an Instagram post featuring a Civitai song and encourages viewers to remix and share it using a specific hashtag. He concludes by previewing upcoming streams, including a special guest creator stream with Noah Miller and a discussion of AI in animation.
Keywords
💡Motion Brush
💡AnimateDiff
💡ComfyUI
💡IP Adapter
💡Control Net
💡Checkpoints
💡VRAM
💡Interpolation
💡Mask Editor
💡Upscaling
💡Workflow
Highlights
Tyler shares a new workflow for animating specific parts of images using a motion brush in ComfyUI with AnimateDiff.
The workflow was created by VK and Tyler has received permission to share it with the community.
The process uses the IP-Adapter and CLIP Vision models, with a focus on low VRAM usage for broader accessibility.
A ControlNet, specifically the 'control GIF' AnimateDiff ControlNet, is used to smooth out animations and keep saturation in check.
Two checkpoints are used for the workflow: Boton LCM for general purposes and Every Journey LCM for anime-based animations.
The animation process can be finicky, requiring multiple iterations to find the right motion LoRAs.
AnimateDiff's motion scale can be adjusted to control the intensity of motion in the animations.
Interpolation is used to smooth out the animations, achieving a frame rate of 30 FPS.
The entire workflow is demonstrated with various images, showcasing the flexibility and customization options.
The choice of motion LoRA, such as 'liquid' or 'rushing waterfall', can drastically change the animation's character.
VK's workflow is showcased as low VRAM friendly, making it suitable for users with lower-end graphics cards.
The importance of high-quality input images for better output results is emphasized, especially when upscaling.
The workflow is suitable for creating animations with a cartoonish look, differentiating it from more realistic renderings.
Tyler discusses how different motion LoRAs affect the final animation's appearance.
The process is interactive, with viewers submitting images to be animated live during the stream.
The final workflow will be made available on Civitai for others to use and experiment with.
Tyler emphasizes the importance of experimentation and iterating on the workflow to achieve the desired results.
The stream concludes with a discussion about the open-source community and the benefits of sharing knowledge and resources.