Combining Blender 3D with AI ft. Enigmatic E // Civitai Guest Stream

30 Mar 2024 · 108:06

TL;DR: In this engaging live stream, the host invites special guest Enigmatic E to demonstrate the fusion of 3D modeling and AI technology. They discuss the evolution of AI in video creation and showcase a workflow using Blender, a free 3D creation suite, to create a character model. The process involves Luma Labs' Genie for text-to-3D generation, Mixamo for animations, and Blender for refining the model and setting up the scene. The stream highlights how creators can achieve high-quality, stylized animations suitable for platforms like Instagram by combining AI with 3D software. The host and guest also touch on the use of masks in post-production and the impact of trained models on the final render's detail and color. The session is both educational and inspirational, encouraging viewers to experiment with these tools and push the boundaries of digital creation.


  • 🎥 The stream is a guest session featuring Enigmatic E, who is an expert in 3D and AI, particularly in creating AI videos.
  • 🚀 Enigmatic E has a background in videography, motion graphics, pixel art, and animation, which fuels his interest in combining 3D with AI.
  • 💡 The benefit of using 3D videos in AI applications like ComfyUI is the ability to control the camera and avoid the need for complex masking or rotoscoping.
  • 🛠️ Genie by Luma Labs is introduced as a text-to-3D generator that can create 3D models from textual descriptions, which can then be used in AI applications.
  • 🌟 Mixamo is mentioned as a platform where one can find animations for 3D models, although these animations may be recognizable because they are freely available.
  • 📚 The process of creating 3D models for AI involves downloading models, modifying them in Blender, and preparing them for animation.
  • 🖥️ Blender is a powerful, free tool that is used to manipulate 3D models, and it's recommended for those following along to pause and follow the tutorial at their own pace.
  • ⚙️ The tutorial covers how to use Blender to create a symmetrical 3D model, which is essential for proper rigging and animation.
  • 🎨 The importance of having a clean 3D model with separate fingers and toes is emphasized for better AI interpretation and animation quality.
  • 🔄 The process of mirroring 3D models in Blender to ensure symmetry and detail is demonstrated, which is crucial for the animation's success.
  • 🧩 The final animation is created by combining different animations, like skateboarding and jumping, to produce a more complex and engaging result.

Q & A

  • What is the main topic of the guest stream?

    -The main topic of the guest stream is combining Blender 3D with AI, featuring a special guest, Enigmatic E, who shares his expertise on creating AI videos and 3D modeling.

  • What is the significance of using a 3D video in AI applications like ComfyUI?

    -Using a 3D video in AI applications like ComfyUI allows for better control over the final render, such as removing backgrounds without the need for rotoscoping or complex masking.

  • How does the Genie tool from Luma Labs help in the 3D modeling process?

    -Genie is a text-to-3D generator that allows users to create 3D models by simply describing what they want, which can then be used in AI applications for more detailed and controlled animations.

  • What is the role of Mixamo in the animation process discussed in the stream?

    -Mixamo is a platform where users can find and apply various character animations. In this process, it is used to rig and animate the 3D models, which are then brought into Blender before being run through AI applications like ComfyUI.

  • Why is it important to have the 3D model in a T-pose when importing it into Blender?

    -A T-pose is a standard starting position for 3D models that allows for easier rigging and animation. It ensures that the model is symmetrical and that the limbs are spread out, which is crucial for proper animation and rigging within Blender.

  • How can one improve the quality of the 3D model's head generated by AI?

    -To improve the quality of the 3D model's head, one can use a separate AI-generated head model that focuses on the head alone, which typically results in a cleaner and more detailed appearance. This head can then be attached to the body of the model.

  • What is the purpose of using an empty object in Blender when animating?

    -An empty object in Blender is used as a parent object to control the movement and positioning of other objects in the scene. It allows for easier manipulation of complex animations without directly editing the keyframes of the objects themselves.
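The parenting idea above can be sketched outside Blender. This is a conceptual illustration in plain Python, not Blender's actual implementation (which uses full 4×4 matrices): a child's world position is its local offset transformed by the parent empty's rotation and translation, so moving the empty moves every child with it.

```python
import math

def parent_world_position(parent_pos, parent_rot_z, child_local):
    """World position of a child parented to an 'empty': rotate the
    child's local offset by the parent's Z rotation, then translate
    by the parent's position (the essence of Blender's parenting)."""
    c, s = math.cos(parent_rot_z), math.sin(parent_rot_z)
    x, y, z = child_local
    return (parent_pos[0] + c * x - s * y,
            parent_pos[1] + s * x + c * y,
            parent_pos[2] + z)

# Moving or rotating only the empty repositions the child with it:
print(parent_world_position((5.0, 0.0, 0.0), 0.0, (1.0, 0.0, 0.0)))  # (6.0, 0.0, 0.0)
print(parent_world_position((5.0, 0.0, 0.0), math.pi / 2, (1.0, 0.0, 0.0)))
```

Animating the empty's position and rotation therefore drives the whole hierarchy without touching the children's own keyframes.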

  • How does the process of mirroring a 3D model in Blender help in creating a symmetrical model?

    -By deleting one half of the model and using the mirror modifier, one can create a symmetrical model where both sides are identical. This ensures that the model's geometry is consistent and balanced.
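The mirror step can be illustrated with a small sketch in plain Python (Blender's actual Mirror modifier works on full meshes with clipping and merge options; this only shows the core idea): keep one half of the vertices and emit a copy reflected across the X=0 plane, leaving vertices on the mirror plane unduplicated.

```python
def mirror_model(vertices, merge_threshold=1e-6):
    """Conceptual sketch of a mirror modifier: keep the half of the
    model with x >= 0, then append a copy reflected across X=0.
    Vertices on the mirror plane are kept once (they 'merge')."""
    half = [v for v in vertices if v[0] >= 0.0]
    mirrored = [(-x, y, z) for (x, y, z) in half if x > merge_threshold]
    return half + mirrored

# A lopsided model becomes perfectly symmetric:
verts = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0), (-0.7, 0.2, 0.3)]
print(mirror_model(verts))
# → [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0), (-1.0, 0.5, 0.0)]
```

The asymmetric vertex at x = -0.7 is discarded with the deleted half, and the kept side is reflected, which is exactly why the resulting geometry is guaranteed to be balanced.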

  • What is the benefit of using an alpha mask in post-processing the animation?

    -An alpha mask allows for the creation of a transparent background in the animation, which can be useful for compositing the animation into different backgrounds or for use in platforms that require a transparent background.
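The compositing math behind an alpha mask is the standard "over" operation. A minimal per-pixel sketch in plain Python (real compositors apply this to whole images, often with premultiplied alpha):

```python
def composite_over(fg, alpha, bg):
    """Standard 'over' compositing with an alpha mask, per channel:
    out = fg * a + bg * (1 - a). Alpha 1 keeps the rendered
    character; alpha 0 lets the new background show through."""
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

# A red character pixel composited over a blue background:
print(composite_over((1.0, 0.0, 0.0), 1.0, (0.0, 0.0, 1.0)))   # fully character
print(composite_over((1.0, 0.0, 0.0), 0.0, (0.0, 0.0, 1.0)))   # fully background
print(composite_over((1.0, 0.0, 0.0), 0.25, (0.0, 0.0, 1.0)))  # soft mask edge
```

Intermediate alpha values at the mask edges are what produce soft, anti-aliased outlines when the animation is dropped onto a new background.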

  • How does the use of IP adapters in AI applications affect the final render?

    -IP adapters help to push the AI towards a specific style or detail in the render. They can be used to focus the AI's attention on particular elements of the scene, resulting in a more controlled and desired outcome.

  • What is the workflow for creating an animation using Blender, Mixamo, and AI applications like ComfyUI?

    -The workflow involves generating or selecting a 3D model, cleaning it up in Blender, rigging it and choosing animations in Mixamo, bringing the animated model back into Blender to set up the scene and camera, and then running the render through an AI application like ComfyUI to refine the animation, add detail, and control the final look.



😀 Introduction and Welcoming Guests

The host kicks off the stream with excitement, welcoming viewers to a special Creator stream featuring a guest who specializes in 3D and AI. The guest, known as 'e', is introduced as a significant influence on the host's journey into AI video creation. The host encourages audience interaction and questions throughout the session.


📚 Background and Introduction to AI in Video Creation

The guest, e, provides a background on his experience with AI and video creation, mentioning his start with Disco Diffusion and his diverse skillset in videography, motion graphics, and animation. He discusses the benefits of using 3D videos in AI, such as better control over camera angles and the ability to avoid complex masking in post-production.


🎨 Using AI for 3D Model Generation and Enhancement

The conversation shifts to using AI for generating and enhancing 3D models. e introduces Genie, a text-to-3D generator from Luma Labs, and demonstrates how to use it to create a model of Ryu from Street Fighter. The process involves selecting the best-looking model and using AI to fill in details, showcasing the potential for social commentary through AI-generated content.


🕺 Finding and Using Animations for 3D Models

The discussion moves to sourcing animations for 3D models. e mentions Mixamo as a platform for free animations but warns that they are widely recognized because of their popularity. He explores alternatives like DeepMotion, which can generate animation from video using AI. The process of attaching animations to 3D models in Blender is outlined, emphasizing the importance of starting simple and gradually increasing complexity.


🧩 Assembling 3D Models and Preparing for Animation

The host and guest dive into the technical process of assembling 3D models in Blender, including downloading and importing models, adjusting their positioning, and ensuring they are symmetrical. They discuss the importance of rigging and the use of Blender's mirror modifier to create consistent models for animation.


🖌️ Customizing 3D Models and Materials

The focus is on customizing 3D models by changing materials and colors. The host demonstrates how to adjust the color of the hands on a model and addresses a question about the necessity of using Blender. They also discuss the importance of having control over the final product and the role of Blender in the animation pipeline.


🚀 Combining Animations and Preparing for Rendering

The guest shows how to combine different animations, such as a running cycle and a jump, using Blender's nonlinear animation tools. They discuss the process of preparing the animation for rendering, including setting up the camera view, adjusting the resolution, and ensuring the animation loops seamlessly.
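The crossfade that Blender's NLA strips perform when blending one action into another can be sketched in plain Python. This is a conceptual illustration of a linear blend-in on a single animation channel, not Blender's actual NLA code:

```python
def blend_actions(run_value, jump_value, frame, blend_start, blend_end):
    """Linear crossfade between two actions, like an NLA strip's
    blend-in: before blend_start only the run action plays; after
    blend_end only the jump action contributes."""
    if frame <= blend_start:
        t = 0.0
    elif frame >= blend_end:
        t = 1.0
    else:
        t = (frame - blend_start) / (blend_end - blend_start)
    return run_value * (1.0 - t) + jump_value * t

# Blend one channel (say, hip height) from the run cycle into the
# jump over frames 20-30:
for frame in (10, 25, 40):
    print(frame, blend_actions(1.0, 2.0, frame, 20, 30))
```

Because the blend interpolates channel values rather than cutting between them, the combined sequence avoids a visible pop at the transition, which also matters for making the final animation loop seamlessly.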


🎥 Finalizing the Animation and Exporting

The final steps are taken to finalize the animation, including adding a jump action and ensuring the character lands on a skateboard. The host and guest troubleshoot issues with the animation's movement and discuss exporting the animation with Blender's FFmpeg video output, making it suitable for use in AI upscaling tools.


📈 Upscaling and Enhancing the Animation with AI

The host demonstrates using an AI tool to upscale and enhance the exported animation. They discuss the use of different models and checkpoints to improve the animation's quality, adding vibrance and detail. The guest emphasizes the importance of learning Blender for greater control over the creative process.


🌟 Wrapping Up and Encouraging Exploration

The session concludes with the host and guest expressing gratitude to the viewers, encouraging them to experiment with the tools and techniques discussed. They highlight the potential for creating detailed and stylized animations using Blender and AI, and the guest promotes his social media channels for further insights and tutorials.


📺 Scheduling and Upcoming Streams

The host provides information about upcoming streams, including a schedule for the week and a teaser for the next guest Creator stream featuring Mid Journeyman and Super Beast AI. They discuss the content of future streams and thank the audience for their participation before signing off.





💡Blender 3D

Blender 3D is a free and open-source 3D creation suite that supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing, and motion tracking. In the video, it is used to create and manipulate 3D models and animations, which are then combined with AI-generated content to produce a final video product.


💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines programmed to think and act like humans. In this context, AI is used to generate 3D models and animations, such as the character Miles Morales and the skateboard, which are then integrated into the Blender 3D environment.

💡Stable Diffusion

Stable Diffusion is an open-source AI model that generates images from text descriptions. In the video, it underpins the tools the creators use to stylize and enhance the visual content after the 3D animation has been rendered in Blender.

💡Luma Labs

Luma Labs is mentioned in the video as the source of a 'text to 3D generator' called Genie, which is used to create 3D models from textual prompts. This tool is part of the process of generating content that is then used within the Blender 3D software for further development.


💡Mixamo

Mixamo is a platform that provides free character animations and auto-rigging for 3D models. In the video, Mixamo is used to obtain animations for the 3D character, which are then combined with the models in Blender to create a more dynamic final product.


💡Twitch

Twitch is a live streaming platform often used for gaming, art, and other creative content. In the script, Twitch is the platform where the guest stream is taking place, and it serves as the interactive environment where the host and guest demonstrate and discuss the process of combining Blender 3D with AI.


💡Rotoscoping

Rotoscoping is an animation technique where animators trace over motion picture frames to create the illusion of smooth movement. In the video, the term is used to describe a process that is being avoided by using 3D animations and AI to create smooth motion without the need for manual frame-by-frame animation.

💡Miles Morales

Miles Morales is a character from the Spider-Man comic series, used in the video as an example of a 3D model generated through AI and then manipulated in Blender. The character serves as a subject for the animation and demonstrates how AI can be used to create recognizable figures for creative projects.

💡ComfyUI

ComfyUI is a node-based interface for Stable Diffusion that lets users build image- and video-generation workflows. In the video, it is used to enhance the 3D animations created in Blender, adding details and improving the visual quality.

💡DeepMotion

DeepMotion is a tool mentioned in the video that generates character animation from video using AI. It is presented as an alternative to Mixamo, offering another way to create animations for 3D models, which can then be further processed in Blender.

💡IP Adapter

IP Adapter is a technique used in AI image-generation workflows that lets a diffusion model take a reference image as part of the prompt, steering the output toward a particular style or subject within an animation or image.


Enigmatic E, a special guest, shares his expertise on combining 3D and AI in a live stream.

The stream covers the process of creating AI videos using Blender 3D, showcasing a full-circle moment for the host.

Enigmatic E has a background in videography, motion graphics, pixel art, and animation, influencing his interest in mixing 3D with AI.

The benefit of using 3D videos in AI includes the ability to control the camera and avoid complex masking or rotoscoping.

Luma Labs' Genie, a text-to-3D generator, is introduced as a free tool for creating 3D models.

Mixamo is mentioned as a platform to source animations, with the possibility of using AI to create custom animations.

DeepMotion is highlighted as a tool that animates characters from video input, offering another option for creating animations.

The process of mirroring 3D models in Blender for symmetry is demonstrated, which is crucial for rigging and animation.

Blender's capabilities for texturing and material editing are shown to prepare models for AI upscaling.

The importance of starting with a T-pose for the character model to ensure proper rigging in Mixamo is emphasized.

Mixamo's animation tools are used to rig and animate the character, with the process simplified through AI assistance.

Combining multiple animations, such as running and jumping, is explored to create more complex and dynamic sequences.

The use of Blender's compositor to create clean masks for AI processing is demonstrated, enhancing the final render.

An AI model called 'Aura' is used to upscale and enhance the animation, adding vibrance and detail.

The final animation is rendered and prepared for AI processing, showcasing the potential of Blender and AI collaboration.

The stream concludes with a discussion on the creative possibilities opened up by learning Blender for AI video creation.

Enigmatic E encourages viewers to experiment with the tools and techniques shared, and to share their creations.

The live demonstration, despite its challenges, results in a unique and engaging 3D animation, highlighting the potential of AI in creative workflows.