Civitai AI Video & Animation // Motion Brush Img2Vid Workflow! w/ Tyler

18 Apr 2024 · 64:46

TLDR: In this live stream, Tyler from Civitai AI Video & Animation shares a workflow for animating images using a motion brush in ComfyUI with AnimateDiff. The process involves selecting specific parts of an image to animate, such as eyes, hair, or clothing, and then applying motion layers to bring those elements to life. Tyler demonstrates the workflow on several images sent in by the Discord community, showing how to enhance them with anime-style animation. He also discusses the importance of selecting the right motion layers and adjusting settings for the best results. The workflow is credited to VK, who is thanked for allowing its use in the stream. Tyler emphasizes the workflow's low VRAM requirements, making it accessible to users with lower-end graphics cards. The stream concludes with a preview of upcoming guest creator streams, including a conversation with Noah Miller, a pioneer in AI animation.


  • 🎨 The stream is about using a motion brush in ComfyUI to animate parts of images, showcased by Tyler.
  • 🖌️ The workflow pairs the motion brush with AnimateDiff in ComfyUI to bring images to life.
  • 👥 Tyler encourages Discord users to share images for potential animation.
  • 🌟 The stream demonstrates an example of an image transformed using the workflow, with specific parts animated.
  • 📸 The starting image is low resolution, but upscaling later improves the output quality.
  • 🔄 The workflow is low-VRAM-friendly, making it accessible for users with lower-VRAM cards.
  • 🎥 Tyler gives credit to VK, the creator of the workflow, and promotes his Instagram handle.
  • 🔗 The workflow will be uploaded to Tyler's profile after the stream for easy access.
  • 🎞️ The stream includes a variety of test animations using different images and motion layers.
  • 💻 Tyler shares his experience with different models and their impact on the final animation.
  • 📅 The stream concludes with Tyler planning to upload the workflow and previous VODs to YouTube for future reference.

Q & A

  • What is the main topic of the video?

    -The main topic of the video is a workflow demonstration for animating images using a motion brush in ComfyUI with AnimateDiff, focusing on anime-style animations.

  • Who is the presenter of the video?

    -The presenter of the video is Tyler, who is hosting a stream on Civitai AI Video & Animation.

  • What is the purpose of the motion brush in the workflow?

    -The motion brush is used to select specific parts of an image that the user wants to animate, bringing those parts to life in the final video output.

  • What is the significance of the IPAdapter and CLIP Vision models in the workflow?

    -The IPAdapter and CLIP Vision models carry the reference image into the diffusion process: CLIP Vision encodes the image, and the IPAdapter injects that encoding as conditioning, so the animation keeps the look of the source image.

  • How does the AnimateDiff motion scale affect the animation?

    -The AnimateDiff motion scale controls the intensity of motion in the animation. Values below 1.5 may result in underwhelming motion, while values above 1.5 can introduce artifacts that are usually cleaned up during upscaling.
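The rule of thumb above can be written down as a tiny helper. This is purely illustrative: the function name and the 1.5 pivot come from the stream's advice, not from any actual AnimateDiff API.

```python
def describe_motion_scale(scale: float) -> str:
    """Summarize the stream's rule of thumb for the AnimateDiff motion scale."""
    if scale < 1.5:
        return "subtle motion (may look underwhelming)"
    if scale == 1.5:
        return "balanced motion"
    return "strong motion (expect artifacts until upscaling)"

print(describe_motion_scale(1.0))
print(describe_motion_scale(2.0))
```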

  • What is the role of the ControlNet in the workflow?

    -The ControlNet used in the workflow, a 'control GIF' model trained for AnimateDiff, helps smooth out animations and adjust saturation for a more refined output.

  • Why is the Every Journey LCM model preferred for anime-style animations?

    -The Every Journey LCM model is preferred for anime-style animations because it works well with AnimateDiff, maintaining the cartoonish look needed to run animations in the anime style.

  • What is the importance of the frame count box in the workflow?

    -The frame count box determines the number of frames the animation will generate, allowing users to control the length of the final animated output.
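The relationship between frame count, interpolation, and clip length is simple arithmetic. A minimal sketch, where the function name and defaults are illustrative rather than taken from the workflow itself:

```python
def animation_length_seconds(frame_count: int, fps: float = 30.0,
                             interpolation_factor: int = 1) -> float:
    """Clip length after optional frame interpolation.

    interpolation_factor=2 means in-between frames roughly double
    the frame total before playback at the target fps.
    """
    return (frame_count * interpolation_factor) / fps

# 16 generated frames, interpolated 2x, played back at 30 fps:
print(animation_length_seconds(16, fps=30.0, interpolation_factor=2))
```

This is why interpolation matters for short generations: 16 raw frames at 30 fps last barely half a second, while interpolating them doubles the usable length without generating more frames.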

  • How does the 'grow mask with blur' node function in the workflow?

    -The 'grow mask with blur' node expands the mask beyond the area painted by the user and blurs it, creating a smooth falloff that keeps the motion from appearing sharp-edged or fragmented.
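To make that behaviour concrete, here is a dependency-free sketch of the same idea: dilate a binary mask, then blur it so values taper from 1.0 inside the painted area toward 0.0 outside. The real ComfyUI node differs in detail (proper Gaussian blur, tapered growth); this only shows the concept.

```python
def grow_mask_with_blur(mask, grow=2, blur=2):
    """Dilate a binary mask, then box-blur it for a soft edge falloff."""
    h, w = len(mask), len(mask[0])

    def window(y, x, r):
        # All in-bounds cells within a (2r+1)^2 square around (y, x).
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    yield ny, nx

    # Grow: a pixel turns on if any painted pixel lies within `grow`.
    grown = [[1.0 if any(mask[ny][nx] for ny, nx in window(y, x, grow)) else 0.0
              for x in range(w)] for y in range(h)]

    # Blur: average over the window so values fade smoothly at the edge.
    return [[sum(grown[ny][nx] for ny, nx in window(y, x, blur)) /
             sum(1 for _ in window(y, x, blur))
             for x in range(w)] for y in range(h)]
```

A single painted pixel becomes a soft blob whose values fade toward the edges, which is exactly what gives the masked motion its smooth falloff instead of a hard cutoff.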

  • What is the impact of using different motion layers in the workflow?

    -Different motion layers, which are trained on various video footage, can significantly change the motion, saturation, and sharpness of the final animation, providing a variety of effects based on the user's preference.

  • What is the recommended way to upscale the animation for better quality?

    -To upscale the animation for better quality, users right-click the upscaling nodes in the output group and toggle their bypass state, enabling the upscaling pass and adjusting its settings as needed.

  • What is the significance of the VK motion brush workflow?

    -The VK motion brush workflow is a low-VRAM-friendly method created by VK for producing anime-style animations from still images. It's significant because it lets users with lower-VRAM cards run AnimateDiff animations comfortably.



🎨 Introduction to the AI Video and Animation Stream

Tyler, the host, welcomes viewers to the AI video and animation stream, expressing excitement for the day's content. He explains that they will be using a motion brush in ComfyUI with AnimateDiff to animate specific parts of images. Tyler invites viewers from Discord to submit images for animation and shares his recent experience with Spencer's guest stream, which focused on audio-reactive content.


📝 Setting Up the Workflow with IP Adapter and Clip Vision Models

Tyler details the workflow setup, emphasizing the importance of having the correct CLIP Vision and IPAdapter models. He mentions using the latest version of the IPAdapter and shares his settings for a cleaner input. He also introduces the LoRA loader and ControlNet for smoother animations and discusses the use of two different checkpoints for anime-style animations.


🖌️ Painting Key Animation Parts Using the Mask Editor

The host demonstrates how to use the mask editor to paint key parts of the image for animation. He highlights the process of painting the eyes, eyelids, and other features to enhance the animation effect. Tyler also discusses the decision to switch from the popular Hello 2D model to maintain variety in the animations.


🎭 Adjusting Motion Intensity and Mask Settings

Tyler shows how to adjust the motion intensity and mask settings to control the animation effect. He introduces nodes like 'grow mask with blur' to create a smooth falloff in motion and discusses the possibility of inverting the mask for different effects. The host also talks about the impact of motion layers and the importance of selecting the right one.


🖥️ Low VRAM Workflow Efficiency and Interpolation

The host discusses the workflow's efficiency, particularly its low VRAM usage, which is beneficial for users with lower-end graphics cards. He demonstrates the difference between interpolating frames and not, showing the impact on VRAM usage and animation smoothness.


🔄 Iterating Animations with Different Motion Layers

Tyler iterates through different animations, using various motion layers to achieve different effects. He emphasizes the importance of experimentation and trying different combinations to achieve the desired animation results.


🌊 Painting and Animating a Character with an Ocean Background

The host decides to paint and animate an image of a girl with an ocean background. He chooses specific areas to paint, such as the inside of the head and the ocean, to drive the motion in the animation. Tyler uses descriptive prompts to guide the AI in generating the animation.


🔧 Finalizing the Workflow and Preparing for Upload

Tyler finalizes the workflow by removing unnecessary elements and preparing the JSON file for upload. He discusses the process of exporting the workflow, compressing it into a zip file, and tagging it with relevant keywords for easier discovery.


📚 Sharing Resources and Upcoming Guest Creator Streams

The host shares resources, including the workflow page and VK's Instagram link, and encourages viewers to use specific hashtags on Instagram for feature opportunities. He also teases upcoming guest creator streams, including one with Noah Miller, discussing AI animation and the evolution of the field.


🎉 Wrapping Up the Stream and Expressing Gratitude

Tyler concludes the stream by thanking viewers for their participation, highlighting the importance of community sharing in the open-source environment. He reminds viewers of the next stream's schedule and expresses enthusiasm for the upcoming month of guest creators.



💡Motion Brush

Motion Brush is a tool used within the video for animating specific parts of images. It is central to the workflow shared by Tyler, allowing users to bring selected areas of their images to life with movement. In the script, it is used in conjunction with AnimateDiff to animate images, such as making a character's eyes blink or hair blow in the wind.

💡AI Video and Animation

AI Video and Animation refers to the use of artificial intelligence to create video and animated content. In the context of the video, Tyler is using AI tools to animate images, which is the main theme of the stream. The script discusses how these AI tools can transform static images into dynamic animations.


💡Workflow

A workflow in this context is a sequence of steps used to complete a task or process. Tyler shares a specific workflow for animating images using AI. The script outlines the steps involved in this workflow, from preparing the image to the final animation output.

💡AnimateDiff

AnimateDiff is a motion module for Stable Diffusion that turns still-image generations into short animations. It is a key part of the workflow Tyler is demonstrating; the script mentions pairing it with anime-style checkpoints to give the animations a cartoonish look, which is an important aspect of the video's content.


💡Upscaling

Upscaling refers to the process of increasing the resolution of an image or video. In the script, Tyler mentions that the AI workflow includes an upscaling step to clean up artifacts and blurriness from the AI-generated animations, which is crucial for achieving high-quality results.
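For intuition only, nearest-neighbour upscaling, the simplest possible form, looks like the sketch below. The workflow's actual upscalers are model-based (e.g. ESRGAN-style upscale models in ComfyUI) and clean up artifacts rather than just enlarging pixels.

```python
def upscale_nearest(image, scale=2):
    """Enlarge a 2D pixel grid by repeating each pixel `scale` times."""
    return [[image[y // scale][x // scale]
             for x in range(len(image[0]) * scale)]
            for y in range(len(image) * scale)]

print(upscale_nearest([[1, 2], [3, 4]], scale=2))
```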


💡VRAM

Video RAM (VRAM) is the memory used by graphics processing units (GPUs) to store image data. Tyler discusses the VRAM requirements for the AI animation workflow, noting that it is 'low VRAM friendly,' making it accessible to users with less powerful graphics cards. This is important for the video's audience, as it indicates the workflow can be used on a range of hardware.

💡ControlNet

ControlNet is a conditioning model for Stable Diffusion that guides generation using an auxiliary input. Tyler uses a ControlNet trained for AnimateDiff (the 'control GIF' model) to smooth out animations and adjust saturation. It's a critical component in the workflow for achieving more natural and less fragmented motion in the animations.


💡Checkpoint

In the context of the video, a checkpoint is a trained model file loaded into the AI software for a particular style of generation. Tyler discusses using different checkpoints, such as 'Boton LCM' and 'Every Journey LCM,' to achieve different animation effects, which is a key part of the customization process in the workflow.


💡Interpolation

Interpolation is a method used to create intermediate frames between existing frames to increase the smoothness of animations. Tyler talks about using an interpolation node set to 30 frames per second to smooth out the animations, which is an important step for achieving a higher quality output.
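The crudest form of interpolation is a linear cross-fade between consecutive frames. Real interpolation nodes (e.g. RIFE-based ones in ComfyUI) estimate motion instead of blending, but the frame-count arithmetic is the same. A minimal sketch, treating each frame as a flat list of pixel values:

```python
def interpolate_frames(frames, factor=2):
    """Insert linearly blended in-between frames between each pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            # Cross-fade: weight shifts from frame a to frame b.
            out.append([pa * (1 - t) + pb * t for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out
```

With factor=2 a 16-frame generation becomes 31 frames, which is what lets a short clip play back smoothly at 30 frames per second.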

💡Mask Editor

A Mask Editor is a tool used to select and isolate specific parts of an image for editing or animation. In the script, Tyler uses the mask editor to paint over parts of the image that will be animated, such as the character's eyes or hair, which is a fundamental step in the workflow for directing where the animation should occur.


💡VK

VK is the username of the creator of the workflow that Tyler is sharing. VK is credited with creating the workflow and giving permission for Tyler to share it with the audience. VK's contribution is significant as it represents the collaborative and sharing nature of the AI animation community discussed in the video.


Tyler shares a new workflow for animating specific parts of images using a motion brush in ComfyUI with AnimateDiff.

The workflow was created by VK and Tyler has received permission to share it with the community.

The process involves painting key parts of an image to animate them, resulting in a more dynamic final product.

The motion scale of anime diff can be adjusted to control the intensity of the animation.

Different motion layers can be used to achieve various animation effects, such as liquid or fire.

The workflow is low-VRAM-friendly, making it accessible for users with lower-end graphics cards.

Tyler demonstrates the animation of a still image, bringing clouds, hair, and reflective parts to life.

The use of a ControlNet, specifically a 'control GIF' model for AnimateDiff, helps smooth out animations.

The IPAdapter Advanced node is utilized for cleaner inputs in the workflow.

The process can be finicky and requires iteration to find the right motion layer for the desired effect.

Tyler emphasizes the importance of experimenting with different elements to enhance the video creation process.

The final output can be significantly improved by upscaling, which cleans up artifacts and blurriness.

The workflow is showcased through various images, including a dripping head, a girl in front of a flaming house, and a spaghetti-eating character.

The motion brush technique allows for the animation of detailed elements like eyes blinking and hair moving.

Tyler discusses the use of different checkpoints for anime-based animations, such as Every Journey LCM.

The process is demonstrated live, allowing viewers to see the step-by-step animation creation in real-time.

The workflow will be made available on Tyler's Civitai profile after the stream for others to use and experiment with.

Tyler encourages viewers to share their creations on Instagram using a specific hashtag for potential feature and reshare.