Create high-quality deepfake videos with Stable Diffusion (Mov2Mov & ReActor)

AI Lab Tutorial
14 Jan 2024 · 06:49

TLDR: UTA Akiyama demonstrates how to create high-quality deepfake videos using Stable Diffusion with the Mov2Mov and ReActor extensions. After downloading and installing them, Akiyama guides viewers through converting a video into a series of images and then reassembling it with AI-generated visuals. The tutorial covers selecting a model, uploading the original video, adjusting settings such as the sampling method and denoising strength, and using ReActor to replace faces without distortion. Akiyama emphasizes the accuracy and naturalness of the final result, encouraging viewers to explore the potential of Stable Diffusion not just for video, but also for text-to-image content.


  • 🎬 The video introduces how to create high-quality deepfake videos using Stable Diffusion with the Mov2Mov and ReActor extensions.
  • 🔍 UTA Akiyama, the presenter, previously introduced the face swap technique Roop and now demonstrates its improved successor, ReActor.
  • 📥 To get started, install the Mov2Mov and ReActor extensions from the provided URLs.
  • 🔄 After installing the extensions, restart Stable Diffusion to complete the installation.
  • 🌟 Mov2Mov is an extension that converts each video frame into an image, processes it, and stitches the frames back together into a new video.
  • 🖼️ ReActor is a face swap extension, installed in the same way as Mov2Mov.
  • 🎭 For this video, the 'Beautiful Realistic' model is selected, which is well suited to generating Asian-style visuals.
  • 📁 Upload the original video and choose a sampling method, such as DPM++ 2M Karras, for the generation process.
  • 📐 Adjust the width and height to match the original video size for consistency.
  • ⚙️ Set the denoising strength to zero when using ReActor, so only the face changes and the rest of the video is untouched.
  • 🧑‍🤝‍🧑 In ReActor, upload the new face image and enable gender detection and other features to refine the face swap.
  • ✅ The final deepfake video is generated accurately, without face collapse, and can be downloaded for storage.
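The installation step above can be sketched in code. This is a hedged illustration of what the Extensions tab's "Install from URL" does under the hood: each extension is a Git repository cloned into the WebUI's `extensions/` folder. The paths and repository URLs below are assumptions; use the links from the video's summary column.

```python
# Sketch only — paths and repo URLs are assumptions, not taken from the video.
import subprocess
from pathlib import Path

# Commonly used repositories for the two extensions (verify before use).
EXTENSIONS = {
    "sd-webui-mov2mov": "https://github.com/Scholar01/sd-webui-mov2mov",
    "sd-webui-reactor": "https://github.com/Gourieff/sd-webui-reactor",
}

def install(ext_dir: Path) -> list[str]:
    """Clone each extension repo into ext_dir if it is not already present."""
    ext_dir.mkdir(parents=True, exist_ok=True)
    cloned = []
    for name, url in EXTENSIONS.items():
        target = ext_dir / name
        if not target.exists():
            subprocess.run(["git", "clone", url, str(target)], check=True)
            cloned.append(name)
    return cloned

# Example (uncomment to run against a real install):
# install(Path.home() / "stable-diffusion-webui" / "extensions")
# Restart the WebUI afterwards so the new tabs appear.
```

Cloning manually and installing through the Extensions tab are equivalent; either way, a restart of Stable Diffusion is needed before the new tabs show up.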

Q & A

  • What is the main topic of the video?

    -The main topic of the video is how to create high-quality deepfake videos using Stable Diffusion with the Mov2Mov and ReActor extensions.

  • Who is the presenter of the video?

    -The presenter of the video is UTA Akiyama.

  • What is the first step in creating a deepfake video as described in the video?

    -The first step is to download and install the Mov2Mov and ReActor extensions in Stable Diffusion.

  • How can viewers access the links for downloading the extensions?

    -Viewers can find the download links for the extensions in the summary column of the video.

  • What is the purpose of the Mov2Mov extension?

    -The Mov2Mov extension converts the original video into one image per frame and creates a new video by joining those images back together.

  • What is the role of the ReActor extension in the process?

    -The ReActor extension handles the face swap, allowing the user to replace faces while keeping a natural look.

  • What model does UTA Akiyama use for creating Asian style visuals?

    -UTA Akiyama uses the 'Beautiful Realistic' model for creating Asian style visuals.

  • How does the sampling method DPM++ 2M Karras affect the video creation process?

    -DPM++ 2M Karras is the sampler the presenter selects; it determines how each new video frame is generated.

  • How does changing the width and height in the resize settings affect the final video?

    -Matching the width and height to the original video ensures the generated video keeps consistent dimensions.

  • What is the significance of the 'denoising strength' setting?

    -The 'denoising strength' setting determines how closely the generated video will resemble the original. A lower value results in a more faithful reproduction.

  • How does the CodeFormer feature in ReActor work?

    -CodeFormer is a face restoration model in ReActor that maintains the structure of the image and cleans up blur, which is particularly useful when the face is blurred.

  • Where can the final processed deepfake video be downloaded from?

    -The final processed deepfake video can be downloaded from the Mov2Mov tab in Stable Diffusion after processing is complete.



🎥 Introduction to High-Quality Deepfake Video Creation

UTA Akiyama introduces the process of creating high-quality deepfake videos using Stable Diffusion with the Mov2Mov and ReActor extensions. The video covers downloading and installing these extensions, launching Stable Diffusion, and using Mov2Mov to convert a video into frames and back. ReActor handles the face swap, and the video provides a step-by-step guide to using these tools, including setting up prompts and the sampling method and adjusting the video dimensions and denoising strength. The process concludes with generating the video and downloading it for storage.
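The frame-by-frame pipeline described above can be sketched as a minimal model. This is not Mov2Mov's actual implementation; frames are modeled as plain strings, and `process_frame` is a stand-in for running img2img with ReActor on each extracted image.

```python
# Conceptual model of the Mov2Mov pipeline: split a video into frames,
# process every frame, then join the processed frames back into a video.
# Frames are strings here purely for illustration — real frames are images.

def split_into_frames(video: list[str]) -> list[str]:
    """Step 1: Mov2Mov converts the video into one image per frame."""
    return list(video)

def process_frame(frame: str) -> str:
    """Step 2 (stand-in): at denoising strength 0, img2img + ReActor
    change only the face and leave the rest of the frame untouched."""
    return frame.replace("face:A", "face:B")

def reassemble(frames: list[str]) -> list[str]:
    """Step 3: the processed frames are joined into a new video."""
    return frames

video = ["frame1 face:A", "frame2 face:A", "frame3 face:A"]
new_video = reassemble([process_frame(f) for f in split_into_frames(video)])
# new_video == ["frame1 face:B", "frame2 face:B", "frame3 face:B"]
```

The key point the sketch captures is that the face swap is applied independently to every frame, which is why keeping all other settings faithful to the original video matters for temporal consistency.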


🔧 Customizing Deepfake Video Settings and Reviewing Results

The second section delves into the customization options for deepfake video creation. It discusses selecting an appropriate model for generating realistic visuals, setting the sampling method, and adjusting the video dimensions to match the original. The focus then shifts to ReActor for face swapping: the video explains how to upload a face image, enable gender detection, and use features like Restore Face for natural-looking results. The section concludes with the completion of the video creation process, a review of the results for accuracy, and instructions on downloading the final deepfake video.
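The settings chosen in the UI can equivalently be expressed as an API payload. The sketch below assumes the AUTOMATIC1111 WebUI's `/sdapi/v1/img2img` field names; the helper function and example values are illustrative, and ReActor/Mov2Mov options are configured through their own tabs or script sections rather than these base fields.

```python
# Hedged sketch: the tutorial's img2img settings as a WebUI API-style payload.
# Field names follow the AUTOMATIC1111 /sdapi/v1/img2img schema (assumption).

def build_img2img_payload(width: int, height: int, seed: int = -1) -> dict:
    """Mirror the tutorial's choices: match the source video's size and
    keep denoising at 0 so only ReActor alters each frame."""
    return {
        "prompt": "",                       # prompts barely matter at denoising 0
        "sampler_name": "DPM++ 2M Karras",  # the sampler picked in the video
        "width": width,                     # match the original video dimensions
        "height": height,
        "denoising_strength": 0.0,          # reproduce the frame faithfully
        "seed": seed,                       # -1 lets the WebUI pick a random seed
    }

payload = build_img2img_payload(512, 768)
```

Setting `denoising_strength` to 0 is what prevents the background and body from drifting between frames, leaving the face swap as the only visible change.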




💡Deepfake

Deepfake refers to the use of artificial intelligence to create convincing fake videos or audio recordings of a person, often used to manipulate or deceive. In the video, the main theme revolves around creating high-quality deepfake videos using specific software and techniques, which is central to the content being discussed.

💡Stable Diffusion

Stable Diffusion is an open-source text-to-image diffusion model; in the video the term refers to the Stable Diffusion Web UI used to generate synthetic media. The video provides a tutorial on how to use it, together with its extensions, to create deepfake videos.


💡Roop

Roop is a face swap technique that the presenter has previously introduced, in which one person's face is replaced with another's. The video presents Roop as the precursor to the more advanced ReActor method being discussed.


💡ReActor

ReActor is an extension used within the Stable Diffusion platform to improve the face swap process. It detects gender and replaces faces in the video, making the deepfake appear more natural and realistic.

💡Mov2Mov

Mov2Mov is an extension that converts the original video into one image per frame and creates a new video by joining those images back together. In the video it illustrates how the AI generates an image for every frame, much like img2img applied frame by frame.

💡Sampling Method

The sampling method determines how the AI denoises and generates each frame of the deepfake video. DPM++ 2M Karras is the sampler chosen by the presenter, a commonly used sampling algorithm in Stable Diffusion.

💡Denoising Strength

Denoising strength is a parameter that controls how far the generated image may deviate from the original. A lower value results in a more faithful reproduction of the original video, while a higher value produces a more stylized or different look. In the video it is set to zero so that the original video passes through unchanged apart from the face swap.

💡Seed Value

The seed value is a setting that determines the starting point for the random number generation used in the AI's processing. It can influence the outcome of the generated video. In the video, the presenter chooses not to set a seed value, allowing the AI to generate the video without specific initial conditions.
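The role of the seed can be illustrated with Python's `random` module standing in for the diffusion sampler's noise source; this is an analogy, not how Stable Diffusion itself draws noise.

```python
# Why a fixed seed gives reproducible output: the seed fully determines the
# pseudo-random "noise" the generation starts from.
import random

def noise(seed: int) -> list[float]:
    """Stand-in for the sampler's initial noise, seeded deterministically."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

assert noise(42) == noise(42)  # same seed -> identical noise -> identical image
assert noise(42) != noise(43)  # different seed -> different starting noise
```

Leaving the seed at -1, as the presenter does, means a fresh random seed is drawn each run, so repeated generations will differ.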

💡ControlNet

ControlNet is a feature mentioned in the video that guides the AI's image generation process. The presenter chooses not to use it in this demonstration, suggesting it is an optional tool for more advanced or specific kinds of video manipulation.

💡Gender Detection

Gender Detection is a feature within the Reactor tool that identifies the gender of the faces in the video. It is used to ensure that the face swap process aligns with the gender of the original subject, contributing to the authenticity of the deepfake.

💡Restoration Model

A restoration model, such as CodeFormer mentioned in the video, is used to correct and improve the quality of the generated images, particularly when the face appears blurred. It helps maintain the structural integrity of the image and removes artifacts or distortions.


UTA Akiyama introduces how to create high-quality deepfake videos with Stable Diffusion.

The face swap technique Roop is mentioned, with an emphasis on its improved successor, ReActor.

The video also uses the Mov2Mov extension for video creation.

Instructions on downloading the Mov2Mov and ReActor extensions are provided.

The process of launching Stable Diffusion and navigating to the Extensions tab is detailed.

Mov2Mov is installed first, with a link provided in the summary column for easy access.

After installation, the user is guided to restart Stable Diffusion to complete the setup.

The ReActor face swap extension (sd-webui-reactor) is installed using a similar procedure.

The installation is confirmed by the appearance of the ReActor option under the Mov2Mov tab.

The model 'Beautiful Realistic' is chosen for creating Asian style visuals.

The original video is uploaded for face generation, with the sampling method set to DPM++ 2M Karras.

The width and height are adjusted to match the original video dimensions for consistency.

The denoising strength is set to zero to accurately reproduce the original video.

ReActor is used to change the face in the video without causing the face to collapse.

The face to be swapped in is uploaded as a single source image, itself also generated with the Beautiful Realistic model.

The gender detection and face restoration features of ReActor are explained.

The CodeFormer restoration model is selected to handle blurred faces.

The processing progress can be monitored in Google Colab.

The final deepfake video is accurately created with the face replaced without any collapse.

The video can be downloaded from the Stable Diffusion Web UI for further use.

The video concludes with an invitation to like, subscribe, and try generating images with Stable Diffusion.