Wild AI Video Workflow with Viggle, Kaiber, Leonardo, Midjourney, Gen-2, and MORE!

Theoretically Media
2 Apr 2024 · 11:58

TLDR: In this video, the creator shares an innovative AI filmmaking workflow that covers the entire process from pre-production to generating short films. The workflow is inspired by Gareth Edwards' 2016 film 'Rogue One' and its editor's technique of assembling a feature-length story reel from clips of other movies. The video demonstrates how to use AI tools like Viggle, Midjourney, Leonardo, and Gen-2 to create a hybrid storyboard, animatic, and animation: reference footage is clipped, AI generates the characters and backgrounds, and the output is refined with motion features and standard video-editing techniques. The result is a promising method for filmmakers, offering a more efficient and creative approach to pre-production and short-film creation.

Takeaways

  • 🎬 The video discusses an innovative AI filmmaking workflow that covers pre-production to generating short films.
  • 🚀 The workflow is inspired by Gareth Edwards' film 'Rogue One' and the technique of creating a feature-length story reel from various movie clips.
  • 🌟 The process involves using AI tools like Viggle, Midjourney, Leonardo, and Gen-2 to create a hybrid storyboard, animatic, and animation.
  • 📽 The first step is to clip reference footage, which Viggle 2.0 uses to animate a character model dressed in a specific style.
  • 🧍‍♂️ Midjourney is used to create the main character's model, emphasizing a full-body image in a 9:16 format.
  • 💻 Viggle's '/mix' command combines the character image with a video source, with options for background choice and fine-tuning.
  • 🔍 Leonardo is utilized to refine the character's image, especially when dealing with complex poses or actions.
  • 🤖 Gen-2 is employed to add movement to the background, creating a dynamic scene that complements the character.
  • 🎨 Kaiber is used to stylize and unify the character and background, with its new Motion 3.0 feature for smoother, more stable motion.
  • 🖼️ The final step involves compositing the character and background in a video editor, using techniques like chroma key removal and color correction.
  • 🎶 Audio is generated using sites like Audiogen for crowd chanting and Typecast for dialogue, adding to the cinematic experience.
  • 🔧 While not perfect, the method is considered useful for pre-production and more effective than simply compiling movie clips.

Q & A

  • What is the inspiration behind the AI filmmaking workflow shared in the video?

    -The inspiration comes from the 2016 film 'Rogue One', particularly from an interview with editor Colin Goudie, who discussed creating a feature-length story reel using clips from other movies to determine dialogue needs.

  • What AI tools are used in the described workflow?

    -The workflow utilizes AI tools such as Viggle, Midjourney, Leonardo, Gen-2, and Kaiber, among others, for tasks ranging from image generation to video editing and enhancement.

  • How does the AI tool Viggle contribute to the workflow?

    -Viggle is used to create a hybrid storyboard, animatic, and animation by transferring the motion of reference footage, such as dance moves or other actions, onto the generated characters.

  • What is the significance of using a green screen background in the workflow?

    -A green screen background allows for easier chroma key removal in video editing, which is crucial for compositing the AI-generated character into the desired background.

  • How does the AI tool Kyber help in stabilizing the generated footage?

    -Kaiber's new Motion 3.0 feature helps to stabilize shaky footage, providing a more consistent and less 'warp-y' look compared to previous versions.

  • What is the role of Gen-2 in creating the background for the film?

    -Gen-2 is used to add movement and life to the static background by applying a simple text prompt that moves elements within the scene.

  • How does the video editor, such as Adobe Premiere, contribute to the final composition?

    -Adobe Premiere is used to layer the character and background, apply chroma key removal, and make adjustments such as blurring and color correction to integrate the character seamlessly into the scene.

  • What audio tools were used to create the crowd chanting and dialogue?

    -Audiogen was used to generate crowd chanting, and Typecast with the Frankenstein model was used to create the dialogue for the character.

  • What challenges were faced when using text-to-speech tools for the dialogue?

    -The speaker encountered difficulties with text-to-speech tools, such as poor results from direct speech-to-speech conversions and inconsistencies in the quality of the generated dialogue.

  • What is the final step in the workflow to enhance the cinematic feel of the film?

    -The final step is to add black bars at the top and bottom of the frame to create a faux letterbox effect, which contributes to a more cinematic look (see the sketch after this Q&A).

  • What is the speaker's opinion on the potential of this workflow for feature films?

    -The speaker believes that while the method is not perfect, and its suitability for a full feature film is debatable, it works well for short films and can be more useful and productive for pre-production on large-scale movies or for indie filmmakers.

  • What does the speaker suggest for those interested in learning more about the workflow?

    -The speaker invites viewers to like and subscribe for more workflow videos that will be coming up soon, providing further insights and clarifications.
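
The faux letterbox mentioned above is easy to reproduce outside an editor. Below is a minimal OpenCV sketch that blacks out the top and bottom of each frame; the file names and the 12% bar height are placeholder assumptions, not values from the video.

```python
import cv2

# Hypothetical paths; substitute your own composited footage.
cap = cv2.VideoCapture("composite.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter("letterboxed.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

bar = int(h * 0.12)  # assumed bar height; tune toward a 2.39:1 look
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame[:bar] = 0       # black bar across the top
    frame[h - bar:] = 0   # black bar across the bottom
    out.write(frame)

cap.release()
out.release()
```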

Outlines

00:00

🎬 AI Filmmaking Workflow Introduction

The speaker introduces an AI filmmaking workflow that shows promise from pre-production through generating short films. Inspired by Gareth Edwards' 2016 film Rogue One, the speaker aims to share their experiences and learnings to potentially save time for others interested in trying this workflow. The speaker mentions the use of various AI tools and their intention to provide a comprehensive overview of the process, including what works and what doesn't.

05:00

🌟 Utilizing AI for Character Creation and Scene Development

The speaker discusses the process of using AI tools for character creation and scene development. They mention the use of Viggle 2.0 for dance animations and the creation of a model for the main character. The speaker also talks about the challenges they faced, such as issues with camera movement and the need for a 9:16 format for full-body images. They describe refining the AI-generated content with additional AI tools like Leonardo and Midjourney to improve the results.

10:00

🎞️ Enhancing AI Video Output with Kyber and Background Composition

The speaker explains how they enhanced the AI video output by using Kaiber's new Motion 3.0 feature. They discuss the process of bringing the Viggle output into Kaiber and using prompts to create a consistent character look. The speaker also talks about the importance of background composition, using Gen-2 to add movement and life to the scene. They describe combining the character and background in a video editor, using techniques like chroma keying and color correction to achieve a cohesive final output.

🎧 Audio Generation and Final Filmmaking Touches

The speaker addresses the challenges of generating dialogue and background audio for the AI-generated scene. They share their experience with text-to-speech tools and how they ultimately settled on a free source called Typecast, using the Frankenstein model for voice generation. For the soundtrack, the speaker chose to create a quick 20-second cue in Ableton, using loops to add a cinematic touch. They conclude by reflecting on the overall method, its potential for short films and pre-production, and tease upcoming workflow videos for those interested in this AI filmmaking technique.

Keywords

💡AI filmmaking workflow

AI filmmaking workflow refers to the process of creating films using artificial intelligence tools and techniques. In the video, this workflow is used from pre-production to generating short films, showcasing the potential of AI in the film industry. It is a hybrid approach, combining various AI tools to create a storyboard, animatic, and animation.

💡Viggle

Viggle is an AI character animation tool that maps the motion of reference footage onto a character image. It recently released a 2.0 update; in the video, clipped reference footage is fed into it to generate the character's dance sequence, among other actions.

💡Midjourney

Midjourney is an image generator used to create models for characters in the video. It is important for generating a full-body image in a 9:16 format, which is then used as a reference for further editing and animation. The video uses Midjourney to create a character dressed similarly to Sean Connery in the film 'Zardoz'.

💡Green screen

A green screen is a technology used in film production where a subject is filmed in front of a solid green background, which is later replaced with other images or footage during post-production. In the video, the green screen is used to isolate the main character for easier editing and compositing into different backgrounds.

💡Leonardo

Leonardo is an AI tool used for image-to-image translation, which is utilized in the video to refine and enhance the character's image by using a screenshot as a reference. It helps in creating a more polished and consistent look for the character in the final film.
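
Leonardo itself is a hosted web app, so as a rough local illustration of the same image-to-image idea, the sketch below uses Hugging Face's diffusers library with a public Stable Diffusion checkpoint; the model ID, file names, prompt, and strength value are all assumptions, not Leonardo's actual API or settings.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Open-source stand-in for Leonardo's image-to-image refinement step.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("viggle_screenshot.png").convert("RGB")  # hypothetical frame grab
result = pipe(
    prompt="full-body man in a red costume, stone arena, cinematic lighting",
    image=init,
    strength=0.35,        # low strength keeps the pose while repairing details
    guidance_scale=7.5,
).images[0]
result.save("refined_character.png")
```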

💡Kyber

Kaiber is an AI video generator with a new Motion 3.0 feature that is used to clean up and stabilize the footage generated by Viggle. It is praised for its unique capabilities in creating a cohesive look and reducing shakiness in the generated video sequences.

💡Gen-2

Gen-2 is an AI tool used to add movement and life to the background of the film. It is used to create a dynamic backdrop that moves to the right, giving a sense of depth and realism to the scene. The video uses Gen-2 to enhance the background with a soft depth of field effect.
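
Gen-2's camera-move prompt is a hosted feature with no code path shown in the video; purely as a non-AI stand-in for "move the background to the right", a slow pan across an oversized still can be faked with OpenCV. The resolution, speed, and file names below are assumptions.

```python
import cv2
import numpy as np

bg = cv2.imread("arena_background.png")  # hypothetical still, wider than the output
H, W = 720, 1280                         # assumed output frame size
fps, seconds, speed = 24, 5, 2           # 2 px of rightward drift per frame

out = cv2.VideoWriter("bg_pan.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (W, H))
for i in range(fps * seconds):
    x = min(i * speed, bg.shape[1] - W)  # slide a WxH window across the still
    out.write(np.ascontiguousarray(bg[:H, x:x + W]))
out.release()
```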

💡Chroma key remover

A chroma key remover, such as Adobe Premiere's Ultra Key, is a tool used to remove a specific color (usually green or blue) from the video footage, allowing for the replacement of the background with another image or video. In the video, it is used to composite the character onto the background.
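
In Premiere this is the Ultra Key effect; the same green-screen composite can be sketched per frame with OpenCV. The HSV green range, blur amount, and file names are assumptions to tune by eye, not settings from the video.

```python
import cv2
import numpy as np

fg = cv2.imread("character_frame.png")   # green-screen character frame
bg = cv2.imread("background_frame.png")  # background frame, same dimensions

bg = cv2.GaussianBlur(bg, (9, 9), 0)     # optional soft depth-of-field on the plate

hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))  # assumed green range
mask = cv2.medianBlur(mask, 5)           # soften ragged mask edges

# Wherever the mask flags green, show the background; elsewhere keep the character.
composite = np.where(mask[..., None] > 0, bg, fg)
cv2.imwrite("composite_frame.png", composite)
```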

💡Audio generation

Audio generation refers to the process of creating or synthesizing audio content, such as background sounds or dialogue. In the video, a free site called Audiogen is used to generate crowd chanting for an arena atmosphere, adding to the realism of the scene.
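
The Audiogen site in the video is a hosted tool; as an open-source stand-in for the same text-to-audio idea, Meta's AudioGen model via the audiocraft library can generate a crowd bed from a prompt. The model choice, prompt, and duration are assumptions.

```python
from audiocraft.models import AudioGen
from audiocraft.data.audio import audio_write

# Open-source stand-in for the hosted tool: Meta's text-to-audio model.
model = AudioGen.get_pretrained("facebook/audiogen-medium")
model.set_generation_params(duration=8)  # seconds of audio to generate

wavs = model.generate(["large crowd chanting in a stone arena"])
audio_write("crowd_chant", wavs[0].cpu(), model.sample_rate, strategy="loudness")
```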

💡Text-to-speech

Text-to-speech (TTS) is a technology that converts written text into spoken words. The video discusses using TTS to generate dialogue for the film, with the example of using a clip from Russell Crowe's speech in the film 'Gladiator'. However, the results were not satisfactory, leading to the use of an alternate source called Typecast.
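
Typecast is likewise a hosted service, so purely as an offline illustration of the TTS step (not the Frankenstein voice used in the video), the pyttsx3 library can render a line of dialogue to a file. The rate and the placeholder line are assumptions.

```python
import pyttsx3

# Minimal offline TTS sketch; quality is far below hosted voices like Typecast's.
engine = pyttsx3.init()
engine.setProperty("rate", 150)  # assumed pacing, in words per minute

line = "Are you not entertained?"  # placeholder dialogue echoing the Gladiator clip
engine.save_to_file(line, "dialogue.wav")
engine.runAndWait()
```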

💡Ableton

Ableton is a digital audio workstation used for music production. In the video, it is used to create a quick 20-second cue for the film's soundtrack by using loops. This demonstrates the integration of audio production with the AI filmmaking process.

Highlights

An AI filmmaking workflow is shared, covering pre-production to generating short films using a hybrid approach.

The workflow is inspired by the 2016 film Rogue One and its innovative editing process.

AI tools are used to create a hybrid storyboard, animatic, and animation, taking the concept further than traditional methods.

Viggle 2.0 update is utilized for its improved features in the workflow.

Midjourney is used to create a model for the main character, emphasizing the importance of format and detail.

Viggle's '/mix' command is used for image and video source integration, with background and fine-tuning options.

Viggle's limitations with camera movement are discussed, noting the preference for locked-down shots.

Leonardo is used to address issues with character generation, such as invisible forearms or cropped heads.

Kaiber's new Motion 3.0 feature is highlighted for its unique AI video generation capabilities.

Naming an actor in Kaiber keeps the character's face consistent without necessarily resembling that actor.

Gen-2 is used to add movement and life to the background, creating a dynamic scene.

Kaiber is used to stylize both character and background for a unified and cohesive look.

Adobe Premiere is used for video editing, with tips on using chroma key and other effects to integrate character and background.

Audio generation for crowd chanting is done using the free site Audiogen.

ElevenLabs and Typecast are compared for dialogue generation, with Typecast's Frankenstein model being favored.

Ableton is used to create a quick 20-second cue for the soundtrack.

The workflow is not perfect but is considered watchable for short films and useful for pre-production.

The presenter anticipates incorporating more tools into the workflow and has more videos planned on the subject.