Wild AI Video Workflow with Viggle, Kaiber, Leonardo, Midjourney, Gen-2, and MORE!
TLDR: In this video, the creator shares an innovative AI filmmaking workflow that covers the entire process from pre-production to generating short films. The workflow is inspired by Gareth Edwards' 2016 film 'Rogue One', specifically the technique of creating a feature-length story reel from clips of other movies. The video demonstrates how to use AI tools like Viggle, Midjourney, Leonardo, and Gen-2 to create a hybrid storyboard/animatic/animation. The process involves clipping reference footage, using AI to generate characters and backgrounds, and refining the output with motion features and video editing techniques. The result is a promising method for filmmakers, offering a more efficient and creative approach to pre-production and short-film creation.
Takeaways
- The video discusses an innovative AI filmmaking workflow that covers pre-production through generating short films.
- The workflow is inspired by Gareth Edwards' film 'Rogue One' and the technique of creating a feature-length story reel from clips of other movies.
- The process involves using AI tools like Viggle, Midjourney, Leonardo, and Gen-2 to create a hybrid storyboard/animatic/animation.
- The first step is to clip reference footage and use Viggle 2.0 to generate a character model dressed in a specific style.
- Midjourney is used to create the main character's model, emphasizing a full-body image in 9:16 format.
- Viggle's '/mix' command is used to combine the character shot with a video source, with options for background and fine-tuning.
- Leonardo is utilized to refine the character's image, especially when dealing with complex poses or actions.
- Gen-2 is employed to add movement to the background, creating a dynamic scene that complements the character.
- Kaiber is used to stylize and unify the character and background, with its new Motion 3.0 feature for smoother transitions.
- The final step involves compositing the character and background in a video editor, using techniques like chroma key removal and color correction.
- Audio is generated using sites like Audiogen for crowd chanting and Typecast for dialogue, adding to the cinematic experience.
- While not perfect, the method is considered useful for pre-production and more effective than simply compiling movie clips.
Q & A
What is the inspiration behind the AI film making workflow shared in the video?
-The inspiration comes from the 2016 film 'Rogue One', particularly from an interview with editor Colin Goudie, who discussed creating a feature-length story reel using clips from other movies to determine dialogue needs.
What AI tools are used in the described workflow?
-The workflow utilizes AI tools such as Viggle, Midjourney, Leonardo, Gen-2, and Kaiber, among others, for tasks ranging from image generation to video editing and enhancement.
How does the AI tool Viggle contribute to the workflow?
-Viggle is used to create the hybrid storyboard/animatic/animation: it takes reference footage and transfers its motion, such as dance moves or other actions, onto the generated character.
What is the significance of using a green screen background in the workflow?
-A green screen background allows for easier chroma key removal in video editing, which is crucial for compositing the AI-generated character into the desired background.
How does the AI tool Kyber help in stabilizing the generated footage?
-Kaiber's new Motion 3.0 feature helps stabilize shaky footage, producing a more consistent, less 'warp-y' look than previous versions.
What is the role of Gen-2 in creating the background for the film?
-Gen-2 is used to add movement and life to the static background by applying a simple text prompt that animates elements within the scene.
How does the video editor, such as Adobe Premiere, contribute to the final composition?
-Adobe Premiere is used to layer the character and background, apply chroma key removal, and make adjustments such as blurring and color correction to integrate the character seamlessly into the scene.
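For readers who want to prototype this compositing step outside of an editor like Premiere, here is a minimal sketch using ffmpeg's chromakey and overlay filters driven from Python. This is not the creator's actual Premiere setup; the file names, key color, and tolerance values are assumptions to adjust for your footage.

```python
import subprocess

# Hypothetical inputs: an AI-generated background clip and a
# green-screen character clip exported from Viggle.
background = "background.mp4"
character = "character_greenscreen.mp4"
output = "composite.mp4"

# chromakey removes the green backdrop from the character layer,
# a light gaussian blur softens the matte edge, and overlay
# composites the keyed character over the moving background.
filter_graph = (
    "[1:v]chromakey=0x00FF00:0.15:0.05,gblur=sigma=0.5[fg];"
    "[0:v][fg]overlay=(W-w)/2:H-h[out]"
)

subprocess.run(
    ["ffmpeg", "-y", "-i", background, "-i", character,
     "-filter_complex", filter_graph,
     "-map", "[out]", "-c:v", "libx264", output],
    check=True,
)
```

Raising the similarity value (0.15 here) keys out more of the green at the risk of eating into the character's edges, which is the same trade-off the video describes making visually in Premiere.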
What audio tools were used to create the crowd chanting and dialogue?
-Audiogen was used to generate crowd chanting, and Typecast with the Frankenstein model was used to create the dialogue for the character.
What challenges were faced when using text-to-speech tools for the dialogue?
-The speaker encountered difficulties with text-to-speech tools, such as poor results from direct speech-to-speech conversions and inconsistencies in the quality of the generated dialogue.
What is the final step in the workflow to enhance the cinematic feel of the film?
-The final step is to add black bars at the top and bottom of the film to create a faux letterbox effect, which contributes to a more cinematic look.
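This letterbox effect is also easy to script; here is a minimal sketch using ffmpeg's drawbox filter (the file names and bar height are assumptions):

```python
import subprocess

# Draw filled black bars over the top and bottom ~12% of the frame
# to fake a widescreen letterbox without re-cropping the footage.
bars = (
    "drawbox=x=0:y=0:w=iw:h=ih*0.12:color=black:t=fill,"
    "drawbox=x=0:y=ih*0.88:w=iw:h=ih*0.12:color=black:t=fill"
)

subprocess.run(
    ["ffmpeg", "-y", "-i", "composite.mp4", "-vf", bars,
     "-c:a", "copy", "letterboxed.mp4"],
    check=True,
)
```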
What is the speaker's opinion on the potential of this workflow for feature films?
-The speaker believes that while the method is not perfect, and its suitability for a full feature film is debatable, it works well for short films and can be useful and productive for pre-production on large-scale movies or for indie filmmakers.
What does the speaker suggest for those interested in learning more about the workflow?
-The speaker invites viewers to like and subscribe for more workflow videos that will be coming up soon, providing further insights and clarifications.
Outlines
π¬ AI Filmmaking Workflow Introduction
The speaker introduces an AI filmmaking workflow with potential for everything from pre-production to generating short films. Inspired by Gareth Edwards' 2016 film Rogue One, the speaker aims to share their experiences and learnings to save time for others interested in trying this workflow. The speaker mentions the use of various AI tools and their intention to provide a comprehensive overview of the process, including what works and what doesn't.
π Utilizing AI for Character Creation and Scene Development
The speaker discusses the process of using AI tools for character creation and scene development. They mention the use of Viggle 2.0 for dancing animations and the creation of a model for the main character. The speaker also talks about the challenges they faced, such as issues with camera movement and the need for a 9:16 format for full-body images. They describe refining the AI-generated content with additional tools like Leonardo and Midjourney to improve the results.
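Since the workflow wants a full-body character image in a strict 9:16 frame, it can help to center-crop renders programmatically; here is a minimal Pillow sketch (the file names and center-crop strategy are assumptions, not part of the creator's stated workflow):

```python
from PIL import Image

# Center-crop a character render to a 9:16 portrait frame so the
# full body fits the expected aspect ratio. Filenames are placeholders.
img = Image.open("character.png")
w, h = img.size

target_ratio = 9 / 16
if w / h > target_ratio:
    # Too wide: trim the sides equally.
    new_w = int(h * target_ratio)
    left = (w - new_w) // 2
    img = img.crop((left, 0, left + new_w, h))
else:
    # Too tall: trim the top and bottom equally.
    new_h = int(w / target_ratio)
    top = (h - new_h) // 2
    img = img.crop((0, top, w, top + new_h))

img.save("character_9x16.png")
```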
ποΈ Enhancing AI Video Output with Kyber and Background Composition
The speaker explains how they enhanced the AI video output using Kaiber's new Motion 3.0 feature. They discuss bringing the Viggle output into Kaiber and using prompts to create a consistent character look. The speaker also talks about the importance of background composition, using Gen-2 to add movement and life to the scene. They describe combining the character and background in a video editor, using techniques like chroma keying and color correction to achieve a cohesive final output.
π§ Audio Generation and Final Filmmaking Touches
The speaker addresses the challenges of generating dialogue and background audio for the AI-generated scene. They share their experience with text-to-speech tools, ultimately settling on a free service called Typecast and using its Frankenstein model for voice generation. For the soundtrack, the speaker created a quick 20-second cue in Ableton, using loops to add a cinematic touch. They conclude by reflecting on the overall method, its potential for short films and pre-production, and tease upcoming workflow videos for those interested in this AI filmmaking technique.
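The final audio bed of chant, dialogue, and music cue can be roughed together programmatically as well; here is a minimal sketch using the pydub library (the stem file names, gain offsets, and timing are assumptions):

```python
from pydub import AudioSegment

# Hypothetical stems: crowd chant from Audiogen, dialogue from
# Typecast, and a 20-second music cue bounced out of Ableton.
chant = AudioSegment.from_file("crowd_chant.wav") - 8     # duck 8 dB under dialogue
dialogue = AudioSegment.from_file("dialogue.wav")
cue = AudioSegment.from_file("cue.wav") - 4               # music bed, 4 dB down

# Layer the stems: music as the bed, chant throughout,
# dialogue entering two seconds in.
mix = cue.overlay(chant).overlay(dialogue, position=2000)
mix.export("scene_audio.wav", format="wav")
```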
Keywords
AI filmmaking workflow
Viggle
Midjourney
Green screen
Leonardo
Kaiber
Gen-2
Chroma key removal
Audio generation
Text-to-speech
Ableton
Highlights
An AI filmmaking workflow is shared, covering pre-production to generating short films using a hybrid approach.
The workflow is inspired by the 2016 film Rogue One and its innovative editing process.
AI tools are used to create a hybrid storyboard/animatic/animation, taking the concept further than traditional methods.
Viggle 2.0 update is utilized for its improved features in the workflow.
Midjourney is used to create a model for the main character, emphasizing the importance of format and detail.
Viggle's '/mix' command is used to combine the character image with a video source, with background and fine-tuning options.
Viggle's limitations with camera movement are discussed, noting the preference for locked-down shots.
Leonardo is used to address issues with character generation, such as invisible forearms or cropped heads.
Kaiber's new Motion 3.0 feature is highlighted for its unique AI video generation capabilities.
Naming an actor in Kaiber helps keep the character's face consistent without the result necessarily resembling that actor.
Gen-2 is used to add movement and life to the background, creating a dynamic scene.
Kaiber is used to stylize both character and background for a unified, cohesive look.
Adobe Premiere is used for video editing, with tips on using chroma key and other effects to integrate character and background.
Audio generation for crowd chanting is done using the free site Audiogen.
ElevenLabs and Typecast are compared for dialogue generation, with Typecast's Frankenstein model being favored.
Ableton is used to create a quick 20-second cue for the soundtrack.
The workflow is not perfect but is considered watchable for short films and useful for pre-production.
The presenter anticipates incorporating more tools into the workflow and has more videos planned on the subject.