Generate STUNNING SET EXTENSIONS For Your Projects! [2D & 3D | FREE Blender + AI]
Summary
TLDR: This video introduces two AI-driven workflows for creating seamless set extensions in film projects. The first lets users add any object to a selected area of an image, similar to Photoshop's Generative Fill but free and more capable. The second integrates 3D models into footage, matching lighting, color, and style for a realistic result. The video provides a detailed step-by-step guide on installing and using the required software, including setting up ComfyUI and managing models, and shows how to apply these workflows for both 2D and 3D set extensions in film scenes.
Takeaways
- 😀 Two free workflows are introduced for creating set extensions: one for selecting and adding items to images, and the other for integrating 3D models seamlessly.
- 💻 The first workflow functions similarly to Photoshop's Generative Fill, allowing users to select areas of an image and add desired objects.
- 🌐 The second workflow allows for the input of 3D models, considering light direction, color, and style to blend them into footage.
- 🛠 Users need to install ComfyUI, a node-based interface for AI models, and follow a step-by-step guide to download models and tools.
- 🔧 The process includes tracking footage using After Effects, DaVinci Resolve, or Blender for camera tracking, followed by applying the set extension.
- 🌍 A demonstration of transforming a city image into a sci-fi scene with an overgrown spaceship ruin is shown as part of the workflow.
- 🎥 Users can generate high-quality image extensions, blending them with original footage by adjusting prompts, masks, and other settings.
- 🏠 A second 3D workflow is demonstrated for adding a 3D house to footage, using render passes in Blender to integrate AI-generated textures and models.
- 📈 The set extension workflows support scaling, depth passes, and line art for maintaining consistency in image resolution and style.
- 🤖 Users can easily swap textures or prompts to generate different variations of objects, such as turning a cozy farmhouse into a post-apocalyptic shed.
Q & A
What are the two workflows introduced in the video?
-The first workflow allows you to select an area of your image and add anything you want, similar to Photoshop's Generative Fill but better and free. The second workflow lets you input a 3D model, and the AI seamlessly integrates it, considering light direction, colors, and the general style of the original footage.
What are some key tools mentioned for tracking footage?
-The video mentions tools like After Effects for camera tracking, DaVinci Resolve for point tracking, and Blender for full 3D tracking.
What is ComfyUI, and why is it important in the workflow?
-ComfyUI is a node-based interface for Stable Diffusion and other AI models. It is crucial because the workflows introduced rely on ComfyUI to generate and integrate AI-powered set extensions or 3D elements seamlessly.
What models are required to run the workflows?
-The models mentioned include the Wildcard Turbo checkpoint (the base SDXL model), two ControlNet models (MistoLine and a ProMax model), and an 'Ultra' upscale model installed through the Model Manager. These are essential for generating images and blending them with the original footage.
How does the AI handle light direction and realism in the image generation process?
-The AI understands the entire image, including elements like sunlight direction, casting correct shadows, and adjusting black values for distant parts of the image. This helps maintain realism and visual coherence.
How do you generate a set extension using a specific frame in ComfyUI?
-First, export the desired frame, then select the area for the set extension in the mask editor. Input a prompt describing the extension (e.g., 'spaceship ruin'), and the AI generates an image that integrates seamlessly into the original footage.
What steps are taken to avoid visible seams between the original footage and the generated image?
-The workflow uses masks to blur the seams, helping to blend the newly generated image with the original. Line art and reference control nets also help by extracting lines and using the original image as a reference for consistency.
How does the 3D set extension process differ from the 2D one?
-For 3D set extensions, you need to track the 3D camera using software like Blender, model a simple geometry for the extension, and export render passes (e.g., depth and line art) that help the AI generate a 3D image, which is later integrated into the scene.
What is the purpose of render passes in the 3D workflow?
-Render passes, such as the depth pass and line art pass, provide the AI with detailed geometry and spatial information. These passes help the AI accurately place and integrate the 3D extension into the original footage.
How can the generated textures be projected onto 3D models in Blender?
-After generating the texture, you can create a projection shader in Blender, UV project the texture onto the 3D model, and match it to the original footage. This ensures that the texture aligns perfectly with the geometry and camera view.
Outlines
🚀 Introducing Two Free Workflows for Set Extensions
In this section, the creator introduces two innovative workflows designed to enhance set extensions in film projects. The first workflow allows users to select areas of an image and add elements, similar to Photoshop's Generative Fill but free and more powerful. The second workflow enables seamless integration of 3D models into footage, taking into account lighting, colors, and style for realistic results. The creator expresses excitement about the workflows' potential and offers a step-by-step guide for using them, thanking viewers and Patreon supporters before diving into a project example of transforming footage from a small town into a bustling cityscape.
🎬 Step-by-Step: Setting Up the Workflow and Installing ComfyUI
This section details the first steps of the set extension process, focusing on setting up a workflow using AI tools. The creator explains how to extract the frame for editing, track footage, and install ComfyUI (a node-based interface for Stable Diffusion and other AI models). Viewers are guided to download necessary models and tools such as Git and the ComfyUI Manager. The tutorial continues with instructions on configuring the workflow in ComfyUI, installing any missing custom nodes, and setting render modes for better visual output. Lastly, the user is shown how to select areas in the footage for set extension using mask editing tools.
🌇 Generating the Set Extension: Adding Realistic Details
The creator demonstrates how to add specific elements (like a ruined spaceship) to a scene using AI-generated images. By tweaking prompts and settings, they show how to achieve desired results, such as removing unwanted elements from the generated images. The workflow automatically considers light direction, shadows, and atmospheric depth to ensure realism. The explanation continues with a breakdown of how the workflow blends new elements with original footage through scaling, masking, and line art extraction. A low denoising setting is recommended for maintaining consistency in the final result.
🖼️ Enhancing Image Composition: Scaling, Masking, and Control Nets
In this part, the workflow is further explained by covering how scaling down to HD resolution enhances composition when generating images with SDXL models. The workflow uses positive and negative prompts to control what elements appear or disappear. Two groups are used for mask creation: one for blending seams, and another to apply control nets that help blend the new image with the original. Control nets for line art and image references ensure that the generated images match the style of the original footage. The section also explores how different settings, such as denoising and mask thresholds, influence image integration.
🏠 Integrating 3D Models in Blender for Set Extensions
This section explains how to use a 3D model to create set extensions for scenes closer to the camera, using Blender for 3D tracking and rendering. The creator walks viewers through the process of tracking footage, solving camera motion, and setting up a 3D scene by modeling simple geometry. The model is projected into the scene, and additional render passes are used for AI image generation. This part highlights the use of mist and line art render passes, which help the AI understand depth and details, enhancing the realism of the generated image. The workflow is ideal for scenes where objects are integrated with camera movement.
🎨 Creating Render Passes for AI Image Generation
Here, the focus shifts to creating render passes like depth (mist) and line art in Blender, which are used by the AI to generate accurate images. The creator explains how to set up color management and activate passes like freestyle for line art. Viewers are shown how to adjust grayscale values to emphasize parts of the 3D model (like a house) for better AI processing. The workflow also uses UV projections to ensure that generated textures align correctly with 3D geometry. The importance of keeping the workflow flexible and customizable is emphasized, allowing for different levels of detail and creative freedom.
🏡 3D Texture Projection and Integration in Blender
This section dives deeper into the texture projection process in Blender. The creator demonstrates how to apply the AI-generated images as textures on 3D models using UV projection. They also share tips for avoiding issues like texture stretching by increasing subdivisions. The projected textures are then composited with the original footage, seamlessly integrating the AI-generated elements into the final render. Blender’s shading workspace is used to apply emission shaders to the model, ensuring that the textures fit the intended look. This part of the tutorial offers flexibility to adjust the texture as needed for specific scenes.
🎥 Final Touches: Compositing and Rendering in After Effects
In this final step, the creator shows how to take the rendered images and composite them into the original footage using After Effects. They demonstrate masking techniques to blend elements like grass or terrain with the AI-generated model. By adding blur and sharpen effects, the footage is adjusted to match the original’s compression and visual quality. The creator advises against compositing over grass, which can be difficult, but praises the overall result. They also showcase how easily new images can be swapped into the scene to explore different creative possibilities, such as changing a farmhouse into a post-apocalyptic shed.
🏢 Combining 2D and 3D Workflows for Dynamic Set Extensions
This section explores combining 2D set extensions with 3D elements for more complex compositions. The example given is of adding a 2D-generated skyline and then integrating a 3D office building into the same shot. Using Blender, the creator demonstrates how to export render passes for the new 3D element, project textures onto geometry, and seamlessly blend the 2D and 3D elements in After Effects. With rough masks and a few compositing tricks, the result is a highly dynamic and realistic set extension that can be easily adjusted with different prompts or model changes.
🎉 Final Thoughts and Encouragement for Creators
The video concludes with a message of encouragement to viewers, urging them to explore the workflows and create their own set extensions. The creator expresses excitement about the possibilities these techniques unlock and invites users to share their work. The video serves as a deep dive into AI-assisted filmmaking, emphasizing how powerful and accessible these tools can be for enhancing visual projects. The creator also reminds viewers of the exclusive resources available to Patreon supporters, further fostering a community of creators experimenting with AI tools.
Keywords
💡Set Extension
💡ComfyUI
💡Stable Diffusion
💡3D Model Integration
💡ControlNet
💡Camera Tracking
💡Prompt
💡Render Pass
💡Upscaling
💡Masking
Highlights
Developed two AI-based workflows for set extensions: one for adding elements to selected areas of an image and another for integrating 3D models seamlessly.
The AI considers light direction, colors, and the general style of the original footage to integrate 3D models naturally.
Step-by-step guidance is provided on how to set up and use these workflows, starting with footage preparation and tracking.
Introduced ComfyUI, a node-based interface for Stable Diffusion and other AI models, which simplifies workflow setup.
Detailed instructions on installing and setting up ComfyUI, including the necessary downloads such as checkpoints and ControlNet models.
Highlighted the process of masking, which allows adding set extensions by defining specific areas where elements should be generated.
Demonstrated how the workflow accurately integrates elements, considering shadows, sunlight, and atmospheric effects for realistic results.
Described the creation of masks to seamlessly blend newly generated images with the original composition using AI-controlled blurring techniques.
Showcased how a 2D-generated set extension can be upscaled and integrated into the original high-resolution footage without manual compositing.
Provided a 3D matte painting workflow example using Blender, including camera tracking, modeling, and integrating 3D objects with AI-generated textures.
Render passes such as depth and line art are used to transfer scene geometry to the AI, allowing more accurate integration of textures.
The 3D workflow enables quick texture changes, allowing elements to be retextured or altered in appearance through simple adjustments in prompts.
Explained how the workflow can combine 2D and 3D set extensions to create complex scenes, such as adding foreground and background elements seamlessly.
Showed how adjustments to line art in image editors can enhance details, making the AI-generated elements look more broken, aged, or weathered.
Encouraged experimentation and creativity, highlighting that the workflows can generate professional-looking set extensions for movies or other projects with AI.
Transcripts
I've developed two workflows to help you create amazing set extensions for your projects. The first one allows you to select an area of your image and add anything you want, kind of like Photoshop's Generative Fill, but much better and free. And with the second one, you can even input a 3D model and the AI will integrate it seamlessly, taking into account the light direction, the colors, and the general style of the original footage. I'm still blown away by how well this works, and I can't wait to see all the amazing projects that you're going to create with it. So I'm going to show you step by step how to set up and use these free workflows.

But first, let me quickly talk about the sponsor of this video: it's you. Thank you so much for watching these videos, and thank you to my lovely Patreon supporters who make them possible. If you want access to exclusive workflows and resources and an awesome AI Discord community, click the link in the description. And now, let's get started.
Let's say we're working on a movie set in a bustling metropolis like New York, but we only have footage of this town because we found it for free on Pexels. Still, we really want to use it as our establishing shot, so let's use my workflow to transform it. First, use an editing tool of your choice to save out the frame where you want to add the set extension; in my case, that's frame number one. If you want, you can already track your footage. I'm tracking the camera in After Effects, but you can also use the point tracker in DaVinci Resolve or do a full 3D track of your shot in Blender. I'll show you these options in more detail later, but for now, let's just click Track Camera and let After Effects do its thing.
need to install com UI a note-based
interface for stable diffusion and other
AI models and I created this free
stepbystep guide on how to install it
and where to download and put all the
models for everything to work so just go
to the official GitHub page scroll down
and download while that's downloading we
can install git I've already done this
but you just need to install the
Standalone version once comfyi is
downloaded you can put it anywhere you
like and
extracted and this extracted folder is
now your comi directory next you want to
download the com VII manager so just go
to the manager GitHub page scroll down
and right click on this link save link
as and put it inside of your comi folder
click save and once it's downloaded just
run
it you can then click run Nvidia GPU and
comi will start in your browser but now
we need a few models first we need our
checkpoint and this is just the base
model that we are going to use and I'm
using Wild Card turbo for this so just
go to the link right click save link as
go to comi models checkpoints and click
save let's download the first control
net the misto line right click save link
as go back to models control net and I
like to create a new folder for sdxl
models and just click save let's
download the second control net do the
same thing right click save link as and
we're already in the folder just click
save now go to the model manager and
search for Ultra and install this one
right here and that's it let's quickly
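If you'd rather script the downloads than right-click each link, here is a minimal Python sketch. The URLs below are placeholders (use the actual links from the guide), and the folder layout assumes a default ComfyUI install:

```python
# Hypothetical sketch: scripting the model downloads instead of
# right-click > "Save link as". The URLs are placeholders -- substitute
# the real links from the step-by-step guide.
from pathlib import Path
import urllib.request

COMFYUI_DIR = Path("C:/ComfyUI")  # wherever you extracted ComfyUI

downloads = {
    # placeholder URL                                  ->  target subfolder
    "https://example.com/wildcard_turbo.safetensors": "models/checkpoints",
    "https://example.com/mistoline.safetensors":      "models/controlnet/sdxl",
}

for url, subfolder in downloads.items():
    target_dir = COMFYUI_DIR / subfolder
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / url.rsplit("/", 1)[-1]
    print(f"Downloading {url} -> {target}")
    urllib.request.urlretrieve(url, target)
```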
Let's quickly restart ComfyUI. Now you can just drag and drop my workflows into the ComfyUI interface. You'll see that a lot of nodes are missing, but that's not a problem: go to the Manager, click Install Missing Custom Nodes, select all of them, click Install, and wait for it to finish. Once it's done, click Restart and wait for the installation to complete, and our workflow is here and ready to use. First, though, let's go into the settings and change the link render mode to Straight (it just looks a bit cleaner that way), and in the Manager, activate the preview method Latent2RGB.
We're working from left to right, so let's start at the top left corner. Here you just need to drag and drop the frame you exported. Right-click and choose Open in Mask Editor, increase the brush thickness, and paint the area where you want to add the set extension; you can right-click to erase areas. Once you're happy with it, click Save to Node. Scroll down and add a prompt; let's try something crazy here: a spaceship ruin, overgrown, broken, sinking into the ground. Now just click Queue Prompt and the image will start generating. This first image already looks really good, but on the right side it added a mountain that I don't like, so let's just try another seed. And this already looks really cool — yeah, that's exactly what I had in mind.
workflow is that it understands the
whole image so you can see the sunlight
is actually coming from the right
direction it's casting the correct
shadows and even like the parts in the
image that are further away are behind
the atmosphere they have higher black
values which really adds to the realism
that we are going for so you can see
this workflow even though it looks
complicated is actually quite easy to
use but let me quickly walk you through
the whole thing so you really understand
what's happening in the first group here
first we scale the image down to an HD
resolution and that's just because the
composition looks much better with sdxl
when we create the image at an age
resolution so the idea is we scale the
image down and then we generate the
image and later we will upscale it again
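To illustrate the idea (this is not the workflow's exact node math), here's a small sketch that computes an SDXL-friendly working resolution — roughly one megapixel, with dimensions divisible by 8 — from a 4K frame:

```python
# Illustrative sketch: pick an SDXL-friendly working resolution
# (~1 megapixel, dimensions divisible by 8), generate at that size,
# and upscale afterwards.
def working_resolution(width, height, target_pixels=1024 * 1024):
    scale = (target_pixels / (width * height)) ** 0.5
    # Round each side down to the nearest multiple of 8, as SDXL expects.
    w = int(width * scale) // 8 * 8
    h = int(height * scale) // 8 * 8
    return w, h

print(working_resolution(3840, 2160))  # 4K UHD frame -> roughly (1360, 768)
```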
I don't have to say too much about the prompt group: there's a positive prompt for all the things you want to see, and a negative prompt for all the things you don't want in the image. Up here we have two groups that create masks for us. The first one takes the mask you painted and blurs it a little so the seams are not as visible; you can increase the radius if you want, but the default usually works really well. Below that is a mask that selects only the edges of the mask you painted, and this is fed into a line art ControlNet. It extracts the lines from the original image and applies the ControlNet at the edges of your mask, which helps blend the newly generated image with the original composition. Next to that is a reference ControlNet that takes the original image as a reference for the newly generated one, so that the style matches.
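For clarity, here's the edge-mask concept expressed outside ComfyUI, as a hypothetical OpenCV sketch: dilate the painted mask, erode it, and keep the difference, so only a band along the seam remains:

```python
# Conceptual sketch of the "edges of the mask" group (the workflow does
# this with ComfyUI nodes; OpenCV is used here only to show the idea).
import cv2
import numpy as np

mask = cv2.imread("painted_mask.png", cv2.IMREAD_GRAYSCALE)  # 0/255 mask
kernel = np.ones((15, 15), np.uint8)

dilated = cv2.dilate(mask, kernel)
eroded = cv2.erode(mask, kernel)
edge_band = cv2.subtract(dilated, eroded)   # only the seam region stays white

# Soften it, like the blur applied to the blend mask in the workflow.
edge_band = cv2.GaussianBlur(edge_band, (21, 21), 0)
cv2.imwrite("edge_band.png", edge_band)
```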
All of this gets fed into the KSampler, which generates the image; in the next step it is upscaled and composited on top of the original 4K image. You can play around with the denoising value here: a higher denoise adds more detail, but it can also break the scale a little, so I'd recommend keeping it quite low.
This last mask setup looks at all the parts of your image that have changed and tries to isolate them. You can play around with the threshold and the mask erode regions here: if I set the threshold too low, we get all these extra tiny spots, resulting in a messy mask. With good settings, we don't even have to create a manual compositing mask in After Effects. Back on the original footage, I select a tracking marker that's as far away as I want the set extension to be, then import my PNG image. Now we just need to make it a 3D layer: I copy the position of the tracking marker, adjust the orientation so it sits roughly right, and if we now click Play, you can see it stays in the correct place. This workflow works perfectly well for all types of shots where the set extension is really far away, so there's no 3D parallax effect. But what if we want to add something that's closer to the camera?
Look at this shot I took with my phone, for example. I created a 3D track in After Effects, and now I want to add a cozy farmhouse, so I generated this image with ComfyUI. Using the exact same technique, it works okay for a short amount of time, but then it falls apart and you can see it's just a 2D image. So now let's use my 3D matte painting workflow.
First we must track the 3D camera. In Blender, I delete everything, go to the VFX Motion Tracking workspace, and open my clip. I click Prefetch and Set Scene Frames, switch the motion model to Affine, activate Normalize, bump the correlation up to 0.9, and set the margin to 20. Then I go to the start and track the features, add more features at the end, and track in the other direction. Now I should have enough tracks to work with; I delete the ones that look broken and go to Solve. I select two keyframes where I feel the parallax is strongest, maybe 40 and 100, check all the refine options, and click Solve Camera Motion. This gives me a solve error of 0.5, which is actually really, really good. Now I can click Setup Tracking Scene. I select three tracking markers that are on the ground and click Floor, then select one where I want the house to be — I think it's going to be roughly here — and click Set Origin.
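The same tracker settings can also be applied through Blender's Python API; a hedged sketch, assuming the clip is already loaded:

```python
# Sketch: the tracker settings from the video, set via bpy instead of
# the UI (assumes the footage is already opened as a movie clip).
import bpy

clip = bpy.data.movieclips[0]                    # the opened footage
settings = clip.tracking.settings

settings.default_motion_model = 'Affine'         # "switch the motion model to Affine"
settings.use_default_normalization = True        # "activate Normalize"
settings.default_correlation_min = 0.9           # "bump the correlation up to 0.9"
settings.default_margin = 20                     # "margin to 20"
```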
Now comes the fun part: modeling the house. I use simple box modeling for this and try to keep it as simple as possible, because the AI textures will do a lot of the heavy lifting, but you can be as detailed and precise as you like. I'll also leave in this ground plane, because we have hard sunlight and I'm hoping to catch some of the shadows cast by the house, so I can blend the images together better. Now that we have a final 3D model, we need a way to transfer the scene geometry to the AI, and if you've watched my previous videos on generating full 3D environments with AI or rendering with AI, you know we're going to use render passes. First, make sure the scene resolution matches the original footage, then go to the render properties and make sure color management is set to Standard. Next, we want to activate the Mist pass.
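Those three settings take only a few lines of bpy; the 4K resolution below is an assumption, so match it to your own footage:

```python
# A minimal bpy sketch of the same setup: match the footage resolution,
# set color management to Standard, and enable the Mist pass.
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 3840      # assumption: 4K footage -- match yours
scene.render.resolution_y = 2160
scene.view_settings.view_transform = 'Standard'

bpy.context.view_layer.use_pass_mist = True
```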
In my case, I go to the first frame and click Render Image. Then, in the Compositing workspace, I create a Viewer node and connect the Mist pass to it. You can see this is pretty much a depth pass where white pixels are far away and black pixels are close to the camera. We actually want it the other way around, so let's add an Invert node. Now we want to focus all the grayscale information on the house, so I add a curves node and shift the values so that the front is fully white and the back of our geometry disappears into darkness. Finally, we create a File Output node and select a location to save the image. When we now render, we have a really good depth pass.
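Here's roughly the same compositor graph built with bpy — a sketch, with the output path as an assumption:

```python
# Sketch of the compositor chain described above, built with bpy:
# Render Layers (Mist) -> Invert -> RGB Curves -> File Output.
# (Requires the Mist pass enabled, as in the previous snippet.)
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new('CompositorNodeRLayers')
invert = tree.nodes.new('CompositorNodeInvert')
curves = tree.nodes.new('CompositorNodeCurveRGB')   # shift values so the house front is white
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = "//render_passes/depth"             # assumption: save next to the .blend

tree.links.new(rl.outputs['Mist'], invert.inputs['Color'])
tree.links.new(invert.outputs['Color'], curves.inputs['Image'])
tree.links.new(curves.outputs['Image'], out.inputs['Image'])
```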
We could now use this depth pass to generate an image, but the window and the door are really hard to see, so that information might not transfer during image generation. We need to create another render pass: a line art pass. For this, go to Render and activate the Freestyle tool. Next, go to View Layer, scroll down, and under Freestyle activate As Render Pass; then go to Freestyle Color and change it to white. When we now render the image, go to the Compositing workspace, and connect the Viewer to the Freestyle output, we get these really cool outlines. Add an Alpha Over node, connect the Freestyle pass to the second input, make the first one black, and add another File Output node. Finally, go to Render > Film, check Transparent, and add one more File Output node to the alpha output of the image. When we now render, we have all three passes.
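The equivalent Freestyle and film settings in bpy look roughly like this (a sketch of the UI steps above):

```python
# Sketch: line art pass and transparent film set via bpy.
import bpy

scene = bpy.context.scene
scene.render.use_freestyle = True
scene.render.film_transparent = True                 # Render > Film > Transparent

fs = bpy.context.view_layer.freestyle_settings
fs.as_render_pass = True                             # Freestyle as its own render pass
for lineset in fs.linesets:
    lineset.linestyle.color = (1.0, 1.0, 1.0)        # white lines
```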
Now let's switch back to ComfyUI and import the 3D version of the workflow. Using it is really simple: import the alpha render pass here and the original frame here; down here, you again add the prompt for what you want to generate, in this case a cozy farmhouse. Then add the line art pass here and the depth pass here. Again, make sure the Wildcard model is selected here, the MistoLine model here, and the ProMax model here. Then just click Queue Prompt and it will start generating a house. This one is integrating really well, but it's not quite what I had in mind, so let's try a few more seeds. Oh yes, this is more like it — you can already see it's integrating really well, but the resolution is quite low.
So the next step upscales this image; at the moment it's set to 2x, but we can change that to 4x. Again, you can play with the denoise value depending on how creative you want the upscaler to be when generating the images. You can also play with the ControlNet strengths down here. Generally you want to keep them quite low, because that allows for more creative image generation, but the lower you set them, the less the result will stick to your original geometry, so you kind of have to find a middle ground. As you can see, these values usually work really well as a starting point.
Now we can switch back to Blender. I select my house and create a new shader — let's call it 'projection'. In the Shading workspace, I delete the Principled BSDF, create an Emission shader, and connect it; then I just drag and drop the upscaled image in and connect it to the color input. Back in Layout, make sure you're on the right frame — I created the render passes on the first frame, so I go to frame one — enter Edit Mode, select all the faces, and click UV Project from View. Now the texture matches up perfectly. We can repeat the same steps for the rest of the geometry: for the floor, I also switch to the projection shader, go to Edit Mode, select everything, and click Project from View. If you get weird stretching or it doesn't look correct, it's probably because you don't have enough subdivisions, but you can always add more and re-project.
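Here's a sketch of that projection material in bpy; the image path is a placeholder, and the UV projection itself still needs the camera view active:

```python
# Hedged sketch of the projection material: an Image Texture feeding an
# Emission shader, as built in the video's Shading workspace.
import bpy

mat = bpy.data.materials.new("projection")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.remove(nodes["Principled BSDF"])               # delete the default shader

tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//upscaled_farmhouse.png")  # placeholder path

emit = nodes.new('ShaderNodeEmission')
links.new(tex.outputs['Color'], emit.inputs['Color'])
links.new(emit.outputs['Emission'], nodes['Material Output'].inputs['Surface'])

bpy.context.object.data.materials.append(mat)        # assign to the selected mesh
# Then, in Edit Mode with all faces selected and the camera view active:
# bpy.ops.uv.project_from_view()
```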
Next, make sure your Composite output is connected to the image down here, and render out the sequence. Oh, and if it's really slow like this, make sure to deactivate the Freestyle tool; you can also press M to mute all the other File Output nodes, because we don't need the render passes for every frame. Now when I click Render again, it's a lot faster. I then brought the footage over to After Effects and created a rough mask for the ground plane just by keying the grass and blurring the edges. I also added blur and sharpen effects to match the iPhone's extreme compression. I highly recommend you don't try to integrate something into grass — that was really annoying — but you get the idea: I think the effect looks really cool, and with some extra compositing it could look amazing.
Another cool thing about this workflow is that we can very quickly try out different textures. Let's say we don't want this cozy-looking farmhouse but a post-apocalyptic shed instead. I just go back into ComfyUI, change the prompt, generate a new image, switch it out in Blender, re-render the sequence, and then drag and drop it onto the previous clip in After Effects — and it's already integrated. This looks really good, but let's say we want to add more detail. We can simply do that by opening the line art in an image editor like Photoshop and painting new lines on top of the image. I then switch out the image in ComfyUI and generate a new one, and now the shed looks really, really broken.
And of course, we can also combine these two workflows, adding a 2D set extension for the background and a 3D set extension for the foreground. Let's go back to the shot where we added the skyline. It looks really good, but say we want an office building in front of it — a giant office tower. We just add a few boxes to the scene, export our render passes, throw them into ComfyUI, add a prompt for an office building, generate a few images, select one, project it in Blender, render the sequence, throw it into After Effects, create some rough masks for the foreground — and it just looks really cool.
So I hope you're as excited as I am about this technique, and I hope you try it out. If you like these AI deep dives, want to support my work, and want access to exclusive example files — like the Blender files — consider supporting me on Patreon. Thank you very much for watching. I hope you create something amazing with these workflows; make sure to tag me in your work or share it with me, I always love to see what you come up with. See you next time for the next AI workflow.