Generate STUNNING SET EXTENSIONS For Your Projects! [2D & 3D | FREE Blender + AI]

Mickmumpitz
16 Sept 2024 · 17:16

Summary

TLDR: This video introduces two AI-driven workflows for creating seamless set extensions in film projects. The first lets users add any object to a selected area of an image, similar to Photoshop's Generative Fill but more capable and free. The second integrates 3D models into footage, taking lighting, color, and style into account for a realistic result. The video provides a detailed step-by-step guide to installing and using the required software, including setting up ComfyUI and managing models, and shows how to apply these workflows for both 2D and 3D set extensions in film scenes.

Takeaways

  • 😀 Two free workflows are introduced for creating set extensions: one for selecting and adding items to images, and one for integrating 3D models seamlessly.
  • 💻 The first workflow functions similarly to Photoshop's Generative Fill, allowing users to select areas of an image and add desired objects.
  • 🌐 The second workflow takes a 3D model as input and considers light direction, color, and style to blend it into the footage.
  • 🛠 Users need to install ComfyUI, a node-based interface for AI models, and follow a step-by-step guide to download the required models and tools.
  • 🔧 The process includes tracking the footage (camera tracking in After Effects, point tracking in DaVinci Resolve, or a full 3D track in Blender) before applying the set extension.
  • 🌍 A demonstration of transforming a town shot into a sci-fi scene with an overgrown spaceship ruin is shown as part of the workflow.
  • 🎥 Users can generate high-quality image extensions and blend them with the original footage by adjusting prompts, masks, and other settings.
  • 🏠 A second, 3D workflow is demonstrated for adding a 3D house to footage, using render passes in Blender to integrate AI-generated textures and models.
  • 📈 The set extension workflows use scaling, depth passes, and line art to maintain consistency in image resolution and style.
  • 🤖 Textures and prompts can easily be swapped to generate different variations of an object, such as turning a cozy farmhouse into a post-apocalyptic shed.

Q & A

  • What are the two workflows introduced in the video?

    -The first workflow allows you to select an area of your image and add anything you want, similar to Photoshop's Generative Fill but better and free. The second workflow lets you input a 3D model, and the AI seamlessly integrates it, considering light direction, colors, and the general style of the original footage.

  • What are some key tools mentioned for tracking footage?

    -The video mentions tools like After Effects for camera tracking, DaVinci Resolve for point tracking, and Blender for full 3D tracking.

  • What is ComfyUI, and why is it important in the workflow?

    -ComfyUI is a node-based interface for Stable Diffusion and other AI models. It is crucial because the workflows introduced rely on ComfyUI to generate and integrate AI-powered set extensions or 3D elements seamlessly.

  • What models are required to run the workflows?

    -The models mentioned include the Wildcard Turbo checkpoint, two SDXL ControlNet models (MistoLine and the ProMax model), and an upscaling model installed by searching for 'Ultra' in the Model Manager. These are essential for generating images and blending them with the original footage.

  • How does the AI handle light direction and realism in the image generation process?

    -The AI understands the entire image, including elements like sunlight direction, casting correct shadows, and adjusting black values for distant parts of the image. This helps maintain realism and visual coherence.

  • How do you generate a set extension using a specific frame in ComfyUI?

    -First, export the desired frame, then select the area for the set extension in the mask editor. Input a prompt describing the extension (e.g., 'spaceship ruin'), and the AI generates an image that integrates seamlessly into the original footage.

  • What steps are taken to avoid visible seams between the original footage and the generated image?

    -The workflow uses masks to blur the seams, helping to blend the newly generated image with the original. Line art and reference control nets also help by extracting lines and using the original image as a reference for consistency.

  • How does the 3D set extension process differ from the 2D one?

    -For 3D set extensions, you track the camera in 3D using software like Blender, model simple stand-in geometry for the extension, and export render passes (e.g., depth and line art) that let the AI generate an image matching that geometry. The result is then projected back onto the 3D model and integrated into the scene.

  • What is the purpose of render passes in the 3D workflow?

    -Render passes, such as the depth pass and line art pass, provide the AI with detailed geometry and spatial information. These passes help the AI accurately place and integrate the 3D extension into the original footage.

  • How can the generated textures be projected onto 3D models in Blender?

    -After generating the texture, you can create a projection shader in Blender, UV project the texture onto the 3D model, and match it to the original footage. This ensures that the texture aligns perfectly with the geometry and camera view.

Outlines

00:00

🚀 Introducing Two Free Workflows for Set Extensions

In this section, the creator introduces two workflows designed to enhance set extensions in film projects. The first allows users to select areas of an image and add elements, similar to Photoshop's Generative Fill but free and more powerful. The second enables seamless integration of 3D models into footage, taking lighting, colors, and style into account for realistic results. The creator expresses excitement about the workflows' potential, offers a step-by-step guide for using them, and thanks viewers and Patreon supporters before diving into a project example: transforming footage of a small town into a bustling cityscape.

05:01

🎬 Step-by-Step: Setting Up the Workflow and Installing ComfyUI

This section covers the first steps of the set extension process, focusing on setting up the workflow and its AI tools. The creator explains how to export the frame to edit, track the footage, and install ComfyUI (a node-based interface for Stable Diffusion and other AI models). Viewers are guided through downloading the necessary models and tools, such as Git and the ComfyUI Manager. The tutorial continues with loading the workflow in ComfyUI, installing any missing custom nodes, and adjusting interface settings such as the link render mode and preview method. Lastly, the viewer is shown how to select the area of the footage for the set extension using the mask editor.
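
For anyone who prefers to script this step rather than click through the browser interface, ComfyUI also exposes a small HTTP API once it is running. The sketch below is a minimal illustration rather than something shown in the video: it assumes ComfyUI is listening on its default local address 127.0.0.1:8188 and that the workflow was exported with "Save (API Format)" as workflow_api.json; those details, and the node id in the commented-out line, are assumptions.

    # Minimal sketch: queue a ComfyUI workflow over its local HTTP API.
    # Assumes ComfyUI runs on the default port 8188 and the workflow was
    # exported in API format as workflow_api.json (both are assumptions).
    import json
    import urllib.request

    COMFY_URL = "http://127.0.0.1:8188/prompt"

    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Optionally edit a node before queuing, e.g. the positive prompt text.
    # The node id "6" is hypothetical and depends on your exported graph.
    # workflow["6"]["inputs"]["text"] = "spaceship ruin, overgrown, broken"

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # response includes the prompt id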

10:02

🌇 Generating the Set Extension: Adding Realistic Details

The creator demonstrates how to add specific elements (like a ruined spaceship) to a scene using AI-generated images. By tweaking prompts and settings, they show how to achieve desired results, such as removing unwanted elements from the generated images. The workflow automatically considers light direction, shadows, and atmospheric depth to ensure realism. The explanation continues with a breakdown of how the workflow blends new elements with original footage through scaling, masking, and line art extraction. A low denoising setting is recommended for maintaining consistency in the final result.
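
The blend-back idea described above, pasting the generated region over the original frame through a softened mask, can also be reproduced outside ComfyUI for quick tests. The following Python sketch assumes Pillow is installed and that original.png, generated.png, and mask.png share the same resolution; the file names and blur radius are placeholders, and this illustrates the principle rather than the exact node graph used in the video.

    # Illustration only: composite an AI-generated extension over the
    # original frame through a feathered (blurred) mask to hide the seam.
    from PIL import Image, ImageFilter

    original  = Image.open("original.png").convert("RGB")
    generated = Image.open("generated.png").convert("RGB")
    mask      = Image.open("mask.png").convert("L")   # white = use generated

    feathered = mask.filter(ImageFilter.GaussianBlur(radius=12))  # soften edge
    result = Image.composite(generated, original, feathered)
    result.save("composite.png")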

15:03

🖼️ Enhancing Image Composition: Scaling, Masking, and Control Nets

In this part, the workflow is further explained by covering how scaling down to HD resolution enhances composition when generating images with SDXL models. The workflow uses positive and negative prompts to control what elements appear or disappear. Two groups are used for mask creation: one for blending seams, and another to apply control nets that help blend the new image with the original. Control nets for line art and image references ensure that the generated images match the style of the original footage. The section also explores how different settings, such as denoising and mask thresholds, influence image integration.
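
The mask-edge group mentioned above can be thought of as the difference between a slightly grown and a slightly shrunk copy of the painted mask: only a thin band along the border remains, and the line art ControlNet is applied inside that band so the new content lines up with the original image at the seam. A rough Pillow/NumPy sketch of that idea follows; the band width and file names are assumptions, not values taken from the workflow.

    # Sketch: derive a thin edge band from a painted mask. Restricting the
    # line-art ControlNet to this band keeps the generated content aligned
    # with the original image along the seam.
    import numpy as np
    from PIL import Image, ImageFilter

    mask = np.array(Image.open("mask.png").convert("L")) > 127  # boolean mask

    def grow(m, px):
        # crude dilation via a max filter; px sets the band half-width (assumed)
        img = Image.fromarray((m * 255).astype(np.uint8))
        return np.array(img.filter(ImageFilter.MaxFilter(2 * px + 1))) > 127

    dilated = grow(mask, 8)
    eroded = ~grow(~mask, 8)        # erosion = dilating the inverse
    edge_band = dilated & ~eroded   # True only near the mask border

    Image.fromarray((edge_band * 255).astype(np.uint8)).save("edge_band.png")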

🏠 Integrating 3D Models in Blender for Set Extensions

This section explains how to use a 3D model to create set extensions for scenes closer to the camera, using Blender for 3D tracking and rendering. The creator walks viewers through the process of tracking footage, solving camera motion, and setting up a 3D scene by modeling simple geometry. The model is projected into the scene, and additional render passes are used for AI image generation. This part highlights the use of mist and line art render passes, which help the AI understand depth and details, enhancing the realism of the generated image. The workflow is ideal for scenes where objects are integrated with camera movement.
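
For readers who like to script the Blender side, the tracking settings mentioned here (affine motion model, normalization, a higher correlation threshold, a larger margin) correspond to properties on the movie clip's tracking settings. The bpy sketch below is a rough outline under the assumption that a clip named my_clip.mp4 is already loaded; property names follow Blender's Python API but may vary slightly between versions.

    # Rough bpy sketch of the tracking settings used in the video.
    # Assumes a clip called "my_clip.mp4" is already loaded; run inside Blender.
    import bpy

    clip = bpy.data.movieclips["my_clip.mp4"]   # assumed clip name
    settings = clip.tracking.settings

    settings.default_motion_model = 'Affine'     # affine motion model
    settings.use_default_normalization = True    # "activate normalize"
    settings.default_correlation_min = 0.9       # bump correlation up to 0.9
    settings.default_margin = 20                 # margin of 20 pixels

    # Feature detection, tracking and solving (bpy.ops.clip.detect_features,
    # bpy.ops.clip.track_markers, bpy.ops.clip.solve_camera) are interactive
    # operators that need the Movie Clip Editor as context, so they are
    # easiest to run from the UI, as shown in the video.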

🎨 Creating Render Passes for AI Image Generation

Here, the focus shifts to creating render passes like depth (mist) and line art in Blender, which are used by the AI to generate accurate images. The creator explains how to set up color management and activate passes like freestyle for line art. Viewers are shown how to adjust grayscale values to emphasize parts of the 3D model (like a house) for better AI processing. The workflow also uses UV projections to ensure that generated textures align correctly with 3D geometry. The importance of keeping the workflow flexible and customizable is emphasized, allowing for different levels of detail and creative freedom.
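
The pass setup described here maps onto a handful of scene and view-layer properties, so it can also be toggled from a short bpy script. This is a hedged sketch rather than the creator's actual file: it assumes the active scene and its first view layer, and the compositor nodes (invert, curves, file outputs) are still wired up by hand as shown in the video; the resolution values are placeholders for your own footage.

    # Sketch: enable the passes the workflow relies on (mist / depth,
    # Freestyle line art, transparent film) and set color management.
    import bpy

    scene = bpy.context.scene
    view_layer = scene.view_layers[0]

    scene.view_settings.view_transform = 'Standard'  # color management: Standard
    scene.render.film_transparent = True             # Render > Film > Transparent

    view_layer.use_pass_mist = True                  # Mist pass, used as a depth pass
    scene.render.use_freestyle = True                # enable Freestyle line art
    view_layer.freestyle_settings.as_render_pass = True  # output Freestyle as a pass

    # Match the plate resolution (placeholder values, use your footage's size).
    scene.render.resolution_x = 3840
    scene.render.resolution_y = 2160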

🏑 3D Texture Projection and Integration in Blender

This section dives deeper into the texture projection process in Blender. The creator demonstrates how to apply the AI-generated images as textures on 3D models using UV projection. They also share tips for avoiding issues like texture stretching by increasing subdivisions. The projected textures are then composited with the original footage, seamlessly integrating the AI-generated elements into the final render. Blender’s shading workspace is used to apply emission shaders to the model, ensuring that the textures fit the intended look. This part of the tutorial offers flexibility to adjust the texture as needed for specific scenes.
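
The projection step itself, an emission shader driven by the generated image followed by UV Project from View through the solved camera, can likewise be expressed in bpy. The snippet below is a simplified sketch under a few assumptions: the generated texture is saved as house_texture.png, the object to texture is the active object, and the 3D Viewport is looking through the tracked camera on the frame the texture was generated for.

    # Sketch: build an emission "projection" material from the generated image
    # and UV-project the active object from the current (camera) view.
    import bpy

    obj = bpy.context.active_object  # the modeled house (assumed active)

    mat = bpy.data.materials.new("Projection")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//house_texture.png")  # assumed file name
    emit = nodes.new("ShaderNodeEmission")
    out = nodes.new("ShaderNodeOutputMaterial")
    links.new(tex.outputs["Color"], emit.inputs["Color"])
    links.new(emit.outputs["Emission"], out.inputs["Surface"])

    obj.data.materials.clear()
    obj.data.materials.append(mat)

    # Project the UVs from the current view; run this while the 3D Viewport
    # is looking through the solved camera on the frame the texture matches.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.uv.project_from_view(camera_bounds=True, correct_aspect=True,
                                 scale_to_bounds=False)
    bpy.ops.object.mode_set(mode='OBJECT')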

🎥 Final Touches: Compositing and Rendering in After Effects

In this final step, the creator shows how to take the rendered images and composite them into the original footage using After Effects. They demonstrate masking techniques to blend elements like grass or terrain with the AI-generated model. By adding blur and sharpen effects, the footage is adjusted to match the original’s compression and visual quality. The creator advises against compositing over grass, which can be difficult, but praises the overall result. They also showcase how easily new images can be swapped into the scene to explore different creative possibilities, such as changing a farmhouse into a post-apocalyptic shed.

🏒 Combining 2D and 3D Workflows for Dynamic Set Extensions

This section explores combining 2D set extensions with 3D elements for more complex compositions. The example given is of adding a 2D-generated skyline and then integrating a 3D office building into the same shot. Using Blender, the creator demonstrates how to export render passes for the new 3D element, project textures onto geometry, and seamlessly blend the 2D and 3D elements in After Effects. With rough masks and a few compositing tricks, the result is a highly dynamic and realistic set extension that can be easily adjusted with different prompts or model changes.

🎉 Final Thoughts and Encouragement for Creators

The video concludes with a message of encouragement to viewers, urging them to explore the workflows and create their own set extensions. The creator expresses excitement about the possibilities these techniques unlock and invites users to share their work. The video serves as a deep dive into AI-assisted filmmaking, emphasizing how powerful and accessible these tools can be for enhancing visual projects. The creator also reminds viewers of the exclusive resources available to Patreon supporters, further fostering a community of creators experimenting with AI tools.

Keywords

💡Set Extension

Set extension refers to the technique of expanding or altering a film set or scene using digital tools. In the video, this concept is central to the workflow being introduced. The speaker explains how users can enhance simple footage by adding elements like buildings, landscapes, or even spaceships, thereby creating more complex visual environments using AI.

💡ComfyUI

ComfyUI is a node-based interface for Stable Diffusion and other AI models. The video provides a step-by-step guide for installing ComfyUI, which plays a crucial role in helping users generate and integrate images into their footage seamlessly. The workflow revolves around using this tool to handle the various stages of the image generation and set extension process.

💡Stable Diffusion

Stable Diffusion is a deep learning model used to generate high-quality images from text descriptions. It is mentioned in the video as part of the toolchain for creating realistic set extensions. The speaker uses Stable Diffusion to generate images that align with the original footage, blending new elements like spaceships or buildings into the scene.

💡3D Model Integration

This concept involves inserting 3D models into footage and blending them with the scene using AI. In the video, the speaker discusses how AI can seamlessly integrate 3D models by taking into account factors like lighting, colors, and the overall style of the original footage. The workflow makes it easy to add complex 3D structures, such as a farmhouse, into video scenes.

💡ControlNet

ControlNet is a neural network architecture that conditions AI image generation on auxiliary inputs such as line art and depth maps. The speaker explains how ControlNets are employed to refine the details of the generated images, ensuring that elements like lighting, shadows, and geometry align with the original footage. This helps maintain visual realism in the final output.

💡Camera Tracking

Camera tracking refers to the process of analyzing the motion of a camera within a shot and replicating it in a digital environment. In the video, this is essential for ensuring that set extensions and 3D elements stay in the correct position within the frame, even when the camera moves. The speaker provides options for tracking in After Effects, DaVinci Resolve, or Blender.

💡Prompt

A prompt is a text description used to guide AI in generating images. In the video, prompts are a key part of the workflow, as users input descriptions (e.g., 'spaceship ruin overgrown') to tell the AI what kind of image to create. The quality and specificity of prompts affect how well the generated image integrates with the original footage.

💡Render Pass

Render passes are individual layers of image data, such as lighting, shadows, or depth, that can be combined to create a final image. In the video, the speaker uses multiple render passes (like depth and line art) to ensure that the AI-generated set extensions align perfectly with the original scene. This is particularly important when adding 3D elements to the footage.

💡Upscaling

Upscaling refers to the process of increasing the resolution of an image while maintaining quality. The speaker in the video explains how the workflow generates images at a lower resolution for composition purposes and then upscales them later to match the original 4K footage. This helps with maintaining detail and ensuring that the generated elements look realistic in the final video.

💡Masking

Masking involves isolating parts of an image so that certain effects or adjustments can be applied only to specific areas. In the video, masking is used to define the parts of the image where set extensions will be added. The speaker also demonstrates how the workflow blurs mask edges and automatically isolates the regions that changed, making the integration of new elements smoother and more seamless.

Highlights

Developed two AI-based workflows for set extensions: one for adding elements to selected areas of an image and another for integrating 3D models seamlessly.

The AI considers light direction, colors, and the general style of the original footage to integrate 3D models naturally.

Step-by-step guidance is provided on how to set up and use these workflows, starting with footage preparation and tracking.

Introduced the use of ComfyUI, a node-based interface for Stable Diffusion and other AI models, which makes the workflows straightforward to set up.

Detailed instructions on installing and setting up ComfyUI, including the necessary downloads such as the checkpoint and ControlNet models.

Highlighted the process of masking, which allows adding set extensions by defining specific areas where elements should be generated.

Demonstrated how the workflow accurately integrates elements, considering shadows, sunlight, and atmospheric effects for realistic results.

Described the creation of masks that seamlessly blend newly generated images into the original composition by blurring the mask edges.

Showcased how a 2D-generated set extension can be upscaled and integrated into the original high-resolution footage without creating a manual compositing mask.

Provided a 3D matte painting workflow example using Blender, including camera tracking, modeling, and integrating 3D objects with AI-generated textures.

Render passes such as depth and line art are used to transfer the scene geometry to the AI, allowing more accurate integration of textures.

The 3D workflow enables quick texture changes, allowing elements to be retextured or altered in appearance through simple adjustments in prompts.

Explained how the workflow can combine 2D and 3D set extensions to create complex scenes, such as adding foreground and background elements seamlessly.

Showed how adjustments to line art in image editors can enhance details, making the AI-generated elements look more broken, aged, or weathered.

Encouraged experimentation and creativity, highlighting that the workflows can generate professional-looking set extensions for movies or other projects with AI.

Transcripts

00:01
I've developed two workflows to help you create amazing set extensions for your projects. The first one allows you to select an area of your image and add anything you want, kind of like Photoshop's Generative Fill, but much better and free. And with the second one you can even input a 3D model and the AI will integrate it seamlessly, taking into account the light direction, the colors and the general style of the original footage. I'm still blown away by how well this works, and I can't wait to see all the amazing projects that you're going to create with it. So I'm going to show you step by step how to set up and use these free workflows. But first, let me quickly talk about the sponsor of this video: it's you. Thank you so much for watching these videos, and thank you to my lovely Patreon supporters who make these videos possible. If you want access to exclusive workflows and resources and an awesome AI Discord community, click the link in the description. And now let's get started.

00:54
Let's say we're working on a movie set in a bustling metropolis like New York, but we only have footage of this town because we found it for free on Pexels. Still, we really want to use this as our establishing shot, so let's use my workflow to transform it. First, use an editing tool of your choice to save out the frame where we want to add our set extension; in my case that's frame number one. And if you want, you can already track your footage. I'm tracking the camera in After Effects, but you can also use the point tracker in DaVinci Resolve or do a full 3D track of your shot in Blender, and I'll show you these options in more detail later. For now, let's just click Track Camera and let After Effects do its thing.

01:34
Now we need to install ComfyUI, a node-based interface for Stable Diffusion and other AI models, and I created this free step-by-step guide on how to install it and where to download and put all the models for everything to work. So just go to the official GitHub page, scroll down and download. While that's downloading we can install Git; I've already done this, but you just need to install the standalone version. Once ComfyUI is downloaded you can put it anywhere you like and extract it, and this extracted folder is now your ComfyUI directory. Next you want to download the ComfyUI Manager, so just go to the Manager GitHub page, scroll down, right click on this link, choose Save Link As, put it inside of your ComfyUI folder and click Save. Once it's downloaded, just run it. You can then click Run Nvidia GPU and ComfyUI will start in your browser.

02:28
But now we need a few models. First we need our checkpoint, which is just the base model that we are going to use; I'm using Wildcard Turbo for this. So just go to the link, right click, Save Link As, go to ComfyUI > models > checkpoints and click Save. Let's download the first ControlNet, the MistoLine: right click, Save Link As, go back to models > controlnet (I like to create a new folder for SDXL models) and just click Save. Let's download the second ControlNet: do the same thing, right click, Save Link As, and since we're already in the folder, just click Save. Now go to the Model Manager, search for Ultra and install this one right here. And that's it.

03:17
Let's quickly restart ComfyUI, and now you can just drag and drop my workflows into the ComfyUI interface. You can see that a lot of nodes are missing, but that's not a problem: just go to the Manager, click Install Missing Custom Nodes, select all of them, click Install and wait for it to finish. Once it's done, click Restart and wait for the installation to finish, and you can see our workflow is here and ready to use. But first let's go to the settings and change the link render mode to Straight, which just looks a bit cleaner, and I also want to go to the Manager and activate the preview method Latent2RGB.

03:56
We're working from left to right, so let's start at the top left corner. Here you just need to drag and drop the frame you just exported. Now you can right click and open it in the mask editor; increase the thickness and select the area where you want to add the set extension. You can right click to delete areas, and once you're happy with it, click Save to Node. Scroll down and add a prompt, and let's try something crazy here: a spaceship ruin, overgrown, broken, sinking into the ground. Now you can just click Queue Prompt and the image will start generating. This first image already looks really good, but for the right part of the image it just added this mountain, and I don't like that, so let's just try another seed. And this already looks really cool; yeah, that's exactly what I had in mind.

04:52
What I really love about this workflow is that it understands the whole image. You can see the sunlight is actually coming from the right direction and casting the correct shadows, and even the parts of the image that are further away, behind the atmosphere, have higher black values, which really adds to the realism that we are going for. So you can see this workflow, even though it looks complicated, is actually quite easy to use. But let me quickly walk you through the whole thing so you really understand what's happening.

05:18
In the first group here, we scale the image down to an HD resolution, simply because the composition looks much better with SDXL when we create the image at an HD resolution. So the idea is: we scale the image down, we generate the image, and later we upscale it again. I don't have to say too much about the prompt group: we have a positive prompt (all the things that you want to see) and a negative prompt (all the things that you don't want to see in the image). Up here we have two groups that create masks for us. The first one takes the mask that you painted and blurs it a little bit so that the seams are not as visible; if you want you can increase the radius here, but this usually works really well. Down here we have a mask that selects only the edges of the mask that you created, and this is fed into a line art ControlNet. It extracts the lines from the original image and then applies this ControlNet at the edges of your mask, which helps to blend the newly generated image with the original composition. Next to that we have a reference ControlNet that takes the original image as a reference for the newly generated image so that the style matches. This all gets fed into the KSampler here, which generates an image, and in the next step it is upscaled and added on top of the original 4K image. You can play around with the denoising value here: a higher denoise will add more detail, but it can also break the scale a little bit, so I would recommend keeping it quite low. This last mask setup looks at all the parts of your image that have changed and tries to isolate them. Here you can play around with the threshold and the mask erode regions if you have too many of these spots; as you can see, if I set it too low we get all these extra tiny spots, resulting in a mask that looks like this.

07:12
So we don't even have to create a manual compositing mask in After Effects. We can just go back to the original frame: I select a tracking marker that is as far away as I want the set extension to be, then I import my PNG image. Now we just need to make it 3D; I copy the position of the track that I created, change the orientation so it's roughly like this, and if we now click play you can see that it stays in the correct place. This workflow works perfectly well for all types of shots where the set extension is really far away, so we don't have any 3D parallax effect. But what if we want to add something that's closer to the camera? Look at this shot I took with my phone, for example. I created a 3D track in After Effects and now I want to add a cozy farmhouse, so I generated this image with ComfyUI using the exact same technique, and we can see that it works okay for a short amount of time, but then it falls apart and we can see that this is just a 2D image.

08:11
So now let's use my 3D matte painting workflow. First we must track the 3D camera, so in Blender I delete everything, go to the VFX motion tracking workspace and open my clip. I click Prefetch and Set Scene Frames, switch the motion model to Affine, activate Normalize, bump the correlation up to 0.9 and the margin to 20, go to the start and track the features, add more features at the end and track in the other direction. Now I should have enough tracks to work with. I delete the ones that look broken, go to Solve and select two keyframes where I feel the parallax is strongest, so maybe 40 and 100. I check all the refine options and click Solve Camera Motion. This gives me a solve error of 0.5, which is actually really, really good, so now I can click Set Up Tracking Scene. I select three tracking markers that are on my ground and click Floor, and I select one where I want the house to be (I think it's going to be roughly here) and click Set Origin.

09:29
Now comes the fun part: modeling the house. I use simple box modeling for this and I try to keep it as simple as possible, because the AI textures will do a lot of the heavy lifting, but you can be as detailed and precise as you like here. I also leave in this ground plane, because we have this hard sunlight and I'm hoping to catch some of the shadows created by the house so I can better blend the images together. Now that we have a final 3D model, we need a way to transfer the scene geometry to the AI, and if you've watched my previous videos on generating full 3D environments with AI or rendering with AI, you know that we are going to use render passes.

10:07
First let's make sure that our scene resolution matches the original footage, and we also want to go to the render properties and make sure that color management is set to Standard. Next we want to activate the Mist pass, and in my case I want to go to the first frame and click Render Image. Then let's go to the compositing workspace, create a Viewer node and connect the Mist pass to it, and you can see this is pretty much a depth pass where white pixels are far away and black pixels are close to the camera. We actually want it the other way around, so let's add an Invert node. Now we want to focus all the information we have, all the grayscale values, on the house, so I add a Curve node and shift the values so that the front is fully white and the back of our geometry disappears into the darkness. Finally we can create a File Output node, select a location where we want to save our image, and when we now render the image we have a really good depth pass.

11:08
We could now use this depth pass to generate an image, but you can see that the window and the door are really hard to see, so this information might not transfer during image generation. So we need to create another render pass, a line art pass. For this we just go to Render and activate the Freestyle tool. Next, go to View Layer, scroll down, and under Freestyle activate As Render Pass, then go to Freestyle Color and change it to white. When we now render our image, go to the compositing workspace and connect our Viewer to the Freestyle output, we have these really cool outlines. Now just add an Alpha Over node, connect the Freestyle to the second input, make the first one black and add another File Output node. Finally we want to go to Render > Film, check Transparent, and add another File Output node to the alpha output of our image. When we now render our image, we have these three passes.

12:10
Now let's switch back to ComfyUI and import the 3D version of the workflow. Using it is really simple: you just need to import the alpha render pass here and the original frame here, and down here again you just add the prompt for what you want to generate, in this case a cozy farmhouse. Next you add the line art pass here and the depth pass here. Again, make sure that we have the Wildcard model selected here, the MistoLine model here, and the ProMax model here, and then you can just click Queue Prompt again and you can see it will start generating a house. This is integrating really well, but it's not quite what I had in mind, so let's try a few more seeds. Oh yes, this is more like it; you can already see this is integrating really well. You can see the resolution is quite low, so the next step will upscale this image. At the moment it's set to two, but we can actually change that to four, and again you can play around with the denoise value depending on how creative you want the upscaler to be when generating the images. You can also play around with the ControlNet strengths down here. Generally you want to keep them quite low, because this allows for more creative image generation, but the lower you set them, the less the result will stick to your original geometry, so you kind of have to find a middle way. As you can see, these values usually work really well as a starting point.

13:39
Now we can switch back to Blender. Here I want to select my house and create a new shader, let's call it Projection. Go to the shading workspace, delete the Principled BSDF, create an Emission shader and connect it, and then I'm just going to drag and drop the upscaled image in here and connect it to the color. Now I can go to Layout; make sure that you're on the right frame. In my case I created the render pass on the first frame, so let's go to the first frame, go to Edit Mode, select all the faces and click UV Project from View, and now the texture matches up perfectly. We can repeat the same steps for the rest of the geometry, so for the floor I'm also just switching to the Projection shader, going to Edit Mode, selecting everything and clicking Project from View. If you have some weird stretching going on or it's not looking correct, it's probably because you don't have enough subdivisions, but you can always just add more and reproject it.

14:39
Next, make sure that your composite output is connected to the image down here and render out the sequence. Oh, and if it's really slow like this, make sure to deactivate the Freestyle tool, and you can also press M to mute all the other file outputs, because we don't need all the render passes for all the other frames. So now I click Render again and it's a lot faster. I then brought the footage over to After Effects and created a rough mask for the ground plane just by keying the grass and blurring the edges. I also added blur and sharpen effects to match the iPhone's extreme compression. And I highly recommend you don't try to integrate something into grass, that was really annoying, but I mean, you get the idea. I think the effect looks really cool, and with some extra compositing it could look amazing.

15:27
Another cool thing about this workflow is that we can very quickly try out different textures. Let's say we don't want this cozy-looking farmhouse but a post-apocalyptic shed. For this I can just go back into ComfyUI, change the prompt, generate a new image, switch it out in Blender, re-render the sequence, and then drag and drop it onto the previous clip in After Effects, and it's already integrated. This looks really good, but let's say we want to add more detail. We can simply do that by opening the line art in an image editor, like Photoshop for example, and just painting new lines on top of the image. I can then switch out the image in ComfyUI and generate a new one, and now the shed looks really, really broken.

16:14
And of course we can also combine these two workflows, so that we add 2D set extensions for the background and a 3D set extension for the foreground. Let's go back to the shot where we added the skyline, for example. We added the skyline and it looks really good, but let's say we want an office building in front of it, a giant office tower. So let's just add a few boxes to the scene, export our render passes, throw them into ComfyUI, add a prompt for an office building, generate a few images, select one, project it in Blender, render the sequence, throw it into After Effects, create some rough masks for the foreground, and it just looks really cool.

16:51
So I hope you're as excited as I am about this technique and I hope you try it out. If you like these AI deep dives, want to support my work and gain access to exclusive example files, like for example the Blender files, consider supporting me on Patreon. Thank you very much for watching, I hope you will create something amazing with these workflows. Make sure to tag me in your work or share it with me, I always love to see what you come up with, and see you next time for the next AI workflow.


Related Tags
AI workflows, 3D integration, set extensions, visual effects, movie editing, free tools, Stable Diffusion, Blender tracking, After Effects, digital art