Easy Guide To Ultra-Realistic AI Images (With Flux)

Matt Wolfe
12 Aug 2024 · 13:12

Summary

TLDR: The video opens with a tongue-in-cheek look at Stable Diffusion 3's flawed images before turning to Flux, an AI model that creates hyper-realistic images that can easily blend into social media feeds. It discusses the use of LoRA (low-rank adapter) models to enhance image quality and realism, and how combining these with platforms like fal.ai and Runway ML can produce convincing AI-generated videos. The host shares their trials with different tools, providing insights into the process of generating and animating ultra-realistic AI characters.

Takeaways

  • 😲 AI-generated images have become incredibly realistic, making it difficult to distinguish them from real photos.
  • 🎨 Stable Diffusion 3 appears only as a joke; its flawed outputs are contrasted with the realism of Flux.
  • 🤖 Flux, a foundational image model, is highlighted for producing images that look like casual snapshots rather than professionally composed photos.
  • 🖼️ The script notes instances where AI-generated images have a 'plastic' or 'shiny' appearance, which can be a giveaway that they're not real.
  • 🔍 The use of a LoRA, a low-rank adapter that enhances image quality, style, or character consistency, is explained as a way to improve AI-generated images.
  • 🌐 The script explores different platforms like ComfyUI, fal.ai, and Runway ML for generating and animating AI images.
  • 💸 There's a mention of the cost associated with AI image generation services, with some platforms offering initial credits for new users.
  • 🎥 The process of animating AI-generated images to create realistic videos is discussed, including the use of Luma's Dream Machine and Runway ML's Gen-3.
  • 📈 The script suggests that with the right settings and a bit of fine-tuning, it's possible to generate highly realistic AI images and videos.
  • 🔧 The video serves as a tutorial for AI enthusiasts, covering the latest tools and techniques for creating ultra-realistic AI content.

Q & A

  • What is the main topic discussed in the script?

    -The main topic discussed in the script is the advancement in AI-generated images, specifically the capabilities of the Flux model and the use of LoRAs (low-rank adapters for fine-tuning AI models) to create highly realistic images and videos.

  • What is Stable Diffusion 3 known for?

    -According to the speaker, Stable Diffusion 3 "will probably always be known for" its flawed, anatomically wonky images. The host shows them sarcastically at the start of the video to contrast with the far more realistic output of Flux.

  • How does the speaker describe the quality of AI-generated images from Flux?

    -The speaker describes Flux's images as having gotten "really really good," noting that they are so realistic that, without looking closely, they could easily be mistaken for real photographs taken by a person.

  • What is a LoRA and how does it enhance AI-generated images?

    -A LoRA, or low-rank adapter, is a tool that can be thought of as a filter or plugin used to fine-tune AI-generated images. It allows for targeted improvements in image quality, style specificity, or character consistency without the need for extensive computational power or a complete retraining of the foundational AI model.

  • What is the purpose of using a LoRA in combination with Flux?

    -The purpose of using a LoRA in combination with Flux is to enhance the realism of the generated images by adding extra information that improves skin, hair, and wrinkle details, making the images look more lifelike.

  • How does the speaker attempt to recreate the ultra-realistic AI-generated images?

    -The speaker attempts to recreate ultra-realistic AI-generated images by using the Flux Realism LoRA model on the fal.ai site, adjusting the guidance scale, and then using Runway ML to animate the generated images.

  • What challenges does the speaker face when generating realistic AI images with Flux inside of Glif?

    -The speaker faces challenges such as the images having a 'plastic shininess' to the skin and not looking as realistic as those generated by others using a LoRA. The speaker also notes the lack of options to add LoRAs within the Glif workflow builder.

  • What is the significance of the 'guidance scale' in the AI image generation process?

    -The 'guidance scale' controls how strongly the model follows the prompt, which directly affects the realism of the output. The speaker found that setting the guidance scale to about two produced more realistic results than the default of 3.5, which made skin look shiny and plasticky.
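The guidance scale is the knob on classifier-free guidance: at each denoising step the model predicts noise twice, with and without the prompt, and the scale sets how hard the result is pushed toward the prompt-conditioned prediction. A toy numeric sketch of that blend (the vectors here are made-up values, not real model outputs):

```python
import numpy as np

# Classifier-free guidance: blend the unconditional and the
# prompt-conditioned predictions, with scale g controlling how far
# past the unconditional prediction the output is pushed.
def apply_guidance(uncond: np.ndarray, cond: np.ndarray, g: float) -> np.ndarray:
    return uncond + g * (cond - uncond)

uncond = np.array([0.0, 0.0])   # toy "no prompt" noise prediction
cond = np.array([1.0, 2.0])     # toy prompt-conditioned prediction

for g in (1.0, 2.0, 3.5):
    print(g, apply_guidance(uncond, cond, g))
```

At g = 1 the output is exactly the conditioned prediction; higher values exaggerate the prompt direction, which is consistent with the host's observation that the default 3.5 looked over-processed while about 2 looked most natural.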

  • How does the speaker evaluate the quality of the AI-generated videos?

    -The speaker evaluates the quality of the AI-generated videos by looking at the realism of the movements and the consistency of the generated images, noting issues like the floating microphone and the unnatural stillness of the microphone in the animations.

  • What are the two main tools the speaker uses to animate AI-generated images?

    -The two main tools the speaker uses to animate AI-generated images are Runway ML and Luma's Dream Machine.

  • What conclusion does the speaker draw about the current state of AI-generated videos?

    -The speaker concludes that while the AI-generated videos are impressive, they may require multiple attempts or 'rerolls' to achieve the highest level of realism, and that some of the ultra-realistic videos circulating might be cherry-picked for their quality.

Outlines

00:00

🤖 AI's Leap in Realistic Image Generation

The speaker opens with a sarcastic look at Stable Diffusion 3's flawed images before expressing genuine amazement at recent Flux-generated images, which are so realistic they could easily be mistaken for photographs on social media platforms like Instagram. They highlight imperfections in the images, such as off-center compositions, which paradoxically contribute to their authenticity. They mention that while some images still have issues with body proportions, these can often be resolved with a few rerolls. The speaker also notes that people online have been combining these images with additional tools, such as LoRAs and video generators, to turn AI-generated images into highly realistic videos.

05:02

🎨 Enhancing AI Image Realism with LoRAs

The speaker delves into the use of a LoRA, a low-rank adapter that acts as a filter or plugin to improve the realism of AI-generated images. They explain that a LoRA allows for targeted improvements in image quality, style, or character consistency without extensive computational power or retraining of the AI model. The speaker contrasts the results from using a LoRA with those from the foundational model alone, noting the significant difference in realism. They also discuss the limitations of platforms like Glif, which lack LoRA support, and suggest alternative ways to use LoRAs, such as through ComfyUI or cloud-based services like fal.ai.

10:04

📹 Animating AI-Generated Humans for Realistic Videos

The speaker explores the process of animating AI-generated images to create ultra-realistic videos. They demonstrate the use of Runway ML's Gen-3 Alpha to animate an AI-generated image, highlighting the challenges in achieving a realistic result, such as the unnatural movement of objects like a microphone. The speaker also compares the animation results from Runway with those from Luma's Dream Machine, finding the former to produce more convincing animations. They conclude by summarizing the steps to create realistic AI-generated videos, emphasizing the potential for further refinement and the excitement of these new tools for AI enthusiasts.


Keywords

💡AI-Generated Images

AI-generated images are visual content created by artificial intelligence algorithms; in this video, primarily by the model known as Flux. These images are designed to mimic real-life photographs, often to the point where they are indistinguishable from human-created content when viewed casually on platforms like Instagram. The video discusses the impressive advancements in AI image generation, noting how models can now produce highly realistic and detailed outputs.

💡Stable Diffusion 3

Stable Diffusion 3 is an AI model that the host brings up sarcastically at the start of the video: its anatomically flawed images are shown as a joke to contrast with the far more lifelike output of Flux. The script uses these examples to illustrate how quickly the state of AI-generated content has moved on.

💡Realism

Realism, in the context of the video, pertains to the quality of AI-generated images that makes them appear as if they were captured by a human photographer. The video emphasizes the increasing realism of AI images, noting how imperfections like slightly off-center framing add to their authenticity, while telltale flaws such as a 'plastic shininess' to the skin give them away as AI-generated.

💡Flux

Flux is a foundational AI image model, described in the script as 'absolutely insane at creating super realistic images'. The video discusses how Flux has been used to create images so realistic they are difficult to distinguish from real photographs, showcasing the capabilities of modern AI image generation.

💡LoRA

A LoRA (low-rank adapter) can be thought of as a filter or plugin that extends the capabilities of the base AI model. It fine-tunes the image generation process, allowing improvements in style, character consistency, or overall image quality without retraining the foundational model. The script gives examples of how a LoRA can make AI-generated images look more realistic, particularly in terms of skin, hair, and wrinkles.
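The "low-rank" part of the low-rank adapter can be sketched numerically: instead of retraining the full weight matrix, two small matrices are learned whose product is added on top of the frozen weights. A toy illustration (the dimensions and scaling hyperparameter are made up for the example, not Flux's real ones):

```python
import numpy as np

# Toy sketch of a low-rank adapter (LoRA) update:
#   W_adapted = W + (alpha / r) * B @ A
# W is the frozen foundation-model weight; only A and B are trained.
rng = np.random.default_rng(0)

d_out, d_in, rank = 512, 512, 8          # toy dimensions
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((rank, d_in))    # trainable down-projection
B = np.zeros((d_out, rank))              # trainable up-projection, starts at 0
alpha = 16                               # LoRA scaling hyperparameter

W_adapted = W + (alpha / rank) * (B @ A)  # effective weight at inference

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,} "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

The trainable parameter count drops to a few percent of the full matrix, which is why the script can say these adapters are only megabytes in size and bolt onto an existing model without a full retrain.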

💡ComfyUI

ComfyUI is a node-based user interface for AI workflows that allows complex configurations, mentioned in the video as one way to use LoRAs with Flux. Its intricate, spaghetti-like workflows can be overwhelming for many users. The script positions ComfyUI as a more advanced tool for those who want to fine-tune their AI-generated images beyond what simpler interfaces allow.
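In graph terms, the LoRA enters a ComfyUI workflow as one extra node chained between the model loader and everything downstream. A rough sketch in the dict format ComfyUI's API accepts; the node class names follow ComfyUI's built-in CheckpointLoaderSimple and LoraLoader nodes, but the file names are placeholders, not verified model files:

```python
# Fragment of a ComfyUI workflow graph, expressed as the JSON-like dict
# the ComfyUI API accepts. The point is only that the LoRA is a single
# extra node wired between the checkpoint loader and the rest of the
# graph; file names here are placeholders.
workflow_fragment = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "flux1-dev.safetensors"},
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "model": ["1", 0],   # model output of node 1
            "clip": ["1", 1],    # text-encoder output of node 1
            "lora_name": "flux-realism-lora.safetensors",
            "strength_model": 1.0,
            "strength_clip": 1.0,
        },
    },
    # ...prompt, sampler, and decode nodes would hang off node 2...
}

# Every node downstream of "2" now sees the LoRA-patched weights.
print(sorted(workflow_fragment))
```

This is the structure hidden behind the "spaghetti bowl" the host describes: each wire like `["1", 0]` is one of those lines connecting an output of one block to an input of another.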

💡Runway ML

Runway ML is an AI creative platform whose Gen-3 Alpha model can animate still images, as discussed in the video. The script describes using Runway ML to take a still image generated by Flux and create a video from it, demonstrating how AI can produce moving, talking figures that appear realistic.

💡Luma's Dream Machine

Luma's Dream Machine is another tool mentioned in the video for animating AI-generated images. The script compares its results with those from Runway ML, noting that while Dream Machine can animate the images, the results are not as polished or realistic as Runway's.

💡fal.ai

fal.ai is a service mentioned in the video that lets users run AI models on cloud computing resources. It is highlighted as a platform where users can access the Flux model, including the Flux Realism LoRA, to generate more realistic AI images. The script explains how this service provides an easy way to experiment with AI image generation without needing extensive local computational power.
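The service described here is fal.ai. A minimal sketch of the kind of request body one might send to its hosted Flux Realism endpoint, using the settings the host lands on in the video; the endpoint path and field names are assumptions based on what appears on screen, not a verified API reference:

```python
# Hypothetical request body for fal.ai's Flux Realism LoRA endpoint.
# Field names are assumptions; the values come from the video
# (28 inference steps, guidance scale lowered from 3.5 to 2).
def build_flux_realism_request(prompt: str,
                               steps: int = 28,
                               guidance_scale: float = 2.0) -> dict:
    # guidance_scale defaults to 2 because that was the host's sweet
    # spot for realism; the default of 3.5 looked shiny and plasticky.
    return {
        "prompt": prompt,
        "num_inference_steps": steps,
        "guidance_scale": guidance_scale,
    }

payload = build_flux_realism_request(
    "candid iPhone snapshot of a man with a ponytail speaking on stage")
print(payload)

# Actually submitting it would go through fal's client, roughly:
#   import fal_client
#   result = fal_client.subscribe("fal-ai/flux-realism", arguments=payload)
```

Only the payload construction is shown; the commented-out call is where fal.ai's cloud would run the inference, at the per-generation cost the video mentions.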

💡Inference

Inference, in the context of AI, refers to the process of using a trained model to generate outputs from new input data. The script mentions 'inference steps' and 'guidance scale' as parameters that can be adjusted when running models like Flux on platforms such as fal.ai to influence the quality and realism of the generated images.

Highlights

AI-generated images have become incredibly realistic, making it difficult to distinguish them from real photos.

Stable Diffusion 3's flawed images are shown as a joke; Flux is what is setting new standards for realism in AI-generated art.

Flux AI model is praised for creating highly realistic images that mimic snapshots from a phone.

The imperfections in AI-generated images, such as off-center compositions, contribute to their realistic appearance.

Some AI-generated images can have wonky proportions, especially when depicting full body shots.

Reddit users have been pushing the boundaries of AI image realism with tools like Flux and LoRAs.

A LoRA, or low-rank adapter, is used to fine-tune AI models for specific styles or character details.

LoRA models enhance image quality by improving details like skin texture, hair, and wrinkles.

The use of a LoRA in combination with Flux allows for the creation of ultra-realistic AI-generated images.

The Glif workflow builder lets users run the Flux Pro version for free, but lacks LoRA integration.

ComfyUI offers complex workflows for fine-tuning AI-generated images with LoRAs, but can be overwhelming for beginners.

fal.ai is a cloud-based service that allows users to run AI models like Flux with LoRAs.

fal.ai gives new users $2 of free credit to experiment with model generation.

The Flux Realism LoRA on fal.ai significantly enhances the realism of generated images.

Adjusting the guidance scale on fal.ai is crucial; a value of about two gave the most realistic results.

Runway ML's Gen 3 is used to animate AI-generated images, creating ultra-realistic videos.

Luma's Dream Machine is another tool for animating AI-generated images, though results may vary.

The video concludes with a summary of the easiest methods to achieve ultra-realistic AI-generated images and videos.

Transcript

00:00

Man, these AI-generated images have been really good lately. I mean, check out what we're getting. We got this one... I mean, this one is just phenomenal. Check that out. Like, if you were just scrolling on Instagram, you would never know that that was AI-generated. We've got this work of art. I mean, fantastic. I cannot find anything wrong with this image at all. No, no, I'm just screwing with you. These are all from Stable Diffusion 3, which from this point on will probably always be known for images like this and things like this. But truly, for reals, AI images have gotten really, really good. I talked about this in a recent news video about how Flux came out and people were figuring out how to make Flux even more and more realistic. We got images like this, and like this one, and this one here, and here's another one. I mean, as you can see, it's getting harder and harder to tell when an image was generated with AI. These are all Flux, and Flux is just absolutely insane at creating super realistic images. I think the fact that they're not perfectly composed, like they don't look like a professional photographer took them, is sort of what gives them that feeling of "all right, this looks like just a random snapshot that somebody took." It looks real. If you were just scrolling Instagram and you saw this without looking super, super closely, you probably wouldn't know that was AI. I mean, look how it's sort of off-centered, like someone taking a quick iPhone picture would probably do.

01:30

Now, these are all really, really good. There are a few exceptions of when it starts to get really kind of wonky, and that's when you try to get more of the body in the shot. Then the proportions start to look a little bit off, but even that often just takes a few rerolls and you get something that looks decent. Now, I didn't generate any of these, by the way; these are all ones that I found on Reddit. But then people on X started taking this to another level, taking these realistic-looking images and animating them and making them into realistic-looking videos. You see a video like this, and, you know, there's no sound to it, but if you just scroll past this, this does not look AI-generated to me. Here's another one of an AI-generated woman talking on stage, and then the same ponytail dude over on the right. And once again, this was generated with Flux, and then it looks like they used Luma's Dream Machine to take that image and turn it into a video. Here's another video that I came across that looks like somebody at a TED Talk, and their paper actually has the date on it.

02:30

Now, I kept coming across a lot of this stuff, and I was having a really hard time getting the images that I generated with Flux to look ultra-realistic. I was using this glif.app workflow builder here, because it actually lets me use the Flux Pro version for free. But when I run a prompt through the Glif version of Flux, I'd get images like this, which, to be honest, is actually really, really good, really realistic, but it's got this almost plastic shininess to the skin that we weren't getting in some of those other images. Here's another output that tried to generate that same guy, and once again, it is pretty dang realistic, but it doesn't look like this quality here. Here's the one that I just made; here's the quality of the one that was shown off on Reddit. This one, hard to tell that it's fake. This one, I mean, just the colors of it look off; that skin's got a little plasticky feel to it. This, to me, looks like an AI image. I don't know if I've just seen so many now that I'm better at spotting them, but there's definitely a quality difference between this and this. The one that I generated here came straight out of Glif, with no extra filters or anything special running on it; it's just the prompt into Glif giving me this image. The ones in this example, on the other hand, used what's called a LoRA.

03:49

Now, a LoRA is a low-rank adapter. You can think of it as almost like a filter or plugin on top of the normal image generation. So Flux is the foundational image model, which generates the image; the LoRA is some extra fine-tuning information on top of that training. Here's how Perplexity explains a LoRA: a LoRA is used to train the model on specific concepts, styles, or characters, allowing for targeted improvements in image quality, style specificity, or character consistency. LoRA models are typically small in size, ranging from 2 to 500 megabytes, and can be easily integrated into existing models to enhance their performance. It allows users to customize their AI models to produce unique art styles or improved image quality without requiring extensive computational power or a complete retraining of the model. So, some examples they gave here: style specialization, training the model to generate images in a specific style such as anime or oil painting; character specialization, training the model to generate images of specific characters such as Mario or SpongeBob; or quality improvements, enhancing the overall quality of the generated image, such as improving the detail or texture. So somebody basically trained one of these LoRAs, which works in combination with Flux. Without needing to retrain Flux entirely, it can just add the additional information that's needed to get to the desired result they're going for. So with this example here, they used a LoRA from XLabs, which apparently affects the skin, the hair, and the wrinkles to make the images look more realistic. Same thing with this image here; it was using the same LoRA to get this sort of extra realism out of the image.

05:26

However, when I'm using Flux inside of Glif, I can show you: when I look at my actual Glif workflow here, there are no special add-ons, there are no LoRAs happening. Even under advanced controls, we don't have the option to add LoRAs in, and even if I click "add block," there are no options to work with LoRAs in here either. That's something I imagine Glif will probably add in the future, if I had to guess, but right now we don't get that option. We get what comes straight out of the Flux foundational model, without the benefit of that extra realism LoRA that the people on Reddit were using.

06:03

Now, one option to be able to use the LoRAs would be to use something like ComfyUI. You've probably seen a few of these ComfyUI workflows; they look like kind of spaghetti bowls with lines going everywhere. They get complicated really, really quickly and are over the head of most people. I even struggle to wrap my head around them once they get more complex than, you know, three or four blocks. The other way to use this LoRA would be to use a site like fal.ai. This is a service similar to Replicate, or like what you'd get on Hugging Face Spaces, where you can actually run AI models, but you're using their cloud to run them. You're using the fal.ai cloud to run the inference, to run the processing on these AI images. Now, they have the standard Flux.1 Pro model here, so if you want to just use Flux Pro, you can. But we're going to run into those issues where, if I want a super realistic image, it's not going to look as realistic as what we're seeing, because it doesn't have that additional LoRA information on it. However, somebody did add the Flux Realism LoRA inside of fal.ai here.

07:08

One thing to note when you first start using this site, fal.ai: it's not free to run the inference and use their cloud computers. It costs a few cents each time; every time I run Flux over here on this website it costs a few cents, or for about $1 you can run it 29 times. Here's the thing, though: when you first sign up, as of the recording of this video, they actually give you $2 worth of credit so you can get in here and play around with this. So if you do want to play with it yourself, you've got a couple bucks, you know, 60- to 70-ish generations, before you need to start spending out of your own pocket. Once you're logged into the fal.ai site, you can go to fal.ai/models and see all the various models available to use. At the time of this recording, Flux.1 Dev and Flux Realism are both towards the top for you to test out and use right now. You can also use the Flux.1 Pro, which I believe is slightly more expensive; yeah, it's 5 cents per generation, typically.

08:12

But let's go ahead and use the Flux Realism LoRA here. I've already got the prompt plugged in that gets us a similar image to the dude with the ponytail. Under additional settings, one thing I did notice is that you can leave the number of inference steps at 28, but if you leave the guidance scale at the default 3.5, it actually doesn't look that great. It starts to look shiny and plasticky and more unrealistic. I found the sweet spot to be about two. I would get in here and play with it; if you don't get an image that looks like what you want, you can play with this CFG scale, but two was the sweet spot for realism for me. If I go higher than that, it starts to look a little bit more fake. So let's go ahead and change that and click run, and we get images like this, which are pretty dang realistic-looking. The forehead is still a little bit shinier than I'd like, but it's pretty dang good.

09:03

The next thing I wanted to do was animate them, like we saw in the other videos that were circulating all over X. So let's download this image, and we'll jump over to runwayml.com and I'll animate it with Gen-3. Let's click "get started" on Gen-3 Alpha. I can grab my image here and drag it in, the one we just created. It wants me to crop it, so I'll crop it like that so his whole head is in the picture, and then I'm just going to grab the exact same prompt that I had originally and paste it in here. It's a little bit too long; it goes past their 500-character limit, so I'm going to get rid of the last sentence. Let's generate this and see what it gives us. And here was my first attempt with the video. Oh, the microphone did a little magic trick there where it just floats in midair. If that didn't just happen, it's actually pretty decent-looking. I mean, it's a good video... oh, oh... right until that moment right there, I think it's a pretty solid video, and even after the floating-microphone incident it still looks pretty dang solid and realistic. I generated one more time, because with this first video I had accidentally set this image as the last frame; that's why the video starts zoomed out and then moves all the way to finishing on the frame we had here. That was a mistake. I meant to set it as the first frame, so I generated it again with this as the first frame, and here's the version we get out of that. This one is a lot more realistic. The microphone is kind of a little too still for my taste, right? He's moving a little too much, and that microphone is just solidly held there no matter how much he moves his head or mouth around, which just looks off to me. It's pretty good. I mean, the fingers get a little wonky here, but this is really how they're making those videos you're seeing all over X right now of ultra-realistic, but not real, people.

10:53

The other way to do it, outside of using Runway, would be to use Luma's Dream Machine. So I went ahead and plugged my image into Luma's Dream Machine, used the exact same prompt, and, I mean, the results are not quite as good as what we got out of Runway. You can see the face sort of goes wonky towards the end, like... what? Oh my God, what just happened there? So, not great. Runway, to me, did a lot better job. Unfortunately, the way AI is right now, I'm fairly certain that most of the videos you're seeing on X, where somebody took an ultra-realistic AI-generated human and made a video of that person speaking, were probably a little cherry-picked. They probably had to do a few rerolls. If I generated two or three more times, I bet one of the videos that came out of it would be just totally perfect and hard to tell it was AI-generated.

11:44

But I wanted to make a quick, fun breakdown and see if I could recreate some of what I saw. There are ways to get it way more dialed in, right? If you want to use one of those ComfyUI workflows, you can get way more dialed in using something like that, but I wanted to figure out the quickest path from A to Z here. The easiest way I've found to do it so far is to go to the fal.ai site, use the Flux Realism LoRA model, and make sure you have your guidance scale set to two; if you don't get what you're looking for, play with that slider a little until you get something that looks the way you want. Then you can take the image it generates, pull it into Runway, and now you're getting that ultra-realistic video. Mine don't look as good as the examples at the beginning, but again, I really, really think some of that stuff was cherry-picked. Just some fun toys for us AI nerds to play with. Hopefully you learned about some new tools that maybe you hadn't heard of yet. This fal.ai site is actually a new one to me; my buddy angry penguin, who was on that Flux video with me last week, is the one who pointed this site out to me, and now I've been playing with it and using this Flux Realism LoRA to really dial in these realistic AI-generated images, which are coming out pretty dang solid. So anyway, just thought I'd shoot a fun video, nerd out with you for a few minutes today, and show you what I've been playing around with. Hope you enjoyed it. If you did, like this video and subscribe to this channel; more videos like this will show up in your feed. And, uh, that's it. Really appreciate you. See you in the next one. Bye-bye!


Related Tags
AI Images, Flux AI, Realism, LoRA Models, Image Generation, Video Animation, Artificial Intelligence, Tech Trends, Digital Art, Innovation