AI Video Is About To Explode!

Theoretically Media
25 Jan 2024 · 16:07

Summary

TLDR: The video covers several AI video developments: Google's new Lumiere video model, which generates videos in a more temporally coherent way; speculation that fully AI-generated 30-60 minute films may be available within months; a sponsored segment on Meshy, a text-to-3D generator whose models can then be animated; hints that Midjourney may be moving into 3D, possibly drawing on the Media Molecule game-creation tool Dreams; and coverage of Sim Francisco, an AI simulation of an entire city populated by AI agents with wants, needs, and lifespans, aimed at developing artificial general intelligence.

Takeaways

  • 👨‍💻 Google announced a new video AI model called Lumiere that uses a spacetime diffusion architecture for improved coherence
  • 🎥 YouTube's Matt Wolf predicts 30-60 minute AI-generated films that are coherent and enjoyable will be available within the next 2 months
  • 🖥️ Meshy is a free AI text-to-3D model generator that can help you get started with creating 3D assets even if you're not an expert
  • 🤯 Nick Beller created an awesome mixed reality animation by sculpting a 3D character in VR and compositing it into video footage
  • 👀 Ex-Media Molecule co-founder Alex Evans is now working at Midjourney as a principal research engineer, possibly on 3D capabilities
  • 🔮 Midjourney has a newly formed hardware team, currently focused on collecting data for 3D; the device is described as letting anyone organize and manage tens of thousands of virtual 3D spaces
  • 🚀 AI simulation company The Simulation created an AI city called Sim Francisco populated by agents with wants, needs and lifespans
  • 🕹️ The AI agents in Sim Francisco relax by playing old-school Nintendo games
  • 🧠 The Simulation's goal with Sim Francisco is to achieve artificial general intelligence
  • 📺 The narrator jokes that AI cities were bound to happen, since whatever The Simpsons does ends up coming true

Q & A

  • What new video model did Google recently announce?

    -Google announced Lumiere, a new video generation model that uses a spacetime diffusion model to generate videos all at once instead of frame by frame.

  • What are some of the capabilities of Lumiere?

    -Lumiere can generate videos from text prompts with coherent motions like walk cycles. It can also stylize videos, do video in-painting and out-painting, and generate videos from images.

  • What did YouTube's Matt Wolf recently predict about AI-generated films?

    -Matt Wolf predicted that 30-60 minute AI-generated films that are coherent and enjoyable will be available within the next 2 months.

  • What does the tool Meshy allow you to do?

    -Meshy is an AI text-to-3D model generator that allows you to generate 3D models and textures from text prompts across different art styles.

  • How can you animate models made in Meshy?

    -You can download Meshy models and bring them into Adobe Mixamo to auto-rig and animate them.

  • Who is now working at Midjourney and what might they be doing?

    -Alex Evans, co-founder of Media Molecule, is now at Midjourney. He may be working on 3D capabilities for Midjourney.

  • What is Sim Francisco?

    -Sim Francisco is an AI-generated city, created by The Simulation, populated with AI agents that have wants, needs, and lifespans. It's an experiment aimed at artificial general intelligence.

  • What graphical style is Sim Francisco designed in?

    -Sim Francisco uses a graphical style similar to the South Park animation style.

  • What are some interactive elements in Sim Francisco?

    -The AI agents in Sim Francisco can fall in love, play NES games, and have lifespans where they die.

  • What is the ultimate goal of Sim Francisco?

    -The goal of Sim Francisco is to achieve artificial general intelligence by simulating agents in a virtual city.

Outlines

00:00

🎥 Google's New Lumiere Video Model Generates High-Quality Videos

Google has announced Lumiere, a new video generation model that uses a Spacetime diffusion model to generate high-quality, coherent videos all at once instead of frame by frame. It performs well at text-to-video, image-to-video, stylized video generation, in-painting, and out-painting.

05:00

🎞️ AI-Generated 30-60 Minute Films Coming in the Next 2 Months

YouTube's Matt Wolf has seen AI-generated 30-60 minute coherent and enjoyable films that will be publicly available in the next 2 months. He cannot share details due to an NDA but says the tech will be available for everyone soon.

10:00

📹 Meshy - An AI Text-to-3D Model Generator That's Easy to Use

Meshy is a free AI text-to-3D model generator with 200 monthly credits. It offers various art styles and can generate 3D models from text prompts quickly. The models can be downloaded and used in other 3D software. A coupon code for 20% off is provided.

15:01

🌆 The Simulation's AI City, Sim Francisco, Populated by Virtual Agents

The Simulation has created an AI-generated city called Sim Francisco, populated by virtual agents with wants, needs and lifespans. It looks visually similar to South Park but focuses on simulating urban dynamics and interactions in pursuit of artificial general intelligence.

Keywords

💡video generation

Video generation refers to AI systems that can generate synthetic video content from text prompts. This is a key theme of the video, which discusses several new video generation models from Google, namely VideoPoet and Lumiere. The narrator analyzes their ability to generate coherent videos, like the astronaut walking on Mars.

💡diffusion models

Diffusion models are a type of generative AI technique used for image and video synthesis. Lumiere uses a spacetime diffusion model which generates the entire video at once rather than frame-by-frame. This results in greater temporal coherence. The narrator explains how this spacetime architecture differs from other models that can result in 'temporal breaks'.
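
The benefit of joint generation over per-frame generation can be illustrated with a toy numpy sketch. This is purely illustrative (not Lumiere's actual architecture or code): the clip is one `(time, height, width)` tensor, a per-frame pass smooths each frame independently, and the "space-time" pass also mixes along the time axis, so every frame is denoised together and frame-to-frame jitter drops.

```python
import numpy as np

rng = np.random.default_rng(0)
clip = rng.normal(size=(16, 8, 8))  # 16 noisy 8x8 frames

def denoise_per_frame(noisy, steps=50):
    """Baseline: smooth each frame spatially on its own.
    Frames never see each other, so temporal jitter survives."""
    video = noisy.copy()
    for _ in range(steps):
        video = (0.5 * video
                 + 0.25 * np.roll(video, 1, axis=1)
                 + 0.25 * np.roll(video, -1, axis=1))
    return video

def denoise_spacetime(noisy, steps=50):
    """Joint pass: each update also mixes along the time axis (axis 0),
    a toy stand-in for generating the whole clip at once."""
    video = noisy.copy()
    for _ in range(steps):
        video = (0.5 * video
                 + 0.25 * np.roll(video, 1, axis=0)
                 + 0.25 * np.roll(video, -1, axis=0))
    return video

def temporal_jitter(video):
    """Mean absolute frame-to-frame difference (lower = more coherent)."""
    return np.abs(np.diff(video, axis=0)).mean()

print(temporal_jitter(denoise_per_frame(clip)) >
      temporal_jitter(denoise_spacetime(clip)))  # True
```

The per-frame version leaves the frames statistically independent, which is the toy analogue of the "temporal breaks" the narrator describes; coupling the time axis is what removes them.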

💡feature-length films

The narrator discusses the potential for AI to create full-length 30-60 minute films this year that are coherent and enjoyable to watch. This is based on comments from YouTube creator Matt Wolf and Marvel director Joe Russo. It reflects growing capabilities of AI creativity and storytelling.

💡3D generation

The video looks at AI systems that can generate 3D models, like Meshy. The narrator recommends learning 3D principles as AI creativity moves more into 3D. Examples given include generating 3D characters and objects and bringing them into animation software.

💡virtual worlds

The video references 'virtual 3D spaces' that Midjourney is building, hinting at more interactive and immersive experiences. The narrator speculates this could be similar to software like Dreams that allows exploring imaginative 3D worlds.

💡simulation

Simulation refers to using AI agents in simulated environments to study emergence and human behavior. The video discusses 'Sim Francisco', a virtual city populated with AI agents living life. This aims to achieve general intelligence through simulation.
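
The agent loop behind this kind of simulation typically follows the "generative agents" pattern: each agent perceives, decides (in Sim Francisco presumably via an LLM, stubbed out here), acts, and ages toward the end of its lifespan. A minimal Python sketch; every name, need value, and behavior below is invented for illustration and not taken from The Simulation's system:

```python
import random
from dataclasses import dataclass, field

random.seed(42)

@dataclass
class Agent:
    name: str
    lifespan: int  # ticks until the agent dies
    needs: dict = field(default_factory=lambda: {"social": 5, "rest": 5})
    alive: bool = True

    def decide(self):
        # stand-in for an LLM policy: act on the most pressing need
        need = min(self.needs, key=self.needs.get)
        return "chat" if need == "social" else "play_nes"

    def step(self, others):
        if not self.alive:
            return
        if self.decide() == "chat" and others:
            partner = random.choice(others)  # agents can pair up, fall in love...
            self.needs["social"] += 2
            partner.needs["social"] += 2
        else:
            self.needs["rest"] += 2  # ...or relax with old-school games
        self.lifespan -= 1
        if self.lifespan <= 0:
            self.alive = False  # lifespans are finite

agents = [Agent("Ada", lifespan=30), Agent("Bo", lifespan=10)]
for tick in range(40):
    for a in agents:
        a.step([o for o in agents if o is not a and o.alive])

print([a.alive for a in agents])  # [False, False]
```

Running the world continuously, with many such agents observing and learning from each other, is the "hyperactive Sims" setup the video describes.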

💡text-to-video

Text-to-video is the task of generating video from text descriptions. The narrator shows examples of text-to-video from Lumiere like 'astronaut on planet Mars' and 'woman walking through autumn forest'. This tests the coherence of generated video.

💡text-to-3D

Similarly, text-to-3D involves generating 3D models and objects from text prompts. The sponsor Meshy provides text-to-3D capabilities for generating characters, vehicles etc. The video shows examples like 'spaceship' and 'baby dragon'.

💡video stylization

Video stylization is the technique of transferring artistic styles onto video content. Lumiere can stylize input video into different rendering styles like painting, sketch, stained glass etc. as demonstrated through examples in the video.

💡deepfakes

Though not directly referenced, many AI video generation techniques can enable manipulative deepfakes. The narrator highlights the creative potential, but concerns about misuse remain.

Highlights

Google has a new video model called Lumiere.

Lumiere uses a spacetime diffusion model to generate video all at once.

Matt Wolf predicts 30-60 minute coherent AI films will be available in the next 2 months.

Meshy is an AI text-to-3D model generator that is easy to use for beginners.

Meshy works well for generating cartoon and animated styles but struggles with ultra-realistic models.

Generated Meshy models can be textured with AI and animated in Mixamo.

A Media Molecule co-founder is now working at Midjourney, possibly on 3D features.

Midjourney is building hardware focused on collecting data for 3D.

The Simulation created an AI city called Sim Francisco with autonomous AI agents.

Sim Francisco agents have lifespans, fall in love, and play Nintendo games.

Sim Francisco's visual style resembles The Simulation's earlier AI-generated South Park episodes.

Sim Francisco runs continuously, like a hyperactive version of The Sims, with the goal of achieving AGI.

An AR character sculpted in Dreams was composited onto video footage for a striking mixed-reality effect.

Lumiere shows promise for coherent video stylization and image-to-video.

Full-length AI films appear on track for Joe Russo's predicted two-year timeline.

Transcripts

00:00

Hey everyone, we've got some pretty big bombshells in the world of AI video: Google has yet another new video model, and 30-to-60-minute consistent and enjoyable AI films are coming soon. We'll talk about that, plus we're going to take a look at a tool that is going to be really helpful if you are terrible at 3D like I am. Obviously, if you're good at 3D, I think this tool will be helpful as well. And I've got an interesting Midjourney dive, well, maybe less of a dive and more of a spelunking, into an area that I don't think anybody else has covered yet, one that just might show the direction Midjourney's future is headed in. All that, plus an AI city of the future that is already here. All right, let's dive in.

00:41

Kicking off, Google have announced Lumiere, a new video generation model. Now you might be thinking to yourself, didn't Google just do this like a month ago? And yes, yes they did: about three weeks ago Google dropped VideoPoet, a zero-shot video generator. Does anyone know what is happening at Google right now? Does anyone at Google know what's happening at Google right now? Well, anyhow, as of today we have Lumiere, which does something kind of different than previous video generators: this is a space-time diffusion model. So we are going to rip a hole into the space-time continuum and see what makes Lumiere different, but first let's take a look at the standard features. Text-to-video, for example: it does a really great job with walk cycles. This is "astronaut on planet Mars," and he doesn't look like he's moonwalking or sliding all over the planet. "Handheld shot of a woman walking through an autumn forest," and an adorable puppy that isn't morphing into something horrific. Two other shots that I thought were pretty exceptional were the Jack Russell terrier on a snowboard (they actually called out "GoPro shot" in this one, and it definitely has that characteristic GoPro fisheye look) and the Lamborghini, which is also really nice. There are some incoherencies in terms of the road and maybe the physics of the car turning, but the fact that it's actually holding the model of the car together and not morphing into a fire truck is impressive.

01:58

Image-to-video also looks very good. The characteristics and the smile on the Girl with the Pearl Earring look very naturalistic, and I mean, those are five fingers on Sir Isaac Newton as he's waving hello. Funny enough, they also animated the famous flag-raising over Iwo Jima shot, which VideoPoet also did, but on the VideoPoet side we kind of ended up with this weird gopher appearing out of nowhere. It also has this stylized generation, which is something I don't think I've seen before, wherein you can give the model a reference image and it will generate videos in the style of that reference image. For example, here we have this vector image, and if you prompt "a bear twirling with delight" you get this, and "a cute bunny nibbling on a carrot." So it's definitely taking the elements of that vector-illustration kind of style and applying them to the video model. We've also got video stylization, kind of that Gen-1 thing where you can take an input video and change it into a variety of styles. At first I thought it was completely segmenting out the background, but on further reflection it does look like it changes out the backgrounds for each one of these; in the "made of flowers" one we have kind of an Eiffel-Tower-in-a-bouquet look back there. "Made of flowers," for some reason, is also super disturbing to me. You would think "made of flowers" would be nice and pleasant, but it actually comes off as one step away from the cordyceps virus in The Last of Us. It also does video in-painting and out-painting, and it looks like it does that pretty well, considering you've got this balloon here with the mask on this side and it's completely making up the other balloons and the remainder of the sky and horizon. The pizza example is actually really impressive as well, considering that it has to generate not only the top half of that pizza but the hand crossing into the masked area and dropping basil onto it. Yeah, that's pretty cool.

03:49

So how is Lumiere different from other video models? Remember, this is my caveman brain reading the big words in the paper, so this is going to be really simplified. Basically, it all comes down to an architecture called a Space-Time U-Net, which allows the video to be created all at once, as opposed to, I guess, other models, which begin with an input frame, then have an output frame, and then generate keyframes between those. The problem with other video models, as you can see in this example, is that you have your input frame here and your output frame here, and then if you look down here there's kind of this temporal break in the chain right here. Whereas with the Space-Time U-Net we have an unbroken chain. Also, because the video is generated all at once in space-time, as opposed to creating individual frames from an in and an out, it frees the model up for a lot of other tasks, like video stylization, in- and out-painting, and image-to-video, so all of those will be a lot more temporally coherent as well. It all looks really cool and super promising, and of course that leads to the question: do we get to play with it? The answer is, I don't know. I don't know if they'll release another video model in like three weeks; I'm just hoping they name it something on-the-nose, like Kafka.

05:04

Moving on: are we going to be seeing 30-to-60-minute-long, completely AI-generated films sometime this year, or, trimming that down, sometime in the next two months? Well, in a recent tweet by YouTube's own Matt Wolf (congratulations, Matt, you have gone from reporting on the news to becoming the news), Matt says yes. The tweet reads: "In my 2024 predictions video" (that video will be linked below) "I made a comment that I don't think AI is going to be creating full-length shows and films this year, that 30-to-60-minute stories that were coherent and enjoyable won't be available this year. I was wrong. The next two months will be wild," winky face. Now, Matt can't go into a lot of details on this because he signed a non-disclosure agreement, but he was able to field a few questions. For example, when Dan the Man asked if this was going to be stock-footage-type mashups, something we saw with InVideo, Matt said no, he's talking about fully generated videos. Additionally, this technology will be available for everyone to use; it's not going to be locked behind some private, firewalled gate in some studio mogul's house. And lastly, in terms of a time frame for when we will all see this, Matt says he thinks that next month people will see what he saw. Now, I can't comment too much on Matt's post, namely because I may or may not have signed a very similar non-disclosure agreement. That said, now would be a good time to hit the subscribe button. What I can say is that I keep thinking about a news story involving Avengers director Joe Russo, in which he said that he fully expected full AI movies within two years. That story came out in April of 2023, so, you know, tick-tock.

06:45

Moving on: if you're not great with 3D, or just downright suck at it like I do, I think now is a really good time to start learning some very basic fundamentals, because I do think a lot of creative AI is going to be moving in that direction as the year progresses. I'll have some news on that in just a little bit. Now, I don't think you need a 70-hour tutorial on Blender (unless you're interested in that, in which case go for it); you can start with something a little bit simpler, which brings us to Meshy, who are sponsoring today's video. Meshy is an AI text-to-3D model generator. It is free, with 200 credits per month, so that's perfect if you're just getting started, although it does have paid tiers as you ramp up, and I do have a coupon code for you as well; we'll get to that in just a minute. Meshy is very easy to use. You just come down to this text-to-3D button here, describe your object, put in a style (these would be the descriptors you want for your object), and then negative prompts as well. Below that you can choose from a number of different art styles, ranging from realistic, to voxel (which is kind of that Minecraft, Lego-block look), all the way down to realistic hand-drawn and cartoon line art. I quickly generated up a spaceship, and yeah, there you go, a 3D spaceship. I didn't give it too many details; the style was just "highly detailed, sci-fi, Unreal Engine," and I did not give it any negative prompts. You also have texture options over here for color or PBR; PBR, as I learned, has to do with reflectivity.

08:17

I will say that Meshy really excels when you're generating things like props, or if you're aiming for more of a cartoon or animated style; when you aim for the realistic stuff it gets a little bit on the wonky side. We'll take a look at that in one second, but first let's generate us a cute dragon. With the very simple prompt "cute baby dragon" you end up with four kind of low-res versions of dragon options. We could do a lot of other stuff in the prompt to call out certain colors or whatnot, but once you find one that you like, if you hit this refine button you end up with a refined version, and you can see the model has definitely improved on a lot of details. Circling back to the realistic side, or at least quote-unquote realistic in this case, I did try generating up a superhero full-body pose in a comic-book style, and we did get our guy, although you will see there are some issues going on in the face there. That's fine; if that's where the technology is right now, that's where it is. If you are somebody who is good with 3D, I'm sure that would be a super easy fix, but for someone like me, honestly, the simplest solution is just to start generating up characters that are wearing helmets or masks, like this kind of Warhammer-inspired space marine. If you do end up with a model where you just don't like the overall look or vibe of it, you can always download it as an FBX and then kick back over to the main menu, where you can use the AI texturing module. You simply hit "new project," describe this one as "robot," and then upload your FBX file. Once that's in, you see that our model is here but textureless, and then you can prompt for what textures you would like to see. For example, here we take our untextured model and run it with the prompt "a cyberpunk robot with black metal armor," and yeah, we get this, which looks pretty cool.

10:12

Now you might be thinking, well, that's kind of cool, but what can I do with that? It's just a character that's sort of standing there, and I am not a 3D animator. For that, we're going to download our model and take it over to Adobe's Mixamo. Mixamo is a completely free auto-rigger for 3D characters: it basically builds the skeleton for a 3D character, and provides a number of animations for it as well. I actually don't know why Mixamo isn't bigger than it is; it seems to be one of those lost, forgotten experiments of Adobe's. They do provide a number of different characters here, but what's really cool is that you can upload your own. So we're going to take the zip file of our cyberpunk robot that we downloaded; from here you just basically place these dots (chin, wrists, etc.), and once you have everything lined up, hit the next button, and in just a few minutes we have our character completely rigged up, with total camera control over everything. It's really a lot of fun. Now, you will notice that our character is a little bit on the low-res side right now; that's just a Mixamo limitation. To get those textures back into full res, you would bring it into something like Blender. We're not going to get into that, because, like I said, that is a rabbit hole that is 70 hours deep. So if you're just dipping your toes into 3D like I am, I think Meshy provides a really cool solution in that you can generate up assets, bring them into a 3D software package, and start playing around. If you're at a higher 3D level, Meshy does have a number of tutorials on incorporating it into Blender or Unity, which they also have a plugin for. Yeah, all of this is way above my head, though you can also apparently generate your textures in Meshy at up to 4K, so that's pretty cool. The link to Meshy is down below. Once again, they do have a free tier, but they also have a Pro and a Max tier, if you are interested in subscribing to either of those. If you use the coupon code THEO, T-H-E-O, you get 20% off. My thanks to Meshy for sponsoring this video.

12:13

Moving on, but still kind of staying within the realm of 3D: I do love a good magic trick, and Nick Beller posted this up, which is just really super cool. Check this out. Yeah, that's really cool: it's a 3D object in front of the TV that has that background. It's really awesome. To accomplish this, Nick actually sculpted the character in the PlayStation software Dreams, and did so in VR as well, so yeah, it's pretty awesome. It is a real shame that Sony kind of gave up on Dreams; I do hope somebody ends up picking up the technology, especially as we're making this push into VR with things like the Apple Vision Pro. From there, Nick took the model out into AR software (I believe it was Luma Labs that he ended up using), and there you go. And finally, using a still image export of the environment on his TV, kind of like a duct-tape version of Disney's volume, and placing the AR character in front of it, we end up with this, which is just so cool. I love experiments like this.

13:15

Speaking of Dreams, which was developed by a company called Media Molecule, which I do not believe exists anymore: one of the programmers and co-founders of Media Molecule, Alex Evans, is now working at Midjourney. Turns out you can learn a lot just by stumbling around LinkedIn. Now, I don't necessarily know what work Alex is doing at Midjourney. He is listed as a principal research engineer, working remotely, in the UK as well. But I don't think it's too much of a stretch to think that he might be working on some of the 3D aspects of Midjourney. I do think the general consensus is that you will be able to prompt an image, then have camera rotational tools and move around in 3D space within your 2D images. But honestly, given that Alex is working with Midjourney, who knows what this is going to look like. It is interesting to think that there is going to be a little bit of that Dreams DNA in Midjourney. Also, did you know that Midjourney has a head of hardware? I had to go digging into that, and as it turns out, yeah: in an office hours session that I missed, but luckily Nick St. Pierre attended, they mentioned a newly formed hardware team. Midjourney is building hardware. It's currently focused on collecting data for 3D. It's going to be an orb, described as a device that enables anyone to organize and manage tens of thousands of virtual 3D spaces. I still don't know what that means, but I kind of want one.

14:37

Rounding out, we have the first AI city. You might remember that a while back The Simulation created fully AI-generated South Park episodes. Well, they're back, this time with Sim Francisco (props on that name). Sim Francisco is populated by a bunch of AI agents, all of whom have been prompted to have wants, needs, and desires, and to interact and learn from one another. The agents have instructions to fall in love, and in fact the agents have lifespans, so they actually die as well. Additionally, they relax by playing old-school Nintendo games. Up, up, down, down, left, right, left, right, kids. Now, I'll say the overall graphical interface of Sim Francisco is very much in line with that South Park animation they did. As you can see, we're kind of scrolling around through the city here, zooming in on one of the characters, and we see her talking to an Alexa; it very much looks like the South Park style of animation. So yes, while the overall visual presentation does look a lot like something we've already seen, I'm actually a lot more interested in what's happening under the hood, because apparently Sim Francisco is just running all the time, so it's kind of like a hyperactive version of The Sims that never ends. The ultimate goal The Simulation is trying to achieve with Sim Francisco is AGI, and, I mean, who knows, maybe it'll work. It actually really reminds me of the old Simpsons episode with Lisa's tooth. If there's one thing we've learned, it's that if The Simpsons did it, it ends up coming true. On that note, I thank you for watching. My name is Tim.
