SHOCKING Robots EVOLVE in the SIMULATION plus OpenAI Leadership Just... LEAVES?

AI Unleashed - The Coming Artificial Intelligence Revolution and Race to AGI
4 May 2024 · 30:20

Summary

TL;DR: The video discusses recent advancements in AI, focusing on the rapid progress of Figure AI's robotics, particularly the Figure 01 robot's ability to perform tasks like handing over an orange using visual reasoning. It also touches on the ethics of kicking robots during demonstrations and the potential legal issues surrounding AI and copyright. The script introduces Dr. Eureka, an AI agent that automates the process of training robots in simulation and bridging the gap to real-world deployment. Additionally, it covers the departure of two senior executives from OpenAI and a proposed framework for compensating copyright owners for their contribution to AI-generated content. The video also explores multi-token prediction as a way to improve language models' efficiency, the release of Devin 2.0, an AI agent capable of performing complex tasks, and wearable AI devices, such as open-source AI glasses that can provide real-time information and assistance.

Takeaways

  • 🤖 Advancements in AI robotics are significant, with Figure AI showcasing Figure 01, an AI-driven robot that can identify healthy food options through visual reasoning.
  • 📈 Figure 01 is connected to a pre-trained model via OpenAI to output common-sense reasoning, indicating a trend toward integrating AI with robotics for enhanced functionality.
  • 🔧 The robot's ability to grasp an orange is driven by an in-house trained neural network that maps camera pixels to robot actions at 10 Hz, highlighting the role of neural networks in translating visual data into physical action (see the control-loop sketch after this list).
  • 📱 Concerns are raised, half-jokingly, about the practice of kicking robots for demonstration purposes and then training future AI systems on internet footage of that treatment.
  • 🐕 Dr. Jim Fan discusses training a robot dog to balance on a yoga ball purely in simulation, emphasizing zero-shot transfer to the real world without any fine-tuning.
  • 🌐 The introduction of Dr. Eureka, an LLM agent that writes code for robot skill training in simulation and bridges the simulation-reality gap, represents a step towards automating the entire robot learning pipeline.
  • 📚 Eureka's ability to generate novel rewards for complex tasks suggests that AI can devise solutions that differ from human approaches, potentially offering better outcomes for advanced tasks.
  • 📉 Two senior executives from OpenAI, Diane Yoon and Chris Clark, have left the company, raising questions about the reasons behind their departure and the impact on the organization.
  • 📄 A paper shared by Ethan Mollick discusses copyright issues for AI-generated content, proposing a framework for compensating copyright owners based on their contribution to AI-generated content.
  • 🔑 The paper suggests that the act of 'reading' or training on copyrighted material by AI models may not be copyright infringement in itself, but rather the reproduction of similar works.
  • 📈 Research indicates that training language models to predict multiple future tokens at once can lead to higher sample efficiency and faster inference times, which could significantly improve the performance of large language models.
  • 🧊 Devin 2.0, an AI agent, is capable of performing complex tasks such as creating a website to play chess against a language model and visualizing data, although it may encounter bugs that need fixing.
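
To make the pixels-to-actions takeaway concrete, here is a minimal sketch of a 10 Hz perception-control loop. It is purely illustrative: get_camera_frame, policy, and send_to_actuators are hypothetical stand-ins, since Figure has not published its control stack.

```python
import time
import numpy as np

def get_camera_frame() -> np.ndarray:
    """Hypothetical camera read standing in for the robot's onboard cameras."""
    return np.zeros((224, 224, 3), dtype=np.uint8)

def policy(pixels: np.ndarray) -> np.ndarray:
    """Hypothetical pixels-to-action network; the real model is not public."""
    return np.zeros(24)  # e.g., target joint positions

def send_to_actuators(action: np.ndarray) -> None:
    """Hypothetical actuator interface."""
    pass

CONTROL_HZ = 10
PERIOD = 1.0 / CONTROL_HZ  # 100 ms per control tick

for _ in range(100):  # run 10 seconds of control
    start = time.monotonic()
    send_to_actuators(policy(get_camera_frame()))  # pixels in, actions out
    # Sleep off the remainder of the tick to hold a steady 10 Hz
    time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))
```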

Q & A

  • What is the significance of the Figure 01 robot and its AI capabilities as mentioned in the transcript?

    -Figure 01, Figure AI's OpenAI-backed robot, is significant because it demonstrates the integration of robots and AI as the next frontier in technology. The robot showcased on '60 Minutes' is capable of common-sense reasoning and can select a healthy food item over an unhealthy one based on visual cues, a step towards more autonomous and intelligent machines.

  • How does the robot in the transcript determine which object to pick based on the request for something healthy?

    -The robot uses visual reasoning via its cameras to identify objects within its field of view. It is connected to a pre-trained model via OpenAI, which lets it output common-sense reasoning. When asked to hand over something healthy, it recognizes the orange, rather than the chips, as the healthy choice.

  • What is the role of Dr. Jim Fan in the development of AI and robots as described in the transcript?

    -Dr. Jim Fan is involved in training robots using simulations and transferring those skills to the real world without fine-tuning. He is also associated with the development of Dr. Eureka, an LLM agent that writes code to train robot skills in simulation and bridges the simulation-reality gap, automating the pipeline from new skill learning to real-world deployment.

  • What is the concern raised about training AI on internet footage?

    -The concern, raised half-jokingly in the video, is that footage of robots being kicked or destabilized for demonstrations ends up on the internet, and AI systems are then trained on that internet data. The worry is about what future AI systems will effectively 'learn' about how humans have treated robots.

  • How does the proposed Dr. Eureka system differ from traditional simulation-to-real transfer methods?

    -The Dr. Eureka system automates the process of transferring skills from simulation to the real world, which traditionally required domain randomization and manual adjustments by expert roboticists. Instead of tedious manual work, Dr. Eureka uses an LLM (GPT-4) to search over a vast space of sim-to-real configurations, enabling more efficient and effective training of robots.
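
    As an illustration of that search idea (not the actual Dr. Eureka implementation, which lives in the paper's codebase), an LLM-guided loop over sim-to-real configurations might look roughly like the sketch below; llm_propose_config and train_and_evaluate are hypothetical stand-ins, with the LLM call stubbed out by random search:

```python
import json
import random

def llm_propose_config(history: list[dict]) -> dict:
    """Hypothetical call to an LLM (e.g., GPT-4) that reads past results and
    proposes new domain-randomization ranges. Stubbed with random search here."""
    return {
        "friction":  sorted(random.uniform(0.1, 1.5) for _ in range(2)),
        "damping":   sorted(random.uniform(0.0, 2.0) for _ in range(2)),
        "gravity_z": sorted(random.uniform(-10.5, -9.0) for _ in range(2)),
    }

def train_and_evaluate(config: dict) -> float:
    """Hypothetical: train a policy in simulation under these randomization
    ranges and return a real-world-proxy score (e.g., seconds balanced)."""
    return random.random()

history: list[dict] = []
for trial in range(10):
    cfg = llm_propose_config(history)   # LLM reasons over past feedback
    score = train_and_evaluate(cfg)     # roll out in simulation
    history.append({"config": cfg, "score": score})

best = max(history, key=lambda h: h["score"])
print(json.dumps(best, indent=2))
```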

  • What is the potential impact of GPT-5 on the process described in the transcript?

    -The potential impact of GPT-5, as inferred from the capabilities of GPT-4, could be significant. It suggests that with the advancement to GPT-5, the process of sim-to-real transfer and the tuning of physical parameters such as friction, damping, and gravity could become even more efficient and accurate, potentially leading to better performance in real-world applications.

  • What is the main idea behind training robots in simulation as discussed in the transcript?

    -The main idea is to allow robots to learn and master various skills in a simulated environment that mimics the real world's physics. This enables the robots to learn complex tasks like walking, balancing, opening doors, and picking up objects, which can then be transferred to real-world scenarios, increasing efficiency and reducing the need for physical trials and errors.

  • What is the role of NVIDIA's Isaac Gym in the context of the transcript?

    -NVIDIA's Isaac Gym is mentioned as a platform where the physics of the simulated world behave just as they do in the real world, but simulations can run at a much faster pace (the video cites 10,000 times speed). This high-speed simulation capability is crucial for training robots efficiently and testing many scenarios before deploying them in the real world.
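
    A toy sketch of why simulation throughput matters: stepping many environments as one batched array operation is the basic pattern behind GPU-accelerated simulators like Isaac Gym. This toy uses NumPy on CPU with invented dynamics, purely to show the batching idea:

```python
import numpy as np

N_ENVS = 4096     # thousands of environments stepped in lockstep
DT = 1.0 / 60.0   # 60 Hz physics tick

# Toy state: position and velocity of a point mass per environment
pos = np.zeros(N_ENVS)
vel = np.zeros(N_ENVS)

def step(actions: np.ndarray) -> None:
    """One batched physics step for all environments at once."""
    global pos, vel
    accel = actions - 0.1 * vel  # invented dynamics: force minus drag
    vel += accel * DT
    pos += vel * DT

rng = np.random.default_rng(0)
for _ in range(600):                   # 10 simulated seconds per env...
    step(rng.standard_normal(N_ENVS))  # ...for all 4096 envs in one call
# 600 ticks x 4096 envs is about 2.5M env-steps; a GPU simulator does this far faster.
```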

  • How does the Eureka algorithm contribute to the automation of the robot learning pipeline?

    -The Eureka algorithm contributes by using GPT-4 to write and iteratively improve reward functions, which taught a robot hand to perform complex tasks like pen spinning inside a simulation. Dr. Eureka takes this further by automating the pipeline from learning new skills in simulation to deploying those skills in the real world, reducing the need for human intervention in the training process.
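
    In rough pseudocode, the loop the video describes (GPT-4 writes reward functions, simulation scores them, results feed back as text) could be sketched as follows; ask_gpt4 and run_rl_training are hypothetical stand-ins for an LLM API call and the GPU-accelerated RL run:

```python
def ask_gpt4(prompt: str) -> str:
    """Hypothetical LLM call returning Python source for a reward function."""
    return "def reward(state, action):\n    return -abs(state['pen_angle_error'])"

def run_rl_training(reward_src: str) -> float:
    """Hypothetical: compile the reward, train a policy in simulation,
    and return a task success rate."""
    return 0.42

prompt = "Environment code: ...\nTask: spin the pen.\nWrite a reward function."
best_src, best_score = None, -1.0

for iteration in range(5):               # the 'Eureka iterations'
    reward_src = ask_gpt4(prompt)        # GPT-4 proposes a reward function
    score = run_rl_training(reward_src)  # the simulation tests it
    if score > best_score:
        best_src, best_score = reward_src, score
    # Feed the results back as text: 'here's how well your code did, do better'
    prompt += f"\nYour reward scored {score:.2f}. Analyze and improve it."

print(f"Best success rate after 5 iterations: {best_score:.2f}")
```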

  • What is the proposed framework for dealing with copyright issues for AI-generated content as mentioned in the transcript?

    -The proposed framework aims to compensate copyright owners based on their contribution to the creation of AI-generated content. The metric for contributions is determined quantitatively by leveraging the probabilistic nature of modern generative AI models, suggesting a potential solution for the debate on copyright infringement in AI training.
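
    The abstract only gives the high-level idea, but a proportional payout under such a framework might be computed like this hedged toy; the contribution scores here are made-up inputs, whereas the paper derives them from the model's probabilities and cooperative game theory:

```python
# Toy royalty split: each copyright owner is paid in proportion to a
# quantitative 'contribution' score for a piece of AI-generated content.
contributions = {"artist_a": 0.031, "artist_b": 0.007, "artist_c": 0.002}
revenue = 1000.00  # dollars earned by the generated piece (made-up)

total = sum(contributions.values())
payouts = {owner: revenue * share / total for owner, share in contributions.items()}

for owner, amount in sorted(payouts.items()):
    print(f"{owner}: ${amount:.2f}")
```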

  • What is the potential impact of multi-token prediction in training language models as discussed in the transcript?

    -Multi-token prediction could lead to higher sample efficiency and improved performance on generative benchmarks, especially in tasks like coding. It also suggests that models trained this way can have up to 3 times faster inference, even with large batch sizes, which could significantly enhance the development of algorithmic reasoning capabilities in AI.
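
    To make the idea concrete, here is a minimal sketch (assuming a transformer trunk already exists) of the multi-head setup the paper describes, where n output heads each predict one of the next n tokens from shared trunk features. Sizes and names are illustrative, not the paper's code; the extra heads can also be exploited at decode time, e.g., via speculative decoding, which is one way such models speed up inference.

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL, N_FUTURE = 32000, 512, 4  # predict 4 future tokens at once

class MultiTokenHead(nn.Module):
    def __init__(self):
        super().__init__()
        # One output head per future offset, all sharing the same trunk features
        self.heads = nn.ModuleList(nn.Linear(D_MODEL, VOCAB) for _ in range(N_FUTURE))

    def forward(self, trunk_features: torch.Tensor) -> list[torch.Tensor]:
        # trunk_features: (batch, seq, d_model) from a shared transformer trunk
        return [head(trunk_features) for head in self.heads]

def loss(logits: list[torch.Tensor], tokens: torch.Tensor) -> torch.Tensor:
    """Head k is trained against the token k+1 steps ahead."""
    ce = nn.CrossEntropyLoss()
    total = torch.tensor(0.0)
    for k, lg in enumerate(logits):
        pred = lg[:, : tokens.size(1) - (k + 1), :]  # drop positions with no target
        target = tokens[:, k + 1 :]
        total = total + ce(pred.reshape(-1, VOCAB), target.reshape(-1))
    return total / len(logits)

# Smoke test with random trunk features and tokens
feats = torch.randn(2, 16, D_MODEL)
toks = torch.randint(0, VOCAB, (2, 16))
print(loss(MultiTokenHead()(feats), toks).item())
```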

Outlines

00:00

🤖 Advancements in AI and Robotics

The video discusses recent breakthroughs in AI and robotics, highlighting Figure 01's advancements and the potential of combining AI with robots. Brett Adcock, the founder of Figure AI, is featured discussing a robot that uses pre-trained models to perform tasks like selecting a healthy food item. The video also touches on the ethical concerns of training robots through internet footage and the progress in sim-to-real transfer of skills, as demonstrated by a robot dog balancing on a yoga ball. Dr. Jim Fan's work on training robots in simulation is mentioned, along with Dr. Eureka, an AI agent that automates the process of training robot skills in simulation and deploying them in the real world.

05:03

📈 Eureka's Iterative Learning and Novel Rewards

The script explains the iterative process behind Eureka's learning mechanism, where GPT-4 is used to generate and test reward functions for training robots. As tasks become more complex, Eureka generates novel rewards that differ from human-crafted solutions, proving more effective for advanced tasks. The video also discusses the latest advancement, Dr. Eureka, which focuses on language-model-guided sim-to-real transfer, taking the learnings from simulated environments and applying them to real-world robotics, as showcased by a robot balancing on a ball.

10:03

🚀 Domain Randomization and OpenAI Drama

The video talks about domain randomization in simulations to prepare robots for the imperfections of the real world, making them more robust. It then shifts to discussing recent departures at OpenAI, where two senior executives left the company. The content also covers a paper shared by Ethan Mollick on dealing with copyright issues for AI-generated art, proposing a framework for compensating copyright owners based on their contribution to AI-generated content.

15:04

🧠 Multi-Token Prediction in Language Models

The script delves into a paper suggesting that training language models to predict multiple future tokens at once can lead to higher sample efficiency and better performance on generative benchmarks like coding. It discusses the potential for faster inference times with large models and the implications for algorithmic reasoning capabilities. The video also mentions a speech by Andrej Karpathy, who has worked on AI agents and large language models, and his insights into the development and future of these technologies.

20:05

💡 Devin 2.0 and AI's Impact on Software Development

The video discusses the release of Devin 2.0, an AI agent capable of performing tasks like creating websites and data visualizations. It provides examples of Devin's capabilities, including handling multiple asynchronous tasks and generating live apps. The speaker expresses skepticism about extreme views on AI replacing software developers, suggesting that Devin and similar AI agents will likely improve developers' efficiency rather than replace them.

25:06

🕶️ Wearable AI Devices and Their Potential

The video introduces a wearable AI device that uses augmented reality and artificial intelligence to assist users in various tasks. The device, called Frame, has an AI assistant named Noa that can orchestrate a potentially infinite number of AI models. The demo shows the device providing fashion advice, finding matching clothing, and generating images based on the user's surroundings. The speaker expresses enthusiasm for the potential of such devices to become part of everyday life.

30:07

📦 Frame AI Glasses Shipping Update

The video concludes with an update on the shipping status of Frame AI glasses, which are set to start shipping to customers in the coming weeks.

Keywords

💡AI News

AI News refers to the latest developments and breakthroughs in the field of artificial intelligence. In the video, it sets the stage for discussing various advancements such as robotics, AI models, and wearable devices, indicating the rapid pace of innovation in AI.

💡Figure 01

Figure 01 is the humanoid robot from Figure AI, a company pioneering the integration of robots with AI. The video highlights the robot's capacity for common-sense reasoning and physical tasks, showcasing the company's role in pushing the frontier of AI and robotics.

💡Simulation-to-Real Transfer

Simulation-to-Real Transfer is a process where skills and behaviors learned by AI in a simulated environment are transferred to real-world applications. The video discusses how Dr. Eureka, an AI agent, automates this process, making it more efficient and effective for training robots.

💡GPT-4

GPT-4, or Generative Pre-trained Transformer 4, is a large language model that is part of the discussion on how AI systems are being used to generate code and improve reward functions in training AI models. It represents the advancement in natural language processing within AI.

💡Domain Randomization

Domain Randomization is a technique used in AI training where the parameters of the simulation are varied randomly to enhance the robustness of the trained model. The video explains how this technique helps systems like the ball-balancing robot dog adapt better to real-world conditions.
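
A minimal sketch of the technique, with invented parameter ranges and a generic API rather than any specific simulator's:

```python
import random

def randomized_physics() -> dict:
    """Sample a fresh set of physics parameters for each training episode."""
    return {
        "friction":   random.uniform(0.4, 1.2),    # ground contact friction
        "damping":    random.uniform(0.05, 0.5),   # joint damping
        "gravity":    random.uniform(-10.2, -9.4), # m/s^2, varied slightly
        "motor_gain": random.uniform(0.9, 1.1),    # per-leg actuation imbalance
    }

for episode in range(3):
    params = randomized_physics()
    # env.reset(physics=params)  # hypothetical simulator call
    print(f"episode {episode}: {params}")
```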

💡Eureka Algorithm

The Eureka Algorithm is an AI technique used for teaching complex tasks to robots, such as pen spinning. The video emphasizes its ability to automate the learning pipeline and improve the success rate of tasks over iterations, showcasing its significance in the field of AI robotics.

💡Wearable Device

Wearable Devices are a topic of discussion as the video mentions a new type of AI glasses that offer augmented reality and AI assistance. These devices represent the convergence of technology with fashion and practical use, aiming to provide users with immediate and context-aware information.

💡AI Agents

AI Agents, such as Dr. Eureka and Noa (in the context of the video), are autonomous systems that perform tasks, make decisions, and learn from experience. They are central to the video's narrative on AI's evolving capabilities and their potential to automate complex processes.

💡Multi-Token Prediction

Multi-Token Prediction is a method proposed for training language models to predict multiple future tokens (words or subword pieces) at once, which could lead to more efficient and effective AI models. The video discusses how this approach might significantly improve the performance of AI in generative tasks.

💡Copyright Issues for AI

The video touches on the legal and ethical concerns surrounding AI systems trained on copyrighted material. It discusses the need for a framework to compensate copyright owners based on the contribution of their work to AI-generated content, reflecting the ongoing debate on intellectual property in the AI era.

💡Devin (AI Assistant)

Devin is an AI assistant that is showcased for its ability to perform complex tasks such as creating websites and data visualizations. The video illustrates the potential of AI assistants to automate and streamline software development processes, while also acknowledging their current limitations.

Highlights

Figure 01, an OpenAI-backed robot, is advancing rapidly in AI integration.

Brett Adcock, founder of Figure AI, discusses the potential of robots plus AI as the next great frontier.

AI on the Figure 01 robot demonstrated common-sense reasoning during a 60 Minutes feature.

The robot's neural network maps camera pixels to robot action at 10 Hz.

Dr. Jim Fan's team trained a robot dog to balance and walk on a yoga ball using simulation, requiring no fine-tuning for real-world application.

NVIDIA's Isaac Gym allows for AI training at accelerated speeds, similar to Dragon Ball Z.

Dr. Eureka, an LLM agent, automates the pipeline from new skill learning to real-world deployment.

Eureka's algorithm teaches a five-finger robot hand to spin a pen in simulation.

GPT-4 is utilized to improve the robot training process through positive and negative reinforcements.

Eureka generates novel rewards, offering solutions that surpass human-generated models for complex tasks.

Dr. Eureka's language model guides sim-to-real transfer, enhancing real-world robot performance.

OpenAI faces internal changes with two senior executives leaving the company.

A framework is proposed to compensate copyright owners for contributions to AI-generated content.

Multi-token prediction in language models could lead to faster and more efficient AI systems.

Andrej Karpathy discusses the potential impact of new AI research on large language models.

Devin 2.0 showcases capabilities in creating complex applications like a chess-playing website and a data visualization map of Antarctica's sea temperatures.

Devin's asynchronous task handling allows for multiple tasks to be processed simultaneously.

Brilliant Labs introduces an open-source, lightweight AI glasses platform with an AI assistant named Noa.

Noa, the AI assistant in the glasses, can generate images and perform tasks based on the user's visual input.

Transcripts

There's some massive AI news on the horizon, and it's quite shocking. First on the scene is Figure 01, the OpenAI-backed robot that is advancing quite rapidly. Here's Brett Adcock, the founder of Figure AI, the company. He's saying robots plus AI are the next great frontier: "We demoed some of the latest AI on our robot during 60 Minutes last week." I'm not sure if 60 Minutes is a Twitter account or what, but it seems big. "Figure 01 is connected to a pertained" (I'm guessing that's pre-trained) "model via OpenAI to output common sense reasoning. So when Bill says 'hand me something healthy,' the robot, by visual reasoning via robot cameras, knows this is the orange in the scene and not the chips. The behavior of this robot grasping the orange was an in-house trained neural network. The model is mapping camera pixels at 10 Hz to robot action."
Speaking of robots, you know how we tend to kick them for demonstrations? We've got to stop doing that. Now it's even worse: we're kicking whatever they're standing on out from underneath them, or trying to, at least. We're doing that, we're recording the evidence, we're taking the footage, we're putting that footage on the internet, and then we train the AI brains on that internet. Is no one else concerned about this? No? Just me? All right, fine.

So Dr. Jim Fan is saying: "We trained a robot dog to balance and walk on top of a yoga ball purely in simulation, and then transferred zero-shot to the real world. No fine-tuning, it just works." And we've been talking about this on this channel for quite a bit, this idea of training robots in simulation, having them figure out the skills to walk, to move, to balance, to open doors, to pick up and transfer packages around. Something like NVIDIA's Isaac Gym can run these simulated worlds, where the physics exists just like it would in our world, but they're able to run it at 10,000 times the speed, like Dragon Ball Z. And he continues: "I'm excited to announce Dr. Eureka, an LLM agent that writes code to train robot skills in simulation and writes more code to bridge the difficult simulation-reality gap. It fully automates the pipeline from new skill learning to real-world deployment."
If you recall when we covered the original Eureka paper on this channel, you realize how fast this is moving. The yoga ball task is particularly hard because it's not possible to accurately simulate the bouncy ball surface, yet Dr. Eureka has no trouble searching over a vast space of sim-to-real configurations, and it enables the dog to steer the ball on various terrains, even walking sideways. Traditionally, sim-to-real transfer is achieved by domain randomization, a tedious process that requires expert human roboticists to stare at every parameter and adjust it by hand. That was the big breakthrough that we saw in Eureka: instead of tedious work done by humans, we just, quote unquote, outsource it to GPT-4. What do you think happens when we have GPT-5 on the scene? Might this process get better? Maybe.

Dr. Jim Fan continues: "Frontier LLMs like GPT-4 have tons of built-in physical intuition for friction, damping, stiffness, gravity, etc. We are mildly surprised to find that Dr. Eureka can tune these parameters competently and explain its reasoning well. Dr. Eureka builds on our prior work, Eureka, the algorithm that teaches a five-finger robot hand to do pen spinning. It takes one step further on our quest to automate the entire robot learning pipeline by an AI agent system: one model that outputs strings will supervise another model that outputs torque control."

All right, so here's Eureka. This was the original one, so this is not what he's talking about; this is the predecessor. Let's see when this was published, because it wasn't that long ago: it looks like it was revised April 30th and originally appeared in October 2023. And Eureka did something really cool. It taught a robot hand, this Shadow Hand, inside a simulation (inside something like NVIDIA's Isaac Gym) to spin a pen or a pencil, you know, how kids do in school sometimes, in various different directions, upside down, right side up, etc., with different configurations of fingers, which had never been done before. It was a very complex problem, but how they did it was perhaps even more interesting, because what they did is they set up this architecture around GPT-4, a large language model. Keep in mind, this area is improving constantly; this was at the end of last year, and this is how good it was.
Where is this going to be a year from now? Five years from now? So we have the environment code and the task descriptions. The environment code is the Isaac Gym environment: we're writing reward functions for these robots. We're basically saying, when it does something right, we give it an attaboy, and if it's doing something wrong, we write in a penalty; we're trying to teach it how to do things with positive and negative reinforcement. That's the environment code. We feed it into GPT-4, and GPT-4 produces samples of the various reward functions that it could run. It just produces a bunch of them, and they get tested: they're put into the GPU-accelerated reinforcement learning, the simulation, if you will, and these Shadow Hands sit there in the simulation and spin these things at 10,000 times the speed of the flow of time in our universe, if you will. Then we get the results back, and we provide those results back to GPT-4: "well, here's how well your code did." And it's told, "please carefully analyze the policy feedback and provide a new, improved reward function." So it does it, we test it, we give it back and say, "here are your results, do better." And it does it again, and we're like, "do better," and it does it again, and we're like, "do better," and it figures out how to train these robots better and better over the different iterations. It keeps improving. So these are the Eureka iterations: as it goes from 1, 2, 3, 4, 5, the success rate keeps improving.

But there's one last thing that's going to blow your mind, and it is this: Eureka generates novel rewards. I encourage everybody to read this paper, because I'm not going to do it justice by explaining it simply; if you want to dive deep into it, I strongly suggest you read it. But basically, it seems like for simple tasks, both the AI-trained model and the human results are similar: both the AI and the humans write similar types of code for these reward models. Where it gets interesting is as those tasks get more and more complicated, for example with pen spinning, or that thing where it's rotating a Rubik's Cube, or these little helicopters, quadcopters, whatever you want to call them. For those more complex tasks, the harder the task, the less correlated the Eureka rewards are with human ones: the rewards that it writes are novel. That means as the tasks get harder, humans have a hard time coming up with good solutions; the AI comes up with different solutions than the humans would, and those solutions are better for those advanced tasks. If you're getting what I'm saying, it's probably kind of blowing your mind.
But getting back to this new thing that just came out, Dr. Eureka: it's language-model-guided sim-to-real transfer. Now we're beginning to take the learnings of those robots in simulation and apply them to the real world, and we're getting this dog that's able to balance on top of a ball. As far as I can tell, I don't think these are cherry-picked results; I'm sure it fell off once or twice during testing, but I've got to say, number one, it's certainly better than any human could balance on something like that, and I almost feel like we're getting to the point where maybe it's better than any animal, or at least any animal that size, at balancing on it, without using its claws to dig in. Notice the legs are pretty smooth; maybe it's got some rubber padding on there for friction, but it's not really gripping the ball. Specifically, take a look at this one right here, on the right, in the center: watch what it does when it starts slipping off of the ball. There it goes. It's getting faster and faster, almost overcorrects, still catches itself, and still gets back to roughly the center. It's insane.

Anyway, so this is the Dr. Eureka component. It's probably very similar to the original Eureka; I haven't read the paper yet, but I am planning to, and we might have to do a deep dive, because this is absolutely out of this world. As far as I can tell, we're taking this simulation, similar to what we did with Eureka. In fact, this is the Eureka reward design and initial policy learning, which is what we covered previously with Eureka, and the reward function that GPT-4 or whatever comes up with gets fed into this final sim-to-real policy learning. So it learns those things in the simulation, and then that code, those neural nets, that sort of data, gets loaded into real-world deployment. And here they have kind of the uncut deployment video, so we'll just take a look at this for a little bit. Here, I mean, it's looking very good, and they have more of a long shot of this thing moving around; I'll post it below if you want to take a look at it. I kind of fast-forwarded through this thing, and as far as I can tell, it never falls off: it stays on that ball, walking, for 5 minutes straight; it doesn't mess up, doesn't fall off. And here's one where they're actually deflating the ball as it goes (the sound is on just so you can hear the thing deflating). So it's rapidly deflating as the thing is balancing, and here, I'll skip forward just a little bit: it's maybe 80% inflated and it's still managing to stay on top of it, even though it's a completely different environment, and it looks like it finally kind of falls off.

And I think that has something to do with the domain randomization. Basically, if you saw those DeepMind soccer robots, one of the things they were talking about is that they randomize a lot of the little things that go into that simulation, that world. They'll slightly randomize the friction coefficient, they'll slightly randomize how much electrical current is going to the robot's left leg versus its right leg. Just like in the real world, not everything is perfect, so we can't simulate it as being perfect; we throw these random minor tweaks of various physics in there that the robots have to deal with, and what they're saying is that that tends to make the robots very robust when they emerge in the real world. They're able to deal with, as you saw there, the thing deflating; it's able to adapt. Anyways, we'll have to come back and cover this whole thing properly, but I've got to say, at one point these things are going to get too fast, too good, and they might want to have a chat with us about all the stuff that we've put them through. So I just want to say, for the record: I, Wes Roth, am against this practice. This is bad. I do not stand for this. I am, and always have been, on the robot side. For the record. Don't kill me.
Next up, we have more OpenAI drama; OpenAI is nothing without a drama. Two senior OpenAI executives have left the company: Vice President of People Diane Yoon and Chris Clark, head of nonprofit and strategic initiatives, left earlier this week, a company spokesperson said. And yeah, it's strange to hear this, because both executives were among the most long-tenured managers at OpenAI, the developer of ChatGPT, recently worth $86 billion in an employee share sale. I mean, if you've been with the company from the beginning and it grew to be worth $86 billion, what would make you leave? That's according to The Information. I'm not going to read the article because it's their scoop, it's the work that they did, and I want to be respectful of that; I'll link it down below. The Information is subscription-based, but if you're into AI, I've got to say (and I'm not being paid to say this) that it's the only subscription to a publication that I have. If you think about news publications, I'm certain this is the only subscription that I have right now.
Ethan Mollick posts a paper about how to deal with copyright issues for AI art, and this is interesting: somebody has a plain-English rewrite of that paper, and we're probably going to see more of this. It has an overview, a plain-English explanation, a technical explanation, a critical analysis, and a conclusion. I'm not going to read the AI-generated stuff, because, well, it could be really good, it could be really accurate, but you really have to make sure you do it correctly, because it's very easy to get nonsense. But this is the abstract from the actual paper, so this isn't AI-generated. It's saying that general AI systems are trained on large data, and there's a growing concern that this may infringe on copyright interests. So if Midjourney is training on artists' images, those artists now have a competing AI model, and they're not getting paid for it, and there's been a whole debate about whether that's right or not. The people saying this is copyright infringement are, of course, saying you can't train the model on that data, whereas the people arguing against it are saying copyright protections were always about reproducing something, not reading it. So, like, George R.R. Martin, the writer of Game of Thrones, was a big fan of Tolkien; he even borrowed the whole "R.R.," like J.R.R. Tolkien. We know he read the works and he created something similar, but we're not saying it's copyright infringement, even though we know he read Tolkien's works. The reading isn't the infringement; but if he had produced something very similar, that would be infringement. And so I think the big question is how we're going to treat AI models. Is an AI model reading something copyright infringement? We haven't really applied that criterion before, so it's an interesting legal situation that will be unfolding.

Basically, here they're proposing a framework that compensates copyright owners proportionately to their contribution to the creation of AI-generated content. The metric for contributions is quantitatively determined by leveraging the probabilistic nature of modern generative AI models, with tools from cooperative game theory and economics. So it sounds like, basically, if, say, you're an artist and your artwork comprises 1% of the data the model is trained on, and they find the model relies on that portion of the data with a certain probability to produce artwork, then maybe there's a formula; I haven't read the paper in depth. And maybe something like this will be adopted; in fact, this is what people are talking about. But again, this presupposes that reading something or looking at something by these AI systems is breaking copyright. Now, of course, if you've ever had a website, you know how many crawl bots hit your website: you have Google and Bing and, nowadays, a lot of OpenAI, and tons of other stuff. They're constantly crawling your site, and we're not saying that's copyright infringement; but if they copy and paste it to their own site, word for word, then it becomes copyright infringement. If I make a photocopy of a book, that's not copyright infringement; if I start selling those copies, then that is copyright infringement. So again, I'm not arguing one way or another, but we've got to get some clarity: is reading something, is training on something, in itself copyright infringement, or is copyright infringement strictly the act of reproducing something that's similar?
And this is Elvis Saravia; we've looked at a few of his posts, as he covers various papers in AI, and I think he just recently launched a YouTube channel. So, Elvis Saravia: a really good person to follow, and I really enjoy his stuff. He breaks this paper down; I'll link everything down below if you want to follow him. Basically, my understanding is that the paper is about predicting the multiple tokens that come next. Tokens are the smallest unit that these large language models, like ChatGPT and GPT-4, deal with. Usually a token is a word: "the" is one token, "sky" is one token, "is" is one token; but sometimes bigger words are multiple tokens, or a misspelling can be multiple tokens. If I recall correctly, a million tokens is about 750,000 English words on average; that's the rough breakdown, and it can vary, of course. How these large language models work is: we say "the sky is," and it uses its neural networks to figure out, okay, what's the most likely word that comes next? Is it "the"? No. Usually "blue," out of all the choices, is the most likely one: the sky is blue. That's the underlying mechanism for a lot of this; it's prediction. When they say inference, it means prediction, sort of the output of these models. But this paper is asking what would happen if, instead of one-token prediction ("the sky is," then "blue," one token), we ask it to predict multiple tokens.

And this could be kind of a big deal, because as Elvis says, the most exciting paper of the week was the one from Gloeckle et al. that aims to train better and faster LLMs via multi-token prediction; it's an impressive research paper. This is the paper that we're talking about: "Better & Faster Large Language Models via Multi-token Prediction." In this work, they suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. They're saying it works well; that's the point of this paper. These gains in effectiveness and efficiency are especially pronounced on generative benchmarks like coding, and coding is kind of a big deal as far as LLMs are concerned. Their models consistently outperform strong baselines by several percentage points: the smaller 13-billion-parameter model solves 12% more problems on one benchmark and 17% more on another. Experiments on smaller tasks demonstrate that this new multi-token approach is favorable for the development of algorithmic reasoning capabilities. And as an extra benefit, these models are up to 3x faster at inference, even with large batch sizes, meaning the outputs come three times faster even when we're working with large amounts of data, with large amounts of output.
There's a speech by Andrej Karpathy. He used to work on Tesla's autonomous driving, then he went to OpenAI; he left OpenAI (not that recently, I think it was last year) and started his own YouTube channel, which is another great resource for understanding how these large language models work. He has in the past worked closely with Dr. Jim Fan, and his main area of work is AI agents. But he was saying how, when he was working for OpenAI, they have their internal Slack channel where they're all messaging, and I guess they post papers like this as they're coming out. And I think he was saying that on that internal Slack channel they'll say things like, "oh yeah, we tried that, that's never going to work, that's a dead end," or they'll be like, "oh, they're onto something here." In other words, for us, we see something like this and time will tell; we'll see how big of an impact this will make on large language models. Maybe it'll be huge and it'll drastically improve how good they are, how fast they are, how easy it is to build them, or maybe it'll be a dead end. But there's this Slack channel at OpenAI where they kind of just know ahead of time, since they're more advanced than a lot of the stuff that we're seeing, which is kind of interesting to think about, I think.
Next, we have the release of Devin 2.0. Now, I don't have access to this, and, as we've talked about on this channel, a lot of people think, oh, Devin has been debunked or whatever. I have my doubts; I really have my doubts that it's vaporware or anything like that. As more and more people are getting their hands on it, the results seem to be good. Right now, it's important to understand that there are some people at one end of the spectrum saying Devin, these AI agents, they're nothing, they're completely useless, there are no applications, it's all fake; and there are people at the other end saying that's it, this is the end, this will put 100% of software developers out of business, out of work, etc. And the reality is, neither one of those stances is correct. If there's one thing we know, it's that the hyperbolic stuff is probably not correct. I think Devin and these various AI agents are probably somewhere in between, and as they improve they'll keep moving along that spectrum; maybe over time they'll get to 100%, but we're seeing this progress happen kind of fast. Are they going to put people out of business, or are they just going to augment how well software developers can work? We'll see pretty soon. What I'm saying is, don't expect a complete flop; I guarantee you it's not. We've seen credible people posting their results with Devin: it obviously does some cool stuff, and we've also seen it do some not-so-smart stuff. So also don't expect it to just take over the whole world overnight. It's somewhere in the middle, and we're going to see exactly where it is as it gets rolled out.

So here's Andrew Gao. He's saying, "my first task I asked...," and, of course, I think he's talking about Devin, either the original or the new and improved version; actually, this is from March 12th, so I think this is the original. He's saying he asked it to create a website where you play chess against a large language model: you make a move, the move is communicated to GPT-4, GPT-4 replies, etc. So here was the prompt given to Devin, and Devin starts working. Devin, if you're not aware, has like four windows: it's got its to-do list, where it's breaking the tasks into subcategories, a browser, a development environment, and a way to chat back and forth with you, the user. And it works on its own time: it goes off and does stuff, and it might run for a while; it's not an immediate response, it can do long-horizon tasks. It looks like Devin goes through tons of stuff to figure out how to build this thing, asking for API keys, and then it's currently trying to debug something with the chessboard not rendering. But in the meantime, he asked it for another task, which is interesting: you can have multiple, sort of asynchronous, tasks running. He's also asking it to create a map of Antarctica's sea temperatures over the last 50 years. It looks like it opens up the web pages it needs to find the information. Andrew had specified to make the hot temperatures blue and the cold temperatures red, which is unintuitive: a kind of weird prompt, not something that's expected, and Devin notes the color preference. It deployed to Netlify automatically, a live app, and the first version is fully functional, but it has a bug, which is to be expected; let's see if Devin can fix it. In conclusion: it got stuck on the chess thing, but it worked for the Antarctica data visualization and even deployed it; here's the live app that you can go and check out. It also nailed a prompt to create a fully functional Chrome extension that turns a GitHub repo into a Claude prompt.

So let me know in the comments what you think of what Devin can do, because to me it almost seems like a, what is it called, a Rorschach test: we're all looking at the same thing, and we're all seeing drastically different things. Again, some people are like, oh, it's horrible, it's useless, it's zero, this will have no effect; some people are like, oh, it'll put everybody out of a job. I feel like those two extremes are not reasonable; it's obviously somewhere in the center. It's doing cool stuff, stuff we would have thought impossible, like, 5 years ago, and it still kind of sucks at some of it and can't quite figure out how to get around certain issues. But this thing isn't even fully released to the public yet; we're judging the pre-release capabilities of brand-new, never-before-seen technology. I don't know, call me crazy, but I think it's going to have a big impact. Take a look at the founders: they're insanely impressive people. Take a look at the backers, the investors: they're people who, let's say, tend not to lose a lot; they tend to invest in unicorns and hit it out of the park. So I have no idea what's going to happen, but I would not be betting against Devin. And finally, I'll leave you off with this.
Yet another wearable device. And yes, I know we're all maybe hesitant to get too excited about these wearable devices. This one looks weird, but fear not, I was concerned too: this orange piece in the center, I think that's just a placeholder for shipping purposes; it does get removed. This is not what your face is going to look like when you wear them. I'm putting it towards the end because I know some people don't like me showcasing products; they think I'm getting paid to showcase this. I'm not taking any payments. It is interesting to see what's coming: a lot of these things will be flops, a lot of them won't be good, but some of them will be; some of them will stick and become part of our everyday lives. And I personally always wanted something that could whisper comebacks in your ear. If you remember that Seinfeld episode where somebody insults George Costanza, and later he's driving down the road and he comes up with the perfect comeback, so he recreates the scenario days later just so he can stand up and say, "well, the jerk store called, and they're running out of you." It was supposed to be the perfect comeback; it wasn't, but my point still stands. Imagine a life where, if somebody says something annoying or insulting, immediately your AI assistant pops into your ear. I believe these are little microphones here on the side; I'm not sure, those might be bone conduction, so they transmit the vibrations. Your AI assistant whispers the perfect comeback in your ear and you just immediately recite it. I don't know, I would pay for that; seems useful. Make sure you're subscribed to this channel; we're rolling out something really cool right now, so you don't want to miss it. Like, for real, you don't want to miss it. I'll leave you off with this quick video from Frame, the open-source AI glasses for developers, hackers, and superheroes. My name is Wes Roth, and thank you for watching.

"Hi, I'm Bobak, co-founder of Brilliant Labs, and today I want to share with you a device that is as simple as it is capable. This is Frame. Frame sits at the intersection of artificial intelligence and augmented reality. It is designed to be open source, light, and affordable, and we believe this is important as we enter a new era of embodied AI. Built into Frame is Noa. Noa is our fun little AI assistant in the cloud, and its job is to orchestrate across a potentially infinite number of AI models, and new models can always be added to its library, either by us or by a third-party developer. But let's take Noa for a spin. Noa, you ready to show people what you're capable of?"

"Absolutely, let's do it."

Noa can see what I see. "Take a look at what I'm wearing. Am I dressed appropriately for the weather?"

"A navy blue hooded sweatshirt should be suitable for the partly cloudy conditions, with a current temperature of 72°F."

"Now let's take this one step further: find me a matching beanie and tell me where I can buy it and how much it's going to cost."

"You can find a navy blue beanie at Target, Macy's, and San Franpsycho. Prices range from $7.99 to $65, depending on the brand and style."

"Cool. Now let's make this actionable for me: choose the closest location and tell me the address so I can walk there."

"The closest location is San Franpsycho, located at 505 Divisadero Street."

"Great. Now, Noa can also generate images, and we thought this would be super fun, because generative AI is news, especially when it comes to AI being embodied and seeing what we see in the world around us. So let's try that out; I'll just double-tap on the side here. Show me in that navy blue beanie as a springtime superhero."

Frame is exiting the factories starting this week and heading toward fulfillment centers in the weeks that follow.


Related Tags
AI Robotics, Figure AI, Dr. Eureka, Sim-to-Real, AI Ethics, GPT-4, Neural Networks, AI Training, Copyright AI, Wearable Tech, AI Assistant