Trinity of Artificial Intelligence | Anima Anandkumar | TEDxIndianaUniversity

TEDx Talks
11 Dec 2018 · 18:01

Summary

TL;DR: In this talk, the presenter explores the concept of artificial intelligence, focusing on the Trinity of AI: data, learning algorithms, and computer infrastructure. They discuss the importance of each component in creating task-oriented intelligence and highlight the rapid advancements in image classification due to the ImageNet dataset and deep learning models. The presenter also touches on the future of AI, including multi-dimensional data processing with tensors and the integration of AI with robotics to create instinctive, deliberative, and multi-agent robots.

Takeaways

  • 🤖 The speaker discusses the Trinity of AI, which includes data, learning algorithms, and computer infrastructure as the three main ingredients for AI success.
  • 🧠 Intelligence is defined as the ability to acquire and apply knowledge and skills, contrasting the pre-programmed actions of robots with the improvisational skills of animals like dogs.
  • 📈 The importance of large datasets, such as ImageNet, is highlighted for training AI systems to recognize and categorize images effectively.
  • 💾 The shift from CPU to GPU computing is identified as a key factor in the advancement of AI, with GPUs providing the necessary parallel processing power.
  • 📊 Deep learning models with many layers are crucial for processing complex data like images, allowing AI to learn from examples and improve its performance.
  • 📉 The speaker points out the rapid progress in image classification, noting how AI systems have reached and even surpassed human-level performance on certain datasets.
  • 🔮 Future research directions include multi-dimensional data processing using tensors, which can handle data with more than two dimensions, like videos or text documents.
  • 🤝 The integration of AI (the mind) with robotics (the body) is presented as a challenge and opportunity, aiming to create more instinctive, deliberative, and interactive robots.
  • 🛸 An example of AI in robotics is given, where drones learn to land more effectively through data-driven learning rather than pre-programmed algorithms.
  • 🌪️ The potential for AI to enhance drone performance in adverse conditions, such as different wind patterns, is explored through simulation and data collection.

Q & A

  • What is the Trinity of AI as discussed in the transcript?

    -The Trinity of AI consists of three main ingredients: data, learning algorithms, and computer infrastructure. Data provides the examples or information needed for learning, learning algorithms process the data to extract knowledge, and computer infrastructure provides the necessary computational power to perform these tasks.

  • Why is data considered the most important ingredient in the Trinity of AI?

    -Data is considered the most important ingredient because it provides the examples or observations that an AI system needs to learn from. Without sufficient data, an AI system cannot acquire the knowledge and skills required to perform tasks effectively.

  • How does the image classification task exemplify the success of AI?

    -The image classification task exemplifies the success of AI by demonstrating how AI systems can learn from a large dataset of images, such as ImageNet, to recognize and categorize new images with high accuracy. This progress was made possible by combining data, deep learning models, and powerful computer infrastructure like GPUs.

  • What is the significance of the ImageNet dataset in AI development?

    -The ImageNet dataset is significant because it provided a large and diverse collection of categorized images, which allowed AI systems to learn and improve their ability to recognize and classify images. This dataset was crucial in advancing computer vision and deep learning algorithms.

  • How do deep learning models contribute to AI's ability to perform tasks?

    -Deep learning models contribute to AI's ability to perform tasks by processing input data through multiple layers, allowing the system to learn complex patterns and features. These models can extract relevant information from vast amounts of data, enabling AI systems to perform specific tasks, such as image recognition, with high accuracy.
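The layered processing described above can be sketched with a toy feed-forward network. This is a minimal illustration only: the weights below are random placeholders, whereas a real model would learn them from millions of labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

def softmax(x):
    # Turns raw scores into confidences that sum to 1
    e = np.exp(x - x.max())
    return e / e.sum()

# Three layers of processing; sizes are arbitrary for illustration.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)),  np.zeros(2)   # 2 classes: dog / not-dog

image = rng.normal(size=64)          # a flattened toy "image"
h1 = relu(image @ W1 + b1)           # layer 1 transforms the input
h2 = relu(h1 @ W2 + b2)              # layer 2 transforms layer 1's output
confidence = softmax(h2 @ W3 + b3)   # output: per-class confidence
print(confidence)                    # two nonnegative values summing to 1
```

Each layer transforms the previous layer's representation, which is exactly the "many layers of processing" that the word "deep" refers to.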

  • What is the role of GPU computing in AI?

    -GPU computing plays a crucial role in AI by providing the parallel processing capabilities needed to handle the massive computational demands of deep learning algorithms. GPUs enable AI systems to perform billions of operations per image in a highly efficient manner, which is essential for training and deploying large-scale AI models.

  • Why did the speaker mention the Boston Dynamics Atlas robot and a dog attempting a backflip?

    -The speaker mentioned the Boston Dynamics Atlas robot and a dog attempting a backflip to illustrate the difference between pre-programmed actions and true intelligence. While the robot executed a perfect backflip, it did so based on a pre-programmed sequence, whereas the dog, despite failing, demonstrated the ability to learn and adapt, which is a sign of intelligence.

  • What is the challenge with AI systems when it comes to generalizing beyond their training data?

    -The challenge with AI systems when it comes to generalizing beyond their training data is that they may perform well on specific datasets or tasks but struggle to adapt to new or unseen data. This limitation highlights the need for AI systems to develop more robust learning capabilities that allow them to generalize and apply their knowledge to a wider range of situations.

  • How does the concept of tensors relate to multi-dimensional data processing in AI?

    -Tensors are mathematical objects that enable processing in any number of dimensions, generalizing the concepts of vectors and matrices. In AI, tensors are used to handle and analyze multi-dimensional data, such as images, videos, and text, by capturing and processing information across different dimensions, which can lead to more effective learning and understanding of complex data structures.
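The generalization from vectors and matrices to tensors is easy to see in code. A minimal sketch (the shapes below are illustrative assumptions, not from the talk):

```python
import numpy as np

vector = np.zeros(3)                 # 1-D: a vector
matrix = np.zeros((3, 4))            # 2-D: a matrix
video  = np.zeros((30, 64, 64, 3))   # 4-D tensor: frames x height x width x channels

# Collapsing a dimension (like projecting a cube onto a wall) loses it:
frame_average = video.mean(axis=0)   # the time dimension is gone
print(vector.ndim, matrix.ndim, video.ndim, frame_average.ndim)
```

The last line makes the speaker's point concrete: averaging away the time axis leaves a 3-D object, and any information that varied over time can no longer be recovered.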

  • What is the potential future of integrating AI 'mind' with robotics 'body'?

    -The potential future of integrating AI 'mind' with robotics 'body' involves creating robots that are instinctive, deliberative, multi-agent, and behavioral. This integration aims to develop robots that can react and control their reactions in a fine-grained manner, make plans and consider consequences, coordinate and work with others, and interact with humans by understanding emotions and acting as partners.

Outlines

00:00

🤖 Introduction to Artificial Intelligence

The speaker begins by introducing the topic of artificial intelligence (AI) and the concept of the 'Trinity of AI'. They discuss the importance of understanding the three main components that make AI work and how they contribute to the future of AI. The speaker notes the public's mixed perception of AI, spanning both its promise and the concerns it raises, and poses the question of what constitutes 'intelligence'. They use two videos to illustrate the difference between programmed actions and true intelligence, highlighting that while a robot may perform a complex task like a backflip, it lacks the ability to learn and adapt, a hallmark of intelligent beings like dogs. The speaker emphasizes that AI, as it stands, is task-oriented intelligence rather than general intelligence, which would mimic human cognitive abilities.

05:03

🔍 The Trinity of AI: Data, Algorithms, and Infrastructure

The speaker delves into the 'Trinity of AI', which consists of data, learning algorithms, and computer infrastructure. They explain that for AI to learn and perform tasks, it requires a vast amount of data or examples. The speaker uses the example of image classification and the ImageNet dataset, which has been pivotal in revolutionizing computer vision by providing a large collection of categorized images. They then discuss the role of deep learning models in processing this data, highlighting the importance of layers of processing to extract knowledge from examples. Lastly, the speaker touches on the necessity of powerful computer infrastructure, like GPUs, to handle the massive computational requirements of deep learning, using NVIDIA's contributions as an example. The speaker concludes by showing how these three elements combined have led to significant progress in AI, particularly in image recognition tasks.

10:03

📈 Progress in AI: Image Classification and Beyond

The speaker continues by discussing the progress made in AI, particularly in the field of image classification, which has been a significant success story for AI technology. They mention how AI systems have surpassed human performance on the ImageNet dataset, but clarify that this does not mean AI is more intelligent than humans, as it is limited to specific tasks and datasets. The speaker then shifts the focus to the future of AI, discussing the importance of learning in multiple dimensions. They introduce the concept of tensors, which are mathematical objects that can process data across various dimensions, and explain how they can be used to improve AI algorithms. The speaker shares their research on using tensors to categorize text documents, highlighting the Amazon Comprehend tool as an example of this technology in action. They emphasize the potential for multidimensional processing to be applied to a wide range of applications.
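The word-triplet co-occurrence idea behind the text-categorization work can be sketched in a few lines. The tiny three-document "corpus" below is invented for illustration; a real system would count triplets over millions of documents and then decompose the resulting tensor (e.g. with a CP decomposition) to recover latent topics.

```python
from itertools import combinations
import numpy as np

# Hypothetical toy corpus: each document is a list of words.
docs = [
    ["gpu", "parallel", "computing"],
    ["tensor", "topic", "document"],
    ["gpu", "tensor", "computing"],
]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# T[i, j, k] counts how often words i, j, k co-occur in one document.
T = np.zeros((V, V, V))
for doc in docs:
    for a, b, c in combinations(sorted(set(doc)), 3):
        T[idx[a], idx[b], idx[c]] += 1

print(T.sum())  # each 3-word document contributes exactly one triplet
```

Words that repeatedly co-occur across documents concentrate mass in particular tensor entries, and it is that structure a decomposition exploits to separate topics.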

15:05

🚀 Integrating AI with Robotics: The Future of Autonomous Systems

In the final paragraph, the speaker looks towards the future of AI, focusing on the integration of AI with robotics, or the 'body' of AI. They discuss the potential for drones to learn from data and improve their performance, such as learning to land more effectively. The speaker also mentions ongoing research at Caltech on teaching drones to fly in adverse conditions, like different wind conditions, by using AI algorithms. The speaker envisions a future where robots are instinctive, deliberative, multi-agent, and behavioral, able to interact with humans and understand emotions, suggesting a bright future for AI and robotics working in tandem.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is discussed as a combination of task-oriented intelligence that can learn from examples and execute specific tasks effectively. The speaker emphasizes the current state of AI, which is far from general intelligence but has made significant progress in specific areas like image classification.

💡Trinity of AI

The Trinity of AI mentioned in the video refers to the three main components necessary for AI to function effectively: data, learning algorithms, and computer infrastructure. The speaker explains that data provides the examples needed for learning, algorithms process this data to extract knowledge, and computer infrastructure provides the necessary processing power. This concept is central to understanding how AI systems are built and operate.

💡Data

Data, in the context of AI, refers to the raw input that AI systems use to learn and make decisions. The video highlights the importance of data by discussing how AI systems require vast amounts of examples to learn tasks, such as image recognition. The speaker uses the example of the ImageNet dataset, which contains millions of categorized images, to illustrate how data is crucial for training AI models.

💡Deep Learning

Deep Learning is a subset of machine learning where algorithms with multiple layers (or 'deep' neural networks) are used to model and understand data with multiple levels of abstraction. The video discusses deep learning models as a key component of AI, where these models process images through multiple layers to extract features and make predictions, such as identifying objects in images.

💡General Intelligence

General intelligence, as mentioned in the video, refers to the ability to understand and learn from a wide range of experiences and apply that knowledge to various tasks. The speaker contrasts this with the current state of AI, which is more task-specific and lacks the broad cognitive abilities of general intelligence. The video suggests that while AI has made strides in specific areas, achieving general intelligence remains a significant challenge.

💡Image Classification

Image classification is the task of labeling images with categories or tags based on their content. In the video, this is presented as one of the most visible successes of AI, where deep learning models have been trained on large datasets like ImageNet to recognize and categorize objects within images. The speaker discusses the progress made in this area and how it exemplifies the application of the Trinity of AI.

💡GPU Computing

GPU Computing refers to the use of Graphics Processing Units (GPUs) for general-purpose processing, beyond just graphics rendering. In the video, the speaker explains how traditional CPUs have reached their limits in terms of processing power for AI tasks, and GPUs have become essential for handling the massive parallel computations required by deep learning models, thus enabling large-scale AI applications.

💡Tensors

Tensors are multi-dimensional arrays of numbers that are used in various areas of mathematics and physics, and have become fundamental in deep learning for representing and processing multi-dimensional data. The video discusses how tensors are used to handle multi-dimensional data efficiently, allowing AI systems to learn from complex data structures and improve their performance in tasks such as text categorization.
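One standard way tensor algorithms reuse familiar matrix tools is by "unfolding" a tensor into a matrix along one mode. A minimal sketch, with arbitrary assumed shapes:

```python
import numpy as np

T = np.arange(24, dtype=float).reshape(2, 3, 4)  # a 2x3x4 tensor

# Mode-0 unfolding: keep the first axis, flatten the rest into columns.
unfolding = T.reshape(T.shape[0], -1)            # shape (2, 12)

# Matrix machinery such as the SVD now applies directly.
U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
print(unfolding.shape, s.shape)
```

Unfoldings along each mode are a common building block in tensor decomposition methods, which the talk credits for tasks like unsupervised text categorization.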

💡Autonomous Systems

Autonomous systems are systems that can operate independently without human intervention. The video touches on the integration of AI with robotics to create autonomous systems, such as drones that can learn to land better or navigate in adverse conditions. The speaker's research at Caltech aims to bridge the gap between AI as the 'mind' and robotics as the 'body' to create more capable and intelligent machines.

💡Multi-agent Systems

Multi-agent systems are systems composed of multiple interacting intelligent agents. In the video, the speaker discusses the future of robotics where robots are not only instinctive and deliberative but also capable of coordinating and working together with other robots or humans, similar to how humans function in society. This concept is part of the broader vision for creating more sophisticated and interactive AI systems.

Highlights

The Trinity of AI consists of data, learning algorithms, and computer infrastructure.

Intelligence is defined as the ability to acquire and apply knowledge and skills.

Current robots, like the Boston Dynamics Atlas, lack intelligence as they are pre-programmed.

Image classification is a significant success of AI, demonstrating the Trinity of AI in action.

The ImageNet dataset has been pivotal in revolutionizing computer vision with over 14 million categorized images.

Deep learning models with many layers of processing are essential for image recognition.

GPU computing, led by NVIDIA, enables the vast amounts of parallel processing required for deep learning.

AI systems have reached human-level capabilities in image recognition on the ImageNet dataset.

AI's ability to generalize beyond specific datasets remains a significant challenge.

Research is exploring multi-dimensional data processing using mathematical objects known as tensors.

Amazon Comprehend uses tensor-based algorithms to categorize text documents without predefined topics.

The future of AI involves integrating 'mind' (intelligence) and 'body' (robotics) for more autonomous systems.

Caltech's Center for Autonomous Systems and Technologies works on merging AI with robotics.

Drones can learn from data to land more efficiently, showcasing the potential of AI in robotics.

Future robots should be instinctive, deliberative, multi-agent, and behavioral to interact effectively with humans.

The talk concludes with a vision of a future where AI and robotics are seamlessly integrated.

Transcripts

00:02

[Music]

00:10

Today I'm going to talk about artificial intelligence, more specifically the Trinity of AI: what are the three main ingredients that make AI happen, and what's the fabulous future we are headed to when it comes to AI? I'll combine both the academic perspective I have from Caltech and the industry perspective from NVIDIA to see how there are so many exciting opportunities for industry-academic collaborations to come together and realize the dream of AI. But before we get there, what is AI? I'm sure everybody here in the audience has seen AI in the news, has seen both the amazing progress that AI has made and also some dystopian future for AI that people are worried about. Given that it's so much in the news, the first question we should be asking is: what is intelligence? Before we get to artificial intelligence, what do we even mean by intelligence?

01:24

So for this I want to show you two videos, and I want you to think about which one of the two is more intelligent. The first one you see is the Boston Dynamics Atlas robot; that's a pretty cool backflip. Now what about the next one? It's our best friend, also trying to do a backflip. In the first case you see the robot perfectly executing the backflip; in the second case you see it failing pretty clumsily. So what do you think is more intelligent, the robot or the dog? Yeah, it's the dog that's intelligent, and the robot actually has zero intelligence the way we have it today.

02:31

So why is that? Because of how we define intelligence: intelligence is the ability to acquire and apply knowledge and skills. Was the robot doing that? No, at least not the way it is currently. It was completely pre-programmed: even though the backflip looks so impressive, it was all planned beforehand, and the program was built into the robot. On the other hand, the dog is improvising. It's always observing its environment; maybe one day it'll also learn to do the backflip better. So the dog is intelligent while our current robots are mostly not, and there you see a very big gap in where we are when it comes to building intelligence into our robots and our systems. Indeed, when we think about general intelligence, the gap is even wider. General intelligence refers to having human-level cognitive ability, and artificial systems are so far from it. So when I refer to AI, or artificial intelligence, I usually mean task-oriented intelligence: the ability to execute a specific task very well, to learn from examples and to execute a given task. That's what I'll show you how we are making progress on, and also the vast challenges that lie ahead.

04:05

Given that we got the definitions out of the way, let's now see what the Trinity of AI is. For AI to be successful, I see three important ingredients. The very first one, and perhaps the most important, is the data. When I say data, I mean examples: how can a system learn? It needs information, observations, or data. Once it has that, there are learning algorithms which can process the data and extract knowledge from those examples. But the third important ingredient is the computer infrastructure: all this processing needs a substrate. How do we do these vast amounts of processing? We need all three of these to come together for an AI task to be successful.

05:02

Let's now see an example application and see how the progress came about by putting this Trinity together. The image classification task has been perhaps the most visible success of AI today. As you see here in the picture, there is the pool, there are plants; there's a lot of things in this image, and as human beings we find it so natural to absorb and reason about the entire image. But it's really hard for a machine to do that, because it has to see lots of variations of a swimming pool before it realizes, oh, this is a swimming pool. It needs a lot of examples to learn what a swimming pool looks like, and that's where we have the first ingredient in our Trinity, the data: how do we get lots of examples of images for different categories? The ImageNet dataset has revolutionized how we do computer vision, because for the first time we had such a large collection of categorized images: more than 14 million images and more than a thousand categories. If you look at even one example category, you see all the variations of how natural images come about. For instance, when you say fish, you see so many different variations of how a fish can occur in an image, and so the learning algorithm needs access to all these different variations for it to recognize what a fish looks like in a new example.

06:52

So that was the first ingredient, the data. The second ingredient is the algorithms and the models: how do we take these images and extract the information so that an AI system can automatically categorize images in a seamless way? That's where we have the deep learning models. The term "deep" refers to the fact that there are many layers of processing: you transform your input image across these layers, and at the output you want an answer. You want to say with how much confidence you think there is a dog in this image, and if you have a good algorithm and you've trained your system well, hopefully you will have high confidence that this particular image has a dog in it. So you need a model that's highly flexible, that can learn from millions of examples and extract the relevant information to realize what a dog looks like.

07:59

Those were the two ingredients; the third one is the computer infrastructure. If you take current deep learning systems, it is just not scalable to run them on a normal computer. The computers of yesteryear, the CPUs, had a wave of progress, but now the curve of their growth is plateauing. The new form of computing is known as GPU computing; it stands for graphics processing unit, because it goes back to how graphics was processed in a parallel manner. But the same form of technology is also highly relevant for processing AI, because processing in these deep learning layers requires more than a billion operations for each image, yet they can be done in a highly parallel manner. Hence NVIDIA GPUs have been at the forefront of enabling computation at scale and have been a crucial ingredient in realizing this dream of large-scale AI.

09:07

So now you see how all three ingredients came together: we had the data, we had the deep learning models with learning algorithms, and then we had the GPU computing, and when they came together it created the deep learning revolution. We were able to make progress in such a short length of time. What you see is the errors an AI system makes in recognizing the categories in an image, over this ImageNet dataset, and within a span of a few years we could get to human-level capability on this dataset. To reach this capability on such a large dataset in such a short amount of time was what was surprising to so many scientists, and that shows there is a lot of promise that AI can still realize in many other domains. I do want to point out one thing, though: if you see, in 2015, on the ImageNet dataset, the AI system is doing better than a human being. Does this mean the AI system got more intelligent than a human? No, that's not the case, because it's doing better on only this dataset. That's always been a challenge with AI: if you have to take it beyond, to other datasets or to other tasks, we need the ability to generalize beyond what it's seen in the current data, and that challenge still remains in this case as well.

11:02

So that is great; we were able to make fast progress. But where do we go next? My research looks at how to do learning in many dimensions. What do dimensions mean for data? If you think of an image, you have the width and the height of the image, and then, if it's a colored image, you have the number of channels. Now if you take a video, you also have the time dimension. So when you collect rich forms of data, you have many dimensions coming together; for instance, you may also have text that is connected to the video, and you may have other forms of data that are all processed together. How do we merge all this very efficiently and process it at scale? The idea is that we don't want to remove or throw away the dimensions in our data; if we do that, we can potentially lose information. To help you visualize this, you have a three-dimensional cube here, and if you just look at its projection on the wall, you do not see the complete picture. It's the same with data: if you're not processing it effectively and not incorporating the information from all the different dimensions, you may potentially not learn very well.

12:28

What my research does is use mathematical objects known as tensors to enable such multi-dimensional processing. Tensors are mathematical objects that do processing in any dimension and generalize the concepts of vectors and matrices that you probably see in an undergraduate class. These may seem like complicated mathematical objects, but you can realize them in real algorithms for many applications. One application I've looked at is how to automatically categorize text documents: if you have many millions of documents, can you quickly say what topics are discussed in the various documents? And if you don't even know what the underlying topics are, if you don't have examples of them when you're learning, it's an even harder task. But we have a tool that I built with my team at Amazon, called Amazon Comprehend, that does precisely this, and what enabled this algorithm was the concept of tensors. Intuitively, what it's looking at are the relationships of words in a document: if there are words that commonly occur together, that refers to a topic, and you should be able to extract that information. To do that at scale over millions of documents requires more thinking, but that's precisely what these algorithms are able to do: they can take this co-occurrence tensor of triplets of words and decompose it to obtain topics that are present in different documents. Hence you can translate mathematical concepts into actual algorithms and then deploy them at scale to make an impact on various applications. So I see a lot of possibilities in the future for taking this multi-dimensional processing into many different applications.

14:41

So what's another step for the future? I want to think about both the mind and the body. So far I only talked about AI as the mind: how do we create intelligence, how do we train an AI system? But there's also the body, which is the robotics. I showed you in the beginning the Atlas robot, which currently is not intelligent. If we want to change that, we have to bring the two together, and at Caltech we have the Center for Autonomous Systems and Technologies that aims to do that. I'll show you one example we just started working on, which was to ask: can we have drones learn to land better? Instead of a pre-programmed algorithm, can the drone learn from lots of data, from the many sensors it's outfitted with, and learn to land in a better way? Let's see what the outcome was. On the left you see the drone when there is no learning; it's a pre-programmed controller. On the right there is the learning, and you see that on the right the drone landed, whereas it was hovering around with the pre-programmed controller. This means there is a lot of potential for learning to make drone flights more efficient and ultimately autonomous. Another project we are currently thinking of is how we can have drones fly in adverse conditions. Here we can simulate wind conditions of all different forms in the drone testing laboratory at Caltech, and then collect that data to design AI algorithms that can be more effective in countering such wind conditions.

16:44

If you want to bring the mind and the body together, you want to build all the different forms of intelligence and meld them together with the body. We want the robots of the future to be instinctive: humans have instinct built into us, which is the ability to react and control our reactions in a fine-grained manner; how do we do that in a robot? We're also deliberative: we make plans, we look at consequences; how do we do that in a robot? We're also multi-agent: we are a society, we coordinate and work together with others; how do we get robots to do the same? And ultimately the last one, perhaps the most difficult, is behavioral: how can we have robots that can interact with us, that can understand human emotions, and ultimately be a partner with us? That's perhaps going to be a bright new future for us. Thanks so much.


Related Tags
Artificial Intelligence, Data Science, Deep Learning, Machine Learning, Robotics, GPU Computing, Image Classification, Tensor Processing, Autonomous Systems, Caltech Research