The Basics of Neuromorphic Computing

Joy
10 Apr 2022 · 24:45

Summary

TL;DR: Orange Banerjee's seminar presentation delves into neuromorphic computing, a technology mimicking the human brain's efficiency. Discussing its origins from the von Neumann architecture to the current advancements, Orange highlights the potential of neuromorphic systems in AI and space missions. He introduces key concepts like memristors and compares neuromorphic chips like IBM's TrueNorth and Intel's Loihi, emphasizing their energy efficiency and processing power. Challenges in designing and programming these systems are also addressed, questioning if our current understanding of the brain is sufficient to fully harness neuromorphic computing's potential.

Takeaways

  • 🎓 Orange Banerjee, a student at Bennett University, is presenting on the topic of neuromorphic computing.
  • 🧠 Neuromorphic computing aims to create hardware that mimics the neurobiological architectures present in the human nervous system.
  • 💡 The concept of neuromorphic computing was invented by Carver Mead in the 1980s, focusing on VLSI systems to replicate brain-like functions.
  • 🚀 The von Neumann architecture, which separates memory and CPU, is a bottleneck for AI and machine learning advancements compared to the brain's integrated approach.
  • 🔋 Neuromorphic systems are more energy-efficient compared to traditional computing, which is crucial as technology advances and energy demands increase.
  • 🌐 Moore's Law predicts the exponential growth of transistors on microchips, but this growth could lead to unsustainable energy consumption for von Neumann architectures.
  • 🤖 Neuromorphic computing holds promise for AI by potentially making supercomputers faster and enabling space operations with adaptable, learning systems.
  • 🔗 IBM's TrueNorth and Intel's Loihi are examples of neuromorphic chips that have been developed to process information more efficiently.
  • 🛠 The design and analysis of neuromorphic systems present challenges, including the need for new programming languages and hardware innovations.
  • 🤔 There are still many unknowns in neuromorphic computing, such as the replication of human emotions and the full complexity of the brain's functions.

Q & A

  • What is the main topic of Orange Banerjee's seminar presentation?

    -The main topic of Orange Banerjee's seminar presentation is neuromorphic computing.

  • Which university is Orange Banerjee pursuing his B.Tech in CSE from?

    -Orange Banerjee is pursuing his B.Tech in CSE from Bennett University.

  • What is the von Neumann architecture and why is it significant in the context of neuromorphic computing?

    -The von Neumann architecture is a computer architecture where the CPU and memory are separate, which leads to a bottleneck in data transfer. It's significant in neuromorphic computing because it contrasts with the human brain's architecture, which is more efficient and does not have such a bottleneck.

  • Who invented neuromorphic computing and what is its main principle?

    -Neuromorphic computing was invented by Carver Mead in the 1980s. Its main principle is to create integrated circuits that replicate or mimic the neurobiological architecture present in the human nervous system.

  • What is Moore's Law and how does it relate to neuromorphic computing?

    -Moore's Law states that the number of transistors on a microchip doubles every two years, and the cost of computers is halved. It relates to neuromorphic computing as it highlights the exponential growth of technology, which is challenged by the energy inefficiency of traditional architectures, making the energy-efficient neuromorphic systems more appealing.

  • Why is there a need for neuromorphic systems according to the presentation?

    -There is a need for neuromorphic systems because traditional von Neumann architecture is energy-hungry and faces limitations in processing power and efficiency, especially when compared to the human brain's capabilities.

  • What is a memristor and how does it relate to neuromorphic computing?

    -A memristor is an electrical device that remembers the amount of current or voltage that has passed through it. It is crucial in neuromorphic computing because it can mimic the synaptic behavior found in the human brain, allowing for the creation of artificial synapses.

  • What is the potential impact of neuromorphic computing on space operations?

    -Neuromorphic computing can make space missions more efficient by reducing the need for ground mission teams, as it allows space vehicles to adapt and learn according to their environment, thus requiring less power and processing capabilities.

  • What are some of the challenges faced in developing neuromorphic systems?

    -Challenges in developing neuromorphic systems include designing and analyzing the structure, creating new programming languages, and developing new generations of memory storage and sensor technologies.

  • What are the two neuromorphic systems mentioned in the presentation, and what are their key differences?

    -The two neuromorphic systems mentioned are Stanford's Neurogrid and IBM's TrueNorth. Neurogrid uses sub-threshold analog logic and is smaller in scale, while TrueNorth is larger, more efficient, and uses digital circuitry instead of traditional analog VLSI systems.

  • How does the efficiency of the human brain compare to modern supercomputers in terms of processing power?

    -The human brain is significantly more efficient than modern supercomputers. It can perform around 10^18 floating-point operations per second on just 20 watts of power, making it roughly five times faster than the world's largest supercomputer, IBM's Summit.

Outlines

00:00

🧠 Introduction to Neuromorphic Computing

Orange Banerjee, a student at Bennett University pursuing a degree in CSE, introduces the topic of neuromorphic computing in a seminar presentation. He expresses his interest in AI and machine learning and acknowledges the support from his university and faculty. The presentation delves into the history of computing, highlighting the von Neumann architecture as the foundation of modern computer systems. Neuromorphic computing is presented as a revolutionary approach that mimics the neurobiological architecture of the human nervous system using VLSI systems. The speaker emphasizes the need to understand the brain's workings to advance in this field and discusses the limitations of the von Neumann architecture, such as the separation of CPU and memory, which contrasts with the brain's integrated processing and memory. The presentation also touches on Moore's Law and its implications for the future of computing, particularly the energy demands that could outstrip global energy budgets.

05:02

🔋 Energy Efficiency and Neuromorphic Systems

The second paragraph discusses the inherent latency and bottleneck in traditional computing architectures due to the separation of processing and memory units. It contrasts this with the brain's efficiency, where processing and memory are co-located, reducing latency. The human brain's remarkable energy efficiency is highlighted, noting that it operates on a mere 20 watts of power, fueled by as little as a cup of coffee or a piece of fruit. The speaker emphasizes the need for neuromorphic systems that can replicate the brain's efficiency and processing capabilities. An example of a neuron and an artificial neuron is used to illustrate the concept of data transfer through ions and the potential for creating artificial synapses using memristors, which have memory properties. The paragraph concludes with a call to explore the vast field of neuromorphic computing and the potential for creating systems that mimic the human brain's functionality.

10:02

🚀 Neuromorphic Computing for Space and AI

In the third paragraph, the speaker explores the potential applications of neuromorphic computing, particularly in space operations and artificial intelligence. Neuromorphic systems are highlighted for their adaptability and flexibility, which are currently unmatched by traditional AI systems. The potential for neuromorphic computing to make supercomputers faster and more efficient in space missions is discussed, along with the ability to reduce the need for ground mission teams. The speaker also touches on the importance of reducing data processing requirements and the potential for neuromorphic chips to revolutionize space exploration. The paragraph concludes with a discussion of the architecture of neuromorphic systems, emphasizing the need for memristors and the stacking of these components to create integrated circuits that can mimic the brain's structure and function.

15:05

💡 Neuromorphic Chips and Their Development

The fourth paragraph delves into the development of neuromorphic chips and their capabilities. The speaker notes the progress made in the field, with companies like IBM and Intel investing in neuromorphic technology. IBM's TrueNorth chip, introduced in 2014, is highlighted, with multi-chip systems reaching 64 million neurons and 16 billion synapses, a significant milestone in neuromorphic computing. Intel's Loihi chip is noted for its efficiency and for scaling, by 2020, to systems of 100 million neurons. The speaker compares the processing power of neuromorphic systems to that of the human brain and supercomputers, emphasizing the potential for neuromorphic systems to outperform current technology. The paragraph also discusses the architecture of neuromorphic chips and the challenges of integrating them into existing systems.

20:07

🛠 Challenges and Future of Neuromorphic Computing

The final paragraph addresses the challenges and future prospects of neuromorphic computing. The speaker notes the difficulties in designing and analyzing neuromorphic systems, suggesting that new programming languages and hardware may be required. The limitations of current neuromorphic designs are discussed, particularly the lack of consideration for the full complexity of the human brain, such as glial cells and emotions. The speaker questions whether we have enough understanding of the brain to replicate its functions effectively. The paragraph concludes with a reflection on the potential of neuromorphic computing to revolutionize technology and the need for further research and development in this field.

Keywords

💡Neuromorphic Computing

Neuromorphic computing is a concept in computer science that involves designing and building computers or computer systems that mimic the neurobiological architectures present in the human nervous system. In the video, this term is central as it refers to the main topic of the presentation. The speaker discusses how neuromorphic computing aims to create solid-state devices that operate similarly to the human brain, which could revolutionize fields like artificial intelligence and machine learning.

💡Von Neumann Architecture

The Von Neumann architecture is a computer design model that separates the processor and memory, requiring data to be transferred between the two. This architecture is contrasted in the video with the more integrated approach of neuromorphic computing. The speaker points out that the Von Neumann bottleneck is a limitation for the development of advanced computing systems, as it does not efficiently replicate the parallel processing capabilities of the human brain.
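To make the bottleneck concrete, here is a minimal sketch of a machine that must shuttle every operand between a separate memory and processor over a shared bus. The unit costs are made-up numbers for illustration only, not figures from the talk:

```python
# Illustrative sketch of the von Neumann bottleneck: every operation must
# first fetch its operands from a separate memory over a shared bus.
# BUS_COST and ALU_COST are assumed unit costs, purely for illustration.

BUS_COST, ALU_COST = 10, 1   # cost per memory transfer vs. per addition

def run(program, memory):
    """Execute (dst, a, b) add-instructions, counting bus and ALU cycles."""
    cycles = 0
    for (dst, a, b) in program:
        cycles += 2 * BUS_COST          # fetch both operands over the bus
        memory[dst] = memory[a] + memory[b]
        cycles += ALU_COST              # the actual computation is cheap
        cycles += BUS_COST              # write the result back to memory
    return cycles

mem = {"x": 2, "y": 3, "z": 0}
total = run([("z", "x", "y")], mem)
print(mem["z"], total)   # prints: 5 31
```

Even in this toy model, 30 of the 31 cycles are data movement, which is the latency the brain avoids by co-locating processing and memory.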

💡Moore's Law

Moore's Law is the observation that the number of transistors on a microchip doubles approximately every two years, which historically has meant a corresponding increase in computing power. In the video, the speaker references Moore's Law to highlight the exponential growth of technology and the challenges it poses for energy consumption in traditional computing architectures, emphasizing the need for more energy-efficient computing models like neuromorphic computing.
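The doubling rule is easy to sketch. The baseline below (Intel's 4004 at 2,300 transistors in 1971) is an assumed reference point for illustration, not a figure from the talk:

```python
# Hypothetical projection of Moore's Law: transistor count per chip
# doubling every two years from an assumed 2,300-transistor baseline
# (the Intel 4004, 1971).

def transistors(year, base_year=1971, base_count=2300):
    """Project transistor count per chip, doubling every two years."""
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

for y in (1971, 1991, 2011, 2021):
    print(y, f"{transistors(y):,}")
```

The exponential curve this produces is what drives the energy projections discussed in the video.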

💡Memristor

A memristor is an electrical component that can remember the amount of charge that has passed through it, effectively acting as a non-volatile memory element. In the context of the video, memristors are crucial for creating artificial synapses in neuromorphic computing, as they allow for the replication of the memory function of biological neurons. The speaker mentions that many advancements in neuromorphic computing, such as those by HP and IBM, are based on memristor technology.
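A toy model, loosely inspired by the HP linear-drift memristor (parameter values here are illustrative assumptions, not from the talk), shows the "memory" property: the resistance depends on how much charge has flowed through the device, and the new value persists after the current stops:

```python
# Toy memristor sketch: resistance tracks the total charge that has
# flowed through the device, so the element "remembers" past current.
# Loosely based on the HP linear-drift model; r_on, r_off, and k are
# illustrative assumptions.

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0, k=1e4):
        self.r_on, self.r_off, self.k = r_on, r_off, k
        self.x = 0.0  # internal state in [0, 1], tracks accumulated charge

    def resistance(self):
        # Resistance interpolates between the off and on states.
        return self.r_on * self.x + self.r_off * (1.0 - self.x)

    def apply_current(self, i, dt):
        # State drifts with charge (i * dt); clamp to the physical range.
        self.x = min(1.0, max(0.0, self.x + self.k * i * dt))

m = ToyMemristor()
r_before = m.resistance()         # 16000.0: high-resistance state
m.apply_current(i=1e-3, dt=0.05)  # push 1 mA through for 50 ms
r_after = m.resistance()          # 8050.0: resistance has dropped
# The new resistance persists with no current flowing: the device
# "remembers" the charge that passed through it, like a synapse.
```

This persistence under zero bias is exactly what makes memristors candidates for artificial synapses.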

💡Artificial Neural Networks

Artificial neural networks are computational models inspired by the human brain that are used in machine learning to recognize patterns and solve complex problems. The video discusses how stacking artificial neurons can lead to sophisticated tasks like face detection and classification. The speaker uses the example of an artificial neural network to illustrate the concept of machine learning and how it relates to the goal of neuromorphic computing.
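A minimal sketch of the "stacking" idea: a single step-activation neuron cannot compute XOR, but two hand-wired layers can. The weights below are hand-picked for illustration, not learned:

```python
# Stacking artificial neurons: a 2-layer network computing XOR,
# something no single neuron of this kind can do on its own.
# Weights and biases are hand-picked for illustration, not trained.

def neuron(inputs, weights, bias):
    # Weighted sum followed by a step activation (Rosenblatt-style).
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

def xor_net(a, b):
    h1 = neuron([a, b], [1, 1], -0.5)       # fires when a OR b
    h2 = neuron([a, b], [1, 1], -1.5)       # fires when a AND b
    return neuron([h1, h2], [1, -1], -0.5)  # OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

In a real network the weights would be found by training against known outputs, as the video describes; only the forward pass is sketched here.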

💡CMOS Operations

CMOS operations refer to the switching operations of the complementary metal-oxide-semiconductor logic used in modern microchips. The video mentions that by 2040, the energy required for CMOS operations in a Von Neumann architecture could be as high as 10^27 joules, comparable to the world's entire current energy budget. This highlights the need for more energy-efficient computing paradigms like neuromorphic computing.

💡Energy Efficiency

Energy efficiency in computing refers to the ability of a system to perform tasks with minimal energy consumption. The video emphasizes the energy efficiency of neuromorphic systems, comparing it to the human brain, which operates on about 20 watts of power. This efficiency is a key selling point for neuromorphic computing, as it could lead to more sustainable and powerful computing technologies.
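A back-of-the-envelope check of this claim, using the talk's figure for the brain (~10^18 ops/s at 20 W) and public figures for Summit (~2×10^17 FLOPS at roughly 13 MW; the Summit numbers are an assumption, not stated in the talk):

```python
# Rough efficiency comparison: human brain vs. the Summit supercomputer.
# Brain figures are from the talk; Summit figures (~200 petaflops at
# ~13 MW) are assumed from public sources.

brain_ops, brain_watts = 1e18, 20.0
summit_ops, summit_watts = 2e17, 13e6

speed_ratio = brain_ops / summit_ops                  # raw throughput
efficiency_ratio = (brain_ops / brain_watts) / (summit_ops / summit_watts)

print(speed_ratio)       # 5.0: the "five times faster" claim
print(efficiency_ratio)  # millions of times more ops per watt
```

Under these assumptions the brain is about 5× faster and millions of times more energy-efficient per operation, which is the gap neuromorphic hardware aims to close.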

💡IBM's TrueNorth

IBM's TrueNorth is a neuromorphic chip introduced in 2014; multi-chip TrueNorth systems scale to 64 million neurons and 16 billion synapses. It is mentioned in the video as a significant milestone in the development of neuromorphic computing. The speaker highlights TrueNorth as an example of how neuromorphic systems are being developed to mimic the brain's architecture and efficiency.

💡Intel's Loihi

Intel's Loihi, referred to in the video as 'LUI', is another example of a neuromorphic chip designed to mimic the human brain's architecture. The speaker discusses how 64 Loihi chips were combined into a system of 8 million neurons, which is more efficient and faster than IBM's TrueNorth. This showcases the progress and competition in the field of neuromorphic computing.

💡Synapse

In the context of neuromorphic computing, a synapse refers to the connection points between artificial neurons, analogous to the biological synapses in the human brain. The video explains the importance of creating artificial synapses to replicate the brain's communication and data processing mechanisms. The speaker uses the concept of synapses to illustrate how neuromorphic systems aim to emulate the brain's efficiency.
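One hedged sketch of how an artificial synapse might strengthen with use, in the Hebbian "fire together, wire together" spirit. The learning rule, rate, and clamping below are illustrative assumptions, not anything specified in the talk:

```python
# Hebbian-style synapse sketch: the connection strengthens whenever the
# pre- and post-synaptic neurons fire together. Learning rate and the
# weight ceiling are illustrative assumptions.

def hebbian_update(weight, pre, post, lr=0.1, w_max=1.0):
    """Strengthen the synapse when both neurons are active; clamp at w_max."""
    return min(w_max, weight + lr * pre * post)

w = 0.2
for _ in range(3):                   # three coincident firings
    w = hebbian_update(w, pre=1, post=1)
print(w)  # ~0.5 after repeated co-activation
```

In memristor-based hardware this weight would be stored as the device's conductance, so the synapse's history is held in the physical element itself rather than in separate memory.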

Highlights

Introduction to neuromorphic computing by Orange Banerjee from Bennett University.

Neuromorphic computing aims to create devices that work like the human brain.

Historical context of computing architectures, including the von Neumann architecture.

Invention of neuromorphic engineering by Carver Mead in the 1980s.

Explanation of how neuromorphic computing mimics neurobiological architectures.

Moore's Law and its implications for the growth of transistors and energy efficiency.

The limitations of von Neumann architecture and the need for neuromorphic systems.

Comparison of energy efficiency between the human brain and modern computers.

The concept of artificial neurons and how they mimic the structure of biological neurons.

Role of memristors in creating artificial synapses for neuromorphic computing.

Potential applications of neuromorphic computing in space operations and supercomputing.

Development of neuromorphic chips by companies like IBM and Intel.

IBM's TrueNorth, with systems scaling to 64 million neurons and 16 billion synapses.

Intel's Loihi chip and its efficiency compared to IBM's TrueNorth.

Challenges in designing and analyzing neuromorphic systems.

The question of whether we know enough about the brain to replicate its functions in neuromorphic computers.

Closing remarks and thanks for attending the seminar.

Transcripts

play00:00

[Music]

play00:05

welcome to the seminar presentation my

play00:07

name is orange banerjee and i'm pursuing

play00:09

b.tech in cse from bennett university

play00:12

well bennett university is a private

play00:13

institution situated at greater noida

play00:16

today's topic for presentation is

play00:19

neuromorphic computing before moving

play00:21

forward i would like to thank my

play00:22

university and miss gagandeep kaur our

play00:25

faculty for this subject in the semester

play00:28

for giving me this opportunity to come

play00:30

forward and explain something that i

play00:31

really wanted to explore into

play00:33

well i'm really interested in the very

play00:36

concept of artificial intelligence and

play00:38

machine learning hence neuromorphic

play00:40

computing itself is a very unique

play00:43

and

play00:44

it's a topic that is not known by many

play00:46

people so today my job here is to

play00:49

explain you what exactly is neuromorphic

play00:52

computing

play00:53

if we go back in time

play00:55

and try to realize how everything came

play00:58

into this picture

play00:59

or how we came to know about

play01:01

neuromorphic computing then we have to

play01:03

go back in 1945 or the fact that each

play01:07

and every computer power and advances

play01:09

were based on an architecture called the

play01:11

von neumann architecture developed by

play01:14

john von neumann and others

play01:16

it was written on an unfinished paper

play01:19

and ironically the most impressive

play01:22

revolution in the history of technology

play01:24

was done on basis on a half century old

play01:28

design on an unfinished paper

play01:31

well neuromorphic computing also known

play01:33

as neuromorphic engineering was invented

play01:36

by carver mead in the 1980s he talked

play01:39

about the use of vlsi systems well that

play01:42

is nothing but creating an integrated

play01:45

circuit by combining millions of

play01:48

transistors on a single chip

play01:50

that consists of electronic analog

play01:53

systems that replicate or mimic

play01:55

neurobiological architecture present in

play01:58

the human nervous system

play02:00

so

play02:01

in summary what i'm trying to say here

play02:03

is that neuromorphic computing is

play02:05

basically creating

play02:07

a solid state device like our laptops

play02:11

are

play02:12

which would exactly work like our brain

play02:15

and in order to do that we need to

play02:17

understand as we move forward how our

play02:20

brain works and how exactly can we do

play02:22

that

play02:23

well coming to moore's law moore's law

play02:25

as mentioned by gordon moore is nothing but

play02:27

the number of transistors on a microchip

play02:29

would double every two years and that

play02:31

the cost of computer will be halved it

play02:34

has nothing to do with neuromorphic

play02:35

computing but i'll tell you why i'm

play02:36

getting here now the fact that

play02:39

all everything is on uh the von neumann

play02:42

architecture well now i've been

play02:45

mentioning it too many times and i'll

play02:47

tell you what exactly it is

play02:49

in case of a von neumann architecture we

play02:51

have a memory and a cpu and the

play02:53

bottleneck is that we need to transfer

play02:56

um

play02:57

data

play02:58

through these things

play02:59

through these two

play03:01

portions or sectors as you can see the

play03:02

cpu and the memory however that's not

play03:05

how our brain works our brain does not

play03:08

transfer data from we don't have a cpu

play03:10

here and a memory here that transfers

play03:12

data right it's much more complex than

play03:14

that so yeah so what von neumann does is

play03:17

that it restricts

play03:19

the development of these machines um

play03:22

into something that would have higher

play03:25

implications

play03:26

um

play03:27

coming into neural networking or

play03:29

artificial neural networks or the very

play03:31

concept of artificial intelligence right

play03:33

so the fact is can we change the

play03:36

substrate uh in the case of von neumann

play03:39

architecture and make something brain

play03:40

like another thing is that why i'm

play03:42

mentioning moore's law is that the von

play03:44

neumann architecture is very energy hungry

play03:47

as mentioned by moore's law is that the

play03:49

number of transistors in a microchip

play03:51

would double every two years it

play03:53

basically speaks about the exponential

play03:56

growth of

play03:57

uh

play03:58

of our technology in our recent

play04:00

generations

play04:01

by 2040 however it has been said that we

play04:05

would require 10 to the power 27 joules

play04:09

of energy to work on a von neumann

play04:12

architecture to do cmos operations now

play04:15

what is cmos operations it is nothing

play04:17

but the operations that we are doing at

play04:19

this current moment on every laptop

play04:22

computers whatever that you take

play04:24

whatever that comes into your mind right

play04:26

so that is the problem here and that's

play04:29

why i'm trying to tell you that the

play04:30

moore's law is true however

play04:33

that's what poses a huge threat

play04:36

in the coming years

play04:37

as

play04:39

10 to the power 27 joules is currently

play04:42

the budget of the entire world's energy

play04:45

so you can understand the extent till

play04:46

which what we are talking about here

play04:48

that brings me to the point as to why do

play04:50

we need neuromorphic systems the entire

play04:52

explanation

play04:53

of the energy that requires the von

play04:56

neumann uh the cmos operation

play04:59

requires is the main reason as to why we

play05:01

need neuromorphic systems right so the

play05:04

thing is that uh if we if you try to

play05:07

understand that there will be always be

play05:09

a bottleneck or an inherent latency

play05:12

right so there is this inherent latency

play05:14

in the von neumann architecture for

play05:16

the transfer of data to the cpu and the

play05:18

memory right so the question here is the

play05:22

issue here is why can't we

play05:24

co-locate or why can't we have the

play05:26

processor as well as uh the memory uh

play05:30

together in one place

play05:32

right so that that's what our brain does

play05:35

right it has the processor and the

play05:37

memory and it's at a single place

play05:40

and it does the work unified right

play05:43

so

play05:44

again our brain is also very energy

play05:47

efficient you need to understand that

play05:49

just a banana or like a fruit or a

play05:52

coffee that you drink in your daily

play05:54

lives would fire you up right would

play05:56

charge you up and you would be ready to

play05:58

do something uh something really

play06:00

tiresome right

play06:02

and

play06:02

this thing

play06:04

works on 20 watts just that so if i'm

play06:08

having a cup of coffee for example

play06:10

and then i'll be fired up and ready to

play06:12

do things like

play06:14

uh face detection identification

play06:17

and

play06:18

many such things that take number of

play06:20

cpus to train a machine learning model

play06:23

right

play06:24

and

play06:25

that is

play06:26

that's where the fact that we need

play06:28

neuromorphic systems because on many

play06:30

levels we can beat a computer our brain

play06:34

can do that

play06:36

so

play06:37

how can we make a computer that works

play06:38

like a human brain right and moving any

play06:41

further forward i like to mention that

play06:43

i can only scratch the surface of this

play06:46

very field i can only give you the upper

play06:48

layer of understanding of what or how

play06:52

our neuromorphic computing works exactly

play06:54

because it's a very very vast and wide

play06:57

field to be honest let me give you an

play06:59

example there was a research paper or a

play07:01

survey that was done it was around 15 to

play07:04

20 pages and the number of references on

play07:06

it were 2

play07:08

682

play07:09

that's right that's how big of a or wide

play07:12

of a field it is

play07:13

so moving forward how does uh how to how

play07:16

do we make a computer that works like a

play07:18

human brain in order to understand that

play07:20

we have to understand how a human brain

play07:23

works so how

play07:24

this exactly works right so let me give

play07:27

you an example right here so this is

play07:30

nothing but to make you understand

play07:32

visually the structure of a typical

play07:33

neuron and

play07:35

uh artificial neuron and i'll try to

play07:37

explain what it is but before moving on

play07:39

to that

play07:40

i'll tell you

play07:42

our brain consists of neurons and

play07:43

synapses we all know that

play07:45

and the gaps between these neurons are

play07:48

called synapses

play07:50

and what happens is that data has been

play07:53

transferred or these neurons communicate

play07:55

with each other through

play07:57

what we call neurotransmitters

play08:00

and how do these work right we need to

play08:02

know we these work on something called

play08:05

ion flow

play08:06

now ions what are ions these are nothing

play08:09

but charged atoms so we've got the

play08:10

potassium ion the chloride ion and

play08:12

different sorts of ions present in our

play08:14

brain so there are ions inside and ions

play08:16

outside

play08:17

hence

play08:18

the flow of these ions create an

play08:21

electric charge in our brain now if you

play08:23

think very carefully that is exactly

play08:25

what we're doing in a normal or a

play08:26

typical computer right we are

play08:28

controlling the flow of charge

play08:31

that's how our computer works to be

play08:33

honest

play08:33

so

play08:35

so that's that is exactly where the

play08:38

analogy of neuromorphic computing comes

play08:40

from

play08:41

what we are trying to do is that we are

play08:43

trying to replicate this exact

play08:46

architecture and put it on a solid state

play08:49

device or in a solid state right

play08:52

and

play08:52

so

play08:53

what we need to understand or we need to

play08:55

realize is that for that we need to

play08:57

create

play08:58

the synapse and artificial synapse

play09:01

in order to create an artificial synapse

play09:04

we need something we need an electrical

play09:06

device that has a memory

play09:08

in case of a

play09:10

standard resistor you see

play09:12

if you put a voltage and pass a current

play09:14

through it right it doesn't have a

play09:16

memory of it happening it just it takes

play09:20

the current passes it through the entire

play09:21

circuit or the integrated circuits in

play09:24

case of computing language

play09:25

so

play09:26

we need something that will have the

play09:29

memory of what's happening to it in the

play09:30

past

play09:31

so

play09:32

for that there is something called

play09:34

memristors and

play09:36

everything that will

play09:38

further talk about

play09:40

every device or every

play09:42

uh every invention in neuromorphic

play09:45

computing that was done by hp or ibm it

play09:48

was all based on these memristor

play09:50

devices

play09:52

so a memristor is

play09:54

something that has a memory of it

play09:56

uh of the current passing through it

play09:59

an extreme example of a memristor would

play10:02

be a fuse okay so you pass the current

play10:04

through it and it blows

play10:06

but

play10:07

that's not very useful right because

play10:09

it's dead but you could have a very less

play10:11

extreme version of it

play10:13

where you pass the current then you stop

play10:15

the voltage and

play10:18

you know it has a memory in that state

play10:20

the member still in that state has

play10:22

memory of the amount of current that

play10:23

you're passing through the amount of

play10:24

voltage that you have put it through

play10:27

so

play10:28

that is the analogy of neuromorphic

play10:30

computing and that is how we can build

play10:33

neuromorphic models of the human

play10:35

brain so this was a very electronic

play10:37

biased discussion regarding how we can

play10:40

actually make quantum neuromorphic

play10:42

models or you know models that work like

play10:45

a human brain let me get you into the

play10:48

computational discussion of it all so as

play10:50

you can

play10:51

see the connectivity part of the photo

play10:54

right here is that this is a

play10:56

convolutional neural network

play10:58

so instead of going into something as

play11:00

deep as the convolutional neural network

play11:02

let me start with the very basic right

play11:04

so an artificial neuron in itself isn't

play11:07

very effective

play11:09

but if you stack them up it can do a lot

play11:12

of things

play11:13

like face detection

play11:16

classification

play11:17

and many such fields that are covered in

play11:20

machine learning topics

play11:23

so

play11:24

in case of an artificial neural network

play11:26

what happens is that

play11:28

if we talk about

play11:30

uh based on rosenblatt's perceptron

play11:33

right this example itself let's take

play11:36

an artificial neural network that can

play11:38

classify between a circle a square and a

play11:41

triangle

play11:42

so we have three layers the input layer

play11:45

the hidden layer and the output layer

play11:48

what happens is that this photo that is

play11:51

28 across 28 pixels in form of matrix

play11:55

grows or you know just passes through

play11:58

the input layers

play11:59

and they are connected through channels

play12:02

to the hidden layer

play12:04

and these channels are given a numerical

play12:07

value called weights

play12:09

and when passed through these channels

play12:11

through the hidden layer the hidden

play12:12

layer performs most of the computed uh

play12:15

computational work that our network

play12:17

requires right

play12:19

so

play12:20

it is passed through something called

play12:21

the bias

play12:23

and then the hidden layer goes through a

play12:25

threshold function and this threshold

play12:28

function is known as the activation

play12:30

function and whatever the result of the

play12:33

activation function is the one with the

play12:36

highest probability is being then

play12:38

reflected

play12:39

however we need to understand that if we

play12:42

test an artificial network directly then

play12:45

it won't work we need to train it

play12:48

just like we train any machine learning

play12:50

model right and how do we train it we

play12:53

train it by giving it or passing it

play12:55

through the actual output so we feed in

play12:58

the actual output

play13:00

with the model itself so it compares how

play13:04

many it got wrong and right and thus by

play13:06

calculating the error itself it self

play13:09

trains itself

play13:10

providing us

play13:12

a certain amount of accuracy based on

play13:15

how and what we are doing right

play13:17

It is based on this concept that we are actually trying to create an artificial synapse. Now, we have been talking about neuromorphic systems and computers for a long time: how we can create them, what they are exactly, how our brain works, and so on. But why do we need these neuromorphic systems? To be honest, what use would they be?

The answer to that question is that neuromorphic systems have wide implications in the near future. Let me tell you why: our brain is very flexible, and it is adaptive to change. There is no machine learning model or any sort of artificial intelligence at this very moment that can adapt to changes or that is flexible in its decision making. So the promise that neuromorphic computing holds for artificial intelligence is massive.

The major impact neuromorphic computing can have is in making supercomputers faster and in space operations. Now the question would be: what effect would it have on space operations? Space missions require high-performance computing systems that are constrained in size, weight, and power, and that can work in very extreme environmental and operational conditions, including extreme temperatures, high radiation, and power loss. What neuromorphic computing systems can do is help space vehicles adapt and learn according to their environment and its changes. Hence, you would not require a ground mission team to operate a space project. Now, that is a very big stretch and stride at the same time, but it is very much possible. As you can see, at this very moment massive amounts of data need to be interpreted by the ground operations team to make a space mission successful, and all of that is done on von Neumann architecture. Instead of that, we can use neuromorphic computers (NCs), which would actually help reduce the number of bytes needed to process images. As I have already said, they can help the space vehicle in a lot of ways, help reduce manpower in general, and make space missions and projects much more efficient. That is the major impact of neuromorphic computing. Moving on.

All right, so the neuromorphic architecture: it is about connecting neuromorphic chips. Remember that I said we need memristors, resistors that have a memory of their working state. Connecting neuromorphic chips basically means building an architecture out of the memristor itself, so we need to stack these memristors in lots and lots of piles. We need memristors that are very easily fabricable and can be synthesized very easily. You connect these memristors on top of each other, put millions of them on a chip, and that single chip can then work as an integrated circuit for a solid-state device, or anything we create in the near future.
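One way to see why a grid of memristors is attractive is the crossbar arrangement: each memristor's conductance stores a weight, and Ohm's law (current = conductance × voltage) plus Kirchhoff's current law make each column current compute a weighted sum of the row voltages in place, so memory and computation happen at the same spot, unlike the separate memory and CPU of a von Neumann machine. The sketch below is a purely numerical illustration of that idea; the sizes and values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Conductances (in siemens) of a 4-row x 3-column memristor crossbar.
# Each conductance G[i][j] is a stored weight.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Input voltages applied to the rows.
V = np.array([0.2, 0.0, 0.5, 0.1])

# Ohm's law per device and Kirchhoff's law per column: the current
# flowing out of column j is sum_i V[i] * G[i][j] -- a matrix-vector
# multiply performed by the physics of the array itself.
I = V @ G
print(I)
```

The same principle is what makes an artificial synapse: the stored conductance plays the role of the synaptic weight from the neural-network example earlier.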

So, are neuromorphic systems available to us? Or rather, have we actually been able to build neuromorphic systems? Yes, we have; I mean, not exactly, but we have been able to develop neuromorphic chips that can compute much faster and more efficiently, much closer to how our brain works. Back in 2008, when the research paper titled "The Missing Memristor Found" came out, many thought that the memristor was the thing that could be used in neuromorphic systems and would bring a revolutionary change. Neuromorphic systems are being invested in by more and more companies nowadays, because they have come to realize that these systems can be the future of their entire generation.

In 2014, IBM introduced something called TrueNorth, which debuted with 64 million neurons and 16 billion synapses. It came right out of IBM's development facility, and it was said to be one of the benchmark neuromorphic chips ever made to this date. However, in December 2020, while we were all in lockdown, Intel introduced its neural chip known as Loihi, using 64 of these chips to create a system of 8 million neurons, which is much smaller than IBM's system and much faster and more efficient. It is expected to reach around 100 million neurons in the near future, so you can understand the amount of computing power we are talking about here.

So let me give you an example. IBM's supercomputer Summit can process around 500 petaflops of data, while our brain, on 20 watts, can process around 10^18 FLOPS, that is, floating-point operations per second. That makes our brain five times faster than the world's largest supercomputer. So you can understand how important, or how efficient, it could be if we can slowly make systems that are much closer to how our brain works.

Coming on to the two different things we will talk about now: first is Neurogrid, and second is IBM's TrueNorth. Well, as it's written here already, Neurogrid is a 16-chip system that emulates a million neurons with billions of connections; it looks just like the image on the right-hand side. It mimics the analog properties of the neurons of the brain by using sub-threshold analog logic, so it basically uses the very basic, or the very primitive, concept of transistors, that is, electronic analog circuits that can mimic neurobiological systems, an idea introduced by Carver Mead. Neurogrid was one of the oldest neuromorphic systems, or to be honest, the oldest neuromorphic transistor designs, to be introduced in the market. It uses asynchronous digital logic for communication. What that means is that it does not continuously process data; it divides the data into small chunks and processes it part by part in different neurons. That's how our brain works: it processes data using only a small part of the brain, not the entire thing, to handle a certain amount of data.
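That event-driven, chunk-by-chunk processing is often modeled with spiking neurons. Below is a toy leaky integrate-and-fire neuron, not Neurogrid's actual circuit, just a minimal sketch of the idea: the neuron stays idle until accumulated input crosses a threshold, then emits a discrete spike event instead of computing on every input continuously. The threshold and leak constants are arbitrary.

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over a list of input currents."""
    v = 0.0          # membrane potential
    spikes = []      # timesteps at which the neuron fired
    for t, current in enumerate(inputs):
        v = v * leak + current   # leak a little, then integrate the input
        if v >= threshold:       # threshold crossed -> emit a spike event
            spikes.append(t)
            v = 0.0              # reset after firing
    return spikes

# A burst of input makes the neuron fire; sparse weak input mostly does not.
print(lif_run([0.6, 0.6, 0.0, 0.1, 0.9, 0.5]))  # -> [1, 5]
```

Between spikes nothing happens, which is the source of the energy efficiency: communication and computation occur only when there is an event to process.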

Coming on to IBM's TrueNorth: it comes from IBM's cognitive computing division, also known, as I said, as the development division. It is 16 times the size of Neurogrid, and IBM introduced it in 2014. Even though it is much bigger than the primitive Neurogrid, it is much more efficient than Neurogrid itself. And instead of sub-threshold analog, it is completely digital, which means it uses what I have already mentioned a thousand times by now, memristor devices, instead of the conventional VLSI systems used before.

Coming on to the challenges of using neuromorphic systems: the basic challenge of a neuromorphic system is its design and analysis, the structure of it all. Why am I saying this? Because we, as coders, or as people studying computer science, know how tough it is to learn a new language, be it Python, C++, or Java, and these languages are what we use to code any sort of machine learning model or, say, to create a website. In the case of a very new technology like the neuromorphic system, we might have to create a completely new programming language, and that carries a lot of challenges. New generations of memory, storage, and sensor technologies have to be introduced. Neuromorphic systems may even require a major change in the hardware itself: as I said, we need to put the processor and the memory at the same location, and that is very tough to do, ladies and gentlemen.

Now, do we know enough? The fundamental question is: do we know enough? Can we predict the future? Can we say whether neuromorphic computers will be as useful as we are making them sound today? For example, as it's already mentioned here, the glial cells, which are the brain's support cells, are very prominent in how our brain works, how it processes data, and how flexible it is; however, they are not accounted for in any neuromorphic designs. We do not take into consideration many such small parts of our brain that help us take certain decisions at certain times, because we can only model the brain at a very basic level, the neurons and the synapses. On this basic level we can create something, but if we delve deep inside the brain itself, for example the human emotions, can we replicate them? That has been one of the biggest questions in today's world. So the question still stands, the inquiry is still there: do we know enough about how our brain works? Can we replicate it properly? Can we make neuromorphic computers in such a way that they exactly mimic and replicate what our brain does?

Thank you for attending the seminar, and see you guys later.
