The Basics of Neuromorphic Computing
Summary
TL;DR: Orange Banerjee's seminar presentation delves into neuromorphic computing, a technology mimicking the human brain's efficiency. Discussing its origins from the von Neumann architecture to the current advancements, Orange highlights the potential of neuromorphic systems in AI and space missions. He introduces key concepts like memristors and compares neuromorphic chips like IBM's TrueNorth and Intel's Loihi, emphasizing their energy efficiency and processing power. Challenges in designing and programming these systems are also addressed, questioning if our current understanding of the brain is sufficient to fully harness neuromorphic computing's potential.
Takeaways
- 🎓 Orange Banerjee, a student at Bennett University, is presenting on the topic of neuromorphic computing.
- 🧠 Neuromorphic computing aims to create hardware that mimics the neurobiological architectures present in the human nervous system.
- 💡 The concept of neuromorphic computing was invented by Carver Mead in the 1980s, focusing on VLSI systems to replicate brain-like functions.
- 🚀 The von Neumann architecture, which separates memory and CPU, is a bottleneck for AI and machine learning advancements compared to the brain's integrated approach.
- 🔋 Neuromorphic systems are more energy-efficient compared to traditional computing, which is crucial as technology advances and energy demands increase.
- 🌐 Moore's Law predicts the exponential growth of transistors on microchips, but this growth could lead to an unsustainable energy consumption for von Neumann architectures.
- 🤖 Neuromorphic computing holds promise for AI by potentially making supercomputers faster and enabling space operations with adaptable, learning systems.
- 🔗 IBM's TrueNorth and Intel's Loihi are examples of neuromorphic chips that have been developed to process information more efficiently.
- 🛠 The design and analysis of neuromorphic systems present challenges, including the need for new programming languages and hardware innovations.
- 🤔 There are still many unknowns in neuromorphic computing, such as the replication of human emotions and the full complexity of the brain's functions.
Q & A
What is the main topic of Orange Banerjee's seminar presentation?
-The main topic of Orange Banerjee's seminar presentation is neuromorphic computing.
Which university is Orange Banerjee pursuing his B.Tech in CSE from?
-Orange Banerjee is pursuing his B.Tech in CSE from Bennett University.
What is the von Neumann architecture and why is it significant in the context of neuromorphic computing?
-The von Neumann architecture is a computer architecture where the CPU and memory are separate, which leads to a bottleneck in data transfer. It's significant in neuromorphic computing because it contrasts with the human brain's architecture, which is more efficient and does not have such a bottleneck.
Who invented neuromorphic computing and what is its main principle?
-Neuromorphic computing was invented by Carver Mead in the 1980s. Its main principle is to create integrated circuits that replicate or mimic the neurobiological architecture present in the human nervous system.
What is Moore's Law and how does it relate to neuromorphic computing?
-Moore's Law states that the number of transistors on a microchip doubles every two years, and the cost of computers is halved. It relates to neuromorphic computing as it highlights the exponential growth of technology, which is challenged by the energy inefficiency of traditional architectures, making the energy-efficient neuromorphic systems more appealing.
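The doubling described above can be sketched numerically. The 1971 Intel 4004 baseline used here is an illustrative assumption, not a figure from the presentation:

```python
# Illustrative Moore's Law projection: transistor count doubles every two years.
def transistors(year, base_year=1971, base_count=2300):
    """Project transistor count from an assumed Intel 4004 baseline (2300 transistors, 1971)."""
    return base_count * 2 ** ((year - base_year) / 2)

print(f"{transistors(2021):.2e}")  # on the order of 10^10, roughly matching modern chips
```

This is only a trend model; the real curve has flattened in recent process generations, which is part of the talk's argument for new architectures.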
Why is there a need for neuromorphic systems according to the presentation?
-There is a need for neuromorphic systems because traditional von Neumann architecture is energy-hungry and faces limitations in processing power and efficiency, especially when compared to the human brain's capabilities.
What is a memristor and how does it relate to neuromorphic computing?
-A memristor is an electrical device that remembers the amount of current or voltage that has passed through it. It is crucial in neuromorphic computing because it can mimic the synaptic behavior found in the human brain, allowing for the creation of artificial synapses.
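The "remembers the current that has passed through it" behavior can be sketched with a toy model. The class, parameter values, and linear-drift update below are illustrative assumptions for exposition, not the equations of any real device:

```python
# Toy memristor: its resistance depends on the history of charge passed through it.
# Simplified linear-drift sketch (illustrative; not the HP device model).
class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0, state=0.5):
        self.r_on, self.r_off, self.state = r_on, r_off, state  # state in [0, 1]

    def resistance(self):
        # Resistance interpolates between the fully-on and fully-off values.
        return self.state * self.r_on + (1 - self.state) * self.r_off

    def apply_voltage(self, volts, dt, mobility=1e-2):
        current = volts / self.resistance()
        # The internal state accumulates the charge that has flowed: the "memory".
        self.state = min(1.0, max(0.0, self.state + mobility * current * dt))
        return current

m = Memristor()
r_before = m.resistance()
for _ in range(100):
    m.apply_voltage(1.0, dt=1.0)
assert m.resistance() < r_before  # conductance rose: the device remembers its past
```

An artificial synapse exploits exactly this property: the stored state plays the role of a synaptic weight that strengthens with use.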
What is the potential impact of neuromorphic computing on space operations?
-Neuromorphic computing can make space missions more efficient by reducing the need for ground mission teams, as it allows space vehicles to adapt and learn according to their environment, thus requiring less power and processing capabilities.
What are some of the challenges faced in developing neuromorphic systems?
-Challenges in developing neuromorphic systems include designing and analyzing the structure, creating new programming languages, and developing new generations of memory storage and sensor technologies.
What are the two neuromorphic systems mentioned in the presentation, and what are their key differences?
-The two neuromorphic systems mentioned are Neurogrid and IBM's TrueNorth. Neurogrid uses sub-threshold analog logic and is smaller in scale, while TrueNorth is larger, more efficient, and fully digital, using memristor devices instead of traditional VLSI systems.
How does the efficiency of the human brain compare to modern supercomputers in terms of processing power?
-The human brain is significantly more efficient than modern supercomputers. It can perform around 10^18 floating-point operations per second (FLOPS) on just 20 watts of power, making it five times faster than the world's largest supercomputer, IBM's Summit.
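Using the figures quoted above, plus Summit's publicly listed power draw of roughly 13 MW (an assumption not stated in the talk), the gap can be put in FLOPS-per-watt terms:

```python
# Rough efficiency comparison using the talk's figures and public Summit specs.
brain_flops, brain_watts = 1e18, 20       # ~10^18 FLOPS on ~20 W (the talk's estimate)
summit_flops, summit_watts = 2e17, 13e6   # Summit: ~200 petaflops peak, ~13 MW (assumed)

print(brain_flops / summit_flops)         # ~5x raw throughput, as claimed in the talk
print((brain_flops / brain_watts) / (summit_flops / summit_watts))
# per watt, the brain comes out millions of times more efficient on these numbers
```

The absolute numbers are contested in the literature; the point the talk makes is the orders-of-magnitude efficiency gap, which survives even with very different estimates.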
Outlines
🧠 Introduction to Neuromorphic Computing
Orange Banerjee, a student at Bennett University pursuing a degree in CSE, introduces the topic of neuromorphic computing in a seminar presentation. He expresses his interest in AI and machine learning and acknowledges the support from his university and faculty. The presentation delves into the history of computing, highlighting the von Neumann architecture as the foundation of modern computer systems. Neuromorphic computing is presented as a revolutionary approach that mimics the neurobiological architecture of the human nervous system using VLSI systems. The speaker emphasizes the need to understand the brain's workings to advance in this field and discusses the limitations of the von Neumann architecture, such as the separation of CPU and memory, which contrasts with the brain's integrated processing and memory. The presentation also touches on Moore's Law and its implications for the future of computing, particularly the energy demands that could outstrip global energy budgets.
🔋 Energy Efficiency and Neuromorphic Systems
The second paragraph discusses the inherent latency and bottleneck in traditional computing architectures due to the separation of processing and memory units. It contrasts this with the brain's efficiency, where processing and memory are co-located, reducing latency. The human brain's remarkable energy efficiency is highlighted, noting that it operates on a mere 20 watts of power, which is comparable to the energy used by a cup of coffee. The speaker emphasizes the need for neuromorphic systems that can replicate the brain's efficiency and processing capabilities. An example of a neuron and an artificial neuron is used to illustrate the concept of data transfer through ions and the potential for creating artificial synapses using memristors, which have memory properties. The paragraph concludes with a call to explore the vast field of neuromorphic computing and the potential for creating systems that mimic the human brain's functionality.
🚀 Neuromorphic Computing for Space and AI
In the third paragraph, the speaker explores the potential applications of neuromorphic computing, particularly in space operations and artificial intelligence. Neuromorphic systems are highlighted for their adaptability and flexibility, which are currently unmatched by traditional AI systems. The potential for neuromorphic computing to make supercomputers faster and more efficient in space missions is discussed, along with the ability to reduce the need for ground mission teams. The speaker also touches on the importance of reducing data processing requirements and the potential for neuromorphic chips to revolutionize space exploration. The paragraph concludes with a discussion of the architecture of neuromorphic systems, emphasizing the need for memristors and the stacking of these components to create integrated circuits that can mimic the brain's structure and function.
💡 Neuromorphic Chips and Their Development
The fourth paragraph delves into the development of neuromorphic chips and their capabilities. The speaker mentions the progress made in the field, with companies like IBM and Intel investing in neuromorphic technology. IBM's TrueNorth chip, introduced in 2014, is highlighted for its 64 million neurons and 16 billion synapses, making it a significant milestone in neuromorphic computing. Intel's Loihi chip, introduced in 2020, is noted for its efficiency and potential to scale up to 100 million neurons. The speaker compares the processing power of neuromorphic systems to that of the human brain and supercomputers, emphasizing the potential for neuromorphic systems to outperform current technology. The paragraph also discusses the architecture of neuromorphic chips and the challenges of integrating them into existing systems.
🛠 Challenges and Future of Neuromorphic Computing
The final paragraph addresses the challenges and future prospects of neuromorphic computing. The speaker notes the difficulties in designing and analyzing neuromorphic systems, suggesting that new programming languages and hardware may be required. The limitations of current neuromorphic designs are discussed, particularly the lack of consideration for the full complexity of the human brain, such as glial cells and emotions. The speaker questions whether we have enough understanding of the brain to replicate its functions effectively. The paragraph concludes with a reflection on the potential of neuromorphic computing to revolutionize technology and the need for further research and development in this field.
Mindmap
Keywords
💡Neuromorphic Computing
💡Von Neumann Architecture
💡Moore's Law
💡Memristor
💡Artificial Neural Networks
💡CMOS Operations
💡Energy Efficiency
💡IBM's TrueNorth
💡Intel's Loihi
💡Synapse
Highlights
Introduction to neuromorphic computing by Orange Banerjee from Bennett University.
Neuromorphic computing aims to create devices that work like the human brain.
Historical context of computing architectures, including the von Neumann architecture.
Invention of neuromorphic engineering by Carver Mead in the 1980s.
Explanation of how neuromorphic computing mimics neurobiological architectures.
Moore's Law and its implications for the growth of transistors and energy efficiency.
The limitations of von Neumann architecture and the need for neuromorphic systems.
Comparison of energy efficiency between the human brain and modern computers.
The concept of artificial neurons and how they mimic the structure of biological neurons.
Role of memristors in creating artificial synapses for neuromorphic computing.
Potential applications of neuromorphic computing in space operations and supercomputing.
Development of neuromorphic chips by companies like IBM and Intel.
IBM's TrueNorth chip with 64 million neurons and 16 billion synapses.
Intel's Loihi chip and its efficiency compared to IBM's TrueNorth.
Challenges in designing and analyzing neuromorphic systems.
The question of whether we know enough about the brain to replicate its functions in neuromorphic computers.
Closing remarks and thanks for attending the seminar.
Transcripts
[Music]
welcome to the seminar presentation my
name is orange banerjee and i'm pursuing
b.tech in cse from bennett university
well bennett university is a private
institution situated at greater noida
today's topic for presentation is
neuromorphic computing before moving
forward i would like to thank my
university and miss gagandeep kaur our
faculty for this subject in the semester
for giving me this opportunity to come
forward and explain something that i
really wanted to explore into
well i'm really interested in the very
concept of artificial intelligence and
machine learning hence neuromorphic
computing itself is a very unique
and
it's a topic that is not known by many
people so today my job here is to
explain you what exactly is neuromorphic
computing
if we go back in time
and try to realize how everything came
into this picture
or how we came to know about
neuromorphic computing then we have to
go back in 1945 or the fact that each
and every computer power and advances
were based on an architecture called the
von neumann architecture developed by
john von neumann and others
it was written on an unfinished paper
and ironically the most impressive
revolution in the history of technology
was done on basis on a half century old
design on an unfinished paper
well neuromorphic computing also known
as neuromorphic engineering was invented
by carver mead in the 1980s he talked
about the use of vlsi systems well that
is nothing but creating an integrated
circuit by combining millions of
transistors on a single chip
that consists of electronic analog
systems that replicate or mimic
neurobiological architecture present in
the human nervous system
so
in summary what i'm trying to say here
is that neuromorphic computing is
basically creating
a solid state device like our laptops
are
which would exactly work like our brain
and in order to do that we need to
understand as we move forward how our
brain works and how exactly can we do
that
well coming to moore's law moore's law
as mentioned by gordon moore is nothing but
the number of transistors on a microchip
would double every two years and that
the cost of computer will be halved it
has nothing to do with neuromorphic
computing but i'll tell you why i'm
getting here now the fact that
all everything is on uh the von neumann
architecture well now i've been
mentioning it too many times and i'll
tell you what exactly it is
in case of a von neumann architecture we
have a memory and a cpu and the
bottleneck is that we need to transfer
um
data
through these things
through these two
portions or sectors as you can see the
cpu and the memory however that's not
how our brain works our brain does not
transfer data from we don't have a cpu
here and a memory here that transfers
data right it's much more complex than
that so yeah so what von neumann does is
that it restricts
the development of these machines um
into something that would have higher
implications
um
coming into neural networking or
artificial neural networks or the very
concept of artificial intelligence right
so the fact is can we change the
substrate uh in the case of von neumann
architecture and make something brain
like another thing is that why i'm
mentioning moore's law is that the von
neumann architecture is very energy hungry
as mentioned by moore's law the
number of transistors in a microchip
would double every two years it
basically speaks about the exponential
growth of
uh
of our technology in our recent
generations
by 2040 however it has been said that we
would require 10 to the power 27 joules
of energy to work on a von neumann
architecture to do cmos operations now
what is cmos operations it is nothing
but the operations that we are doing at
this current moment on every laptop
computers whatever that you take
whatever that comes into your mind right
so that is the problem here and that's
why i'm trying to tell you that the
moore's law is true however
that's what poses a huge threat
in the coming years
as
10 to the power 27 joules is currently
the budget of the entire world's energy
so you can understand the extent till
which what we are talking about here
that brings me to the point as to why do
we need neuromorphic systems the entire
explanation
of the energy that requires the von
neumann uh the cmos operation
requires is the main reason as to why we
need neuromorphic systems right so the
thing is that uh if we if you try to
understand that there will be always be
a bottleneck or an inherent latency
right so there is this inherent latency
in the von neumann architecture for
the transfer of data to the cpu and the
memory right so the question here is the
issue here is why can't we
co-locate or why can't we have the
processor as well as uh the memory uh
together in one place
right so that that's what our brain does
right it has the processor and the
memory and it's at a single place
and it does the work unified right
so
again our brain is also very energy
efficient you need to understand that
just a banana or like a fruit or a
coffee that you drink in your daily
lives would fire you up right would
charge you up and you would be ready to
do something uh something really
tiresome right
and
this thing
works on 20 watts just that so if i'm
having a cup of coffee for example
and then i'll be fired up and ready to
do things like
uh face detection identification
and
many such things that take number of
cpus to train a machine learning model
right
and
that is
that's where the fact that we need
neuromorphic systems because on many
levels we can beat a computer our brain
can do that
so
how can we make a computer that works
like a human brain right and moving any
further forward i like to mention that
i can only scratch the surface of this
very field i can only give you the upper
layer of understanding of what or how
our neuromorphic computing works exactly
because it's a very very vast and wide
field to be honest let me give you an
example there was a research paper or a
survey that was done it was around 15 to
20 pages and the number of references on
it were 2,682
that's right that's how big of a or wide
of a field it is
so moving forward how does uh how to how
do we make a computer that works like a
human brain in order to understand that
we have to understand how a human brain
works so how
this exactly works right so let me give
you an example right here so this is
nothing but to make you understand
visually the structure of a typical
neuron and
uh artificial neuron and i'll try to
explain what it is but before moving on
to that
i'll tell you
our brain consists of neurons and
synapses we all know that
and the gaps between these neurons are
called synapses
and what happens is that data has been
transferred or these neurons communicate
with each other through
what we call neurotransmitters
and how do these work right we need to
know we these work on something called
ion flow
now ions what are ions these are nothing
but charged atoms so we've got the
potassium ion the chloride ion and
different sorts of ions present in our
brain so there are ions inside and ions
outside
hence
the flow of these ions create an
electric charge in our brain now if you
think very carefully that is exactly
what we're doing in a normal or a
typical computer right we are
controlling the flow of charge
that's how our computer works to be
honest
so
so that's that is exactly where the
analogy of neuromorphic computing comes
from
what we are trying to do is that we are
trying to replicate this exact
architecture and put it on a solid state
device or in a solid state right
and
so
what we need to understand or we need to
realize is that for that we need to
create
the synapse and artificial synapse
in order to create an artificial synapse
we need something we need an electrical
device that has a memory
in case of a
standard resistor you see
if you put a voltage and pass a current
through it right it doesn't have a
memory of it happening it just it takes
the current passes it through the entire
circuit or the integrated circuits in
case of computing language
so
we need something that will have the
memory of what's happening to it in the
past
so
for that there is something called
memristors and
everything that will
further talk about
every device or every
uh every invention in neuromorphic
computing that was done by hp or ibm it
was all based on these memristor
devices
so a memristor is
something that has a memory of it
uh of the current passing through it
an extreme example of a memristor would
be a fuse okay so you pass the current
through it and it diffuses
but
that's not very useful right because
it's dead but you could have a very less
extreme version of it
where you pass the current then you stop
the voltage and
you know it has a memory in that state
the memristor in that state has
memory of the amount of current that
you're passing through the amount of
voltage that you have put it through
so
that is the analogy of neuromorphic
computing and that is how we can build
neuromorphic models of the human
brain so this was a very electronic
biased discussion regarding how we can
actually make neuromorphic
models or you know models that work like
a human brain let me get you into the
computational discussion of it all so as
you can
see the connectivity part of the photo
right here is that this is a
convolutional neural network
so instead of going into something as
deep as the convolutional neural network
let me start with the very basic right
so an artificial neuron in itself isn't
very effective
but if you stack them up it can do a lot
of things
like face detection
classification
and many such fields that are covered in
machine learning topics
so
in case of an artificial neural network
what happens is that
if we talk about
uh based on rosenblatt's perceptron
right this example itself let's take
an artificial neural network that can
classify between a circle a square and a
triangle
so we have three layers the input layer
the hidden layer and the output layer
what happens is that this photo that is
28 by 28 pixels in form of a matrix
grows or you know just passes through
the input layers
and they are connected through channels
to the hidden layer
and these channels are given a numerical
value called weights
and when passed through these channels
through the hidden layer the hidden
layer performs most of the computed uh
computational work that our network
requires right
so
it is passed through something called
the bias
and then the hidden layer goes through a
threshold function and this threshold
function is known as the activation
function and whatever the result of the
activation function is the one with the
highest probability is being then
reflected
however we need to understand that if we
test an artificial network directly then
it won't work we need to train it
just like we train any machine learning
model right and how do we train it we
train it by giving it or passing it
through the actual output so we feed in
the actual output
with the model itself so it compares how
many it got wrong and right and thus by
calculating the error itself it self
trains itself
providing us
a certain amount of accuracy based on
how and what we are doing right
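The forward pass just described, a 28x28 input flowing through weighted channels into a hidden layer, with a bias and a threshold (activation) function picking the highest-probability class, can be sketched as follows. The layer sizes, initialization, and activation choices are illustrative assumptions, not details from the talk:

```python
import numpy as np

# Minimal forward pass for the 3-layer network described in the talk:
# input layer -> weighted channels -> hidden layer (+ bias) -> activation -> output.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A 28x28 image flattened to 784 inputs, one hidden layer, and 3 output
# classes (circle / square / triangle, as in the talk's example).
x = rng.random(784)
w1, b1 = rng.standard_normal((32, 784)) * 0.05, np.zeros(32)   # channel weights
w2, b2 = rng.standard_normal((3, 32)) * 0.05, np.zeros(3)

hidden = relu(w1 @ x + b1)          # hidden layer does the bulk of the computation
probs = softmax(w2 @ hidden + b2)   # activation output: per-class probabilities
print(probs.argmax())               # index of the most probable class
```

Training, as the talk notes, then consists of comparing these outputs against known labels and adjusting the weights to reduce the error.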
so this is
based on this concept we are actually
trying to create an artificial synapse
so we have been talking about uh
neuromorphic systems and computers for a
long time and you know how we can create
them what is it exactly how our brain
works etc etc
but
why do we need these neuromorphic systems
to be honest why or what use do these
neuromorphic systems would be right
so
the answer to that question is that
neuromorphic systems have wide
implications in the near future
and let me tell you why it's because
our brain and this
is very
flexible and
it's
it's adaptive to changes right
now there is no machine learning model
or any sort of artificial intelligence
present at this very moment that can
adapt to changes or it's flexible in its
decision making and stuff like that
right
so
the promise that neuromorphic computing
holds for artificial intelligence is
massive
the major impact that
neuromorphic computing can have
is
making supercomputers faster and on
space operations now the inquiry would
be
what effect it would have on space
operations right now space missions
require high performance computing
systems
that
can you know restrict themselves to size
and weight and power
and can work in very extreme
environmental and operational conditions
right that include extreme temperature
high radiation power loss
and
what your neuromorphic computing systems
can do is that it can help
space vehicles to adapt and learn
according to its environment
and its changes hence
it would you would not require a ground
mission team to operate any space
project now this is a very huge stretch
and stride
at the same time but it is very much
possible
as you can see at this very current
moment
there is massive amounts of data that
needs to be
interpreted
by the grounds operation team in order
to
make a space mission successful right
so instead of that even though all of
that is done on von neumann
architecture instead of that we can use
the neuromorphic computers ncs that
would actually help to reduce the number
of bytes to process images and as i've
already said you know it can
help the space vehicle in a lot of ways
and it can also
help
you know reduce manpower in general
and make the space missions or projects
much more efficient that is the major
impact of neuromorphic computing
moving on
all right so the neuromorphic
architecture right it's connecting
neuromorphic chips remember that i said
that we need memristors
memristors that have a memory of
working
this basically connecting neuromorphic
chips this is basically an architecture
of the memristor itself so we need to
pile these memristors in lots and lots of
piles right we need memristors uh
in which which are very
easily fabricable
and you can synthesize them very easily
right so you connect these memristors on
top of each other
and put millions of it on on a chip and
that single chip can then work as an
integrated circuit for a solid state
device or anything that we create in the
near future
so are neuromorphic systems available to
us right
or have we still been able to build
neuromorphic systems right yes we have i
mean not exactly but yes we have been
able to develop neuromorphic chips
that can compute much faster and
efficiently and much closer to how our
brain works right
uh back in 2008 when the missing
memristor found as a research
paper was titled came into effect
um many thought that that was the thing
that would have could have been used in
neuromorphic systems and that would
bring a revolution of a change
now neuromorphic systems
uh at this moment are being invested in by more
and more companies nowadays because
they've come to realize that
these systems can be the future of their
entire generation
in 2014 ibm introduced something called
the true north that debuted with 64
million neurons
and 16 billion synapses
it was it came right out of something
called ibm's development facility
and
it was said to be one of the benchmarked
neuromorphic chips to have been ever
made till this date
however in december 2020 while we were
all in lockdown
intel introduced its neural chip known
as loihi which used 64 of these
chips to create a system of 8 million
neurons which is much smaller than ibm's
and much faster and more efficient it is
expected to reach around 100 million
neurons in the near future you can
understand the amount of computing power
we are talking about here
so let me give you an example
uh ibm's uh supercomputer the summit
it can process around
let me tell you
500 petaflops of data and our brain
on 20 watts can process around
10 to the power 18 flops of data which
is floating point operations per
second
and
that makes our brain five times faster
than the world's largest supercomputer
so you can understand that how
important or how efficient it could be
if you can slowly slowly make
systems that are
much more
closer to how our brain works
coming on to two different things we
will talk about right now first is the
neurogrid and second is ibm's truenorth
well as it's written here already it's a
16-chip system that emulates millions
of neurons with billions of connections
it looks just like as you can see in the
image on the right hand side
it specifies that it mimics the analog
properties of the neurons of the brain by
using sub-threshold analog logic
so it basically uses the very
basic or the very primitive concept of
transistors that is having the electric
analog circuits
that can actually mimic neurobiological
systems that was introduced by
carver mead neurogrid was one of
the oldest neuromorphic systems or
to be honest the neuromorphic
transistors to be introduced in the
market
it uses asynchronous digital logic for
communication
what it means is that it is it does not
continuously process data
it slowly divides it into small chunks
and processes data
part by part into different neurons
that's how our brain works right it
processes data and we use only a small
part of our brain not the entire part of
it to process a certain amount of data
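Event-driven, non-continuous processing of this kind is commonly modeled with a leaky integrate-and-fire neuron. The sketch below is a generic textbook model, not Neurogrid's actual circuit, and the leak and threshold values are arbitrary:

```python
# Leaky integrate-and-fire neuron: integrates its input, leaks over time,
# and emits a discrete spike (event) only when a threshold is crossed.
def lif_run(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = leak * v + i          # membrane potential: leak, then integrate input
        if v >= threshold:        # fire an event, then reset the potential
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive yields sparse, periodic events rather than continuous output.
print(lif_run([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Between spikes the neuron does no work, which is why spiking hardware can sit idle most of the time and stay so power-efficient.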
coming on to ibm's true north it comes
from ibm's cognitive computing division
as i said it's also known as the
development division it's 16 times the
size of neurograde in 2014 ibm
introduced this even if it's
it's much bigger than what the primitive
neural grid was it is much more
efficient than the newer grid itself
and instead of the sub-threshold analog
they're completely digital that means
that it uses what i have already
mentioned a thousand times till now the
memristor devices instead of the normal
vlsi systems that were used by the
neurogrid
coming on to the challenges to using
neuromorphic systems
the basic challenge of neuromorphic
system is the design and the analysis or
the structure of it all
why i'm talking like this it's because
we
as uh like we as coders or we as people
study in computer science we know that
how tough it is to learn a new language
right maybe python c plus plus or java
and these languages are what are used to
code any sort of
machine learning or you can say create a
website right so in case of a very new
technology which is the neuromorphic
system right we might have to create a
new completely new programming language
and that has a lot a lot of challenges
to it right it has new generations of
memory storage sensor technologies to be
introduced even
neuromorphic systems may even require a
major change in the hardware itself as i
said we need to create a processor and
the memory at the same location
that is very tough to do ladies and
gentlemen
now
do we know enough the fundamental
question is that do we know enough can
we predict the future can we
scale
whether
uh neuromorphic computers would be as
useful as we are you know from making
them sound in today's date right for
example as it's already mentioned here
the glial cells which are the brain's
support cells right
which are not which are very prominent
in you know
in how our brain works
in how our brain processes data how
flexible our brain is however that is
not the case in any neuromorphic
designs
we do not take into consideration
many such small parts that are present
in our brain that help us take certain
decisions at certain time
because we can only think at a very
basic level such as the neurons the
synapses so on this basic level we can
create something but if we delve deep
inside
the brain itself for example the human
emotions can you replicate them that has
been one of the biggest questions
in today's world so the question still
stands or the inquiry is still there do
we know enough about
how our brain works or we can we
replicate
enough properly
or we can make neuromorphic computers in
such a way that
it exactly mimics and replicates
what our brain does
thank you for attending the seminar
and see you guys later