Neuromorphic Intelligence: Brain-inspired Strategies for AI Computing Systems
Summary
TLDR: Giacomo Indiveri from the University of Zurich and ETH Zurich discusses brain-inspired strategies for developing low-power artificial intelligence computing systems. He highlights the limitations of current AI algorithms, which consume significant energy and are less versatile than natural intelligence. Indiveri introduces neuromorphic engineering as a promising approach, emphasizing the importance of emulating the brain's structure and function to create efficient, compact, and intelligent devices. He showcases the Dynamic Neuromorphic Asynchronous Processor as an example of this technology and its applications in fields like industrial monitoring and machine vision.
Takeaways
- 🧠 The talk by Giacomo Indiveri from the University of Zurich and ETH Zurich focuses on brain-inspired strategies for low-power artificial intelligence computing systems.
- 📈 The success of AI algorithms and neural networks, which originated in the late 80s, has recently surged due to advancements in hardware technologies, availability of large datasets, and improvements in algorithms.
- 🔋 A significant challenge with current AI algorithms is their high energy consumption, with estimates suggesting the ICT industry could consume about 20% of the world's energy by 2025.
- 💡 The high power usage is largely due to the extensive data and memory resources required, particularly the energy spent moving data between memory and processing units.
- 🌐 The narrow specialization of AI networks is highlighted as a fundamental issue, contrasting with the general-purpose capabilities of natural intelligence found in animal brains.
- 🚀 Neuromorphic engineering, inspired by the structure and function of the brain, is presented as a promising approach to overcome the limitations of current AI systems.
- 🏫 The term 'neuromorphic' has been adopted by various communities, including those designing CMOS circuits to emulate brain functions, those developing practical devices for problem-solving, and those working on emerging memory technologies.
- 🔬 Biological neural networks differ significantly from simulated ones: they exploit time dynamics and the physics of their elements, with memory and computation co-localized at each neuron and synapse.
- 💡 The key to low-power computation lies in parallel arrays of processing elements with co-localized memory and computation, avoiding the need for data transfer between separate memory and processing units.
- 🌟 The potential of neuromorphic systems is demonstrated through various applications such as ECG anomaly detection, vibration anomaly detection, and intelligent machine vision, showcasing their potential for practical, energy-efficient solutions.
Q & A
What is the main focus of Giacomo Indiveri's talk?
-The main focus of Giacomo Indiveri's talk is on brain-inspired strategies for low-power artificial intelligence computing systems.
Why have artificial intelligence algorithms and networks only recently started outperforming conventional approaches?
-Artificial intelligence algorithms and networks have only recently started outperforming conventional approaches due to advancements in hardware technologies providing enough computing power, the availability of large datasets for training, and improvements in algorithms making networks more robust and performant.
What is the estimated energy consumption of the ICT industry by 2025 in relation to the world's total energy?
-By 2025, it is estimated that the ICT industry will consume about 20 percent of the entire world's energy.
Why are current AI algorithms considered to be power-hungry?
-Current AI algorithms are power-hungry because they require large amounts of data and memory resources, and the energy cost of moving data from memory to computing and back is very high.
How do neuromorphic computing systems differ from traditional artificial neural networks?
-Neuromorphic computing systems differ from traditional artificial neural networks by emulating the brain's structure and function more closely, using parallel arrays of processing elements with computation and memory co-localized, and leveraging the physics of the devices for computation.
What is the significance of the term 'neuromorphic' in the context of Giacomo Indiveri's work?
-In the context of Giacomo Indiveri's work, 'neuromorphic' refers to the design of systems that mimic the neural structure and computational strategies of the brain, aiming to create compact, intelligent, and energy-efficient devices.
What are the three main strategies Giacomo Indiveri suggests for creating low-power artificial intelligence systems?
-The three main strategies suggested for creating low-power artificial intelligence systems are using parallel arrays of processing elements with co-localized computation and memory, leveraging the physics of analog circuits for computation, and matching the temporal dynamics of the system to the signals being processed.
How does Giacomo Indiveri's approach to neuromorphic computing address the issue of device variability and noise?
-Giacomo Indiveri's approach addresses device variability and noise by using populations of neurons and averaging over time and space, which can reduce the effect of device mismatch and noise, and by exploiting the variability as an advantage for robust computation.
What are some of the practical applications of neuromorphic computing systems discussed in the talk?
-Some practical applications of neuromorphic computing systems discussed include ECG anomaly detection, vibration anomaly detection, industrial monitoring, intelligent machine vision, and consumer applications.
What is the Dynamic Neuromorphic Asynchronous Processor (DYNAP) and what is its significance?
-The Dynamic Neuromorphic Asynchronous Processor (DYNAP) is an academic prototype built at the University of Zurich and ETH Zurich. It is significant because it demonstrates the feasibility of neuromorphic computing with a thousand neurons organized in four cores of 256 neurons each, showcasing the potential for low-power edge computing applications.
How does Giacomo Indiveri's research contribute to the field of neuromorphic intelligence?
-Giacomo Indiveri's research contributes to the field of neuromorphic intelligence by developing new architectures, packaging systems, and memory devices that are inspired by the brain's computational principles, aiming to create more efficient and powerful computing systems.
Outlines
🧠 Introduction to Brain-Inspired AI and Energy Efficiency
Giacomo Indiveri from the University of Zurich and ETH Zurich introduces the concept of brain-inspired strategies for low-power artificial intelligence computing systems. He discusses the history of AI, highlighting the resurgence of interest due to hardware advancements that provide the necessary computing power. The talk emphasizes the unsustainable energy consumption of current computing, with the ICT industry projected to consume about 20% of the world's energy by 2025. Indiveri points out the inefficiency of data movement between memory and processing units as a significant contributor to this energy consumption.
🌟 The Emergence of Neuromorphic Computing
The concept of neuromorphic computing is explored, which was first coined by Carver Mead in the 1980s. Neuromorphic computing aims to emulate the brain's efficiency in processing information. Three main communities are identified: those designing CMOS circuits to emulate brain functions, those building practical devices for problem-solving using spiking neural networks, and those developing nanoscale devices for non-volatile memory. The talk suggests that by combining new materials, architectures, and theories, neuromorphic computing could lead to more efficient and powerful AI systems.
🔄 The Temporal Dynamics of Biological vs. Artificial Neural Networks
Indiveri contrasts the temporal dynamics and physical computation of biological neural networks with the algorithmic simulation of artificial neural networks. He explains that while artificial networks simulate neuron properties, biological networks use the physical properties of neurons and synapses for computation. The talk emphasizes the importance of understanding these differences to develop more efficient computing systems, suggesting that the structure of biological networks is inherently the algorithm, unlike in artificial networks.
🔋 Strategies for Low-Power Neuromorphic Systems
The talk outlines strategies for creating low-power neuromorphic systems, focusing on the co-localization of computation and memory, the use of analog circuits for efficient computation, and the importance of temporal dynamics matching the processing needs. Indiveri argues for a paradigm shift in computing, moving from traditional CPU-memory architectures to brain-inspired architectures that interweave memory and computation in a distributed system, leading to significant power savings.
🤖 Practical Implementations and Applications of Neuromorphic Systems
Indiveri discusses practical implementations of neuromorphic systems, including the development of analog circuits with slow, non-linear dynamics that operate in parallel. He highlights the benefits of device mismatch and variability, which can be harnessed for robust computation. The talk also covers the advantages of using analog circuits over digital ones, such as avoiding the need for clock circuits, reducing the need for data conversion, and leveraging the physics of devices for complex operations. Indiveri presents examples of neuromorphic systems developed at the University of Zurich and ETH Zurich, including chips that can perform edge computing applications with low power consumption.
🚀 Advancing Neuromorphic Intelligence and Its Applications
The talk concludes with a look at the future of neuromorphic intelligence, emphasizing the potential to transfer academic knowledge into practical technology through startups. Indiveri mentions applications such as industrial monitoring, machine vision, and consumer electronics as promising areas for neuromorphic technology. He also discusses the Dynamic Neuromorphic Asynchronous Processor, an academic prototype that demonstrates the potential of neuromorphic systems for edge computing. The talk ends with a call to action for further development and application of neuromorphic intelligence in society.
Keywords
💡Artificial Intelligence (AI)
💡Convolutional Neural Networks (CNNs)
💡Backpropagation
💡Neuromorphic Computing
💡Energy Consumption
💡Memory-Computation Co-location
💡Spiking Neural Networks (SNNs)
💡In-Memory Computing
💡Analog Circuits
💡Temporal Dynamics
💡Fault Tolerance
Highlights
The talk focuses on brain-inspired strategies for low-power artificial intelligence computing systems.
Artificial intelligence algorithms and networks have their roots in the late 80s but only recently started outperforming conventional approaches.
The success of AI is attributed to advancements in hardware technologies, availability of large data sets, and improvements in algorithms.
AI algorithms require significant memory and energy resources, leading to concerns about sustainability.
The ICT industry is projected to consume about 20 percent of the world's energy by 2025.
The movement of data between memory and computing units is a major factor in energy consumption.
AI algorithms are narrow and specialized, unlike the general-purpose nature of animal brains.
The backpropagation algorithm is a fundamental component of AI but differs from the brain's computational principles.
Neuromorphic engineering aims to emulate the brain's efficiency and could lead to breakthroughs in AI.
Neuromorphic systems use analog circuits and in-memory computing to reduce power consumption.
Biological neural networks use time dynamics and the physics of their elements, unlike simulated networks.
The brain's structure and architecture are intertwined with its computational processes, unlike traditional computer systems.
Neuromorphic systems use parallel arrays of processing elements with computation and memory co-localized.
Analog circuits in neuromorphic systems can perform complex operations more efficiently than digital counterparts.
The temporal domain is crucial in neuromorphic systems, with dynamics matched to the signals being processed.
Neuromorphic systems can achieve fast reaction times even with slow elements due to parallel processing.
The University of Zurich and ETH Zurich have been developing neuromorphic systems for edge computing applications.
Neuromorphic chips can be used for practical applications like industrial monitoring and intelligent machine vision.
Startups are now utilizing neuromorphic technology to solve real-world problems with low-power, high-efficiency AI systems.
Transcripts
Hello, this is Giacomo Indiveri from the University of Zurich and ETH Zurich, at the Institute of Neuroinformatics. It is going to be a pleasure for me to give you a talk about brain-inspired strategies for low-power artificial intelligence computing systems.
The term artificial intelligence has become very popular in recent times. In fact, artificial intelligence algorithms and networks go back to the late 80s, and although the first successes of these networks were demonstrated back then, only recently have these algorithms and computing systems started to outperform conventional approaches for solving problems. In the field of machine vision, from 2009 on, and in 2011 in particular, the first convolutional neural networks trained using backpropagation achieved impressive results that made the whole field explode.

The reason for the success of this approach, even though it started many years ago, is that only recently the hardware technologies started to provide enough computing power for these networks to really perform well. In addition, there is now the availability of large datasets that can be used to train such networks, which were not there in the 80s. And finally, several tricks, hacks, and improvements in the algorithms have been proposed to make these networks very robust and very performant.
However, they do have some problems. Most of these algorithms require a large amount of resources, in terms of both memory and energy, to be trained. In fact, if we estimate how much energy is required by all of the computational devices in the world to implement such neural networks, it is estimated that by 2025 the ICT industry will consume about 20 percent of the entire world's energy. This is clearly not sustainable.

One of the main reasons these networks are extremely power hungry is that they require large amounts of data and memory resources, and in particular they need to move data from the memory to the computing units and back. Typically the memory sits in DRAM chips, and a DRAM access is at least 1,500 times more costly than a compute operation, a MAC operation, in these CNN accelerators. So it is really not the fact that we are doing lots of computation; it is the fact that we are moving bits back and forth that is burning all of this energy.
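To get a feel for the magnitude, here is a minimal back-of-the-envelope sketch in Python. The per-operation energies and operation counts are illustrative assumptions (the talk only gives the roughly 1,500x ratio between a DRAM access and a MAC); the point is simply that data movement, not arithmetic, dominates the budget.

```python
# Back-of-the-envelope energy estimate for one CNN forward pass.
# Absolute numbers are assumptions; the talk cites only the ~1500x ratio.
E_MAC_PJ = 1.0                 # energy per multiply-accumulate, picojoules (assumption)
E_DRAM_PJ = 1500 * E_MAC_PJ    # energy per DRAM word access, ~1500x a MAC (from the talk)

n_macs = 1e9        # MAC operations in one forward pass (assumption)
n_dram_words = 1e7  # words fetched from off-chip DRAM (assumption)

compute_energy = n_macs * E_MAC_PJ * 1e-12        # joules
movement_energy = n_dram_words * E_DRAM_PJ * 1e-12

print(f"compute:  {compute_energy * 1e3:.2f} mJ")
print(f"movement: {movement_energy * 1e3:.2f} mJ")
# Even with 100x fewer DRAM accesses than MACs, moving the data costs
# ~15x more than all of the arithmetic combined.
```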
The other problem is more fundamental: it is not only related to technology, it is related to the theory behind these algorithms. As I said, these algorithms are very powerful at recognizing images and solving specific problems, but they are narrow, in the sense that they are specialized to only a very specific domain. These networks are programmed to perform a limited set of tasks, and they operate within a predetermined and predefined range. They are not nearly as general purpose as animal brains are. So even though we call it artificial intelligence, it is really different from natural intelligence, the type of intelligence that we see in animals and in humans.

The backbone of these artificial intelligence algorithms is the backpropagation algorithm or, if we are looking at time series and sequences, the backpropagation-through-time (BPTT) algorithm. This is really an algorithmic limitation, even though it can be used to solve very powerful problems. Trying to improve BPTT by making incremental changes is probably not going to lead to breakthroughs in understanding how to go from artificial intelligence to natural intelligence. The way the brain works is actually quite different from backpropagation through time.
If you look at neuroscience, and if you study real neurons, real synapses, and the computational principles of the brain, you will realize that there is a big difference. This problem has been recognized by many communities and agencies. There is, for example, a recent paper by John Shalf that shows how to go beyond these problems and try to improve the performance of computation. If we look at the particular path that tries to put together new architectures and new packaging systems with new memory devices and new theories, one of the most promising approaches is the one listed there as neuromorphic.

So what is neuromorphic? That is the bulk of this talk, in which I am going to show you what we can do at the University of Zurich, but also at SynSense, the startup that comes out of the University of Zurich, with this type of approach, which, as I said, takes the best of the new materials and devices, new architectures, and new theories, and tries to go beyond what we have today.
The term neuromorphic was coined many years ago by Carver Mead in the late 80s, and it is now being used to describe different things; there are at least three big communities using it. The original one, which goes back to Carver Mead, was referring to the design of CMOS electronic circuits used to emulate the brain, basically as a basic research attempt to understand how the brain works by building equivalent circuits, trying to really reproduce the physics. Because of that, these circuits were using sub-threshold analog transistors for the neural dynamics and the computation, and asynchronous digital logic for communicating spikes across chips and across cores. It was really fundamental research.

The other big community that has now started to use the term neuromorphic is the one building practical devices for solving practical problems. This community is building chips that implement spiking neural network accelerators or simulators, not emulation; at this point it is more an exploratory approach, used to understand what can be done with digital circuits that simulate spiking neural networks.

Finally, another large community that has started to use the term neuromorphic is the one that has been developing emerging memory technologies, looking at nanoscale devices to implement long-term non-volatile memories, or, if you like, memristive devices.
This community also started using the term neuromorphic because these devices can store a change in conductance, which is very similar to the way real synapses work: they change their conductance when they change their synaptic weight. This allows them to build in-memory computing architectures that are, as you will see, very similar to the way real biological neural networks work, and it can create very high-density arrays.

So by using analog circuits, by using the approach of simulating spiking neural networks with digital circuits, and by using in-memory computing technologies, the hope is that we create a new field, which I am calling here neuromorphic intelligence, that will lead to the creation of compact, intelligent, brain-inspired devices. And to understand how to build these brain-inspired devices, it is important to look at the brain, to go back to Carver Mead's approach and really do fundamental research, studying biology and trying to get the best out of all of these communities: the devices, the computing principles explored through simulations and machine learning approaches, but also neuroscience and the study of the brain.
Here I would like to highlight the main differences between simulated artificial neural networks and the biological neural networks, those that are in the brain. In simulated artificial neural networks, as you probably know, there is a weighted sum of inputs: the inputs all come into a point neuron, which is basically just doing the sum or the integral of the inputs, each multiplied by a weight. So it is really characterized by a big weight multiplication, a matrix multiplication operation, followed by a non-linearity, either a spiking non-linearity if it is a spiking neural network or a thresholding non-linearity if it is an artificial neural network.

In biology, the neurons are also integrating all of their synaptic inputs with different weights, so there is this analogy of weighted inputs, but it is all happening through the physics of the devices; the physics plays an important role in the computation. The synapses are not just doing a multiplication: they are actually implementing temporal operators, integrating, applying non-linearities, dividing, summing; it is much more complicated than just a weighted sum of inputs. In addition, the neuron has an axon, and it sends its output through the axon as an all-or-none event, a spike, through time: the longer the axon, the longer it takes for the spike to travel and reach its destination, and depending on how thick the axon is and how much myelination there is, it will be slower or faster. So here too the temporal dimension is really important.
In summary, the big difference is that artificial neural networks, the ones being simulated on computers and GPUs, are algorithms that simulate some basic properties of real neurons, whereas biological neural networks really use time dynamics and the physics of their computing elements to run the algorithm. In fact, in these networks the structure of the architecture is the algorithm: there is no distinction between the hardware and the software, everything is one. Understanding how to build these types of hardware architectures, wetware or hardware, using CMOS, using memristors, maybe even using alternative approaches such as DNA computing, will hopefully and probably lead to much more efficient and powerful computing systems compared to artificial neural networks. So if we want to understand how to do this, we really need a radical paradigm shift in computing.
Standard computing architectures are based on the von Neumann model, where you have a CPU on one side and memory on the other, and, as I said, transferring data back and forth between the CPU and the memory is what burns all the power; doing the computation inside the CPU is much more energy efficient and less costly than transferring the data. In brains, what happens is that inside the neuron there are synapses which store the value of the weight, so memory and computation are co-localized. There is no transfer of data back and forth; everything happens at the synapse and at the neuron, and there are many distributed synapses and many distributed neurons. So the memory and the computation are intertwined in a distributed system.
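As a conceptual illustration of this co-localization, here is a minimal NumPy sketch of an idealized memristive crossbar, the in-memory computing architecture mentioned earlier: each weight is stored as a conductance at a crossing point, input voltages are applied on the rows, and by Ohm's law and Kirchhoff's current law the column currents are the weighted sums, computed in place where the weights live. All names and values are illustrative.

```python
import numpy as np

# Idealized memristive crossbar: weights are stored as conductances at the
# array's crossing points, so the multiply-accumulate happens *in* the memory.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens (rows x cols)
V = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages applied on the rows

# Ohm's law per device (I = G * V) plus Kirchhoff's current law per column:
# each column current is a weighted sum of the inputs, with no data movement.
I = G.T @ V    # column currents in amperes

print(I)       # one analog dot product per column, computed where the memory is
```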
This is really a big difference. If we want to understand how to save power, we have to look at how the brain does it; we have to use these brain-inspired strategies. The main three points I would like you to remember are these. First, we have to use parallel arrays of processing elements that have computation and memory co-localized, and this is radically different from time-multiplexing a circuit. If we have one CPU, two CPUs, or even 64 CPUs simulating thousands of neurons, we are time-multiplexing the integration of the differential equations on those 64 CPUs. Following the brain-inspired strategy instead, if we want to emulate a thousand neurons, we really have to lay out a thousand different circuits on the chip, on the wafer, and run them through their physics, through the physics of the circuits, analog or digital; there have to be many circuits operating in parallel with the memory and the computation co-localized. That is really the trick to saving power.
Second, if we have analog circuits, we can use the physics of the circuits to carry out the computation: instead of abstracting away differential equations and integrating them numerically, we use the physics of the device itself. That is much more efficient in terms of power, latency, and area.
Finally, the temporal domain is really important: the temporal dynamics of the system have to be well matched to the signals that we want to process. If we want very low-power systems and we want, for example, to process speech, the elements in our computing substrate, in our brain-like computer, have to have the same time constants. In speech, phonemes have time constants on the order of 50 milliseconds, so we have to slow down silicon to obtain dynamics and time constants on that order. Our chips will be firing at hertz or maybe hundreds of hertz, but definitely not at the megahertz or gigahertz rates of our CPUs and GPUs.
And by having parallel arrays of very slow elements, we can still get very fast computation. Having slow elements does not mean the system cannot be fast and reactive: because the elements work in parallel, at any moment there will always be one or two of them that are about to fire when the input arrives, so we can have microsecond or even nanosecond reaction times despite millisecond dynamics. This is another key trick to remember if we want to understand how to make this radical paradigm shift.
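Here is a minimal simulation sketch of this "slow elements, fast response" argument, with assumed parameters: a population of leaky integrators with a 50 ms time constant, each sitting at a random membrane state when a step input arrives. The first spike from the population comes orders of magnitude sooner than the time constant alone would suggest.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 50e-3          # membrane time constant, 50 ms (matched to e.g. phonemes)
dt = 10e-6           # 10 us simulation step
v_th = 1.0           # firing threshold (arbitrary units)
n = 256              # neurons in the population

# Each slow neuron sits at a random membrane voltage when the input arrives.
v = rng.uniform(0.0, 0.95 * v_th, size=n)
i_in = 25.0          # strong shared step input (assumption)

t = 0.0
while np.all(v < v_th):           # wait for the first spike in the population
    v += dt * (-v + i_in) / tau   # leaky integration of the shared input
    t += dt

print(f"first population spike after {t * 1e6:.0f} us "
      f"(membrane time constant is {tau * 1e3:.0f} ms)")
# Prints a latency on the order of ~100 us: the neuron closest to threshold
# responds almost immediately, even though every element is slow.
```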
At the University of Zurich and ETH Zurich, at the Institute of Neuroinformatics, we have been building these types of systems for many years, and we are now also building them at our startup, SynSense. The type of system is shown here: basically, we create arrays of neurons with analog circuits. These circuits, as I told you, are slow; they have slow, non-linear temporal dynamics. They are massively parallel: all of the circuits work in parallel. The fact that they are analog brings device mismatch with it: the circuits are inhomogeneous, they are not all equal, and this can actually be used as an advantage for robust computation. It is counter-intuitive, but I will show you that variability in your devices is actually an advantage, and this is also very good news for people building memristive devices, which are typically very variable.

The other features are that they are adaptive: all of these circuits have negative feedback loops, learning, adaptation, and plasticity, and the learning actually helps in creating robust computation out of the noisy and variable elements. By construction there are many of them working in parallel, so even if some of them stop working the system is fault tolerant: you do not have to throw away the chip as you would with a standard processor when one transistor breaks. Performance will probably degrade smoothly, but at least the system keeps working. And because we use the best of both worlds, analog circuits for the dynamics and digital circuits for the communication, we can program the routing tables and configure these networks, so we have the flexibility to program these dynamical systems much as you would program a neural network on a CPU, on a computer. Of course, it is more complex; we still have to develop all the tools, and SynSense and other colleagues around the world are still busy developing the tools to program these dynamical systems. It is not nearly as well developed as writing a piece of Java, C, or Python code, but there is very promising work going on.
Now, the question always comes: if analog is noisy, annoying, and inhomogeneous, why go through the effort of building these analog circuits? Let me explain the advantages. Think of large networks in which many elements work in parallel, for example memristive devices in a crossbar array, through which you want to send data. If you use the physics of these memristive devices, they operate on analog variables, so if you send these variables in an asynchronous mode you do not need a clock: you can avoid digital clock circuitry, which is extremely expensive in terms of area in large, complex chips and extremely power hungry. Avoiding clocks is really useful. If we stay analog all the way from the input to the output, we do not need to convert from digital to analog to drive the physics of these memristors, and we do not need to convert back from analog to digital; ADCs and DACs are very expensive in terms of size and power, so by not using them we save both. And if we use transistors to compute, for example, exponentials, we do not need complicated digital circuitry to do so: a single device, through its physics, can perform a complex non-linear operation. That saves area and power as well.
Finally, if we have analog variables, such as variable voltage heights, variable voltage pulse widths, and other variable currents, we can control the properties of these memristive devices. Depending on how strongly we drive them, we can make them volatile or non-volatile, and we can use their intrinsic non-linearities. Depending on how strongly we drive them, we can even make them switch with a certain probability, so we can use their intrinsic stochasticity to do stochastic gradient descent, or to build probabilistic graphical networks and do probabilistic computation. And we can also use them in their standard, non-volatile mode of operation as long-term memory elements, so we do not need to shift data back and forth to peripheral memory: we can store the values of the synapses directly in these memristive devices.
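As a toy illustration of the stochastic-switching point, here is a sketch, with an assumed sigmoidal switching model and made-up parameters (not a fitted device model): a device driven below its deterministic switching threshold flips only with some drive-dependent probability, which is exactly the kind of intrinsic randomness that stochastic rounding or sampling-based algorithms need.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_switch(v_drive, v50=1.0, slope=5.0, size=1):
    """Toy model of a memristive device driven near threshold: switching
    probability rises sigmoidally with drive voltage (illustrative model
    and parameters, not measured device behavior)."""
    p = 1.0 / (1.0 + np.exp(-slope * (v_drive - v50)))
    return rng.random(size) < p      # True = the device switched

# Drive 10,000 devices at 0.8 V: only a fraction switch, usable as an
# intrinsic randomness source for SGD or probabilistic computation.
flips = stochastic_switch(0.8, size=10_000)
print(f"observed switching probability at 0.8 V: ~{flips.mean():.2f}")
```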
So if we use analog circuits for our neurons and synapses in CMOS, we can best benefit from future emerging memory technologies while reducing power consumption. At the last ISCAS conference, which was just a few weeks ago, we showed with the PCM-trace experiments that we can exploit the conductance drift of PCM devices, shown here in a picture from IBM, to implement eligibility traces, which are a very useful feature for reinforcement learning.
So if we are interested in building reinforcement learning algorithms, for example for behaving robots that run with brains implemented on these chips, we can take advantage of properties of these PCM devices that are typically regarded as non-idealities; we can turn them to our advantage for computation.
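For readers unfamiliar with eligibility traces, here is a purely conceptual sketch of the quantity involved. This is the standard decaying trace from reinforcement learning, not the published PCM-trace method itself; the talk's point is that PCM conductance drift can play the role of the trace's decay in hardware. All parameters are assumptions.

```python
import numpy as np

# Conceptual decaying eligibility trace: the decay is what the talk says
# PCM conductance drift can implement physically. Parameters are assumed.
gamma, lam = 0.99, 0.9      # discount and trace-decay factors
alpha = 0.1                 # learning rate
w = np.zeros(4)             # synaptic weights
e = np.zeros(4)             # eligibility traces (one per synapse)

def step(x, delta):
    """One update: x is the presynaptic activity vector, delta is the
    reward-prediction error for this time step."""
    global e, w
    e = gamma * lam * e + x     # trace decays over time (as drift would)
    w += alpha * delta * e      # credit flows to recently active synapses

step(np.array([1., 0., 0., 1.]), delta=0.0)   # activity, no reward yet
step(np.array([0., 0., 0., 0.]), delta=1.0)   # reward arrives later
print(w)   # synapses 0 and 3 still get credit, via their decayed traces
```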
Now, analog circuits are noisy, as I told you; they are variable and inhomogeneous. For example, if you take one of our chips, stimulate all 256 neurons with the same current, and look at how long it takes each neuron to fire, you find that not only are these neurons slow, they are also variable: the time at which they fire can change greatly depending on which circuit you are using. Typically the standard deviation is about 20 percent of the mean, so the coefficient of variation is about 20 percent.
So the question is: how can you do robust computation on this noisy computational substrate? The obvious answer, the easiest thing people do when they have noise, is to average. We can average over space and over time. If we use populations of neurons rather than single neurons, we can take two, three, four, six, eight neurons and look at the average time it took them to spike or, if they are spiking periodically, at the average firing rate. And if we integrate over long periods, we average over time. These two strategies are useful for reducing the effect of device mismatch whenever we do need precise computation.
We are doing experiments on exactly this in these very days. In a very recent experiment we took these neurons and put two of them together, four of them together, eight, sixteen; the cluster size is the number of neurons used for the average over space. We then compute the firing rate over two milliseconds, five milliseconds, 50 milliseconds, 100, and so on, and we calculate the coefficient of variation, which measures how much device mismatch there is: the larger the coefficient of variation, the more noise; the smaller, the less. By integrating over long periods of time or over large numbers of neurons, we can go from a very large coefficient of variation, around 18 to 20 percent as I said, all the way down to 0.9 percent.

You can then take this coefficient of variation and calculate the equivalent number of bits: if we were using digital circuits, how many bits would this correspond to? Just by integrating over more neurons and longer periods, we find, for example, a sweet spot where we get eight-bit resolution using just 16 neurons and integrating over 50 milliseconds. This can be changed at runtime: if we want a very fast reaction time and only a coarse idea of the result, we can use just two neurons and integrate over only two milliseconds.
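Here is a minimal Monte Carlo sketch of this space/time averaging idea, under assumed statistics: each neuron's rate is drawn with ~20% device mismatch, spike counts in a window add Poisson noise, and the coefficient of variation of the averaged estimate shrinks as the cluster size and window grow. The equivalent bit resolution is computed as log2(1/CV), one common convention (an assumption here; the talk does not give the formula), and the absolute numbers will not reproduce the chip's measured figures, only the trend.

```python
import numpy as np

rng = np.random.default_rng(2)
mean_rate = 100.0     # Hz, nominal firing rate (assumption)
mismatch_cv = 0.20    # ~20% fixed-pattern device mismatch, as on the chip

def estimate_cv(n_neurons, window_s, trials=20_000):
    """CV of the population- and time-averaged firing-rate estimate."""
    # Fixed-pattern mismatch: each neuron has its own rate.
    rates = mean_rate * (1 + mismatch_cv * rng.standard_normal((trials, n_neurons)))
    rates = np.clip(rates, 0, None)
    # Temporal noise: Poisson spike counts within the window.
    counts = rng.poisson(rates * window_s)
    est = counts.sum(axis=1) / (n_neurons * window_s)   # averaged rate estimate
    return est.std() / est.mean()

for n, T in [(1, 0.002), (2, 0.002), (16, 0.050), (64, 0.100)]:
    cv = estimate_cv(n, T)
    print(f"{n:3d} neurons, {T * 1e3:4.0f} ms window: CV = {cv:6.1%}, "
          f"~{np.log2(1 / cv):.1f} equivalent bits")
```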
There are many false myths about spiking neural networks. People tell us: if you have to wait until you integrate enough, it is going to be slow; if you have to average over populations, it is going to cost area. These are false myths that can be debunked by looking at neuroscience. Neuroscience has been studying how the brain works: the brain is extremely fast and extremely low power, and it does not wait long periods of time to make a decision. If you use populations of neurons to average out the noise, it has been shown experimentally that populations can have reaction times even 50 times faster than single neurons. So by using populations we can really speed up the computation.
If we use populations of neurons, we do not need every neuron to be highly accurate: they can be noisy and very low precision, but by using populations and sparse coding we can obtain very accurate representations of the data. There is a lot of work, for example by Sophie Denève, showing how to do this by training populations of neurons.
It should also be known in technology that variability actually helps in transferring information across multiple layers. What I am showing here is data from one of our chips where we use 16 of these neurons per core: we provide a desired target as the input, drive a motor with a PID controller, and minimize the error. It is just to show you that, by using spikes, we can achieve very fast reaction times on robotic platforms with the types of chips you saw in the previous slides.
In fact, we have been building many chips for many years, and our colleagues at SynSense are building chips as well. The latest one we built at the university, as an academic prototype, is called the Dynamic Neuromorphic Asynchronous Processor (DYNAP). It was fabricated in a very old technology, 180 nanometers, as I said, because it is an academic exercise, but it still has a thousand neurons, in four cores of 256 neurons each, and we can do very interesting edge-computing applications with just a few hundred neurons. The idea, then, is to use the best of both worlds: analog circuits where we really need low power, digital circuits to verify principles and solve practical problems quickly, and combinations of the two to get the best of both.
To conclude, I would like to show you examples of applications we have built using this approach. This is a long list of applications, and if you have the slides you can click on the references and get the papers. The ones highlighted in red were done by SynSense, by our colleagues there, on ECG anomaly detection; the detection of vibration anomalies was done by SynSense and by the University of Zurich in parallel, independently. Let me go to the last slide, where I basically tell you that we are now at a point where we can use all of the knowledge from the university about brain-inspired strategies to develop this field of neuromorphic intelligence, transfer the know-how into technology, and, through new startups, use it to solve practical problems and find the best market for it.
As I said, industrial monitoring, for example of vibrations, is one such application. Combining sensors and processors is really what the SynSense company has been developing, and it has been very successful at intelligent machine vision, putting both the sensing and the artificial intelligence algorithm on the same chip, with very low power dissipation, on the order of tens or hundreds of microwatts, for solving practical problems that can be useful in society, including consumer applications. With this, sorry, I went a bit over time, but I would just like to thank you for your attention.