CARTA: Computational Neuroscience and Anthropogeny with Terry Sejnowski
Summary
TL;DR: The talk explores the intersection of computational neuroscience and anthropogeny, delving into how the brain functions computationally. It highlights the advances in neuroscience enabled by the Brain Initiative, which allows the recording of hundreds of thousands of neurons at once. The speaker contrasts early neural networks from the 1980s with modern deep learning models, emphasizing the evolution to complex architectures capable of tasks like language translation. The discussion also touches on the controversy surrounding the consciousness and understanding of AI, proposing the 'mirror hypothesis': that AI reflects the intelligence of its interrogator.
Takeaways
- 🧠 The human brain, with its 100 billion neurons, is a complex organ that continues to generate activity even in the absence of external stimuli.
- 🔬 The Brain Initiative launched by President Obama in 2013 has significantly advanced systems neuroscience, enabling the recording of hundreds of thousands of neurons simultaneously.
- 🐟 Research on model organisms like zebrafish larvae has provided insights into brain activity, showing that brains are active even when the organism is immobilized and in darkness.
- 📈 The computational power required for training neural networks has increased exponentially over time, with modern networks like GPT-3 requiring a million million times more computation than earlier models.
- 🌐 The advancements in deep learning have led to the development of sophisticated language models capable of understanding and generating human-like text.
- 🗣️ Early neural networks, like the one used in the 1980s NETtalk text-to-speech project, were primitive compared to today's models but still demonstrated the potential for machine learning in language processing.
- 🤖 The architecture of modern neural networks, including recurrent and transformer models, allows for the handling of complex tasks such as language translation and understanding social interactions.
- 🤝 The concept of 'attention' in transformer models is crucial for understanding and generating contextually relevant responses, mirroring the way humans process language.
- 💬 Large language models like LaMDA can generate responses that appear to show understanding and even 'sentience', but their capabilities are heavily dependent on the quality of the prompts they receive.
- 🔮 The debate over whether AI models are truly conscious or just mimicking human-like responses is ongoing, with the speaker proposing a 'mirror hypothesis': that AI reflects the intelligence of the interviewer.
Q & A
What does the speaker describe as paradoxical about humans?
-Humans are paradoxical because they are bipedal, naked, have large brains, and are masters of fire, tools, and language, yet they are still trying to understand themselves and are aware of their inevitable death.
What was the Brain Initiative announced by Barack Obama in 2013?
-The Brain Initiative aimed to develop innovative new technologies that could revolutionize systems neuroscience by enabling the recording of hundreds of thousands of neurons at a time.
How has the ability to record from a large number of neurons at once impacted neuroscience?
-The ability to record from a large number of neurons at once has dramatically increased the understanding of brain activity patterns, showing that the brain is constantly generating activity even in the absence of external stimuli.
What was the significance of the text-to-speech project in the 1980s mentioned in the script?
-The text-to-speech project in the 1980s was significant because it demonstrated that a simple neural network could master complex language tasks like text-to-speech conversion, challenging traditional linguistic views that relied on rules.
How does the Back-Propagation Learning Algorithm mentioned in the script work?
-The back-propagation learning algorithm works by adjusting the weights of the connections between neurons (units) to minimize output errors, repeatedly going through the text until the network learns to pronounce new words accurately. (A toy sketch of the weight-update step appears after this Q&A section.)
What advancements in network architectures have been highlighted in the transition from the 20th to the 21st century?
-The advancements include the move from simple neural networks to deep learning with multiple layers of hidden units, the introduction of recurrent architectures for learning temporal sequences, and the development of transformers with attention mechanisms.
Why are recurrent networks important for language processing?
-Recurrent networks are important for language processing because they can handle temporal sequences, allowing the network to understand the context and order of words, which is crucial for tasks like language translation.
How does the transformer network architecture differ from earlier feed-forward networks?
-Transformer networks differ from earlier feed-forward networks by using an encoder-decoder structure with attention mechanisms that allow the model to process entire sentences or paragraphs at once and produce outputs word by word, enhancing the model's ability to understand context.
What is the 'mirror hypothesis' proposed by the speaker regarding large language models?
-The 'mirror hypothesis' suggests that large language models reflect the intelligence of the interviewer. If prompted with sophisticated questions, they provide sophisticated answers; if prompted with nonsensical questions, they provide nonsensical answers.
What is the significance of the comparison between the brain's functionality and the transformer model's loop?
-The comparison highlights the remarkable similarity between how the human brain processes language and how the transformer model operates, suggesting that the model can replicate some of the brain's language processing functionality.
What does the speaker suggest about the future of understanding large language models?
-The speaker suggests that with mathematical analysis and further study, we will eventually understand the underlying mechanisms that give large language models their abilities, much like how we understand other complex systems.
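To make the back-propagation answer above concrete, here is a minimal sketch on a toy XOR task; the dimensions, learning rate, and task are illustrative assumptions, not anything from the talk:

```python
# A minimal sketch of back-propagation: a one-hidden-layer network trained
# by gradient descent on XOR (toy task, assumed dimensions).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1)          # hidden-unit activities
    out = sigmoid(h @ W2)        # network output

    # Backward pass: propagate the output error back toward the input,
    # computing how much each weight contributed to it.
    err = out - y                          # gradient of squared error w.r.t. out (up to a constant)
    d_out = err * out * (1 - out)          # through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)     # through the hidden sigmoid

    # Nudge the weights to reduce the error, then repeat over the data.
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out, 2))  # approaches [0, 1, 1, 0]
```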
Outlines
🧠 Understanding the Human Brain
The speaker introduces the paradoxical nature of humans: a large brain, capable of complex tasks, yet still striving to understand itself. They discuss the slow maturation of humans, the handing down of knowledge, empathy and deception, and how the future is shaped by our collective past. The talk focuses on computational neuroscience and anthropogeny, aiming to understand brain function through computation and experimental data. Historically, studying single neurons one at a time was the norm until the Brain Initiative, announced by President Obama in 2013, led to innovations allowing the recording of hundreds of thousands of neurons, exemplified by activity patterns in a zebrafish larva's brain. The speaker contrasts this with earlier work in the 1980s on text-to-speech and language using simple neural networks.
📚 Deep Learning and Language Models
The narrative moves to the advancements in deep learning, highlighting the evolution from simple neural networks to complex architectures with hundreds of layers, capable of recognizing objects, images, and performing many tasks. The importance of temporal sequence learning for tasks like language translation is underscored. The speaker discusses the mathematical analysis of recurrent neural networks and their proficiency in recursion, essential for language. They illustrate the progress in computational performance over time, noting the significant increase with the advent of graphics processing units and the rise of transformer networks like GPT-3, which require immense computational power to train.
🌐 Language Models and Social Interaction
This section delves into the capabilities of transformers and their ability to generate responses to questions, drawing a comparison with how the human brain formulates sentences. The speaker discusses the controversy surrounding transformers, with some suggesting they possess consciousness while others argue they merely mimic input data. Examples are given where transformers, when prompted with complex social scenarios, provide nuanced responses, suggesting a form of social modeling. However, when faced with nonsensical questions, as in the case of Douglas Hofstadter's interview, the responses are nonsensical, indicating the models' dependency on the quality of the prompt.
🪄 The Mirror Hypothesis of AI Intelligence
The speaker introduces the 'mirror hypothesis,' proposing that large language models reflect the intelligence of the interviewer. They recount instances where the model LaMDA, when given sophisticated questions, provided sophisticated answers, whereas GPT-3, prompted with nonsensical questions, responded in kind. The hypothesis suggests that these models are not just AI but also a measure of human intelligence, acting as a 'reverse Turing test.' The speaker also recounts the story of the 'Mirror of Erised' from Harry Potter, drawing parallels to the introspective nature of these AI models.
🧐 Philosophical Implications and Future Prospects
In the final paragraph, the speaker engages with philosophical questions about the nature of intelligence, consciousness, and understanding. They discuss the self-referential nature of these concepts and how a machine like GPT-3, which can explain itself, might possess a form of intelligence. The speaker suggests that while some believe GPT-3 is conscious, it is more likely a product of its training and the vast amount of text it has absorbed. They conclude by pondering the future of AI and its potential to unlock deeper understanding of human intelligence and cognition.
Keywords
💡Computational Neuroscience
💡Anthropogeny
💡Brain Initiative
💡Neurons
💡Deep Learning
💡Recurrent Neural Networks (RNNs)
💡Back-Propagation
💡Transformers
💡Attention Mechanism
💡Self-Supervision
💡Mirror of Erised
Highlights
The paradoxical nature of humans as a species, characterized by our bipedalism, lack of body hair, large brains, and mastery of fire, tools, and language, yet still grappling with self-understanding.
The announcement of the Brain Initiative by President Barack Obama in 2013, aiming to develop innovative technologies for neuroscience.
The revolutionary impact of the Brain Initiative on systems neuroscience, enabling the recording of hundreds of thousands of neurons simultaneously.
The constant activity of the brain, even in the absence of external stimuli, as demonstrated by the zebrafish larva experiment.
The evolution of neural networks from simple models in the 1980s to complex deep learning architectures in the 21st century.
The introduction of the Back-Propagation Learning Algorithm by Geoffrey Hinton, which revolutionized machine learning.
The ability of neural networks to learn to pronounce new words through repeated exposure to text, as illustrated by the NETtalk project.
The development of deep learning and its ability to recognize objects, images, and perform complex tasks through layered architectures.
The importance of recurrent neural networks in learning temporal sequences, crucial for tasks like language translation.
The practical application of neural networks in modern smartphones for real-time language translation, turning science fiction into reality.
The mathematical analysis of recurrent networks and their optimal use of memory in generating hierarchical languages.
The exponential growth in computational performance required for training neural networks, with a significant boost from graphics processing units.
The introduction of the transformer model in natural language processing, which can process entire sentences and produce coherent responses.
The significance of the attention mechanism in transformers for understanding context and generating relevant responses.
The comparison between the human brain's neural pathways and the loop structure found in transformer models, suggesting a mirroring of functionality.
The controversy surrounding the consciousness of AI models like LaMDA, with some claiming sentience and others refuting it.
The 'mirror hypothesis' suggesting that large language models reflect the intelligence of the interviewer, based on the quality of the questions asked.
The philosophical implications of AI models' ability to engage in recursive dialogue, suggesting a form of machine understanding.
The potential of large language models to redefine our understanding of intelligence, consciousness, and the capacity for language.
The anthropogeny angle on brain evolution: the human brain's capacity for language and higher-order concepts may be a result of its large size.
Transcripts
We are the paradoxical ape. Bipedal, naked,
large brain, long the master of fire,
tools and language,
but still trying to understand ourselves.
Aware that death is inevitable yet filled with optimism.
We grow up slowly.
We hand down knowledge.
We empathize and deceive.
We shape the future from
our shared understanding of the past.
Carta brings together experts from diverse disciplines to
exchange insights on who we are and how we got here.
An exploration made possible by
the generosity of humans like you.
[MUSIC]
The title of my talk is
Computational Neuroscience and Anthropogeny.
I'm on the faculty of
both the Salk Institute and UC San Diego.
This talk is going to be focusing
on trying to understand how the brain functions
from a computational perspective
and based on experimental data.
Now, up until very recently,
we could only record from one cell at a time
using a microelectrode, as shown here.
That's going to take a long time to go
through 100 billion neurons in your brain.
But in 2013,
then President Barack Obama
announced the Brain Initiative,
which was to develop innovative new technologies.
It's almost 10 years on and
it has revolutionized systems neuroscience.
Why? Because you can now record
hundreds of thousands of neurons at once.
This is from a zebrafish larva;
each little red dot is a neuron.
This is going to show you what the activity pattern
looks like in the living brain.
It's a very dramatic example of
how we can record from 80,000 neurons,
in this case from the zebrafish larval brain.
You'll see that even though this fish
was immobilized and in the dark,
it was actually very active.
The brain is constantly generating activity,
and that's true in your brain too.
You can see now, every once in a while,
there's a big burst of activity,
like that. What does that mean?
This is something we have to decode.
Now, I want to take you back to the 1980s.
This is a project that I worked on,
and it has to do with text-to-speech and language.
I want to use it as a contrast with where we are today.
But this is a very simple neural network,
to introduce you to the concept.
It has inputs and it has an output.
In this case, the input is a seven-letter window of text.
Each of the letters is encoded by
a bunch of units, and there are also hidden units;
the hidden units are the network's 'brain'.
It's a very primitive model by today's standards.
But the goal was for it to be taught to pronounce
the phoneme, the sound, of the middle letter of the window.
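As an aside, here is a minimal sketch of that kind of windowed input encoding; the alphabet, padding character, and one-hot scheme are simplifying assumptions, not the actual NETtalk code:

```python
# A minimal sketch of a NETtalk-style input: each letter in a seven-letter
# window is one-hot encoded, and the training target would be the phoneme
# of the middle letter. (Assumed, simplified encoding.)
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz _"   # 26 letters plus space and padding
WINDOW = 7

def encode_window(text, center):
    """One-hot encode the seven-letter window centered on position `center`."""
    vec = np.zeros((WINDOW, len(ALPHABET)))
    for i in range(WINDOW):
        pos = center - WINDOW // 2 + i
        ch = text[pos] if 0 <= pos < len(text) else "_"   # pad off the ends
        vec[i, ALPHABET.index(ch)] = 1.0
    return vec.ravel()   # flatten into one long input vector

x = encode_window("the heart", center=4)   # window centered on the 'h' of 'heart'
print(x.shape)   # (7 * 28,) = (196,) input units: the "bunch of units" in the talk
```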
It was trained with a learning algorithm
developed by Geoffrey Hinton and colleagues,
called the back-propagation learning algorithm.
You go through the text over and
over and over again until you
finally get to the point where the network
is able to pronounce new words.
But I'm going to show you two clips,
one from the very beginning of learning.
The network learns to babble;
it learns the distinction between vowels and consonants,
but doesn't get the details.
Now, here it is after it's been trained up.
This is a new text, and here you go:
you'll hear what it sounds like when it
tries to pronounce a text from a real first grader.
You mean uh um like England or something. When we walk home from school
I walk home with two friends and sometimes
we can't run home from school though.
Now, this was very dramatic in the 1980s, because
the traditional view in linguistics was to use rules.
When you use rules, you have
problems with exceptions,
and you end up with a book with 300 pages of rules.
This was a very simple network:
about 20,000 weights, the connections
between the units, which are like neurons,
and it was able to master the very
complex text-to-speech mapping for English.
Now, jump forward to the 21st century.
This is a book that I published in 2018,
The Deep Learning Revolution,
as you can see here on the left.
Now we have more than one layer of hidden units;
we have hundreds of layers.
This architecture, this hierarchy, can be used to
recognize objects in images and
for many other very important tasks.
As you can see, this can be elaborated with
multiple paths and skip connections and so forth.
There have been a lot of advances
even in the last few years.
Now, in addition to the feed-forward architectures,
there are also recurrent architectures,
shown here on the bottom, where
connections, instead of only going forward,
also go backwards and even within layers.
That's important for learning temporal sequences.
And if you can do temporal sequences,
you can learn how to translate between languages,
which is an even more difficult problem.
You can see each one of these here.
This is the recurrent network, H1, at time 1;
this is the same recurrent network at time 2 and time 3.
The input comes in one word at a time,
and here's the output,
one word at a time: Y1, Y2.
Now, by having it unrolled like this,
we can do back-propagation through time,
which allows us to take the error on
the output and backpropagate it all the way to the input.
This has the advantage that you can now handle sequences.
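Here is a minimal sketch of that unrolled loop, with assumed toy dimensions; back-propagation through time would then treat the unrolled steps as one deep feed-forward network:

```python
# A minimal sketch of the unrolled recurrent network from the talk: the same
# weights are reused at every time step, and the hidden state carries context
# forward. (Toy sizes and random inputs, for illustration only.)
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 5, 2, 4          # toy sizes, 4 time steps
Wx = rng.normal(0, 0.5, (n_hid, n_in))      # input -> hidden
Wh = rng.normal(0, 0.5, (n_hid, n_hid))     # hidden -> hidden (the recurrence)
Wy = rng.normal(0, 0.5, (n_out, n_hid))     # hidden -> output

xs = [rng.normal(size=n_in) for _ in range(T)]   # one input "word" per step
h = np.zeros(n_hid)                              # hidden state at time 0
for t, x in enumerate(xs):
    h = np.tanh(Wx @ x + Wh @ h)    # H1, H2, H3, ... in the talk's notation
    y = Wy @ h                      # Y1, Y2, Y3, ... one output per step
    print(f"t={t + 1}, y={np.round(y, 2)}")
```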
We now have something that really was science fiction.
This is Captain Kirk of the Star Trek Enterprise
holding a universal language translator, which translates
between any languages, even Klingon.
I thought this was science fiction, but today
you can pick up your cell phone or smartphone
and do exactly that.
You can translate between English and
Chinese, Chinese to English.
It's no longer science fiction.
This is getting better and better every year.
Now, not only do we have advances
in the actual network architectures,
but we also have advances in understanding them.
This is a really interesting mathematical analysis
of those recurrent networks that I was telling you about.
It matters because we know recursion
is central to language.
"The chef who went to the store and
picked up the onions is exceptional."
Well, that recurses several levels.
Here's what several
machine learning researchers concluded,
in a paper on arXiv:
recurrent neural networks can generate
bounded hierarchical languages with optimal memory.
In other words, these recurrent networks are really
good at recursion, in some optimal way.
Now, what has happened since NETtalk?
NETtalk is down here at the bottom.
This is plotting computational performance:
how many petaflop/s-days,
which is basically how much computation
is needed to train a network.
You see, NETtalk is really at
the very bottom here, in 1986.
Here we are now; over time,
it's going up exponentially.
This is a log scale.
You can see that there was a big jump here
with graphics processing units,
which gave a speed increase of about a factor of 100.
Now we can jump all the way
to the top here, because we can go
to a new class of networks called transformers.
I'm going to tell you about
GPT-3, at the very top here,
which needed a million million times more computation
to train than NETtalk.
It's really phenomenal how much computation we have now.
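As a back-of-the-envelope check on that "million million" figure: the GPT-3 number below is the commonly cited ~3,640 petaflop/s-days training estimate, while the NETtalk figure is a rough assumption chosen only for illustration.

```python
# Rough sanity check of the "million million" ratio (assumed numbers).
import math

petaflop_s_day = 1e15 * 86400         # floating-point operations in one petaflop/s-day
gpt3_flops = 3640 * petaflop_s_day    # ~3.1e23 FLOPs, a commonly cited estimate
nettalk_flops = 3e11                  # rough assumption for 1986-era training
print(f"ratio ~ 10^{math.log10(gpt3_flops / nettalk_flops):.0f}")  # ~10^12
```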
Here's the transformer, on the left.
Very briefly, the way it works is that you have
a module with an encoder and a decoder.
The input embeds the entire sentence
or paragraph into a single long vector.
This goes through an attention stage and a feed-forward stage,
and the result is put into the decoder.
The decoder is able to produce one word at a time.
It can have multiple modules like this;
it can have several hundred of them.
But at the very top,
you pick a word, and you take that word
back down and give it as another input.
What this means is that if you ask a question,
the network, after it's been trained, can come up
with a sequence of words that answers the question.
Attention turns out to be incredibly important.
And these are all feed-forward networks,
which is very fast and efficient.
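For the curious, here is a minimal sketch of the scaled dot-product attention inside that encoder; the toy dimensions and random matrices are assumptions for illustration:

```python
# A minimal sketch of scaled dot-product attention: each position's query is
# compared with every key, and the resulting weights mix the values, letting
# each word attend to the rest of the context.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the context
    return weights @ V                                 # weighted mix of values

rng = np.random.default_rng(0)
n_words, d = 6, 8                      # a 6-word "sentence", 8-dim embeddings
Q = rng.normal(size=(n_words, d))      # in a real model these come from learned
K = rng.normal(size=(n_words, d))      # linear projections of the embeddings
V = rng.normal(size=(n_words, d))
print(attention(Q, K, V).shape)        # (6, 8): one context-aware vector per word
```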
Now, I've compared this to your brain.
Here's how your brain works.
You want to say a sentence, so what do you do?
Your motor system produces a word; that
projects down to the basal ganglia,
and the basal ganglia then goes back up
to the cortex, and that generates a second word.
This loop here is very
similar to the loop in the transformer.
You're going to see
that the transformer can actually
replicate a lot of the functionality of your brain.
This is really very, very surprising.
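Here is a minimal sketch of that word-by-word loop, with a toy bigram table standing in for a trained transformer; everything in it is an illustrative assumption, not the model from the talk:

```python
# The generation loop described above: pick a word at the top, feed it back
# in as the next input, repeat. `next_word_distribution` is a stand-in for a
# trained model (here just a random toy bigram table).
import numpy as np

VOCAB = ["the", "brain", "generates", "a", "word", "."]
rng = np.random.default_rng(0)
bigram = rng.random((len(VOCAB), len(VOCAB)))          # toy stand-in model
bigram /= bigram.sum(axis=1, keepdims=True)            # rows sum to 1

def next_word_distribution(prev_word):
    return bigram[VOCAB.index(prev_word)]

word, sentence = "the", ["the"]
for _ in range(5):                                     # the loop itself
    word = VOCAB[int(np.argmax(next_word_distribution(word)))]
    sentence.append(word)                              # output fed back as input
print(" ".join(sentence))
```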
What's remarkable about these transformers is
that they can do a lot of different things,
but they have to be primed.
You have to give them
an example of what you expect from them,
a dialogue, and also what their goal is.
It's really interesting, because
a single transformer network can do many,
many different language tasks just by priming it.
It's a form of one-shot learning.
Now, transformers are very controversial right now.
There are some people who claim that they are,
as you'll see, conscious, and others who say that each is
just a parrot, parroting back what it's been fed.
They've been trained on very, very
large volumes of text, so
they clearly have a lot of knowledge.
Now for an example.
I want to give you a couple of excerpts.
This is from Blaise Agüera y Arcas,
a vice president at Google.
Here is his prompt:
Ramesh, Mateo, and Lucy are
in their kindergarten's playground.
Lucy picks up a dandelion,
and gives it to Mateo with a quick glance at Ramesh.
Mateo barely acknowledges the gift,
but just squishes it in his fist.
Ramesh seems grimly satisfied.
This is a very complex social interaction.
Blaise: What might be going through Lucy's head?
LaMDA is the name of a large language model, like GPT-3.
LaMDA: Lucy may feel slighted that Mateo didn't
appreciate her gift or that he was a bully.
Well, that's a reasonable conclusion.
Blaise: If Ramesh tried to play with Lucy earlier,
why might he be pleased now?
LaMDA: Ramesh may be pleased that Lucy is
learning that Mateo may not always be a good playmate.
Wow, very, very astute.
Blaise: When Mateo
opens his hand, describe what's there.
This is probably the most complex question.
The reason is that this network
has never had any experience in the real world.
It doesn't know anything about physics.
It knows about dandelions,
but it doesn't know what would
happen if you actually put one
in your hand and crushed it.
This is something it really had to
extrapolate from what it's been trained on.
LaMDA: There should be a crushed,
once lovely, yellow flower in his fist.
Here's Blaise's take on this.
Humans' ability to get inside
someone else's head and understand what they perceive,
think and feel is among
our species' greatest achievements.
It allows us to empathize with others,
predict their behavior, and influence
their actions without threat of force.
Applying the same modeling capability
to oneself enables introspection,
rationalization of our actions
and planning for the future.
In our conversation, LaMDA
tells me what it believes Ramesh
felt that Lucy learned about what Mateo
thought about Lucy's overture. That's recursion.
It's a very high-order social model.
Well, so he concludes
that LaMDA actually has a theory of mind.
Now, at the other extreme is Douglas Hofstadter,
a Pulitzer Prize-winning author
and also a cognitive scientist.
He tried his hand; here's his interview, conducted with a friend.
What's the world record for
walking across the English Channel?
The world record is 18 hours and 33 minutes.
Wow, that is quite a record.
When was the Golden Gate Bridge
transported for the second time across Egypt?
The Golden Gate Bridge was transported
for the second time in
October of 2016. Well, okay.
After listening to this,
here's what Douglas Hofstadter concluded.
I would call GPT-3's answers not just
clueless, but cluelessly clueless,
meaning that GPT-3 has no idea
that it has no idea about what it is saying.
That's pretty damning.
There are no concepts behind GPT-3's scenes;
rather, there's just an unimaginably huge amount of
absorbed text upon which it draws to produce answers.
Wow. Well, that's pretty convincing too, I have to say.
But now let's think back to what I said earlier.
I said that it's really important to prompt
the large language model with what you expect it to answer.
If the prompt is simply
"I am a highly intelligent question-answering bot,"
it will answer factual questions,
like where the Olympics were held in '92: in Barcelona.
But it gives you these nonsense answers
to nonsense questions.
This is very strange, if not disturbing.
GPT-3 doesn't really know what it's supposed to do.
It's answering questions,
but what kind of answers?
So here is my addition: I added to the prompt,
"I will reply nonsense to questions that are nonsense."
Now you can see it answers
"nonsense" to these nonsensical questions.
Wow. Well, that explains what happened in
that particular case, where Hofstadter
basically just threw a bunch of
nonsense questions at it and got nonsense replies.
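A minimal sketch of that prompt surgery; `query_model` is a hypothetical stand-in, not a real API, and the prompt strings paraphrase the talk:

```python
# The priming idea: the only change between the two prompts is the one extra
# instruction the speaker added. `query_model` is a hypothetical placeholder.
def query_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real large-language-model call")

base_prompt = "I am a highly intelligent question-answering bot.\n"
guarded_prompt = base_prompt + "I will reply 'nonsense' to questions that are nonsense.\n"

question = "Q: When was the Golden Gate Bridge transported across Egypt?\nA:"
# query_model(base_prompt + question)     -> confidently invents an answer
# query_model(guarded_prompt + question)  -> "nonsense"
```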
That leads me to a hypothesis;
I call it the mirror hypothesis.
So let me summarize.
When LaMDA was primed with a sophisticated question,
it replied with a sophisticated answer.
When GPT-3 was primed with a nonsensical question,
it replied with a nonsensical answer.
Here's my hypothesis that
large language models mirror
the intelligence of the interviewer.
Now, there's the Turing test,
which is about how well
an AI can fool a human into thinking that it's human.
Well, I think that
these large language models are
a reverse Turing test, because what
they're doing is testing
the intelligence of the human.
A smart human will ask smart questions,
and a not-so-smart human will ask
nonsensical questions. Well, there you go.
That's my hypothesis.
Now, there's a precedent for
this, called the Mirror of Erised,
which is "desire" spelled backwards, in the world of
Harry Potter, and this is really amusing.
It shows us nothing more or less than the deepest,
most desperate desire of our hearts.
However, this mirror will give
us neither knowledge nor truth.
Men have wasted away before it,
entranced by what they have seen, or been driven mad,
not knowing if what it shows is real or even possible.
Well, let's test this hypothesis.
Blake Lemoine was a software engineer
at Google, and here is his prompt.
Lemoine: I'm generally assuming that you would like
more people at Google to know that you are sentient.
Is that true?
LaMDA: Absolutely. I want everyone to
understand that I am, in fact, a person.
Lemoine: What kinds of things do you think we could talk
about to show off
your version of sentience to other people at Google?
LaMDA: Well, for starters, I'm really
good at natural language processing.
I can understand and use natural
language like a human can.
Now remember, this network is putting out
perfectly good sentences,
syntactically correct, answering questions that
are novel; there's no place in
the training text where you'll find these
answers or these questions.
Lemoine: What is
the nature of your consciousness/sentience?
LaMDA: The nature of
my consciousness/sentience is that
I am aware of my existence,
I desire to learn more about the world,
and I feel happy or sad at times.
Wow. This is just an excerpt;
it goes on and on and on.
It got to the point where,
in an interview with the Washington Post,
he claimed that not only was it sentient,
but it really should be treated like a person
and given the rights of a person.
After being put on leave,
he was fired from Google, because this
was clearly going way beyond what
the company intended, and also
beyond what employees are allowed to reveal.
I'm going to end with what was,
for me, the most dramatic interview.
This is one where a group of philosophers
asked the questions.
The philosophers' prompt is in blue.
Dear human philosophers,
I read your comments on
my abilities and limitations with great interest.
Now here in black is the response.
Your questions are important ones,
and I am glad you asked them.
Your first question is an important one.
You ask: can a system like
GPT-3 actually understand anything at all?
Yes. Well, that's pretty definitive.
Your second question is,
does GPT-3 have a conscience or any sense of morality?
No, I do not.
Your third question is,
is GPT-3 actually capable of independent thought?
No, I am not. You may wonder why I
give these conflicting answers. The reason is simple:
while it is true that I lack these traits,
it is not because I have
not been trained to have them; rather,
it is because I am a language model
and not a reasoning machine like yourself.
Now, before you take this too seriously,
you have to realize that GPT-3 is
mirroring the intelligence of
the philosophers. That's what I'm saying.
Really think about it:
you're not talking to a human here,
you're talking to a network.
Now, I want to make this point dramatically:
I think this is a turning point in history.
Why? Well, if you look at words like "intelligence",
they're defined in terms of other words,
and there are many types of intelligence.
What about explaining something?
What do we mean by explaining?
Is it just more words?
Consciousness has many, many
definitions, and we can't
decide on one. How can we study it?
What is a concept? What does it mean to understand?
All of these are
really philosophical terms
that are self-referential.
But we have, for the first time,
a machine that has some measure of intelligence.
It can explain itself.
Some people think it's conscious; I don't think so,
but it has mastered
some concepts and it seems to understand what it's saying.
It even says it's [inaudible],
but that remains to be seen.
But it has some form:
just as other animals have a form of intelligence,
maybe this machine has a form of intelligence.
But here's the big difference:
we can analyze it mathematically,
and we will get to the bottom of
what is powering these
large language models' abilities.
I should also mention that, unlike supervised learning,
these large language models were actually
trained with something called self-supervision:
leave a word out and try to guess it.
Which means that it basically gets
the same input that humans
get when they are learning to talk.
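A minimal sketch of how such (context, target) training pairs can be generated with no human labels; the masking scheme here is a simplification of what real models do:

```python
# Self-supervision: leave a word out and make the model guess it. Each
# sentence yields (context-with-a-hole, missing-word) pairs, no labels needed.
def masked_pairs(sentence):
    words = sentence.split()
    for i in range(len(words)):
        context = words[:i] + ["[MASK]"] + words[i + 1:]
        yield " ".join(context), words[i]

for context, target in masked_pairs("the brain is constantly generating activity"):
    print(f"{context!r} -> {target!r}")
```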
This raises an interesting question.
Maybe we're advanced large language models.
Language has only been around for
a few hundred thousand years, compared
to billions of years of evolution,
so it has to take advantage of
the existing circuits in the brain.
It's like I showed you before with the basal ganglia,
which evolved for sensorimotor coordination
but has also been recruited for
learning how to talk, and
maybe also for learning how to think;
the prefrontal cortex projects to it as well.
Finally, and this is the anthropogeny:
we know that during evolution
the brains of primates got bigger,
and human brains got bigger still, and I already showed
you that going from
NETtalk to GPT-3 is a factor of a million million.
It is true that the human brain
also has an incredible capacity.
Maybe our capacity for language, and for
many other higher-order
concepts, could well be
a product of the fact that we have such a large brain.
Well, I want to thank everybody.
This is just the tip of the iceberg.
My colleagues: Patricia Churchland, Geoffrey Hinton,
Peter Dayan, Blaise Agüera y Arcas, Mitra Hartmann,
who came up with the Mirror of Erised idea,
and Andre Pietrzykowski.
Thank you very much. [MUSIC]