What Is The Resolution Of The Eye?
Summary
TLDR: In this video, Michael from Vsauce explores how human vision compares to camera resolution. He explains that while cameras capture static images, human sight is a dynamic process involving the brain's constant processing of data from the eyes. Although our eyes can differentiate fine details, human vision isn't digital in the way pixels are. Michael also discusses the limits of human vision, using the fovea's resolution and the eye's blind spots as examples. He concludes with a philosophical reflection on how life doesn't follow the structured narrative of movies, emphasizing life's continuous, unresolved nature.
Takeaways
- 🎥 Hollywood doesn't produce the most films annually—India does, followed by Nigeria.
- 👀 The human eye's resolution isn't comparable to a camera; it uses a complex process to perceive the world.
- 🧠 Our brain processes the information from our eyes, often filling in gaps like blind spots and filtering out unimportant details.
- 🎨 The fovea, which only covers 2 degrees of your field of view, provides optimal vision, with the rest being lower in resolution.
- 📸 Roger N. Clark estimated that the human eye can resolve up to 576 megapixels, but only around 7 megapixels are needed for the fovea's sharp focus.
- 🖼️ Human vision isn't digital, and perception isn't stored as a perfect snapshot like a digital camera file.
- 📺 Technologies like Retina Displays can already fool our eyes, showing that certain pixel densities are beyond what we can differentiate.
- 🎞️ Real life isn't like a movie; conflicts don't have neatly resolved endings, and life is continuous without a narrative resolution.
- 📐 Our visual system is more about continuous perception and top-down processing than discrete pixel resolution.
- 🔄 Life moves on after events in ways that movies can't portray, showing the difference between cinematic and real-life experiences.
Q & A
What countries produce the most feature films annually?
-India produces the most feature films annually, followed by Nigeria. Hollywood in the United States, though famous, does not produce the most.
How does human eye resolution compare to camera or screen resolutions?
-The human eye doesn't function like a camera. While camera resolutions are measured in pixels, the eye processes information differently, combining signals from various parts of the field of view. An estimate suggests the human eye has a resolution of about 576 megapixels, but this analogy is crude.
What is spatial resolution, and how does it affect how we see details?
-Spatial resolution refers to the ability to distinguish between nearby pixels or fine details. The more distinct adjacent pixels are, the more details can be resolved. Factors like lighting, sensor size, and how close the subject is all affect resolution.
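To make this concrete, here is a minimal Python sketch (not from the video) showing that defocus-style blurring leaves the pixel count unchanged while destroying the contrast between adjacent pixels, which is what spatial resolution measures:

```python
# Minimal sketch: blurring keeps the pixel count but destroys spatial
# resolution -- adjacent samples become similar, so fine detail can no
# longer be distinguished. A 1D "image" with alternating fine detail:
signal = [0, 255] * 8                      # 16 pixels of fine detail

def box_blur(pixels, radius=1):
    """Average each pixel with its neighbours (edges clamped)."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

blurred = box_blur(signal, radius=2)
print(len(blurred) == len(signal))         # True: same pixel count...
print(max(signal) - min(signal))           # 255: full contrast before blur
print(max(blurred) - min(blurred))         # far less contrast -> less detail
```

Same number of pixels, much less resolvable detail: exactly the out-of-focus effect the video demonstrates.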
Why do we not see our noses or glasses in our field of vision?
-The brain filters out unchanging stimuli, such as our noses or glasses, because they are not important for processing new visual information.
What is the role of the fovea in vision?
-The fovea is a small pit in the retina that handles central vision, providing optimal color vision and 20/20 acuity. It only covers about two degrees of our field of view.
How does the brain process blind spots in our vision?
-The brain fills in details and merges information from both eyes, compensating for the blind spots caused by where the optic nerve connects to the retina.
How many megapixels would be required to display an image indistinguishable from reality?
-If we consider the entire field of view, around 576 megapixels would be needed to create an image that the average human eye can't differentiate from reality. However, for the central vision processed by the fovea, only about 7 megapixels are required.
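As a rough sketch of the arithmetic behind the 576-megapixel figure, assuming (as Clark's estimate does) a roughly 120-degree-square field of view and about 0.3 arcminutes per pixel (two pixels per 0.59-arcminute resolvable line pair):

```python
def megapixels(field_deg: float, arcmin_per_px: float) -> float:
    """Pixels needed to tile a square field of view at a given angular pitch."""
    pixels_per_side = field_deg * 60 / arcmin_per_px   # degrees -> arcminutes
    return pixels_per_side ** 2 / 1e6

# Assumed inputs: ~120-degree field, ~0.3 arcmin per pixel.
full_field = megapixels(120, 0.3)
print(f"Full field of view: ~{full_field:.0f} megapixels")   # ~576
```

Note that the separate 7-megapixel foveal figure rests on a different basis (a single fixed stare rather than a roving eye), so it is not reproduced by this simple tiling formula.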
Why doesn't human vision work like a digital camera?
-Unlike a camera that captures static frames, our eyes constantly move, and the brain merges all the visual inputs into a processed perception. The resulting image is not made of pixels but is a dynamic, top-down interpretation of our surroundings.
Can humans have photographic memory?
-No scientific evidence supports the existence of photographic memory. Human memory is not stored with the same accuracy as a digital camera file.
What is the difference between life and movies in terms of narrative resolution?
-Unlike movies that often have clear beginnings, conflicts, and resolutions, life is continuous and doesn't resolve in neat, discrete ways. Life keeps going even after key moments pass.
Outlines
🎥 Human Vision vs. Camera Resolution: A Deep Dive
Michael from Vsauce begins by discussing the filmmaking industry, comparing production in Hollywood, Nigeria, and India. He transitions to examining how the human eye's resolution compares to that of cameras and screens, referencing formats like VHS, DVD, and IMAX. He explores the idea that pixel dimensions alone don't define resolution, emphasizing other factors like light, sensor size, and proximity. Using Salvador Dali's double-image painting and the concept of spatial resolution as examples, he explains how the human eye perceives and processes visual details differently from cameras. Michael raises the question of how many pixels would be needed for a screen to match human vision without noticeable pixelation, though he notes the comparison is complicated by the brain's processing of sight, which differs from a camera's static frame capture.
👁️ The Human Eye's Pixel Power and Visual Processing
Michael continues his investigation into the resolution of human vision. He references Roger N. Clark’s estimate that the human eye can distinguish up to 576 megapixels when considering the full field of view. However, he clarifies that this number assumes perfect acuity across the entire visual field, which is unrealistic due to the fovea—a small part of the retina responsible for sharp central vision. During a single glance, the fovea processes only about 7 megapixels worth of information, with the peripheral vision needing just 1 additional megapixel. He highlights how modern technologies like Retina Displays already surpass the pixel density the human eye can distinguish at common viewing distances. Ultimately, Michael underscores that human vision is far more fluid and dynamic than digital resolution, as the brain processes visual input in a continuous, adaptive manner rather than capturing it in a pixel-based format.
Keywords
💡Resolution
💡Fovea
💡Megapixels
💡Blind spots
💡Spatial resolution
💡Neon color spreading illusion
💡Cinematic resolution
💡Photographic memory
💡Field of view
💡Top-down processing
Highlights
Michael introduces the location—The White House in Washington, D.C.
Discussion of global film industries, revealing India as the largest producer of feature films annually.
Insight into the differences between camera resolution and the resolution of the human eye.
Explanation of how pixel count alone doesn’t determine image quality—other factors like light and sensor size are crucial.
Demonstration of how resolution depends on spatial resolution—how different nearby pixels are from each other.
Introduction of the fovea, a small part of the retina responsible for high-detail vision.
Explanation of blind spots in human vision due to the optic nerve and how the brain compensates for them.
Clarification that the human eye does not function like a camera, as vision is a brain-generated, processed experience.
Roger N. Clark’s calculation that human vision can resolve up to 576 megapixels across a full field of view.
Only 7 megapixels are needed in the central two degrees of vision for optimal acuity, with about 1 megapixel for peripheral vision.
Reference to modern technologies like Apple’s Retina Displays, which already exceed human visual resolution.
Explanation of how human memory is not photographic and that no evidence exists for the existence of a truly photographic memory.
Contrast between cinematic narratives, which have clear beginnings and endings, and real-life, which is continuous and unresolved.
Comparison between the resolution of film narratives and the irresolution of real life, which lacks clear, cinematic closure.
Final philosophical reflection on how life doesn’t have a distinct end, but only a continuation—'the and.'
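The Retina Display claim can be sanity-checked with a little geometry. The sketch below computes the angle one pixel subtends at the eye; the 326 ppi density (Apple's iPhone 4 Retina Display) and 12-inch reading distance are illustrative assumptions, and the 1-arcminute threshold corresponds to standard 20/20 acuity (Clark's 0.59-arcminute figure describes excellent eyesight, which would demand a denser screen or a longer viewing distance):

```python
import math

def pixel_angle_arcmin(ppi: float, distance_in: float) -> float:
    """Angle subtended by one pixel, in arcminutes, at a given distance."""
    pitch_in = 1 / ppi                         # physical pixel pitch (inches)
    radians = math.atan2(pitch_in, distance_in)
    return math.degrees(radians) * 60          # degrees -> arcminutes

# Assumed inputs: 326 ppi display viewed from 12 inches.
angle = pixel_angle_arcmin(ppi=326, distance_in=12)
print(f"One pixel subtends ~{angle:.2f} arcmin at 12 inches")
print("Below 20/20 acuity threshold (1 arcmin):", angle < 1.0)
```

At roughly 0.88 arcminutes per pixel, such a display sits just beyond what 20/20 vision can resolve at reading distance, which is the substance of the Bad Astronomer analysis the video cites.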
Transcripts
Hey, Vsauce. Michael here.
I am at the White House, in America's capital,
Washington, D.C. America makes a lot of feature films every year -
Hollywood. But they don't make the most feature films every year. Nigeria makes more.
But the country that makes the most films every single year is
India. Every two years, the country
of India fills up enough film with unique feature films
that stretch all the way from this city, Mumbai, to where I live,
in London. That's double what Hollywood produces in two years.
That is a lot of movies, but is
real-life a movie? I've discussed the frame rate of the human eye before but how
does the resolution
of the human eye compare to a camera or screen?
VHS, LaserDisc, DVD,
Blu-ray, IMAX. Numbers like these are pixel dimensions. When multiplied
they tell us the total number of picture elements an image is made up of.
A figure often used to describe digital cameras. It might sound like
more is better, but to be sure numbers like 1920 by 1080
are not resolutions per se. More pixels is only part
of the equation. Resolution is about distinguishing
fine details and that depends on a lot of other factors.
For instance, the amount of light, the size of the sensors,
what the millions of pixels are actually encoding and
how close the subject is. I mean, up close
Salvador Dali's painting of his wife looking at the Mediterranean can be
resolved into boxes. But from afar,
well, it's Abraham Lincoln. For crying out loud, on a small enough screen
from far enough away, low and high so-called resolutions on screens aren't
even resolved differently
from one another by your eye.
How different nearby pixels are from one another also matters. This is called
spatial resolution.
For instance, if I go out-of-focus
the number of pixels in the video frame stays the same but you can't resolve as much
detail. Now, with all this in mind we can still
compare human vision to a digital image, by asking a better question.
Assuming everything else is optimal, how many pixels would you need to make an
image on a screen large enough to fill your entire field of view
look like real life, without any detectable
pixelation? Now we are getting somewhere.
Kind of. The analogy is still crude
because a camera snaps an entire frame at once, whereas
our eyes move around. The brain amalgamates
their constant stream of information into what we call vision -
sight. In fact, the image created by the eyeball alone during a single glance
would hardly even be acceptable on a broken TV screen. We think
our eyes create images like this picture Guy took of me with a camera.
But for one thing, unlike a camera, you've got some stuff
in the way. For instance, you are always
looking at your own nose, and maybe even glasses,
if you have them. Luckily, our brains process those stimuli out because they
don't matter
and they don't change. But thinking those are the only difference
is a pitfall, literally,
latinly. The fovea gets its name from the Latin for
'pitfall'. The fovea is the pit on your retina that receives light from the
central two degrees
of your field of view, about the area covered by both your thumbs
when held at arm's length away. Optimal colour vision and 20/20 acuity are
only possible within that little area. When it comes to these limitations XKCD[.com]
has a brilliant illustration.
It points out other problems, like blind spots - literal blank spaces
in our vision where the optic nerve meets up with the retina
and no visual information is received. If you bought
a camera that did this, you would return it.
You can find your own blind spot by closing
your right eye, fixating your left eye on a point in front of you,
extending your left thumb and then moving it
left-of-center slightly, slowly, carefully, until
it's not there anymore. Crazy(!) But, of course,
we don't see the world horribly, like this, because our eyes are constantly moving,
dragging foveal resolution wherever we need it.
And our brains' complex visual system fills in details,
merges images from both eyes and makes a lot of guesses. What we actually see
is a processed image. Not computer-generated imagery, but,
well, meat-generated imagery. The neon color spreading illusion
is a great way to demonstrate this difference. There is no
blue circle in this picture. The white here
is the same as the white here. A camera
isn't fooled, a screen isn't fooled, only
you and the fleeting gumbo of ingredients you call perception
is fooled. Our vision
is not analogous to a camera. But our reformulated question can still be
answered because human anatomy allows us to resolve, to differentiate certain
angular distances. Famously, Roger N. Clark
used a figure of 0.59 arcminutes as the resolution of the human eye to calculate,
based on the size of our total field of view,
how many of these distinct elements could fit inside of it.
The result was an approximation of exactly what we want to know:
how many individual picture elements - pixels - our vision can appreciate.
His answer? 576 megapixels.
That many pixels, packed inside a screen large enough to fill
your entire field of view, regardless of proximity,
would be close enough to be undetectable by the average
human eye. But we should factor in the fovea,
because Clark's calculation assumes optimal acuity everywhere; it allows the
eye to move around.
But a single glance is more analogous to a camera snap, and, as it turns out,
only about 7 megapixels, packed into the two degrees of
optimal acuity the fovea covers during a fixed stare,
are needed to be rendered undetectable. It's been roughly estimated that the
rest of your field of view would only need about
1 megapixel more information. Now that might sound low but keep in mind that there
are plenty of modern technologies that already use pixel densities
better than we can differentiate. As Bad Astronomer deftly showed,
Apple's Retina Displays truly do contain pixels at a density
average eyesight can't differentiate from typical
reading distances. But the fact that there are screen sizes and pixel
densities that can fool the human eye
is not a sign that we see in
any kind of megapixelly way. Human vision just
isn't that digital. I mean, sure, like a camera sensor we only have a finite
and discrete number of cells in our retina.
But the brain adjusts our initial sensations into a final perception
that is a wishy-washy top-down processed blob
of experience. It's not made of pixels
and furthermore, unlike a camera, it's not saved in memory with veracity like a
digital camera file.
Absolutely no evidence has ever been found for the existence of a truly
photographic memory. And what's even cooler is that not only do we not
visually resolve the real world, like a movie camera,
we also don't narratively resolve conflict and drama in our lives
like most movie scripts. The point of all of this, what I'm getting at,
is an idea. An idea that initially drew me to this question.
We play roles in the movie of life,
but it's a special kind of movie. Cinematic victories and struggles are often
discrete, resolved, like pixels, with unbelievably perfect beginnings and endings,
whereas the real world is all about irresolution.
I like how Jack Angstreich put it in 'Cinemania'.
In a movie, a character can make a decision and then walk away from the camera
across the street and have the credits roll, freezing life in a perfect happily ever after.
But in the real world, after you cross the street,
you have to go home. The world goes on.
Life doesn't appear in any particular pixel resolution
or narrative resolution. Things are
continuous. The world was running before you came around and it will continue running
after you are gone. Your life is a plot only in so far as it begins
and ends and occurs in medias res.
Damerish's opening illustration for Charles McGrath's 'Endings Without Endings'
says it perfectly. In life, there rarely is
the end. There is only
the and.
And as always,
thanks for watching.