Registers and RAM: Crash Course Computer Science #6
Summary
TLDR: In this video we dig into how computer memory works. Starting from logic gates, we build a simple circuit that can store a single bit of information. We then learn how to combine these circuits into a memory module, which is ultimately paired with the arithmetic logic unit (ALU) to build our own central processing unit (CPU). The video introduces different types of memory, including volatile random-access memory (RAM) and persistent memory, and explains what each is used for. We explore how read and write operations are implemented with logic gates and circuit design, and how adding layers of abstraction simplifies complex circuits. Finally, the video shows how modern computers scale to megabytes and gigabytes of memory by repeatedly packaging memory modules into larger arrangements.
Takeaways
- 📘 Computer memory exists to store calculation results so that several operations can be run in a row.
- 🔌 RAM (random-access memory) is volatile storage; it needs power to retain its data.
- 🔒 Persistent memory keeps its data after power is lost and is used for different purposes.
- 🛠️ By combining logic gates, we can build a simple circuit that stores a single bit of information.
- 🔄 Using the properties of OR and AND gates, we can build circuits that record a 1 and a 0, respectively.
- 🔩 The AND-OR latch combines an AND gate and an OR gate, storing one bit via its set and reset inputs.
- 🔐 A gated latch controls writing through a write enable line, simplifying the input wiring.
- 💾 Placing several latches side by side forms a register; a register's width is the number of bits it holds.
- 📊 A matrix layout lets us activate and select a specific latch with far fewer wires.
- 🏢 An address uniquely specifies a location in memory, much like a street address in a city.
- 🔄 A multiplexer uses one input to select among many output lines, and is used for row and column selection.
- 📦 Modern computers scale to megabytes and gigabytes of memory by packaging memory modules into ever larger groups.
- 📈 Larger memories need more address bits; for example, 32-bit addresses are used for gigabytes of memory.
- 🧠 RAM is like a human's short-term or working memory, keeping track of what is going on right now.
- 🔗 Different memory types (such as SRAM, DRAM, Flash, and NVRAM) use different circuits to store individual bits.
Q & A
What is an ALU, and what role does it play in a computer?
-The ALU, or arithmetic logic unit, is the part of a processor that performs all arithmetic operations (such as addition and subtraction) and logic operations (such as AND and OR). It is a core component of the central processing unit (CPU) and carries out the operations required by a program's instructions.
Why do we need computer memory to store calculation results?
-Memory stores calculation results and program state so that one result can feed into the next operation. Without memory, every result would be thrown away the moment it was computed, making it impossible to run several operations in a row or to save progress, which would severely limit a computer's usefulness.
What is the difference between RAM and persistent memory?
-RAM (random-access memory) is volatile: it relies on power to retain data, and its contents are lost when the power is switched off. Persistent memory keeps its data without power, so it is used to store information that needs to survive long term.
How can we build a circuit that stores a single bit of information?
-By creating a logic circuit with feedback. For example, feeding the output of an OR gate back into one of its inputs forms a simple circuit that latches onto a binary value; the same trick with an AND gate records the opposite value.
What is the AND-OR latch, and how does it work?
-The AND-OR latch is a digital circuit that combines an AND gate and an OR gate to store one bit. It has two inputs: a "set" input that sets the output to 1, and a "reset" input that resets the output to 0. When set and reset are both 0, the circuit simply outputs whatever was last put into it; in other words, it remembers a single bit of information.
How does adding extra logic gates turn the latch into a gated latch?
-Adding a few extra logic gates produces a gated latch, which uses a write enable line to control whether data can be written. This simplifies the interface to just one data line plus the write enable line.
What is a register, and how is it built?
-A register is a group of latches operating together that stores a number (such as an 8-bit value). The number of bits in a register is called its width. A register holds a single value, and a shared enable line writes to all of its latches at once.
How does arranging latches in a matrix reduce the number of wires needed?
-Arranging latches in a grid of rows and columns dramatically cuts the wiring. A specific latch is selected by activating its row wire and its column wire, which lets a single shared data line and enable line serve the whole matrix.
Why is computer memory called random-access memory (RAM)?
-Because any memory location can be accessed at any time and in any order, rather than in a fixed sequence. This makes reading and writing any location fast, giving the computer flexibility and efficiency.
How do modern computers scale to millions or even billions of bytes of memory?
-By packaging small memory modules into larger and larger arrangements. As the number of memory locations grows, the addresses must grow too; a 32-bit address, for example, can address gigabytes (billions of bytes) of memory.
What do the different RAM types (SRAM, DRAM, Flash, and NVRAM) have in common, and how do they differ?
-SRAM (static random-access memory), DRAM (dynamic random-access memory), Flash, and NVRAM (non-volatile random-access memory) are all memory technologies for storing information in a computer. What they share is that all of them store bits in massively nested matrices of memory cells. They differ in the circuits and components used to store each bit: SRAM uses latches, DRAM uses capacitors, and Flash and NVRAM use other techniques such as charge traps or memristors.
Outlines
📚 The Basics of Computer Memory
This segment introduces why computer memory matters and its basic concepts. It starts from the ALU (arithmetic logic unit) and points out the need to store calculation results. An everyday example, losing game progress when the power cuts out, motivates the idea of RAM (random-access memory), and RAM is distinguished from persistent memory. The segment then builds a simple circuit that stores a single bit, working toward a full memory module that will be combined with the ALU next episode to build a CPU. It also looks at how logic circuits normally flow in one direction, shows how feedback creates a looping circuit, and explains how the AND-OR latch works. Finally, it introduces the gated latch and abstracts the whole circuit into a simple one-bit storage component.
🔍 Scaling Memory and Addressing
This segment explains how data is written to a register by enabling all of its latches, and how latches placed side by side build larger storage. To cut down on wiring, it introduces the matrix arrangement, where a specific latch is activated by selecting its row and column. It shows how addresses uniquely identify each storage location and how multiplexers translate an address into the correct row or column selection. By lining up eight 256-bit memory components, it builds a component that stores 8-bit bytes, yielding a memory with 256 addresses that can read or write an 8-bit value at each one. The segment also discusses how modern computers scale to megabytes and gigabytes in the same way, and how addresses must grow as the number of locations increases.
🧠 How RAM Works, and Its Types
This segment uses the analogy of human short-term or working memory to explain what RAM does, then shows a real stick of RAM and its internal structure. Zooming in step by step reveals the hierarchy inside a memory module, from 32 memory squares down to 128-by-64-bit matrices. A quick calculation gives the capacity of a 1980s RAM module and contrasts it with modern modules. The segment also distinguishes SRAM (static random-access memory) from other types such as DRAM, Flash, and NVRAM: they use different circuits to store each bit, but the underlying principle is the same, storing information in massively nested matrices of memory cells. Finally, it highlights how simple the fundamental operations of computing are, and how mind-blowing the layers of abstraction become.
Keywords
💡Logic gate
💡Arithmetic logic unit (ALU)
💡Random-access memory (RAM)
💡Persistent storage
💡Bit
💡AND gate
💡OR gate
💡AND-OR latch
💡Gated latch
💡Register
💡Matrix memory
Highlights
Using only logic gates, a simple arithmetic logic unit (ALU) was built to perform arithmetic and logic operations.
Computer memory exists to store calculation results so that several operations can be run in a row.
Random-access memory (RAM) stores information such as game state while the power is on, but the data is lost when power is cut.
Persistent memory retains its data after power loss and is used for different applications.
The memory-module build begins with a circuit that stores a single bit of information.
Feeding the output of an ordinary OR gate back into one of its inputs creates a looping circuit.
Combining AND and OR gates yields circuits that can record 0s and 1s.
The AND-OR latch has two inputs, set and reset, and remembers a single bit of information.
The gated latch adds a few extra logic gates to make data input more convenient.
Placing 8 latches side by side stores 8 bits of information; such a group is called a register.
The matrix approach activates any one latch via the intersection of a row and a column.
A multiplexer selects the correct row or column from an address.
A 256-bit memory component was built that takes an 8-bit address and has read and write enable lines plus a single data line.
Lining up eight 256-bit memory components in a row stores one byte, an 8-bit number.
Modern computers scale to megabytes and gigabytes by packaging small memory modules into ever larger arrangements.
Random-access memory (RAM) lets us access any memory location, in any order.
RAM is like a human's short-term or working memory, tracking what is going on right now.
Static random-access memory (SRAM) is introduced alongside other RAM types such as DRAM, Flash, and NVRAM.
All of these technologies store bits of information in massively nested matrices of memory cells.
Transcripts
Hi, I’m Carrie Anne and welcome to Crash Course Computer Science.
So last episode, using just logic gates, we built a simple ALU, which performs arithmetic
and logic operations, hence the ‘A’ and the ‘L’.
But of course, there’s not much point in calculating a result only to throw it away
- it would be useful to store that value somehow, and maybe even run several operations in a row.
That's where computer memory comes in!
If you've ever been in the middle of a long RPG campaign on your console, or slogging
through a difficult level on Minesweeper on your desktop, and your dog came by, tripped
and pulled the power cord out of the wall, you know the agony of losing all your progress.
Condolences.
But the reason for your loss is that your console, your laptop and your computers make
use of Random Access Memory, or RAM, which stores things like game state - as long as
the power stays on.
Another type of memory, called persistent memory, can survive without power, and it’s
used for different things; We'll talk about the persistence of memory in a later episode.
Today, we’re going to start small - literally by building a circuit that can store one..
single.. bit of information.
After that, we’ll scale up, and build our very own memory module, and we’ll combine
it with our ALU next time, when we finally build our very own CPU!
INTRO
All of the logic circuits we've discussed so far go in one direction - always flowing
forward - like our 8-bit ripple adder from last episode.
But we can also create circuits that loop back on themselves.
Let’s try taking an ordinary OR gate, and feed the output back into one of its inputs
and see what happens.
First, let’s set both inputs to 0.
So 0 OR 0 is 0, and so this circuit always outputs 0.
If we were to flip input A to 1.
1 OR 0 is 1, so now the output of the OR gate is 1.
A fraction of a second later, that loops back around into input B, so the OR gate sees that
both of its inputs are now 1.
1 OR 1 is still 1, so there is no change in output.
If we flip input A back to 0, the OR gate still outputs 1.
So now we've got a circuit that records a “1” for us.
Except, we've got a teensy tiny problem - this change is permanent!
No matter how hard we try, there’s no way to get this circuit to flip back from a 1
to a 0.
Now let’s look at this same circuit, but with an AND gate instead.
We'll start inputs A and B both at 1.
1 AND 1 outputs 1 forever.
But, if we then flip input A to 0, because it’s an AND gate, the output will go to 0.
So this circuit records a 0, the opposite of our other circuit.
Like before, no matter what input we apply to input A afterwards, the circuit will always output 0.
Now we’ve got circuits that can record both 0s and 1s.
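These two feedback circuits can be sketched in a few lines of Python (an illustration added here, not something from the video itself); each function is one gate whose output loops back into its second input:

```python
def or_feedback(output, a):
    """One OR gate whose output is fed back into input B."""
    return int(a or output)   # once the output is 1, it sticks at 1

def and_feedback(output, a):
    """One AND gate whose output is fed back into input B."""
    return int(a and output)  # once the output is 0, it sticks at 0

# OR-loop circuit: starts at 0 and records a 1 permanently.
out = 0
out = or_feedback(out, 1)   # flip input A to 1 -> output becomes 1
out = or_feedback(out, 0)   # flip A back to 0 -> output stays 1
print(out)  # 1

# AND-loop circuit: starts at 1 and records a 0 permanently.
out = 1
out = and_feedback(out, 0)  # flip input A to 0 -> output becomes 0
out = and_feedback(out, 1)  # flip A back to 1 -> output stays 0
print(out)  # 0
```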
The key to making this a useful piece of memory is to combine our two circuits into what is
called the AND-OR Latch.
It has two inputs, a "set" input, which sets the output to a 1, and a "reset" input, which
resets the output to a 0.
If set and reset are both 0, the circuit just outputs whatever was last put in it.
In other words, it remembers a single bit of information!
Memory!
This is called a “latch” because it “latches onto” a particular value and stays that way.
The action of putting data into memory is called writing, whereas getting the data out
is called reading.
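The latch's behavior can be modeled as a single update rule, shown here as a hedged Python sketch (the real circuit is continuous, but one update step per input change captures the logic): the OR gate holds a 1 by feeding the output back, and the AND gate lets reset force it back to 0.

```python
def and_or_latch(state, set_in, reset_in):
    # output = (set OR previous output) AND (NOT reset)
    return int((set_in or state) and not reset_in)

state = 0
state = and_or_latch(state, set_in=1, reset_in=0)  # "set" -> remembers 1
state = and_or_latch(state, set_in=0, reset_in=0)  # both 0 -> holds the 1
print(state)  # 1
state = and_or_latch(state, set_in=0, reset_in=1)  # "reset" -> back to 0
state = and_or_latch(state, set_in=0, reset_in=0)  # both 0 -> holds the 0
print(state)  # 0
```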
Ok, so we’ve got a way to store a single bit of information!
Great!
Unfortunately, having two different wires for input – set and reset – is a bit confusing.
To make this a little easier to use, we really want a single wire to input data, that we
can set to either 0 or 1 to store the value.
Additionally, we are going to need a wire that enables the memory to be either available
for writing or “locked” down --which is called the write enable line.
By adding a few extra logic gates, we can build this circuit, which is called a Gated Latch
since the “gate” can be opened or closed.
Now this circuit is starting to get a little complicated.
We don’t want to have to deal with all the individual logic gates... so as before, we’re
going to bump up a level of abstraction, and put our whole Gated Latch circuit in a box
-- a box that stores one bit.
Let’s test out our new component!
Let’s start everything at 0.
If we toggle the Data wire from 0 to 1 or 1 to 0, nothing happens - the output stays at 0.
That’s because the write enable wire is off, which prevents any change to the memory.
So we need to “open” the “gate” by turning the write enable wire to 1.
Now we can put a 1 on the data line to save the value 1 to our latch.
Notice how the output is now 1.
Success!
We can turn off the enable line and the output stays as 1.
Once again, we can toggle the value on the data line all we want, but the output will
stay the same.
The value is saved in memory.
Now let’s turn the enable line on again and use our data line to set the latch to 0.
Done.
Enable line off, and the output is 0.
And it works!
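The test sequence just walked through can be replayed in code. This is a behavioral sketch in Python (gate-level details abstracted away, as in the boxed-up component): data is only captured while write enable is 1.

```python
class GatedLatch:
    """One-bit storage: the data line is captured only while write enable is 1."""
    def __init__(self):
        self.output = 0

    def update(self, data, write_enable):
        if write_enable:      # "gate" open: latch follows the data line
            self.output = data
        return self.output    # gate closed: output holds its saved value

latch = GatedLatch()
latch.update(data=1, write_enable=0)  # enable off: toggling data does nothing
assert latch.output == 0
latch.update(data=1, write_enable=1)  # open the gate and save a 1
latch.update(data=0, write_enable=0)  # enable off again: the 1 is retained
assert latch.output == 1
latch.update(data=0, write_enable=1)  # enable on: set the latch back to 0
assert latch.output == 0
```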
Now, of course, computer memory that only stores one bit of information isn’t very
useful -- definitely not enough to run Frogger.
Or anything, really.
But we’re not limited to using only one latch.
If we put 8 latches side-by-side, we can store 8 bits of information like an 8-bit number.
A group of latches operating like this is called a register, which holds a single number,
and the number of bits in a register is called its width.
Early computers had 8-bit registers, then 16, 32, and today, many computers have registers
that are 64-bits wide.
To write to our register, we first have to enable all of the latches.
We can do this with a single wire that connects to all of their enable inputs, which we set to 1.
We then send our data in using the 8 data wires, and then set enable back to 0, and
the 8 bit value is now saved in memory.
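That write procedure, one shared enable line plus parallel data wires, can be sketched as a small Python class (an illustration under the simplifying assumption that each latch is just a stored 0 or 1):

```python
class Register:
    """A row of one-bit latches sharing a single write enable line."""
    def __init__(self, width=8):
        self.bits = [0] * width   # one stored bit per latch

    def write(self, data_wires, enable):
        if enable:                # enable = 1 opens every latch at once
            self.bits = list(data_wires)

    def read(self):
        return list(self.bits)

reg = Register(width=8)
reg.write([1, 0, 1, 0, 1, 0, 1, 0], enable=1)  # store an 8-bit value
reg.write([1, 1, 1, 1, 1, 1, 1, 1], enable=0)  # enable off: write is ignored
print(reg.read())  # [1, 0, 1, 0, 1, 0, 1, 0]
```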
Putting latches side-by-side works ok for a small-ish number of bits.
A 64-bit register would need 64 wires running to the data pins, and 64 wires running to
the outputs.
Luckily we only need 1 wire to enable all the latches, but that’s still 129 wires.
For 256 bits, we end up with 513 wires!
The solution is a matrix!
In this matrix, we don’t arrange our latches in a row, we put them in a grid.
For 256 bits, we need a 16 by 16 grid of latches with 16 rows and columns of wires.
To activate any one latch, we must turn on the corresponding row AND column wire.
Let’s zoom in and see how this works.
We only want the latch at the intersection of the two active wires to be enabled,
but all of the other latches should stay disabled.
For this, we can use our trusty AND gate!
The AND gate will output a 1 only if the row and the column wires are both 1.
So we can use this signal to uniquely select a single latch.
This row/column setup connects all our latches with a single, shared, write enable wire.
In order for a latch to become write enabled, the row wire, the column wire, and the write
enable wire must all be 1.
That should only ever be true for one single latch at any given time.
This means we can use a single, shared wire for data.
Because only one latch will ever be write enabled, only one will ever save the data
-- the rest of the latches will simply ignore values on the data wire because they are not
write enabled.
We can use the same trick with a read enable wire to read the data later, to get the data
out of one specific latch.
This means in total, for 256 bits of memory, we only need 35 wires - 1 data wire, 1 write
enable wire, 1 read enable wire, and 16 rows and columns for the selection.
That’s significant wire savings!
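The wire counts quoted above follow from two simple formulas, checked here in Python (the "+3" in the matrix case is the shared data, write enable, and read enable wires):

```python
# n latches side by side: n data-in wires + n data-out wires + 1 enable wire.
def side_by_side_wires(n):
    return 2 * n + 1

# An r-by-c matrix: r row-select + c column-select wires,
# plus 1 data, 1 write enable, and 1 read enable wire.
def matrix_wires(rows, cols):
    return rows + cols + 3

print(side_by_side_wires(64))   # 129 wires for a 64-bit register
print(side_by_side_wires(256))  # 513 wires for 256 bits in a row
print(matrix_wires(16, 16))     # 35 wires for 256 bits in a 16x16 matrix
```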
But we need a way to uniquely specify each intersection.
We can think of this like a city, where you might want to meet someone at 12th avenue
and 8th street -- that's an address that defines an intersection.
The latch we just saved our one bit into has an address of row 12 and column 8.
Since there is a maximum of 16 rows, we store the row address in a 4 bit number.
12 is 1100 in binary.
We can do the same for the column address: 8 is 1000 in binary.
So the address for the particular latch we just used can be written as 11001000.
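The row and column encoding can be verified with Python's binary formatting (a quick check, not part of the video):

```python
row, col = 12, 8
row_bits = format(row, "04b")   # 4 bits are enough for rows 0 through 15
col_bits = format(col, "04b")
address = row_bits + col_bits   # row half first, then column half
print(row_bits)  # 1100
print(col_bits)  # 1000
print(address)   # 11001000
```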
To convert from an address into something that selects the right row or column, we need
a special component called a multiplexer -- which is the computer component with a pretty cool
name at least compared to the ALU.
Multiplexers come in all different sizes, but because we have 16 rows, we need a 1 to
16 multiplexer.
It works like this.
You feed it a 4 bit number, and it connects the input line to a corresponding output line.
So if we pass in 0000, it will select the very first column for us.
If we pass in 0001, the next column is selected, and so on.
We need one multiplexer to handle our rows and another multiplexer to handle the columns.
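The selection behavior of what the video calls a 1-to-16 multiplexer (this direction of the job is often called a decoder or demultiplexer in textbooks) can be sketched as turning a 4-bit address into a one-hot set of 16 lines:

```python
def select_line(address_bits, num_lines=16):
    """Turn a 4-bit binary address into a one-hot selection of 16 lines,
    as our row (or column) multiplexer does."""
    index = int(address_bits, 2)
    return [1 if i == index else 0 for i in range(num_lines)]

first = select_line("0000")   # the very first line is selected
second = select_line("0001")  # the next line is selected, and so on
print(first.index(1))   # 0
print(second.index(1))  # 1
```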
Ok, it’s starting to get complicated again, so let’s make our 256-bit memory its own component.
Once again a new level of abstraction!
It takes an 8-bit address for input - the 4 bits for the column and 4 for the row.
We also need write and read enable wires.
And finally, we need just one data wire, which can be used to read or write data.
Unfortunately, even 256-bits of memory isn’t enough to run much of anything, so we need
to scale up even more!
We’re going to put them in a row.
Just like with the registers.
We’ll make a row of 8 of them, so we can store an 8 bit number - also known as a byte.
To do this, we feed the exact same address into all 8 of our 256-bit memory components
at the same time, and each one saves one bit of the number.
That means the component we just made can store 256 bytes at 256 different addresses.
Again, to keep things simple, we want to leave behind this inner complexity.
Instead of thinking of this as a series of individual memory modules and circuits, we’ll
think of it as a uniform bank of addressable memory.
We have 256 addresses, and at each address, we can read or write an 8-bit value.
We’re going to use this memory component next episode when we build our CPU.
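The finished abstraction, 256 addresses with an 8-bit value at each, can be modeled with a short Python class (a behavioral sketch; internally this stands in for eight 256-bit components all fed the same address):

```python
class RAM256:
    """256 addresses; each holds an 8-bit value, one bit per 256-bit component."""
    def __init__(self):
        self.cells = [[0] * 8 for _ in range(256)]

    def write(self, address, byte_bits):
        self.cells[address] = list(byte_bits)

    def read(self, address):
        return list(self.cells[address])

ram = RAM256()
ram.write(0b11001000, [0, 1, 0, 0, 1, 0, 1, 1])  # address: row 12, column 8
print(ram.read(0b11001000))  # [0, 1, 0, 0, 1, 0, 1, 1]
```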
The way that modern computers scale to megabytes and gigabytes of memory is by doing the same
thing we’ve been doing here -- keep packaging up little bundles of memory into larger, and
larger, and larger arrangements.
As the number of memory locations grow, our addresses have to grow as well.
8 bits hold enough numbers to provide addresses for 256 bytes of our memory, but that’s all.
To address a gigabyte – or a billion bytes of memory – we need 32-bit addresses.
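The address-size arithmetic checks out: an n-bit address names 2^n locations (strictly, 30 bits already cover one gigabyte; a 32-bit address covers about four gigabytes, which is why it comfortably handles "a gigabyte of memory"):

```python
# An n-bit address can name 2**n distinct memory locations.
print(2 ** 8)   # 256 addresses from an 8-bit address, as in our component
print(2 ** 32)  # 4294967296 locations, roughly 4 billion, from 32 bits
```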
An important property of this memory is that we can access any memory location, at any
time, and in a random order.
For this reason, it’s called Random-Access Memory or RAM.
When you hear people talking about how much RAM a computer has - that's the computer’s memory.
RAM is like a human’s short term or working memory, where you keep track of things going
on right now - like whether or not you had lunch or paid your phone bill.
Here’s an actual stick of RAM - with 8 memory modules soldered onto the board.
If we carefully opened up one of these modules and zoomed in, the first thing you would see
are 32 squares of memory.
Zoom into one of those squares, and we can see each one is comprised of 4 smaller blocks.
If we zoom in again, we get down to the matrix of individual bits.
This is a matrix of 128 by 64 bits.
That’s 8192 bits in total.
Each of our 32 squares has 4 matrices, so that’s 32 thousand, 7 hundred and 68 bits.
And there are 32 squares in total.
So all in all, that’s roughly 1 million bits of memory in each chip.
Our RAM stick has 8 of these chips, so in total, this RAM can store 8 million bits,
otherwise known as 1 megabyte.
That’s not a lot of memory these days -- this is a RAM module from the 1980’s.
Today you can buy RAM that has a gigabyte or more of memory - that’s billions of bytes
of memory.
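The zoom-in arithmetic for that 1980s stick can be checked directly in Python:

```python
bits_per_matrix = 128 * 64             # 8192 bits in one matrix
bits_per_square = 4 * bits_per_matrix  # 32768 bits per memory square
bits_per_chip = 32 * bits_per_square   # 1048576 bits, roughly 1 million
bits_per_stick = 8 * bits_per_chip     # 8 chips -> 8388608 bits
print(bits_per_stick // 8)             # 1048576 bytes = 1 megabyte
```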
So, today, we built a piece of SRAM - Static Random-Access Memory – which uses latches.
There are other types of RAM, such as DRAM, Flash memory, and NVRAM.
These are very similar in function to SRAM, but use different circuits to store the individual
bits -- for example, using different logic gates, capacitors, charge traps, or memristors.
But fundamentally, all of these technologies store bits of information in massively nested
matrices of memory cells.
Like many things in computing, the fundamental operation is relatively simple.. it’s the
layers and layers of abstraction that’s mind blowing -- like a russian doll that
keeps getting smaller and smaller and smaller.
I’ll see you next week.
Credits