Registers and RAM: Crash Course Computer Science #6

CrashCourse
29 Mar 2017 · 12:16

Summary

TLDR: In this video we take a close look at how computer memory works. First, logic gates are used to build a simple circuit that can store a single bit of information. Then we see how combining these circuits creates a memory module, which will ultimately be joined with the arithmetic logic unit (ALU) to build our very own CPU. The video introduces different types of memory, including volatile random-access memory (RAM) and persistent memory, and explains what each is used for. We explore how reads and writes are implemented with logic gates and circuit design, and how adding levels of abstraction tames complex circuitry. Finally, the video shows how modern computers scale to megabytes and gigabytes of memory by repeatedly packaging up memory modules into larger arrangements.

Takeaways

  • 📘 Computer memory exists to store computation results so that several operations can be run in a row.
  • 🔌 RAM (random-access memory) is volatile storage: it needs power to retain its data.
  • 🔒 Persistent memory keeps its data after power is lost and is used for different purposes.
  • 🛠️ By combining logic gates, a simple circuit can be built that stores a single bit of information.
  • 🔄 The feedback properties of OR and AND gates can be used to build circuits that record a 1 and a 0.
  • 🔩 The AND-OR latch combines an AND gate and an OR gate, storing one bit via its set and reset inputs.
  • 🔐 A gated latch uses a write-enable line to control when data is written, simplifying the input wiring.
  • 💾 Placing several latches side by side forms a register; the number of bits it holds is its width.
  • 📊 Arranging latches in a matrix lets far fewer wires activate and select a specific latch.
  • 🏢 An address uniquely identifies a location in memory, much like a street address in a city.
  • 🔄 A multiplexer uses one input to select one of many outputs, and is used for row and column selection.
  • 📦 Modern computers scale to megabytes and gigabytes of memory by packaging memory modules into ever larger groups.
  • 📈 Larger memories need longer addresses, e.g. 32-bit addresses for gigabytes of memory.
  • 🧠 RAM is like a human's short-term or working memory, keeping track of what's going on right now.
  • 🔗 Different memory types (SRAM, DRAM, Flash, and NVRAM) use different circuits to store their bits.

Q & A

  • What is an ALU, and what role does it play in computer science?

    -The ALU, or arithmetic logic unit, is the part of a computer's processor that performs all arithmetic operations (such as addition and subtraction) and logic operations (such as AND and OR). It is a core part of the central processing unit (CPU) and carries out a program's instructions.

  • Why do we need computer memory to store computation results?

    -Memory stores computation results and running program state so that results can feed into later operations. Without it, the result of every calculation would be thrown away, making it impossible to run several operations in a row or save progress, which would severely limit a computer's usefulness.

  • What is the difference between RAM and persistent memory?

    -RAM (random-access memory) is volatile: it depends on power to retain data, and its contents are lost when the power is cut. Persistent memory keeps its data without power, so it is used to store information that must be retained long-term.

  • How can a circuit be built that stores a single bit of information?

    -By building a logic circuit with feedback. For example, feeding an OR gate's output back into one of its inputs forms a simple circuit that records a 1; an AND gate wired the same way records a 0. Combining the two produces a latch that can store either binary value (0 or 1).

  • What is an AND-OR latch, and how does it work?

    -The AND-OR latch is a digital circuit that combines an AND gate and an OR gate to store one bit of information. It has two inputs: a "set" input that drives the output to 1, and a "reset" input that drives the output back to 0. When both set and reset are 0, the circuit outputs whatever was last put into it; in other words, it remembers a single bit.

  • How do extra logic gates turn a latch into a gated latch?

    -Adding a few extra logic gates produces a gated latch. This latch uses a write-enable line to control whether data can be written, simplifying the inputs to just a single data wire plus the write-enable wire.

  • What is a register, and how is it built?

    -A register is a group of latches working together that holds a single number (such as an 8-bit value). The number of bits in a register is called its width. A register stores one value, and a single enable line writes to all of its latches at once.

  • How does a matrix arrangement reduce the wiring needed to connect many latches?

    -Arranging latches in a matrix of rows and columns dramatically reduces the number of wires required. A specific latch is selected by activating its row wire and its column wire, so a single shared data wire and enable wire can serve the whole matrix.

  • Why is computer memory called random-access memory (RAM)?

    -Because any memory location can be accessed at random, without following a particular order. Data at any location can be read or written quickly, giving the computer flexibility and efficiency.

  • How do modern computers scale to millions or even billions of bytes of memory?

    -By packaging small memory modules into larger and larger arrangements. As the number of memory locations grows, addresses must grow too; for example, 32-bit addresses can address a gigabyte (a billion bytes) of memory.

  • What do the different RAM types SRAM, DRAM, Flash, and NVRAM have in common, and how do they differ?

    -SRAM (static random-access memory), DRAM (dynamic random-access memory), Flash, and NVRAM (non-volatile random-access memory) are all memory technologies for storing information in a computer. They share the same fundamental design: massively nested matrices of memory cells. They differ in the circuits and components used to store each bit; for example, SRAM uses latches, DRAM uses capacitors, and Flash and NVRAM use technologies such as charge traps or memristors.

Outlines

00:00

📚 The Basics of Computer Memory

This segment introduces why computer memory matters and covers its basic concepts. It recaps what the ALU (arithmetic logic unit) does, then points out the need to store computed results. Everyday examples, like losing game progress when the power cord gets pulled, lead into the concept of RAM (random-access memory) and the distinction between RAM and persistent memory. The segment then builds a simple circuit that stores a single bit, works toward a memory module of our own, and previews combining it with the ALU to build a CPU next episode. It also looks at the direction logic circuits flow, demonstrates how feedback creates looping circuits, and explains how the AND-OR Latch works. Finally, it introduces the Gated Latch and abstracts the whole circuit into a simple one-bit storage box.

05:02

🔍 Scaling Memory and Addressing

This segment explains how enabling all the latches writes data into a register, and how latches placed side by side build up larger storage. To cut down the number of wires required, it introduces a matrix arrangement, where selecting a row and a column activates a specific latch. It shows how addresses uniquely identify each storage location, and how multiplexers convert an address into the correct row or column selection. By lining up eight 256-bit components, it builds a memory component that stores 8-bit bytes: a memory with 256 addresses, each holding a readable and writable 8-bit value. The segment closes with how modern computers scale the same approach to megabytes and gigabytes, noting that addresses must grow as the number of storage locations grows.

10:03

🧠 How RAM Works and Its Types

This segment likens RAM to a human's short-term or working memory and examines a real stick of RAM and its internal structure. Zooming in step by step reveals the hierarchy inside a memory module, from 32 memory squares down to matrices of 128 by 64 bits. Some arithmetic establishes the capacity of a 1980s RAM module and contrasts it with modern modules. It also distinguishes SRAM (static random-access memory) from other RAM types such as DRAM, Flash, and NVRAM: they use different circuits to store bits, but the fundamental principle is the same, storing information in massively nested matrices of memory cells. It closes by stressing how simple the fundamental operations are, and how mind-blowing the layers of abstraction become.


Keywords

💡Logic Gates

Logic gates are electronic devices that implement basic logic operations such as AND, OR, and NOT. In the video, logic gates were used to build a simple arithmetic logic unit (ALU), a foundation of computer science. Combining them shows how logic operations and data storage can be realized electronically.

💡Arithmetic Logic Unit (ALU)

The ALU is the part of a computer's processor that performs arithmetic and logic operations. The video notes that the ALU can perform addition, subtraction, logic operations, and more, and serves as the foundation for building more complex computing systems.

💡Random-Access Memory (RAM)

RAM is a computer storage technology that allows data to be read and written quickly. In the video, RAM stores things like game state, but it depends on continuous power to retain its data. RAM is essential for temporary storage because it allows fast access and modification.

💡Persistent Storage

Unlike RAM, persistent storage retains its data even after power is lost. The video mentions that persistent storage serves different use cases and will be covered in a later episode. It is vital for systems that must keep data long-term.

💡Bit

A bit is the basic unit of information in computer science, representing a single binary value: 0 or 1. The video discusses how to build a circuit that stores a single bit of information, the foundation of larger storage systems.

💡AND Gate

An AND gate is a logic gate whose output is 1 only when all of its inputs are 1. In the video, an AND gate is used to build a circuit that records a 0, a key part of the AND-OR latch.

💡OR Gate

An OR gate outputs 1 whenever any of its inputs is 1. The video uses an OR gate to create a circuit that records a 1, the basis of the memory function.

💡AND-OR Latch

The AND-OR latch is a digital circuit that can store one bit of information. It has two inputs: a set input and a reset input. The video explains how combining an AND gate and an OR gate creates the latch, a key step toward more complex storage systems.

💡Gated Latch

A gated latch is an improved latch that controls data writes through an extra write-enable line. The video shows that adding a few more logic gates yields an easier-to-use storage circuit with a single data wire.

💡Register

A register is a group of latches that together hold a single number. The video notes that placing 8 latches side by side stores an 8-bit number: a register. The number of bits in a register is its width, and registers are essential for storing and operating on data.

💡Matrix Storage

Matrix storage arranges latches in a grid to reduce the number of wires required. The video explains how a 16x16 grid stores 256 bits of information while dramatically cutting the wiring and improving storage density.

Highlights

Using logic gates, a simple arithmetic logic unit (ALU) was built to perform arithmetic and logic operations.

Computer memory exists to store computation results so that multiple operations can be performed in a row.

Random-access memory (RAM) stores things like game state while the power is on, but loses its data when power is cut.

Persistent memory retains its data after power loss and serves different use cases.

The construction of a memory module begins with a circuit that stores a single bit of information.

Feeding an ordinary OR gate's output back into one of its inputs creates a looping circuit.

Combining AND and OR gates yields circuits that can record 0s and 1s.

The AND-OR latch has two inputs, set and reset, and can remember a single bit of information.

The gated latch, built by adding a few extra logic gates, makes data input more convenient.

Placing 8 latches side by side stores 8 bits of information: a register.

The matrix approach activates any single latch through the intersection of a row and a column.

A multiplexer selects the correct row or column from an address.

A 256-bit memory component was built that takes an 8-bit address and has read and write enable lines plus a single data wire.

Lining up eight 256-bit memory components in a row stores a byte, an 8-bit number.

Modern computers scale to megabytes and gigabytes of memory by packaging small memory modules into ever larger arrangements.

Random-access memory (RAM) lets us access any memory location, in any order.

RAM is like a human's short-term or working memory, tracking what's going on right now.

Static random-access memory (SRAM) is introduced, along with other RAM types such as DRAM, Flash, and NVRAM.

All of these technologies store bits of information in massively nested matrices of memory cells.

Transcripts

play00:03

Hi, I’m Carrie Anne and welcome to Crash Course Computer Science.

play00:05

So last episode, using just logic gates, we built a simple ALU, which performs arithmetic

play00:11

and logic operations, hence the ‘A’ and the ‘L’.

play00:13

But of course, there’s not much point in calculating a result only to throw it away

play00:17

- it would be useful to store that value somehow, and maybe even run several operations in a row.

play00:22

That's where computer memory comes in!

play00:24

If you've ever been in the middle of a long RPG campaign on your console, or slogging

play00:28

through a difficult level on Minesweeper on your desktop, and your dog came by, tripped

play00:32

and pulled the power cord out of the wall, you know the agony of losing all your progress.

play00:36

Condolences.

play00:38

But the reason for your loss is that your console, your laptop and your computers make

play00:42

use of Random Access Memory, or RAM, which stores things like game state - as long as

play00:46

the power stays on.

play00:47

Another type of memory, called persistent memory, can survive without power, and it’s

play00:51

used for different things; We'll talk about the persistence of memory in a later episode.

play00:55

Today, we’re going to start small - literally by building a circuit that can store one..

play01:00

single.. bit of information.

play01:01

After that, we’ll scale up, and build our very own memory module, and we’ll combine

play01:05

it with our ALU next time, when we finally build our very own CPU!

play01:10

INTRO

play01:19

All of the logic circuits we've discussed so far go in one direction - always flowing

play01:23

forward - like our 8-bit ripple adder from last episode.

play01:26

But we can also create circuits that loop back on themselves.

play01:29

Let’s try taking an ordinary OR gate, and feed the output back into one of its inputs

play01:34

and see what happens.

play01:35

First, let’s set both inputs to 0.

play01:37

So 0 OR 0 is 0, and so this circuit always outputs 0.

play01:41

If we were to flip input A to 1.

play01:44

1 OR 0 is 1, so now the output of the OR gate is 1.

play01:48

A fraction of a second later, that loops back around into input B, so the OR gate sees that

play01:52

both of its inputs are now 1.

play01:54

1 OR 1 is still 1, so there is no change in output.

play01:58

If we flip input A back to 0, the OR gate still outputs 1.

play02:01

So now we've got a circuit that records a “1” for us.

play02:04

Except, we've got a teensy tiny problem - this change is permanent!

play02:07

No matter how hard we try, there’s no way to get this circuit to flip back from a 1

play02:12

to a 0.

play02:13

Now let’s look at this same circuit, but with an AND gate instead.

play02:16

We'll start inputs A and B both at 1.

play02:19

1 AND 1 outputs 1 forever.

play02:21

But, if we then flip input A to 0, because it’s an AND gate, the output will go to 0.

play02:26

So this circuit records a 0, the opposite of our other circuit.

play02:29

Like before, no matter what input we apply to input A afterwards, the circuit will always output 0.

play02:34

Now we’ve got circuits that can record both 0s and 1s.
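The two feedback loops just described can be sketched in Python (a behavioral model added for illustration; the function names are my own, not from the video):

```python
def or_feedback(a_inputs):
    """OR gate with its output looped back into input B: once the
    output goes to 1, it is stuck at 1 -- the loop records a 1."""
    b = 0  # the fed-back output, initially 0
    outs = []
    for a in a_inputs:
        b = int(bool(a) or bool(b))  # new output loops back into input B
        outs.append(b)
    return outs

def and_feedback(a_inputs):
    """AND gate with the same feedback loop, starting with both inputs
    at 1: once input A drops to 0, the output is stuck at 0."""
    b = 1
    outs = []
    for a in a_inputs:
        b = int(bool(a) and bool(b))
        outs.append(b)
    return outs

or_feedback([0, 1, 0, 0])   # -> [0, 1, 1, 1]  (flips to 1, then stays)
and_feedback([1, 0, 1, 1])  # -> [1, 0, 0, 0]  (flips to 0, then stays)
```

The permanence is exactly the "teensy tiny problem" from the transcript: neither loop can ever flip back on its own.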

play02:38

The key to making this a useful piece of memory is to combine our two circuits into what is

play02:42

called the AND-OR Latch.

play02:44

It has two inputs, a "set" input, which sets the output to a 1, and a "reset" input, which

play02:48

resets the output to a 0.

play02:50

If set and reset are both 0, the circuit just outputs whatever was last put in it.

play02:54

In other words, it remembers a single bit of information!

play02:58

Memory!

play02:59

This is called a “latch” because it “latches onto” a particular value and stays that way.
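A minimal behavioral model of the AND-OR latch (an editorial sketch of the set/hold/reset behavior, not the gate-level wiring; the function name is invented):

```python
def and_or_latch(set_in, reset_in, stored):
    """One update of an AND-OR latch: the OR stage combines SET with the
    fed-back stored bit, and the AND stage with an inverted RESET forces
    the output to 0 when RESET is 1."""
    return int((set_in or stored) and not reset_in)

bit = 0
bit = and_or_latch(1, 0, bit)  # set   -> 1
bit = and_or_latch(0, 0, bit)  # hold  -> still 1 (it remembers!)
bit = and_or_latch(0, 1, bit)  # reset -> 0
```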

play03:03

The action of putting data into memory is called writing, whereas getting the data out

play03:08

is called reading.

play03:09

Ok, so we’ve got a way to store a single bit of information!

play03:12

Great!

play03:13

Unfortunately, having two different wires for input – set and reset – is a bit confusing.

play03:18

To make this a little easier to use, we really want a single wire to input data, that we

play03:22

can set to either 0 or 1 to store the value.

play03:24

Additionally, we are going to need a wire that enables the memory to be either available

play03:28

for writing or “locked” down --which is called the write enable line.

play03:32

By adding a few extra logic gates, we can build this circuit, which is called a Gated Latch

play03:37

since the “gate” can be opened or closed.

play03:39

Now this circuit is starting to get a little complicated.

play03:41

We don’t want to have to deal with all the individual logic gates... so as before, we’re

play03:44

going to bump up a level of abstraction, and put our whole Gated Latch circuit in a box

play03:48

-- a box that stores one bit.

play03:50

Let’s test out our new component!

play03:52

Let’s start everything at 0.

play03:54

If we toggle the Data wire from 0 to 1 or 1 to 0, nothing happens - the output stays at 0.

play04:00

That’s because the write enable wire is off, which prevents any change to the memory.

play04:04

So we need to “open” the “gate” by turning the write enable wire to 1.

play04:07

Now we can put a 1 on the data line to save the value 1 to our latch.

play04:11

Notice how the output is now 1.

play04:14

Success!

play04:14

We can turn off the enable line and the output stays as 1.

play04:18

Once again, we can toggle the value on the data line all we want, but the output will

play04:21

stay the same.

play04:22

The value is saved in memory.

play04:24

Now let’s turn the enable line on again and use our data line to set the latch to 0.

play04:29

Done.

play04:30

Enable line off, and the output is 0.

play04:32

And it works!
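The test sequence above can be replayed against a behavioral model of the gated latch (an illustrative sketch; the class and method names are my own):

```python
class GatedLatch:
    """Gated latch model: one data wire plus a write-enable wire
    that opens or closes the 'gate'."""
    def __init__(self):
        self.output = 0

    def step(self, data, write_enable):
        if write_enable:        # gate open: the data wire is saved
            self.output = data
        return self.output      # gate closed: data is ignored

latch = GatedLatch()
latch.step(1, 0)  # enable off: toggling data does nothing -> 0
latch.step(1, 1)  # gate open: save a 1 -> 1
latch.step(0, 0)  # gate closed again: the value stays saved -> 1
```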

play04:33

Now, of course, computer memory that only stores one bit of information isn’t very

play04:37

useful -- definitely not enough to run Frogger.

play04:39

Or anything, really.

play04:41

But we’re not limited to using only one latch.

play04:43

If we put 8 latches side-by-side, we can store 8 bits of information like an 8-bit number.

play04:48

A group of latches operating like this is called a register, which holds a single number,

play04:53

and the number of bits in a register is called its width.

play04:56

Early computers had 8-bit registers, then 16, 32, and today, many computers have registers

play05:01

that are 64-bits wide.

play05:03

To write to our register, we first have to enable all of the latches.

play05:06

We can do this with a single wire that connects to all of their enable inputs, which we set to 1.

play05:11

We then send our data in using the 8 data wires, and then set enable back to 0, and

play05:17

the 8 bit value is now saved in memory.
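The shared-enable write just described can be modeled like this (an illustrative sketch, with invented names, not a hardware description):

```python
class Register:
    """An 8-bit register: 8 gated latches sharing one enable wire."""
    def __init__(self, width=8):
        self.latches = [0] * width

    def write(self, data_wires, enable):
        if enable:  # one shared wire enables every latch at once
            self.latches = list(data_wires)

    def read(self):
        return list(self.latches)

reg = Register()
reg.write([1, 0, 1, 0, 1, 0, 1, 0], enable=0)  # enable off: nothing stored
reg.write([1, 0, 1, 0, 1, 0, 1, 0], enable=1)  # enable on: value saved
```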

play05:19

Putting latches side-by-side works ok for a small-ish number of bits.

play05:23

A 64-bit register would need 64 wires running to the data pins, and 64 wires running to

play05:28

the outputs.

play05:29

Luckily we only need 1 wire to enable all the latches, but that’s still 129 wires.

play05:36

For 256 bits, we end up with 513 wires!

play05:40

The solution is a matrix!

play05:42

In this matrix, we don’t arrange our latches in a row, we put them in a grid.

play05:46

For 256 bits, we need a 16 by 16 grid of latches with 16 rows and columns of wires.

play05:52

To activate any one latch, we must turn on the corresponding row AND column wire.

play05:56

Let’s zoom in and see how this works.

play05:58

We only want the latch at the intersection of the two active wires to be enabled,

play06:02

but all of the other latches should stay disabled.

play06:05

For this, we can use our trusty AND gate!

play06:08

The AND gate will output a 1 only if the row and the column wires are both 1.

play06:12

So we can use this signal to uniquely select a single latch.

play06:15

This row/column setup connects all our latches with a single, shared, write enable wire.

play06:20

In order for a latch to become write enabled, the row wire, the column wire, and the write

play06:24

enable wire must all be 1.

play06:26

That should only ever be true for one single latch at any given time.

play06:29

This means we can use a single, shared wire for data.

play06:32

Because only one latch will ever be write enabled, only one will ever save the data

play06:37

-- the rest of the latches will simply ignore values on the data wire because they are not

play06:40

write enabled.

play06:41

We can use the same trick with a read enable wire to read the data later, to get the data

play06:46

out of one specific latch.

play06:48

This means in total, for 256 bits of memory, we only need 35 wires - 1 data wire, 1 write

play06:55

enable wire, 1 read enable wire, and 16 rows and columns for the selection.

play06:59

That’s significant wire savings!
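The wire counts quoted above check out (a quick arithmetic sketch; the helper names are mine):

```python
# Latches in a flat row: one data-in wire and one output wire per latch,
# plus a single shared enable wire.
def row_wires(bits):
    return bits * 2 + 1

# Latches in a square matrix: shared data, write-enable, and read-enable
# wires, plus one wire per row and one per column.
def matrix_wires(bits):
    side = int(bits ** 0.5)
    return 3 + 2 * side

row_wires(64)      # -> 129
row_wires(256)     # -> 513
matrix_wires(256)  # -> 35  (16 rows + 16 columns + 3 shared wires)
```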

play07:01

But we need a way to uniquely specify each intersection.

play07:05

We can think of this like a city, where you might want to meet someone at 12th avenue

play07:08

and 8th street -- that's an address that defines an intersection.

play07:11

The latch we just saved our one bit into has an address of row 12 and column 8.

play07:15

Since there is a maximum of 16 rows, we store the row address in a 4 bit number.

play07:20

12 is 1100 in binary.

play07:23

We can do the same for the column address: 8 is 1000 in binary.

play07:28

So the address for the particular latch we just used can be written as 11001000.
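The row-and-column address encoding can be checked in a few lines (illustrative only):

```python
row, col = 12, 8                 # the intersection "12th avenue and 8th street"
row_bits = format(row, '04b')    # '1100' -- 4 bits cover the 16 rows
col_bits = format(col, '04b')    # '1000'
address = row_bits + col_bits    # '11001000' -- one 8-bit address

# Decoding splits the 8-bit address back into its two halves.
decoded_row = int(address[:4], 2)  # -> 12
decoded_col = int(address[4:], 2)  # -> 8
```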

play07:35

To convert from an address into something that selects the right row or column, we need

play07:39

a special component called a multiplexer -- which is the computer component with a pretty cool

play07:43

name at least compared to the ALU.

play07:45

Multiplexers come in all different sizes, but because we have 16 rows, we need a 1 to

play07:50

16 multiplexer.

play07:51

It works like this.

play07:52

You feed it a 4 bit number, and it connects the input line to a corresponding output line.

play07:56

So if we pass in 0000, it will select the very first column for us.

play08:02

If we pass in 0001, the next column is selected, and so on.

play08:06

We need one multiplexer to handle our rows and another multiplexer to handle the columns.
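A multiplexer's select behavior can be modeled as simple indexing (a behavioral sketch, not the gate-level design):

```python
def multiplexer(select_bits, lines):
    """1-to-16 multiplexer as pure selection logic: a 4-bit select value
    connects exactly one of 16 lines through to the output."""
    return lines[int(select_bits, 2)]

rows = list(range(16))
multiplexer('0000', rows)  # -> 0   (selects the very first row)
multiplexer('1100', rows)  # -> 12  (selects row 12)
```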

play08:10

Ok, it’s starting to get complicated again, so let’s make our 256-bit memory its own component.

play08:16

Once again a new level of abstraction!

play08:24

It takes an 8-bit address for input - the 4 bits for the column and 4 for the row.

play08:29

We also need write and read enable wires.

play08:32

And finally, we need just one data wire, which can be used to read or write data.

play08:37

Unfortunately, even 256-bits of memory isn’t enough to run much of anything, so we need

play08:42

to scale up even more!

play08:43

We’re going to put them in a row.

play08:45

Just like with the registers.

play08:46

We’ll make a row of 8 of them, so we can store an 8 bit number - also known as a byte.

play08:51

To do this, we feed the exact same address into all 8 of our 256-bit memory components

play08:57

at the same time, and each one saves one bit of the number.

play09:01

That means the component we just made can store 256 bytes at 256 different addresses.

play09:07

Again, to keep things simple, we want to leave behind this inner complexity.

play09:11

Instead of thinking of this as a series of individual memory modules and circuits, we’ll

play09:15

think of it as a uniform bank of addressable memory.

play09:18

We have 256 addresses, and at each address, we can read or write an 8-bit value.
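The 256-address, 8-bit-wide memory just assembled can be modeled as eight bit-planes all fed the same address (an illustrative sketch; the names are invented):

```python
class MemoryBank:
    """A 256-address, 8-bit-wide memory: eight 256-bit components are
    given the same address, and each stores one bit of the byte."""
    def __init__(self):
        # 8 planes of 256 bits, one plane per bit position
        self.planes = [[0] * 256 for _ in range(8)]

    def write(self, address, byte):
        for bit in range(8):
            self.planes[bit][address] = (byte >> bit) & 1

    def read(self, address):
        return sum(self.planes[bit][address] << bit for bit in range(8))

mem = MemoryBank()
mem.write(0b11001000, 42)  # write the value 42 at address 200
mem.read(0b11001000)       # -> 42
```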

play09:23

We’re going to use this memory component next episode when we build our CPU.

play09:28

The way that modern computers scale to megabytes and gigabytes of memory is by doing the same

play09:32

thing we’ve been doing here -- keep packaging up little bundles of memory into larger, and

play09:36

larger, and larger arrangements.

play09:37

As the number of memory locations grow, our addresses have to grow as well.

play09:42

8 bits hold enough numbers to provide addresses for 256 bytes of our memory, but that’s all.

play09:48

To address a gigabyte – or a billion bytes of memory – we need 32-bit addresses.
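A quick check of the address arithmetic (strictly, a billion bytes needs 30 address bits; a 32-bit address covers 4 GiB, which is why 32 bits comfortably address a gigabyte):

```python
import math

def address_bits(locations):
    """Minimum address width, in bits, that gives every location
    a unique address."""
    return math.ceil(math.log2(locations))

address_bits(256)      # -> 8   (our 256-byte memory component)
address_bits(10 ** 9)  # -> 30  (a billion bytes fits in 30 bits...)
2 ** 32                # -> 4294967296 (...and 32 bits cover 4 GiB)
```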

play09:53

An important property of this memory is that we can access any memory location, at any

play09:58

time, and in a random order.

play09:59

For this reason, it’s called Random-Access Memory or RAM.

play10:03

When you hear people talking about how much RAM a computer has - that's the computer’s memory.

play10:07

RAM is like a human’s short term or working memory, where you keep track of things going

play10:11

on right now - like whether or not you had lunch or paid your phone bill.

play10:14

Here’s an actual stick of RAM - with 8 memory modules soldered onto the board.

play10:18

If we carefully opened up one of these modules and zoomed in, the first thing you would see

play10:22

are 32 squares of memory.

play10:23

Zoom into one of those squares, and we can see each one is comprised of 4 smaller blocks.

play10:28

If we zoom in again, we get down to the matrix of individual bits.

play10:31

This is a matrix of 128 by 64 bits.

play10:34

That’s 8192 bits in total.

play10:37

Each of our 32 squares has 4 matrices, so that’s 32 thousand, 7 hundred and 68 bits.

play10:43

And there are 32 squares in total.

play10:45

So all in all, that’s roughly 1 million bits of memory in each chip.

play10:49

Our RAM stick has 8 of these chips, so in total, this RAM can store 8 million bits,

play10:54

otherwise known as 1 megabyte.
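The capacity arithmetic from the walkthrough, verified step by step:

```python
bits_per_matrix = 128 * 64               # 8192 bits per matrix
bits_per_square = bits_per_matrix * 4    # 4 matrices per square -> 32768
bits_per_chip = bits_per_square * 32     # 32 squares -> 1048576 (~1 million)
bits_per_stick = bits_per_chip * 8       # 8 chips -> 8388608 bits

megabytes = bits_per_stick / 8 / (1024 * 1024)  # -> 1.0 megabyte
```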

play10:56

That’s not a lot of memory these days -- this is a RAM module from the 1980’s.

play11:00

Today you can buy RAM that has a gigabyte or more of memory - that’s billions of bytes

play11:05

of memory.

play11:06

So, today, we built a piece of SRAM - Static Random-Access Memory – which uses latches.

play11:11

There are other types of RAM, such as DRAM, Flash memory, and NVRAM.

play11:15

These are very similar in function to SRAM, but use different circuits to store the individual

play11:19

bits -- for example, using different logic gates, capacitors, charge traps, or memristors.

play11:24

But fundamentally, all of these technologies store bits of information in massively nested

play11:28

matrices of memory cells.

play11:31

Like many things in computing, the fundamental operation is relatively simple.. it’s the

play11:35

layers and layers of abstraction that’s mind blowing -- like a russian doll that

play11:40

keeps getting smaller and smaller and smaller.

play11:42

I’ll see you next week.

play11:44

Credits


Related Tags
Logic Gates · ALU · Computer Memory · Random Access · RAM · Persistent Memory · AND-OR Latch · Gated Latch · Register · Matrix · Multiplexer · Memory Address