Cisco Spine-leaf Network Topology | Cisco CCNA 200-301

Keith Barker - The OG of IT
8 Jan 2022 | 25:59

Summary

TL;DR: In this informative video, Keith Barker explores the spine-leaf architecture commonly used in data centers. He explains the concept, its benefits, and demonstrates its functionality through a detailed diagram. The video delves into the connectivity between hosts and virtual machines, the role of top-of-rack (ToR) switches, and the spine switches that form the backbone of this architecture. Barker also discusses the use of VXLAN for logical placement of devices in the same Layer 2 network across different physical locations, showcasing how traffic is tunneled and load balanced across the network.

Takeaways

  • 🌟 Spine-leaf architecture is a common setup in data centers, designed to facilitate efficient networking within racks of servers.
  • 🔌 Top of Rack (ToR) switches are the 'leaf' components in spine-leaf architecture, providing connectivity for hosts within a rack.
  • 🔗 Spine switches are the 'spine' components, connecting multiple ToR switches and enabling inter-rack communication.
  • 🔄 The design offers redundancy and fault tolerance, as multiple paths exist for data to travel between racks, reducing the risk of a single point of failure.
  • 🛤️ Multi-pathing is a benefit of spine-leaf architecture, allowing for load balancing and improved throughput across the network.
  • 🌐 Virtual Extensible Local Area Network (VXLAN) is used to extend Layer 2 networks over a Layer 3 network, enabling devices in different physical locations to be part of the same broadcast domain.
  • 📦 VXLAN encapsulates Layer 2 frames within a UDP packet, adding a VXLAN header that identifies the virtual network, allowing for logical placement in the same VLAN (a byte-level sketch of this header follows this list).
  • 🔑 VNI (VXLAN Network Identifier) is a crucial part of VXLAN, used to differentiate different virtual networks within the same physical network infrastructure.
  • 🔍 Demonstrations in the script show how traffic is tunneled through the spine switches, maintaining the logical appearance of being on the same subnet despite physical separation.
  • 🔄 The script includes a practical demonstration of how traffic is load-balanced across multiple spine switches, showcasing the effectiveness of equal-cost multipath routing.
  • 🔬 The video script serves as an educational resource, explaining complex networking concepts in a way that is accessible for those studying for certifications like Cisco CCNA.
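As a concrete companion to the encapsulation takeaway above, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348. The VNI value 6783 comes from the video; the code itself is purely illustrative and is not anything shown on screen.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 8 bits of flags (only the I bit set,
    meaning 'VNI present'), 24 reserved bits, a 24-bit VNI, 8 reserved bits."""
    flags_word = 0x08 << 24           # I flag in the first byte, rest reserved
    return struct.pack("!II", flags_word, vni << 8)

hdr = vxlan_header(6783)              # the VNI Keith uses in the video
print(hdr.hex())                      # -> 08000000001a7f00
```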

Q & A

  • What is the main topic of the video?

    -The main topic of the video is the spine-leaf architecture, its purpose, and how it functions within a data center environment.

  • What is a spine-leaf architecture?

    -A spine-leaf architecture is a two-tier network design in which leaf switches provide access for end hosts and spine switches form the interconnecting backbone, allowing for scalability and efficient connectivity in data centers.

  • Why is spine-leaf architecture commonly used in data centers?

    -Spine-leaf architecture is used in data centers due to its scalability, efficient use of resources, and the ability to provide high-speed connectivity between different hosts and virtual machines.

  • What are the roles of 'spine' and 'leaf' switches in this architecture?

    -In spine-leaf architecture, spine switches provide the high-speed backbone or interconnects, while leaf switches are responsible for the direct connection to end devices such as servers and hosts.

  • How does the video demonstrate the connectivity between different racks in a data center?

    -The video demonstrates connectivity by showing how switches, referred to as spine and leaf switches, are used to connect different racks, ensuring communication between virtual machines and hosts across various racks.

  • What is the purpose of having multiple paths for traffic between racks in the spine-leaf architecture?

    -Multiple paths provide redundancy, fault tolerance, and the ability to use multipathing for increased throughput and load balancing in the network.

  • What is a VLAN and how does it relate to the spine-leaf architecture?

    -A VLAN (Virtual Local Area Network) is a logically separate network within a physical network. In the spine-leaf architecture, VLANs can be extended across different physical racks using VXLAN (Virtual Extensible Local Area Network) technology.

  • What is VXLAN and why is it used in spine-leaf architectures?

    -VXLAN is a network protocol that extends VLAN-style Layer 2 segments across a routed network by encapsulating Layer 2 frames within UDP/IP packets. It is used in spine-leaf architectures to create logical Layer 2 segments across different physical locations.

  • How does the video explain the concept of tunneling in the context of VXLAN?

    -The video explains tunneling as the process of encapsulating layer 2 frames within a new packet structure, which is then forwarded over the network. This allows devices in different physical locations to be part of the same logical VLAN.

  • What are the benefits of using VXLAN in a data center environment according to the video?

    -The benefits of using VXLAN in a data center include the ability to logically place devices in the same layer 2 broadcast domain regardless of their physical location, efficient use of IP addressing, and the capability for load balancing across multiple paths.

  • How does the video illustrate the encapsulation and decapsulation process in VXLAN?

    -The video illustrates the process by showing an example of a ping request and its journey through the network. It demonstrates how the original layer 2 frame is encapsulated with a VXLAN header and forwarded over the tunnel, and then decapsulated at the receiving end to be processed within the local VLAN.
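To make that journey concrete, here is a hedged, self-contained Python sketch of what the sending VTEP does to the ARP broadcast. It is a toy model of the behavior the answer describes (field layouts per RFC 7348), not the switch's actual code; the VTEP IPs are the ones used later in the demo, and the MAC addresses are made up.

```python
import socket
import struct

VNI = 6783               # VXLAN network identifier used throughout the video
VXLAN_PORT = 4789        # IANA-assigned UDP destination port for VXLAN

def ip_checksum(hdr: bytes) -> int:
    """One's-complement sum of 16-bit words (standard IPv4 header checksum)."""
    s = sum(struct.unpack("!%dH" % (len(hdr) // 2), hdr))
    s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def vtep_encapsulate(inner_frame: bytes, src_vtep: str, dst_vtep: str,
                     src_mac: bytes, nexthop_mac: bytes) -> bytes:
    """Wrap an original L2 frame in outer Ethernet / IPv4 / UDP / VXLAN headers."""
    vxlan = struct.pack("!II", 0x08 << 24, VNI << 8)   # I flag set, 24-bit VNI
    udp_len = 8 + len(vxlan) + len(inner_frame)
    # The UDP source port is normally a hash of the inner flow (it helps ECMP);
    # a zero UDP checksum is permitted for VXLAN over IPv4.
    udp = struct.pack("!HHHH", 49152, VXLAN_PORT, udp_len, 0)
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + udp_len, 0, 0x4000,
                     64, 17, 0,                        # TTL 64, protocol 17 = UDP
                     socket.inet_aton(src_vtep), socket.inet_aton(dst_vtep))
    ip = ip[:10] + struct.pack("!H", ip_checksum(ip)) + ip[12:]
    # The outer Ethernet header is rewritten hop by hop as the packet is routed.
    eth = nexthop_mac + src_mac + b"\x08\x00"
    return eth + ip + udp + vxlan + inner_frame

# PC6's ARP broadcast (destination ff:ff:ff:ff:ff:ff); the 28-byte ARP body
# is zeroed here purely as a stand-in.
arp = b"\xff" * 6 + b"\x02\x00\x00\x00\x00\x66" + b"\x08\x06" + b"\x00" * 28
wire = vtep_encapsulate(arp, "10.10.10.6", "10.10.10.3",
                        src_mac=b"\x02\x00\x00\x00\x00\x06",
                        nexthop_mac=b"\x02\x00\x00\x00\x00\x01")
print(len(wire), "bytes on the wire")  # 42-byte frame + 50 bytes of overhead
```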

Outlines

00:00

🌟 Introduction to Spine Leaf Architecture

Keith Barker introduces the concept of spine-leaf architecture, explaining its purpose and relevance in data centers. He begins by visually representing a data center with racks of gear, each containing hosts that support multiple virtual machines (VMs). Barker illustrates the need for networking between VMs on different hosts and introduces the idea of top-of-rack switches to facilitate this connectivity. He also highlights the importance of avoiding single points of failure by using multiple switches and discusses the concept of layer 2 and layer 3 networking within this architecture.

05:00

🔌 Understanding Spine and Leaf Layers in Data Centers

The video script delves deeper into the spine-leaf architecture, explaining the roles of top-of-rack (ToR) switches, also known as leaf switches, and spine switches. Barker clarifies that ToR switches are part of the leaf layer, while spine switches connect the leaf layer, providing inter-rack connectivity. He demonstrates how adding a new rack involves connecting a new ToR switch to the existing spine switches, ensuring full connectivity without direct connections between leaf switches. The script also touches on the use of VLANs and the challenges of extending them across layer 3 networks.

10:01

📡 VXLAN: Bridging Layer 2 Networks Across Layer 3 Boundaries

Barker introduces Virtual Extensible Local Area Networks (VXLANs) as a solution to extend VLANs across a layer 3 network. He explains the concept of VXLAN identifiers (VNIs or VN IDs) and demonstrates how devices in different physical racks can be part of the same logical VLAN through VXLAN tunnels. The script describes the encapsulation process within VXLAN, where layer 2 frames are re-encapsulated with a VXLAN header for transport over layer 3 networks. Barker also discusses the configuration of VXLAN on leaf switches and the use of tunnel endpoints (VTEPs) for tunneling traffic.

15:03

💻 Practical Demonstration of VXLAN Tunneling in Action

The script transitions into a practical demonstration of VXLAN tunneling, showing the IP addressing scheme for tunnel endpoints and the routing table for efficient path selection. Barker explains the configuration of VLANs and VXLAN segments on leaf switches and how traffic is encapsulated and forwarded within a VXLAN network. He performs packet captures to visually demonstrate the encapsulation and de-encapsulation process during a ping and SSH session between servers in the same VXLAN but different physical locations.

20:05

🔄 Exploring Load Balancing and Traffic Distribution with VXLAN

Barker investigates the load balancing capabilities of VXLAN by examining the traffic distribution across multiple layer 3 paths provided by the spine switches. He conducts tests to validate the equal cost multipathing and shows how traffic is balanced between different interfaces on the leaf switches. The script provides a step-by-step guide on monitoring interface output rates to observe the load balancing in action, demonstrating the flexibility and efficiency of VXLAN in handling network traffic.

25:06

🏭 Concluding Thoughts on Network Topologies and VXLAN in Data Centers

In conclusion, Barker recaps the spine-leaf model and its advantages for data center networks, emphasizing the quick communication between devices on different hosts. He contrasts the hierarchical model suitable for campus networks with the spine-leaf model preferred for data centers. Barker also highlights the benefits of leveraging VXLAN to create logical layer 2 networks across a layer 3 infrastructure, wrapping up the video with a reminder of the importance of these concepts for those studying for the Cisco CCNA certification.

Keywords

💡Spine Leaf Architecture

Spine Leaf Architecture is a network design model commonly used in data centers. It consists of two layers: the spine layer, a high-capacity backbone that interconnects all of the leaf switches, and the leaf layer, which provides connectivity for the end hosts. In the video, this architecture is discussed in the context of data center networking, where it provides scalability and redundancy for efficient communication between servers and virtual machines.

💡Data Center

A data center is a facility that houses a large number of servers, storage systems, and network devices. It is critical for the operation of many IT and internet services. In the video, the data center serves as the primary setting for the discussion of network architectures and the deployment of hosts and switches.

💡Hosts

In the context of the video, hosts are servers or computing devices that are housed within the racks of a data center. They are the physical hardware that supports the operation of various applications and services. The script describes how hosts are connected to switches and how they support multiple virtual machines.

💡Virtual Machines (VMs)

Virtual Machines are software-based emulations of physical computers. They allow for the virtualization of computing resources and are commonly used in data centers to maximize efficiency. In the video, VMs are mentioned as being hosted on physical servers, which communicate with each other over the network.

💡Networking

Networking in the video refers to the interconnection of various devices within a data center to enable communication. It involves both the physical layer of connectivity and the logical layer of data exchange. The script discusses how networking is essential for VMs on different hosts to communicate.

💡Layer 2 and Layer 3 Switching

Layer 2 switching refers to data forwarding based on MAC addresses within a local network, while Layer 3 switching involves IP address-based routing between different networks. In the video, the presenter explains how the spine and leaf switches can perform both Layer 2 and Layer 3 functions, enabling connectivity and routing within the data center network.

💡Fault Tolerance

Fault tolerance is the ability of a system to continue operating or to recover from a failure without causing unacceptable disruption. In the context of the video, fault tolerance is achieved by having multiple paths and switches in the network, ensuring that if one component fails, the network remains operational.

💡VXLAN (Virtual Extensible Local Area Network)

VXLAN is a network protocol that allows for the creation of virtual networks over an IP network. It enables the extension of Layer 2 networks across Layer 3 networks without the need for all switches to be directly connected at Layer 2. In the video, VXLAN is discussed as a solution for creating logical Layer 2 networks across a spine-leaf architecture.

💡VTEP (VXLAN Tunnel Endpoint)

VTEP is a term used in VXLAN to refer to the endpoints of a VXLAN tunnel. It is responsible for encapsulating and de-encapsulating traffic as it enters and exits the VXLAN segment. The script describes how VTEPs are used to create logical connections between switches in a spine-leaf architecture.

💡Multi-Pathing

Multi-pathing is a networking technique that allows data to be sent over multiple paths for redundancy and load balancing. In the video, multi-pathing is mentioned as a benefit of the spine-leaf architecture, where traffic can be routed through different spine switches to reach its destination.

💡Equal Cost Multi-Pathing (ECMP)

Equal Cost Multi-Pathing is a routing technique that balances traffic across multiple equal-cost paths to increase throughput and redundancy. In the video, ECMP is discussed as a feature that allows for efficient load balancing across the spine switches in the network.

Highlights

Introduction to spine-leaf architecture in data centers and its purpose.

Explanation of the typical setup of racks and hosts in a data center environment.

Description of virtual machines and their networking within and across physical hosts.

Illustration of the need for networking between virtual machines on different hosts.

Introduction of top-of-rack (ToR) switches and their role in providing connectivity.

Discussion on the importance of fault tolerance in switch deployment.

Explanation of multi-layer switching capabilities for both Layer 2 and Layer 3 forwarding.

Benefits of spine-leaf architecture, such as equal cost paths and fault tolerance.

Introduction to spine switches and their connectivity to ToR switches.

Concept of extending VLANs over a network using VXLAN (Virtual Extensible Local Area Network).

Mechanism of VXLAN tunneling to logically place devices in the same broadcast domain.

Demonstration of VXLAN traffic capture and analysis.

Explanation of how VXLAN encapsulation and de-encapsulation work in a data center.

Practical demonstration of load balancing across multiple spine switches.

Discussion on the impact of spine-leaf architecture on network design and topologies.

Comparison between hierarchical and spine-leaf models for different network environments.

Conclusion summarizing the importance of understanding spine-leaf architecture and VXLAN for network professionals.

Transcripts

00:00

[Music] Hello and welcome. My name is Keith Barker, and in this video you and I get to take a look at the spine-leaf architecture: to identify, first of all, what it is, why we need it or want it, and then, third, to take a look at it in action. A spine-leaf architecture is very typical inside of a data center, so let's go ahead and draw some racks of gear at a typical data center. Let's label these as rack number three, rack number four, rack number five, and rack number six, and let's imagine inside those racks they've got some amazing hosts. Let's go ahead and put four of them in each of these racks, and then we'll label those as host one for the individual hosts that are in this rack of gear, and host two, and host three, and host four, and the same for the other three racks here in our little mock data center.

00:53

Let's also imagine that each of these servers, these hosts, is supporting multiple virtual machines, maybe a dozen or more virtual machines on each host. So let's imagine that VM 11 is running on host number one, and another VM, let's call it VM 12, is running on host number two inside of this one physical rack, rack number three. Now, if these two VMs were running inside the same host, they could have logical, virtualized networking to allow them to connect to each other. But if VM 11 is on host one and this VM is on host number two, we need to have some networking between them. So on each of the racks we could add one or more switches that allow that connectivity, and I'll add one of those switches to the top of each of these four racks. Let me call this one switch 3, I'll call this one switch 4, and this one switch 5, and this one switch 6. Then each of these physical servers would have connectivity up to that switch, very likely more than just once, for some fault tolerance and some additional throughput, so I'll put some cables there. It's also very likely we'd have a couple of these switches at the top, and that way, if a single switch failed, we wouldn't have a single point of failure, so we'll draw the rest of the connectivity in here.

02:02

Let's also imagine, over in rack 6, that we have VM 33 running on host 3 and VM 34 running on host 4. Very similar to what was happening over in rack 3: here on rack 6, if VM 33 needs to communicate with VM 34, because they're on different hosts, it would use the networking provided between those two hosts to facilitate that. We're also going to have some virtualized networking on each of the hosts, so here on host three, if VM 33 and, let's say, VM 23 were on the same exact host, those two VMs, using logical networking in that host, could communicate with each other. But any time we have to go between two physical servers, we're going to need some networking that provides connectivity between those servers. So, are you ready for our next challenge? Here it is: what if this VM, VM 11, which is currently being hosted by host number one here on rack three, needs to communicate with VM 33 over here, which is running on host three on rack six? Once again we're going to need some physical connectivity, in this example networking connectivity from the servers and hosts here in rack 3 to the hosts over in rack 6. So let's add some switches that can allow that connectivity. Let's put a couple in here, and let's call them switch 1 and switch 2. When I'm using the term switch here, I also want to imply multilayer switching: these are devices that can do Layer 3 forwarding based on IP addresses and Layer 2 forwarding based on MAC addresses, depending on how they've been deployed.

03:36

For the connectivity from switch 3 to these higher-layer switches, let's go ahead and use a red color, and let's say that red represents connections that are using Layer 3. So there'd be an IP network associated with this segment between switch three and switch one, and for fault tolerance we'd want to go from switch three to this guy, switch two, maybe yet another Layer 3 subnet, and then from switch four to switch one and switch four to switch two, then five to one and five to two, and then six, I'll go over the top here, six to one and six to two. For these networks we could use a /30, or a /31 where that's allowed, so it's not going to tie up a big block of IP addressing. But I wanted to represent that the connectivity between these upper-level switches and these top-of-rack switches is going to be Layer 3 connectivity. So here's the great news: if host 1 needs to forward traffic on behalf of its VM 11 over to VM 33, that traffic from host 1 would go through this top-of-rack switch, and then it has two paths. It could go through switch one, as a routing decision, and then with switch one's connectivity over to rack six, that traffic could be forwarded down to host number three, so that would be one path, and I'll draw that here. Another path would be going through switch two, and that would look like this. So as a result of this design, here are some of the benefits: we have two equal-cost paths to get from this rack, rack number three, over to rack number six, so with equal-cost paths we could use multipathing. Also, we have some fault tolerance: if we lose one of these switches at the top, switch one or switch two, the other one is still providing connectivity, and we still have communications between these two racks.
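Keith's aside about /30 versus /31 on those point-to-point Layer 3 links is easy to check with Python's ipaddress module. The 10.0.0.0 prefix below is made up for illustration; the video doesn't name the link subnets.

```python
import ipaddress

p2p_30 = ipaddress.ip_network("10.0.0.0/30")
p2p_31 = ipaddress.ip_network("10.0.0.0/31")  # RFC 3021 allows /31 on p2p links

print(list(p2p_30.hosts()))  # [10.0.0.1, 10.0.0.2] -> 2 usable of 4 addresses
print(list(p2p_31.hosts()))  # [10.0.0.0, 10.0.0.1] -> both addresses usable
```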

05:05

So let's remove that path for a moment and talk about some interesting names they have for these switches. These switches placed at the top of the rack, and that placement is for convenience, they don't have to be at the very top of the rack, are often referred to as ToR, or top-of-rack, switches. Also, this layer right here, these top-of-rack switches, makes up what's known as the leaf layer. So if you take a look at the spine-and-leaf architecture, or spine-leaf design, these top-of-rack switches are literally the leaf portion of our spine-leaf design. And these switches up here, I'll put a little dividing line in, are considered to be the spine switches, which provide connectivity between the top-of-rack switches. So when somebody talks about a spine-leaf design or a spine-leaf architecture, they're talking about exactly this.

05:53

And if we were to add another rack, let's call it rack number seven, this new rack would also have a top-of-rack switch, we'll call it switch number seven, and it would have connectivity up to the spine switches. In our case I just have two spine switches here, but if we had 10 spine switches, we would have 10 connections from this top-of-rack switch, one for the connection to each of our spines. Another thing to note is that the spine switches themselves don't have cross-connects between them going horizontally, and the top-of-rack switches also do not have connectivity between them. However, each spine switch has a connection to each and every leaf switch, and every leaf switch has a connection to each and every spine switch. What this facilitates is connectivity between the hosts in any of these racks and the hosts in any other rack, on behalf of the VMs that may need to communicate with each other. So VM 11 here can communicate with VM 33: maybe this guy's on the 10.3 subnet, and over here this VM is on the 10.6 subnet, and they have a Layer 3 routed path they can use to reach each other.

06:57

However, what if we wanted to put VM 11, which is over here in rack three, and VM 33, over here on rack six, what if we wanted to logically place them in the same Layer 2 network, the same broadcast domain, the same VLAN? Well, one of our challenges is that we don't have trunks up here connecting all the switches together; what we have instead is Layer 3 connectivity. So how do you extend a VLAN over a network if you have routers in the way that are doing Layer 3 forwarding? The answer is: by using VXLANs, and that's what I'd like to talk with you about right now. VXLAN can leverage this kind of architecture, with a spine-leaf design, and allow devices over here in one rack of gear to be in the same logical VLAN as a VM that's in a completely separate rack, even though they are not directly connected at Layer 2. So this represents the leaf layer of our switches, our top-of-rack switches, and this represents our spine layer right here.

07:55

We have full connectivity between each spine device and all of the leaves, and each leaf has full connectivity to all of the spine switches. We can imagine that each of these leaf switches represents a separate rack, and I've also placed some devices on our topology that we can play with so we can look at the results. As a reminder, all the connectivity between the spine and the leaf is Layer 3, so there's no native extension of a VLAN, for example, from over here in rack three over to rack six. What we're going to do instead is use VXLAN, which is an acronym for Virtual Extensible Local Area Network; they took the second character of "extensible" for the X to make the acronym VXLAN. Effectively, we can choose to put devices in separate physical racks, and even though they're not connected directly at Layer 2, we can logically place them in the same VLANs by using this concept called VXLAN.

08:48

Here's how we're going to pull it off. For these VXLANs we're going to create some identifiers; these are commonly referred to as VNIs, and I've also seen them as VN IDs. So let's imagine we want to create an identifier of 6783, which happens to be my CCIE number, and let's imagine that we want these two devices on the left-hand side of our network to be a part of that VXLAN 6783, and we want these two devices on the right-hand side of our topology also to be in that same VXLAN 6783. As far as the actual subnet address we're going to use with that, let's use 10.9.0.0 with a 24-bit mask, and let's imagine that PC6 is at .6 and the server here is at .106, and over on the left-hand side this guy is at .3 and the server is at .103, all of it in the 10.9.0 address space. Now, at first glance you might think, well, how in the world is that going to work? I've got these devices on the 10.9.0 network over here, and I've got these other devices on the 10.9.0 network over there. How do I get the traffic across these racks in the same VLAN? The answer to that question is by doing tunneling.

09:59

We're going to set up tunnels. Now, these tunnels can be manually set up and statically configured, and they can also be dynamically discovered; there are lots of different ways of doing that. But at the end of the day, we're going to set up tunneling between our leaf devices. So I'm going to put my tunnel in this color right here, I'm going to put a tunnel in from leaf 3 over to leaf 6, and I'll kind of fill it in here. One interesting thing about a tunnel is that it has two endpoints: we're going to have one endpoint over here on leaf three and another endpoint over here at leaf six, and these endpoints are referred to in VXLAN as VTEPs, VXLAN tunnel endpoints. With an IPsec tunnel, we take the original payload, encrypt it, put it inside of a whole brand-new packet, and then it's shipped across the network to the other peer, who then decrypts it. A very similar concept is at work with VXLAN, except instead of encapsulating for the benefit of encryption, we're encapsulating for the benefit of forwarding traffic, to make the devices in this VXLAN believe they're on the same subnet. It's accomplished by taking the original payload, from Layer 2 up, and re-encapsulating it inside of a packet which we're going to forward over the tunnel.

11:07

I think a fantastic example would be this: let's imagine that this PC right here does a ping to the address over here at .3, which is 10.9.0.3. That initial ARP request is a broadcast, so originally the broadcast is an ARP request, the source MAC address would be PC6's MAC address, and the destination would be the broadcast address at Layer 2. Here's what leaf six will do: it'll take that original request, including the Layer 2 header, and re-encapsulate it into a whole new datagram that looks like this. This is our payload; it's going to have a VXLAN header that identifies the VXLAN it belongs to, 6783 in this case; then at Layer 4 it's going to be using UDP; and then Layer 3 is going to have the source address of switch 6, whatever the VTEP is for switch six, and for the destination IP address it's going to be the VTEP, the logical end of the tunnel, that switch three is supporting, so the destination is going to be switch three. The Layer 2 headers are going to be swapped out as that packet is forwarded across the network. So the actual path of the traffic would be going from this leaf to either switch two or switch one, which would then forward it down to leaf three, which would take a look at the header and say, oh my goodness, I see what this is, de-encapsulate it, and then simply forward the original Layer 2 frame down to the VLAN down here. At that point PC3 would see it, say, oh, there's an ARP request from PC6, and it would respond, and then that traffic would once again be re-encapsulated at leaf three and shipped logically over the tunnel, where leaf six would de-encapsulate it and forward the response down to PC6. So the actual, literal traffic is being routed over the network; however, the logical path is through the tunnel between the two leaf switches. The benefit of VXLAN in the data center is that we can logically place devices in the same Layer 2 broadcast domain even though those devices may be separated by one or more Layer 3 routers on the path to get there. Again, it's all done through the tunneling mechanisms that VXLAN uses.
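And here is the receiving side of that exchange, sketched the same way: a toy Python model (emphatically not NX-OS source) of what leaf 3 does when the tunneled datagram arrives, using the VLAN-to-VNI mapping that shows up in the configuration later in the demo. The byte offsets assume the simple outer headers built in the earlier sketch.

```python
import struct

VXLAN_PORT = 4789
VNI_TO_VLAN = {6783: 9}          # vlan 9 / vn-segment 6783, from the demo config

def decapsulate(outer_frame: bytes):
    """Return (vlan, inner_frame) if this is VXLAN traffic for us, else None."""
    if struct.unpack("!H", outer_frame[12:14])[0] != 0x0800:
        return None                              # outer EtherType must be IPv4
    ihl = (outer_frame[14] & 0x0F) * 4           # IPv4 header length in bytes
    if outer_frame[23] != 17:                    # IP protocol 17 = UDP
        return None
    udp = outer_frame[14 + ihl:14 + ihl + 8]
    if struct.unpack("!H", udp[2:4])[0] != VXLAN_PORT:
        return None                              # not VXLAN traffic
    vxlan = outer_frame[14 + ihl + 8:14 + ihl + 16]
    vni = struct.unpack("!I", vxlan[4:8])[0] >> 8    # 24-bit VNI, upper bits
    inner = outer_frame[14 + ihl + 16:]              # the original L2 frame
    return VNI_TO_VLAN.get(vni), inner               # forward inner into VLAN 9
```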

13:07

So I thought to myself, wow, it'd be really cool if we could take a look at the packets involved when VXLAN is in use, and I just so happen to have a topology, the one we're looking at, that I'd like to demo for you right now. Here's the lay of the land regarding IP addressing for the tunnel endpoints: here on leaf three I'm going to be using 10.10.10.3 as the tunnel endpoint, and the other end of that tunnel, for this demonstration, on leaf six, is going to be 10.10.10.6. So if we looked at the routing table from leaf 6's perspective regarding how to reach the other end of the tunnel, it's going to have two paths, one that goes this way and one that goes that way, and as a result of having multiple paths it can use equal-cost multipathing, which is fantastic. That way we can send some traffic this way and some traffic that way and get more throughput as we're forwarding traffic over the tunnel on behalf of our hosts and VMs.

13:59

So here, from the perspective of leaf six, let's do a "show nve" and a question mark, and let's take a look at peers. This shows that we have a peer at 10.10.10.3; that's the leaf 3 switch. If we do a "show nve peers detail", it shows us that the peer state is up, and it's also specifying that the virtual network identifier it's supporting is 6783, which is the same virtual network identifier that we're using. We also did a "show ip route" for that specific route of 10.10.10.3; this helps us confirm that we have multiple paths to get there, one going out Ethernet 1/1 and the other going out Ethernet 1/2. Also, let's do a "show run", and let me share a few tidbits from this configuration. Currently I have VLAN 9, the local VLAN on this leaf switch, and I've associated with it the vn-segment, the virtual network segment, of that VXLAN ID of 6783. So the play-by-play is: if we have a device connected to, let's say, port 1/7, which we do right here, and we assign that port as an access port in VLAN 9, then based on this configuration that client is also, logically, going to be in the VXLAN 6783. If the client sends in a broadcast, like an ARP request, this switch will take that, re-encapsulate it, and forward it over to the peer, who will de-encapsulate it and then forward it down to the other devices it has locally which are also associated with the VXLAN 6783. Also, just to confirm, let's do a "show interface status", and let me scooch this over just a little bit. It's showing us that Ethernet 1/1 and 1/2, these interfaces here, are Layer 3 connections up to the spine layer; also, ports 6 and 7, which I have configured right here, are configured as access ports in VLAN 9, which, based on our configuration, is also associated with the virtual network ID of 6783. Switch 3 has similar treatment for ports 6 and 7 over here on its side as well.
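A toy model of the mapping the running-config describes, simplified far beyond real NX-OS: an access port places a host in VLAN 9, and the vn-segment command ties VLAN 9 to VXLAN 6783, so that host's broadcasts ride the 6783 tunnel.

```python
# Port and VLAN numbers are the ones mentioned in the demo; the lookup logic
# is an illustration of the configuration's effect, not switch software.
ACCESS_VLAN = {"Ethernet1/6": 9, "Ethernet1/7": 9}   # access ports in VLAN 9
VN_SEGMENT = {9: 6783}                               # vlan 9 / vn-segment 6783

def vni_for_port(port: str):
    """Which VNI will a frame arriving on this access port be encapsulated into?"""
    return VN_SEGMENT.get(ACCESS_VLAN.get(port))

print(vni_for_port("Ethernet1/7"))   # 6783 -> tunneled to the peer VTEP
```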

15:53

Here's what I'd like to do: before we send traffic from PC6 or server 6 over here to server 3 or PC3, let's do some captures on Ethernet 1/1 and 1/2 so we can actually see the VXLAN traffic and the re-encapsulation that's done as part of the tunneling. I'll go ahead and start those captures: we'll capture 1/1 and click OK, and we should see some OSPF hello messages and other stuff going on; great, it's working. All right, let me make that a little bit smaller for the moment, and let's also capture on Ethernet 1/2: right-click again, click on capture, specify e1/2, and click OK. All right, make that also a little smaller and move it over to the right a bit, and just make sure there's some activity on it; good. So let's bring up the server. This represents the server; we'll double-check its IP address real quick by bringing up a command line and doing ifconfig for ethernet 0, and sure enough: 10.9.0.106. This is hanging off of switch 6. So let's do a ping over to server 3 on the left, which is at 10.9.0.103, and press Enter. I'm always a little shocked when it works so well the first time; let's do a Ctrl-C there. What's happening behind the scenes is that this switch right here, switch 6, is taking those requests, wrapping them up, and shipping them over the tunnel, and then switch three is de-encapsulating them and forwarding them down; for the replies back, the reverse process happens. Also, while the captures are still running, let's do a couple more things. Let's do an SSH session over to 10.9.0.103 and press Enter. OK, it's asking me if I trust the fingerprint, I'll say yes, and what is the password over there? Is it this? Nope. Is it that? Nope. OK, it really doesn't matter too much; I just want to make sure we can capture some traffic going back and forth between this server on the right and this server over here, in the same VXLAN but in a separate part of the network. So let's stop our captures and take a look.

18:05

All right, we could probably grab either one and get some traffic; let's start with e1/2, that interface, and let me make the font a little bigger. Let's start with the ping: we'll apply a display filter looking for just ICMP traffic, and here we go, right here. Here we have traffic from 10.9.0.106, the server on the right, pinging 10.9.0.103, the server on the left. But check it out: because we captured the traffic as it was going over the logical tunnel, look what it did. The Ethernet frame being sent comes from the MAC address associated with the Ethernet 1/2 interface on switch six, going to the Layer 2 address of the next-hop router; that's the outermost Ethernet header, and inside, the Ethernet header says the next protocol is hexadecimal 0800, which is IPv4. Here in the IPv4 header, the source is the tunnel endpoint address for switch 6 and the destination is the tunnel endpoint address for switch 3, and then the header says, hey, the next protocol is UDP. So here's the UDP header that was used for the re-encapsulation of this tunnel traffic, and after the UDP header we have a VXLAN header right here, identifying the VXLAN network identifier, 6783. That way, when the receiving switch sees it, it says, OK, great, it can de-encapsulate it and forward it appropriately to the devices in the VLANs associated with that VXLAN. So once switch 3 receives this, it's going to strip off all this information, and what it's going to forward is the original Layer 2 header, with the source MAC address of server 6 on the right-hand side and the destination MAC address of server 3 on the left-hand side; then this Ethernet header points to the next protocol being IPv4, and the payload for this packet was an ICMP echo request, which is right here. From the servers' perspective, they don't know that all this encapsulation and de-encapsulation happened; all they think is, hey, we're two devices on the same subnet, the 10.9.0 subnet, and to these devices it feels like we're right next to each other. Here it's saying the response wasn't found, but it's very possible that the response came back on the other path, because there are two Layer 3 paths provided by the spine, so the response could have come back over the other interface.

20:20

All right, let's go take a look at SSH; we also did, or at least started, an SSH session. Let's do a display filter for that: I'll right-click, go to Follow, and follow the TCP stream, which gives us just a filter for that session. If we pick one of these as being initiated by the server at .106, let's grab this right here: everything in these first one, two, three, four, five entries is based on the re-encapsulation and sending the traffic through the tunnel, so all of this is for the benefit of the two endpoints of the tunnel. Then, when the receiver, switch three, gets it, it de-encapsulates it and simply forwards from the Layer 2 header on, which, from the MAC addresses, looks like it's sourced from the MAC address associated with server six, going to the MAC address associated with server three; then the next protocol is IP, then Layer 4 is TCP, and then we have the payload of the original request, which was an SSH message. So, is it possible to have both a TCP header and a UDP header? The answer is yes: here we have the UDP header that was part of the encapsulation and the tunnel traffic, and after that got stripped off, the actual, real protocol being used by the client was Layer 4 TCP, in association with SSH.

21:31

Let me close those captures. One last little test I'd like to do is to validate that we're actually doing load balancing across the two interfaces as traffic goes from server six over to server three. So here on leaf six we do a "show ip route" to the other end of the tunnel, which is 10.10.10.3, and we have two equal-cost paths. Let's get a little creative and do a "show interface ethernet 1/1", and let's pick out a few elements that might be relevant so we can filter on them with a pipe. How about this: let's filter on the 30-second output rate right there. I'll copy that to my buffer, and let's do a show for that interface: we'll do a pipe, put in some quotes, paste that in, end quote, and Enter. And it says, Keith, you might want to put "include" there; I thought it didn't need that, but all right, sounds good. Then we'll do it again for Ethernet 1/2: a "show interface ethernet 1/2", looking for just the output that includes that line, press Enter, and we'll also do it for 1/1. Now we can go back and forth and actually see the amount of traffic over the last 30 seconds; currently it's about 80 bits per second, not very taxing.

22:47

So let's bring up our server and do a ping, with a size of 1000 bytes, to the target at 10.9.0.103. That should put some load on, and it'll be a continuous ping, so we can look at the output here and see, approximately, whether it's doing any load balancing. I'll let that run in the background and give it a moment, and let's look at Ethernet 1/1: now it's at 2,000 bits per second, and 1/2 is at about 1,168. We'll give it a few more moments and test again. So both are being used; it's not exactly perfect, but if we had more clients it's very likely to be more equally spread out. There we go, that's 3,100 and 3,300, amazing, and we'll do it again, and now it's 4,700 and 3,500. There's 5,200 and 5,500. I also want to verify by stopping my ping: I'll do a Ctrl-C to stop the traffic, give it about 15 to 20 seconds, and look at it again, and those numbers should be dropping. It's been about 30 seconds, so I'll look at both of them again: we have 480 bits per second now on 1/1, and on 1/2 we have about 96. Let's verify that real quick, and there we go, they're both coming down.
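That "more clients would spread out more evenly" observation is the signature of per-flow hashing. Below is a toy Python model of the path selection; the real Nexus ECMP hash differs in its details, so this only shows the shape of the behavior.

```python
import hashlib
import random

UPLINKS = ["Ethernet1/1", "Ethernet1/2"]   # the two equal-cost paths to 10.10.10.3

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="udp"):
    """Hash the flow's 5-tuple and index into the equal-cost path list."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return UPLINKS[hashlib.sha256(key).digest()[0] % len(UPLINKS)]

# One VXLAN flow: the outer UDP source port is derived from the inner flow,
# so every packet of that flow hashes to the same uplink (no reordering).
print(pick_uplink("10.10.10.6", "10.10.10.3", 49152, 4789))

# Many inner flows -> many outer source ports -> traffic spreads out.
counts = {u: 0 for u in UPLINKS}
for _ in range(1000):
    port = random.randint(1024, 65535)
    counts[pick_uplink("10.10.10.6", "10.10.10.3", port, 4789)] += 1
print(counts)   # roughly 500/500, like the rates converging in the demo
```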

24:01

So let's recap. Up here we have switches that are at the spine layer, and there's full connectivity between each switch at the spine layer and each device at the leaf layer right here. Also, there are no cross-connects: we don't have cross-connections between switch 1 and switch 2 at the spine layer, no, no, no, and we also don't have cross-connects horizontally between any of the leaf switches. Everything goes through the spine. We also took a look at the concept of tunneling and using VXLANs, which gives us the ability to take any devices we want, pretty much anywhere in the data center, and logically place them on the same subnet, and that's done by tunneling the traffic between the switches. So the actual packets are being routed over the spine, but the logical tunnel goes from one switch over to the other switch.

24:48

When considering network designs and network topologies in light of the Cisco CCNA, I'd like you to remember two major families. One is the hierarchical model, the three-layer model, we had a separate video just on that, with the access layer, the distribution layer, and the core; or, if you want to smash the distribution and core together as a collapsed core, that would be the two-tier hierarchical model, and that's really great for something like a campus network. If, however, we have a data center where we need devices to be able to communicate very quickly back and forth, even when they're on different hosts, we'd very likely be using a spine-leaf model, as we've just discussed in this video, and, if you want to, leveraging VXLANs on top of that as well. So thanks for joining me in this video, and I'll see you, my friend, in the next live event or next video. Until then, be happy and treat everybody well. I'll see you next time.

25:34

[Music]
