Cisco Spine-leaf Network Topology | Cisco CCNA 200-301
Summary
TL;DR: In this informative video, Keith Barker explores the spine-leaf architecture commonly used in data centers. He explains the concept, its benefits, and demonstrates its functionality through a detailed diagram. The video delves into the connectivity between hosts and virtual machines, the role of top-of-rack (ToR) switches, and the spine switches that form the backbone of this architecture. Barker also discusses the use of VXLAN for logical placement of devices in the same Layer 2 network across different physical locations, showcasing how traffic is tunneled and load balanced across the network.
Takeaways
- 🌟 Spine-leaf architecture is a common setup in data centers, designed to facilitate efficient networking within racks of servers.
- 🔌 Top of Rack (ToR) switches are the 'leaf' components in spine-leaf architecture, providing connectivity for hosts within a rack.
- 🔗 Spine switches are the 'spine' components, connecting multiple ToR switches and enabling inter-rack communication.
- 🔄 The design offers redundancy and fault tolerance, as multiple paths exist for data to travel between racks, reducing the risk of a single point of failure.
- 🛤️ Multi-pathing is a benefit of spine-leaf architecture, allowing for load balancing and improved throughput across the network.
- 🌐 Virtual Extensible Local Area Network (VXLAN) is used to extend Layer 2 networks over a Layer 3 network, enabling devices in different physical locations to be part of the same broadcast domain.
- 📦 VXLAN encapsulates Layer 2 frames within a UDP packet, including a VXLAN header that identifies the virtual network, allowing for logical placement in the same VLAN.
- 🔑 VNI (VXLAN Network Identifier) is a crucial part of VXLAN, used to differentiate different virtual networks within the same physical network infrastructure.
- 🔍 Demonstrations in the script show how traffic is tunneled through the spine switches, maintaining the logical appearance of being on the same subnet despite physical separation.
- 🔄 The script includes a practical demonstration of how traffic is load-balanced across multiple spine switches, showcasing the effectiveness of equal-cost multipath routing.
- 🔬 The video script serves as an educational resource, explaining complex networking concepts in a way that is accessible for those studying for certifications like Cisco CCNA.
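The VXLAN encapsulation described in the takeaways above can be sketched in a few lines of Python. This is a minimal illustration of the RFC 7348 header layout using the VNI 6783 from the video; the function name and dummy frame are our own inventions, not anything a switch actually runs:

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    Layout: 1 byte of flags (the I bit, 0x08, means the VNI is valid),
    3 reserved bytes, a 24-bit VNI, and 1 more reserved byte. The result
    is then carried as the payload of a UDP datagram (well-known VXLAN
    destination port 4789), inside a fresh outer IP/Ethernet packet.
    """
    flags = 0x08                                     # I flag: VNI valid
    header = struct.pack("!B3xI", flags, vni << 8)   # VNI in bits 8..31
    return header + inner_frame

# Example: wrap a dummy 14-byte Ethernet header in VNI 6783 (from the video)
inner = bytes(14)
packet = vxlan_encapsulate(inner, 6783)
assert len(packet) == 8 + 14
```

Note the design point this makes concrete: the original Layer 2 frame travels untouched as opaque payload, which is why the two servers never see any evidence of the tunnel.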
Q & A
What is the main topic of the video?
-The main topic of the video is the spine-leaf architecture, its purpose, and how it functions within a data center environment.
What is a spine-leaf architecture?
-A spine-leaf architecture is a two-tier network design in which every leaf (access) switch connects to every spine (backbone) switch, allowing for scalability and efficient, predictable connectivity in data centers.
Why is spine-leaf architecture commonly used in data centers?
-Spine-leaf architecture is used in data centers due to its scalability, efficient use of resources, and the ability to provide high-speed connectivity between different hosts and virtual machines.
What are the roles of 'spine' and 'leaf' switches in this architecture?
-In spine-leaf architecture, spine switches provide the high-speed backbone or interconnects, while leaf switches are responsible for the direct connection to end devices such as servers and hosts.
How does the video demonstrate the connectivity between different racks in a data center?
-The video demonstrates connectivity by showing how switches, referred to as spine and leaf switches, are used to connect different racks, ensuring communication between virtual machines and hosts across various racks.
What is the purpose of having multiple paths for traffic between racks in the spine-leaf architecture?
-Multiple paths provide redundancy, fault tolerance, and the ability to use multipathing for increased throughput and load balancing in the network.
What is a VLAN and how does it relate to the spine-leaf architecture?
-A VLAN (Virtual Local Area Network) is a logically separate network within a physical network. In the spine-leaf architecture, VLANs can be extended across different physical racks using VXLAN (Virtual Extensible Local Area Network) technology.
What is VXLAN and why is it used in spine-leaf architectures?
-VXLAN is a network protocol that allows for the extension of VLANs across a network by encapsulating layer 2 frames within layer 3 UDP packets. It is used in spine-leaf architectures to create logical layer 2 segments across different physical locations.
How does the video explain the concept of tunneling in the context of VXLAN?
-The video explains tunneling as the process of encapsulating layer 2 frames within a new packet structure, which is then forwarded over the network. This allows devices in different physical locations to be part of the same logical VLAN.
What are the benefits of using VXLAN in a data center environment according to the video?
-The benefits of using VXLAN in a data center include the ability to logically place devices in the same layer 2 broadcast domain regardless of their physical location, efficient use of IP addressing, and the capability for load balancing across multiple paths.
How does the video illustrate the encapsulation and decapsulation process in VXLAN?
-The video illustrates the process by showing an example of a ping request and its journey through the network. It demonstrates how the original layer 2 frame is encapsulated with a VXLAN header and forwarded over the tunnel, and then decapsulated at the receiving end to be processed within the local VLAN.
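The decapsulation step this answer describes, stripping the outer headers to recover the VNI and the original frame, can be sketched as follows. `vxlan_decapsulate` is a hypothetical helper for illustration only; it operates on the UDP payload after the outer Ethernet/IP/UDP headers have already been removed:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_decapsulate(vxlan_payload: bytes) -> tuple[int, bytes]:
    """Split a VXLAN UDP payload into (vni, inner_ethernet_frame)."""
    if len(vxlan_payload) < 8:
        raise ValueError("too short for a VXLAN header")
    flags, word = struct.unpack("!B3xI", vxlan_payload[:8])
    if not flags & 0x08:
        raise ValueError("I flag not set: VNI field is not valid")
    return word >> 8, vxlan_payload[8:]

# round-trip with a hand-built header carrying VNI 6783
hdr = struct.pack("!B3xI", 0x08, 6783 << 8)
vni, inner = vxlan_decapsulate(hdr + b"\xaa" * 14)
assert vni == 6783 and inner == b"\xaa" * 14
```

The receiving VTEP uses the recovered VNI to decide which local VLAN the inner frame belongs to before forwarding it, which matches the ping walkthrough in the answer above.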
Outlines
🌟 Introduction to Spine Leaf Architecture
Keith Barker introduces the concept of spine-leaf architecture, explaining its purpose and relevance in data centers. He begins by visually representing a data center with racks of gear, each containing hosts that support multiple virtual machines (VMs). Barker illustrates the need for networking between VMs on different hosts and introduces the idea of top-of-rack switches to facilitate this connectivity. He also highlights the importance of avoiding single points of failure by using multiple switches and discusses the concept of layer 2 and layer 3 networking within this architecture.
🔌 Understanding Spine and Leaf Layers in Data Centers
The video script delves deeper into the spine-leaf architecture, explaining the roles of top-of-rack (ToR) switches, also known as leaf switches, and spine switches. Barker clarifies that ToR switches are part of the leaf layer, while spine switches connect the leaf layer, providing inter-rack connectivity. He demonstrates how adding a new rack involves connecting a new ToR switch to the existing spine switches, ensuring full connectivity without direct connections between leaf switches. The script also touches on the use of VLANs and the challenges of extending them across layer 3 networks.
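The cabling rule this outline describes, every leaf to every spine with no leaf-to-leaf or spine-to-spine links, has a simple arithmetic consequence worth making explicit. A tiny sketch with our own function name:

```python
def spine_leaf_links(leaves: int, spines: int) -> int:
    """Count fabric links: every leaf connects to every spine, and there
    are no leaf-to-leaf or spine-to-spine links, so the total is simply
    leaves * spines. Any leaf-to-leaf path is exactly two hops."""
    return leaves * spines

# adding one new rack (one ToR/leaf) to a 2-spine fabric adds 2 uplinks;
# with 10 spines the same new rack would need 10 uplinks, as in the video
assert spine_leaf_links(5, 2) - spine_leaf_links(4, 2) == 2
assert spine_leaf_links(5, 10) - spine_leaf_links(4, 10) == 10
```

This is why scaling out is incremental: a new rack only consumes one port on each existing spine, and every existing leaf is unaffected.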
📡 VXLAN: Bridging Layer 2 Networks Across Layer 3 Boundaries
Barker introduces Virtual Extensible Local Area Networks (VXLANs) as a solution to extend VLANs across a layer 3 network. He explains the concept of VXLAN identifiers (VNIs or VN IDs) and demonstrates how devices in different physical racks can be part of the same logical VLAN through VXLAN tunnels. The script describes the encapsulation process within VXLAN, where layer 2 frames are re-encapsulated with a VXLAN header for transport over layer 3 networks. Barker also discusses the configuration of VXLAN on leaf switches and the use of tunnel endpoints (VTEPs) for tunneling traffic.
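For reference, the leaf-switch configuration the demo later walks through (local VLAN 9 mapped to VNI 6783, with a loopback supplying the VTEP address) looks roughly like the NX-OS-style fragment below. Exact commands, required feature flags, and the flood/replication setup vary by platform and software version, so treat this strictly as a sketch:

```
! illustrative NX-OS-style fragment (syntax varies by platform/version)
vlan 9
  vn-segment 6783              ! map local VLAN 9 to VXLAN VNI 6783

interface nve1
  source-interface loopback0   ! VTEP address, e.g. 10.10.10.6 on leaf 6
  member vni 6783              ! BUM replication config omitted here
```

Verification would then use the commands shown in the demo, such as `show nve peers detail` and `show ip route 10.10.10.3`.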
💻 Practical Demonstration of VXLAN Tunneling in Action
The script transitions into a practical demonstration of VXLAN tunneling, showing the IP addressing scheme for tunnel endpoints and the routing table for efficient path selection. Barker explains the configuration of VLANs and VXLAN segments on leaf switches and how traffic is encapsulated and forwarded within a VXLAN network. He performs packet captures to visually demonstrate the encapsulation and de-encapsulation process during a ping and SSH session between servers in the same VXLAN but different physical locations.
🔄 Exploring Load Balancing and Traffic Distribution with VXLAN
Barker investigates the load balancing capabilities of VXLAN by examining the traffic distribution across multiple layer 3 paths provided by the spine switches. He conducts tests to validate the equal cost multipathing and shows how traffic is balanced between different interfaces on the leaf switches. The script provides a step-by-step guide on monitoring interface output rates to observe the load balancing in action, demonstrating the flexibility and efficiency of VXLAN in handling network traffic.
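The equal-cost multipath behavior Barker observes can be approximated in Python: hash a flow's 5-tuple, then pick an uplink by that hash modulo the number of paths. Real switches use vendor-specific hardware hash functions, so CRC32 and the interface names here are illustrative only:

```python
import zlib

def ecmp_pick(paths, src_ip, dst_ip, proto, src_port, dst_port):
    """Pick an egress path by hashing the flow 5-tuple.

    The property that matters: every packet of one flow maps to the same
    path (no reordering within the flow), while different flows spread
    across all available equal-cost paths.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return paths[zlib.crc32(key) % len(paths)]

spines = ["Ethernet1/1", "Ethernet1/2"]  # two uplinks, as in the demo
# one flow always hashes to the same uplink...
a = ecmp_pick(spines, "10.10.10.6", "10.10.10.3", 17, 49152, 4789)
assert all(ecmp_pick(spines, "10.10.10.6", "10.10.10.3", 17, 49152, 4789) == a
           for _ in range(5))
```

This also explains a VXLAN detail from RFC 7348: the encapsulating VTEP varies the outer UDP source port per inner flow, giving the spine routers 5-tuple entropy to balance tunneled traffic even though the outer source and destination IPs (the two VTEPs) never change.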
🏭 Concluding Thoughts on Network Topologies and VXLAN in Data Centers
In conclusion, Barker recaps the spine-leaf model and its advantages for data center networks, emphasizing the quick communication between devices on different hosts. He contrasts the hierarchical model suitable for campus networks with the spine-leaf model preferred for data centers. Barker also highlights the benefits of leveraging VXLAN to create logical layer 2 networks across a layer 3 infrastructure, wrapping up the video with a reminder of the importance of these concepts for those studying for the Cisco CCNA certification.
Keywords
💡Spine Leaf Architecture
💡Data Center
💡Hosts
💡Virtual Machines (VMs)
💡Networking
💡Layer 2 and Layer 3 Switching
💡Fault Tolerance
💡VXLAN (Virtual Extensible Local Area Network)
💡VTEP (VXLAN Tunnel Endpoint)
💡Multi-Pathing
💡Equal Cost Multi-Pathing (ECMP)
Highlights
Introduction to spine-leaf architecture in data centers and its purpose.
Explanation of the typical setup of racks and hosts in a data center environment.
Description of virtual machines and their networking within and across physical hosts.
Illustration of the need for networking between virtual machines on different hosts.
Introduction of top-of-rack (ToR) switches and their role in providing connectivity.
Discussion on the importance of fault tolerance in switch deployment.
Explanation of multi-layer switching capabilities for both Layer 2 and Layer 3 forwarding.
Benefits of spine-leaf architecture, such as equal cost paths and fault tolerance.
Introduction to spine switches and their connectivity to ToR switches.
Concept of extending VLANs over a network using VXLAN (Virtual Extensible Local Area Network).
Mechanism of VXLAN tunneling to logically place devices in the same broadcast domain.
Demonstration of VXLAN traffic capture and analysis.
Explanation of how VXLAN encapsulation and de-encapsulation work in a data center.
Practical demonstration of load balancing across multiple spine switches.
Discussion on the impact of spine-leaf architecture on network design and topologies.
Comparison between hierarchical and spine-leaf models for different network environments.
Conclusion summarizing the importance of understanding spine-leaf architecture and VXLAN for network professionals.
Transcripts
[Music]
hello and welcome my name is keith
barker and in this video you and i get
to take a look at the spine leaf
architecture to identify first of all
what is it why do we need it or want it
and then third to take a look at it in
action and a spine leaf architecture is
very typical inside of a data center so
let's go ahead and draw some racks of
gear at a typical data center so let's
label these as rack number three and
rack number four and rack number five
and rack number six and let's imagine
inside those racks they've got some
amazing hosts let's go ahead and put
four of them in each of these racks
and then we'll go ahead and label those
as host one for the individual hosts
that are in this rack of gear and host
two and host three and host four and the
same for the other three racks here in
our little mock data center
and let's also imagine that each of
these servers these hosts are supporting
multiple virtual machines maybe a dozen
or more virtual machines on each host so
let's imagine that vm 11 is running on
host number one
and another vm let's call it vm
12 is running on host number two inside
of this one physical rack rack number
three now if these two vms were running
inside the same host they could have
logical virtualized networking to allow
them to connect to each other but if
this vm11 is on host one and this vm is
on host number two we need to have some
networking between them so on each of
the racks we could add one or more
switches that allows that connectivity
and so i'll add one of those switches to
the top of each of these four racks and
let me call this one switch 3.
i'll call this one switch four
and this one switch five
and this one's switch six and then each
of these physical servers would have
connectivity up to that switch very
likely more than just once for some
fault tolerance and some additional
throughput so i'll put some cables there
it's also very likely we'd have a couple
of these switches at the top and that
way if a single switch failed we
wouldn't have a single point of failure
so we draw the rest of the connectivity
in here
let's also imagine here over in rack 6
that we have vm 33
that's running on host 3 and vm 34
that's running on host 4. and very
similar to what was happening over here
in rack 3 here on rack 6 if vm33 needs
to communicate with this vm 34 because
they're on different hosts it would use
the networking provided between those
two hosts to facilitate that and we're
also gonna have some virtualized
networking on each of the hosts so here
on host three if vm 33 and let's say vm
23 were on the same exact host those
two vms using logical networking in that
host could communicate with each other
but anytime we have to go between two
physical servers
we're going to need some networking that
provides connectivity between those
servers so are you ready for our next
challenge here it is what if this vm
vm11 which is currently being hosted by
host number one here on rack three what
if it needs to communicate with vm33
over here which is running on host three
on rack six
once again we're gonna need some
physical connectivity in this example we
need some connectivity networking
connectivity from the servers and the
hosts here in rack 3 to the hosts over
here in rack 6. so let's add some
switches that can allow that
connectivity let's go ahead and put a
couple in here and let's call this
switch 1 and switch 2. and when i'm
using the term switch here i also want
to imply multi-layer switching there are
devices that can do layer 3 forwarding
based on ip addresses and layer 2
forwarding based on mac addresses
depending on how they've been deployed
so for the connectivity from switch 3 to
these higher layer switches let's go
ahead and use a red color and let's
represent that red is connections that
are using layer 3. so it'd be an ip
network associated with this segment
between switch three and switch one and
for fault tolerance we'd wanna go from
switch three to this guy too maybe yet
another layer three subnet and then from
switch four to switch one and switch
four to switch two and then five to one
and five to two and then
six i'll go over the top here six to one
and six to two and for these networks we
could use a slash 30 or slash 31.
sometimes that's allowed so it's not
going to tie up a big block of ip
addressing but i wanted to represent the
connectivity between these upper level
switches and these top of rack switches
that's going to be layer 3 connectivity
so here's the great news if host 1 needs
to forward traffic on behalf of his vm
11 over to vm33 that traffic from host 1
would go through this top of rack switch
and then it has two paths it could go
through switch one as a routing decision
and then with switch one's connectivity
over to rack six that traffic could be
forwarded down to host number three so
that'd be one path i'll go ahead and
draw that here there'll be one path and
another path would be going through
switch two and that would look like this
so as a result of this design here are
some of the benefits we have two equal
cost paths to get from this rack here
rack number three over to rack number six
so we have equal cost paths we could
use multi-pathing also we have some
fault tolerance if we lose one of these
switches at the top here switch one or
switch two the other one still is
providing connectivity and we still have
communications between these two racks
so let's go ahead and remove that path
for a moment and let's talk about some
interesting names they have for these
switches
for these switches that are placed at
the top of the rack and this is for
convenience they don't have to be at the
very top of the rack but they're often
referred to as t
o r or top of rack switches also this
layer right here these top of rack
switches are also making up what's known
as the leaf layer so if you take a look
at the spine and leaf architecture or
spine leaf design these top of rack
switches are literally the leaf portion
of our spine leaf design and for these
switches up here i'll go ahead and put a
little dividing line for these switches
up here these are considered to be the
spine switches that are providing
connectivity between the top of rack
switches so when somebody talks about a
spine leaf design or a spine leaf
architecture they're talking about
exactly this
and if we were to add another rack let's
go ahead and add another rack we'll call
this rack number seven this new rack
would also have a top of rack switch
we'll call it switch number seven and it
would have connectivity up to the spine
switches so in our case i just have two
spine switches here but if we had 10
spine switches we would have 10
connections from this top of rack switch
one for the connection to each of our
spines another thing to note is that the
spine switches themselves they don't
have cross connects between them going
horizontally also the top of rack
switches also do not have connectivity
between them however each spine switch
has a connection to each and every leaf
switch
and every leaf switch has a connection
to each and every spine switch and what
this facilitates is connectivity between
the hosts and any of these racks and the
host in any other racks on behalf of the
vms that may need to communicate with
each other so vm11 here can communicate
with vm33 maybe this guy's on the 10.3
subnet and over here this vm is on the
10.6 subnet and they have a layer 3
routed path they can use to reach each
other however what if we wanted to put
vm11 here which is over here in rack
three and vm 33 over here on rack six
what if we wanted to logically place
them in the same layer two network the
same broadcast domain the same vlan well
one of our challenges we don't have
trunks up here that are connecting all
the switches together but rather what we
have is layer 3 connectivity so how do
you extend a vlan over a network if you
have routers in the way that are doing
layer 3 forwarding and the answer is
using vxlans
and that's what i'd like to talk with
you about right now that can leverage
this kind of architecture with a spine
leaf design and allow devices over here
in one rack of gear to be the same
logical vlan as a vm that's in a
completely separate rack even though
they are not directly connected at layer
two so this represents the leaf layer of
our switches our top of rack switches
and this represents our spine layer
right here
unless we have full connectivity between
each spine device and all of the leafs
and each leaf has full connectivity to
all the spine switches so we can imagine
that each of these leaf switches
represents a separate rack and i've also
placed some devices on our topology that
we can play with and look at the results
and as a reminder all the connectivity
between the spine and the leaf it is all
layer three so there's no native
extensions of a vlan for example from
over here in rack three over here to
rack six but what we are gonna do
instead is we are gonna use
vxlan and that's an acronym for virtual
extensible
local area network so they took the
second character there for the x to make
the acronym vxlan effectively we can
choose to put devices even in separate
physical racks even though they're not
connected directly at layer 2 we can
logically place them in the same vlans
by using this concept called vxlan and
here's how we're going to pull it off
for these vxlans we're going to create
some identifiers for the vxlans these
are commonly referred to as vnis
i've also seen them as vn ids so let's
imagine we want to create an identifier
of six seven eight three which happens
to be my ccie number and let's imagine
that we want these two devices here on
this left hand side of our network we
want them to be a part of that vxlan of
6783 and we want these two devices over
here on the right hand side of our
topology also to be in that same vxlan
of 6783
so as far as the actual subnet address
that we're going to use with that let's
use a 10.9.0.0
with a 24-bit mask and let's imagine
that pc6 is at dot 6 and the server here
is at dot 106
and over here on the left hand side this
guy is at dot three and the server is at
dot one o three all of it in the ten
nine zero address space now first glance
you might think well how in
the world is that gonna work i've got
these devices on the 10.9.0 network over
here i've got these other devices on the
10.9.0 network over here how do i get the
traffic across these racks in the same
vlan and the secret and the answer to
that question is by doing tunneling
we're going to set up tunnels now these
tunnels can be manually set up and
statically configured also they can be
dynamically discovered there's lots of
different ways of doing that but at the
end of the day we're going to set up
tunneling between our leaf devices so
i'm going to put my tunnel in this color
right here i'm going to put a tunnel in
from leaf 3 over to leaf6 and i'll kind
of fill it in here
and one interesting thing about a tunnel
is that it has two end points
we're gonna have one end point over here
on leaf three and another end point over
here at leaf six and these end points
are referred to with vxlan as
vteps short for vxlan tunnel endpoint so
with an ipsec tunnel we're taking the
original payload encrypting it and then
we're putting it inside of a whole brand
new packet and then it shipped across
the network to the other peer who then
decrypts it and a very similar concept
works like that with vxlan except
instead of encapsulating for the benefit
of encryption we're encapsulating for
the benefit of forwarding traffic to
make these devices in this vxlan believe
they're on the same subnet and it's
accomplished by taking the original
payload from layer two up then
re-encapsulating that inside of a packet
which we're gonna forward over the
tunnel and i think a fantastic example
would be this let's imagine that this pc
right here does a ping to the address
over here on dot three which is
10.9.0.3
so that initial arp request is a
broadcast so originally the broadcast is
an arp request and the source mac
address would be pc6 mac address and the
destination would be the broadcast
address at layer two and here's what
leaf six will do it'll take that
original request like this including the
layer two header and it's going to
re-encapsulate it into a whole new
datagram that looks like this so this is
our payload it's going to have a vxlan
header that's going to identify the
vxlan it belongs to 6783 in this case
and then at layer 4 it's going to be
using udp and then layer 3 is going to
have the source address of switch 6
whatever the vtep is for switch six and
for the destination ip address it's
gonna be the vtep or the logical end of
the tunnel that switch three is
supporting so the destination is gonna
be
switch
three and then the layer two headers are
gonna be swapped out as that packet is
forwarded across the network so the
actual path of the traffic would be
going from this leaf to either switch
two or switch one who would then forward
it down to leaf three who would then
take a look at the header and say oh my
goodness i see what this is
de-encapsulate it and then simply forward
the original layer two frame down to
this vlan down here at which point pc3
would see it say oh there's an arp
request from pc6 it would
respond and then that traffic would once
again be re-encapsulated at leaf three
shipped logically over the tunnel where
leaf six would de-encapsulate that and
then forward the response down to pc6 so the
actual literal traffic is being routed
over the network however the logical
path is through the tunnel between the
two leaf switches so the benefit of
vxlan in the data center is that we can
logically place devices in the same
layer 2 broadcast domain even though
those devices may be separated by one or
more layer 3 routers on the path to get
there again it's all done through the
tunneling mechanisms that vxlan uses so
i thought to myself wow it'd be really
cool if we could like you know take a
look at the packets involved when vxlan
is in use and i just so happen to have a
topology it's the one we're looking at
that i'd like to go ahead and demo for
you right now and here's the lay of the
land regarding ip addressing for the
tunnel endpoints here on leaf three i'm
going to be using 10.10.10.3
as the tunnel endpoint here on leaf three
and the other end of that tunnel for
this demonstration on leaf six is going
to be 10.10.10.6.
so if we looked at the routing table
from leaf6's perspective regarding how
to reach the other end of the tunnel
it's going to have two paths one that
goes this way one goes that way and as a
result of it having multiple paths it
can use equal cost multipathing which is
fantastic and that way we can send some
traffic this way and some traffic that
way and get more throughput as we're
forwarding traffic over the tunnel on
behalf of our hosts and vms so here from
the perspective of leaf six if we do a
show nve and a question mark and let's
go ahead and take a look at peers
so here showing that we have a peer at
10.10.10.3 that's the leaf 3 switch
and if we do a show nve peers and detail
here it shows us that the peer state is
up it's also specifying that the virtual
network identifier that it's supporting
is 6783 and that is the same virtual
network identifier that we're using so
we did a show ip route for that specific
route of 10.10.10.3
this will help us confirm that we have
multiple paths to get there one going
out ethernet one slash one the other
going out ethernet one slash two also
let's do a show run and let me share
with you a few tidbits from this
configuration so currently i have vlan 9
that's the local vlan on this leaf
switch and i've associated with it the
vn segment the virtual network segment
of that vxlan id of 6783 so the play by
play is if we have the switch we have a
device connected to let's say port 1 7
which we do right here if we assign that
port as an access port in vlan 9 based
on this configuration that client is
also logically going to be in the vxlan
6783 so if the client sends in a
broadcast like an arp request this
switch will take that re-encapsulate it
forward it over to the peer who will
de-encapsulate it and then forward it
down to the other devices that it has
locally which are also associated with
the vxlan of 6783 also just to confirm
let's do a show interface status
and let me scooch this over just a
little bit here showing us that ethernet
1/1 and 1/2 these interfaces here are layer
3 connections up to the spine layer also
ports 6 and 7 which i have configured
right here
they are configured as access ports in
vlan9 which based on our configuration
is also associated with a virtual
network id of 6783 and switch 3 has
similar treatment for ports 6 and 7 over
here on its side as well so here's what
i like to do before we send traffic from
pc6 or server 6 over here to server 3 or
pc3 let's do some captures on e1 slash 1
and e1 slash 2 so we can actually see the vxlan
traffic and the re-encapsulation that's
done as part of the tunneling
so i'll go ahead and start those
captures so we'll capture one slash one
and click ok we should see some ospf
hello messages and other stuff that's
going on great great great it's working
all right let me go ahead and make that
a little bit smaller for the moment and
let's also capture on e one slash two so
go ahead and right click again click on
capture we'll specify e one slash two
and we'll click on okay all right make
that also a little bit smaller
and move it over here a little bit to
the right just to make sure there's some
activity on it good good good so we can
bring out the pc or the server let's go
and bring up the server
all right so this represents the server
we'll just double check its ip address
real quick by bringing up a command line
and let's do ifconfig for ethernet 0 and
sure enough
10.9.0.106. this is hanging off of
switch 6.
so if we are going to do a ping over to
let's go ahead and ping server 3 on the
left and he is at 10.9.0.103
and press enter
i'm always a little shocked when it
works so well the first time let's do a
ctrl c there so what's happening behind
the scenes is that this switch right
here switch 6 is taking those requests
wrapping them up shipping them over the
tunnel and then switch three is
de-encapsulating them forwarding them
down and then for the replies back the
reverse process happens also while the
captures are still running let's do a
couple more things let's go ahead and do
an ssh session let's do an ssh over to
10.9.0.103
and press enter okay it's asking me if i
don't trust the fingerprint i'll say yes
and what is the password over there is
it this let's see here um is it that
nope is it that
nope okay really doesn't matter too much
i just want to make sure we can capture
some traffic going back and forth
between this server here on the right
and this server over here in the same
vxlan but yet in a separate part of the
network so let's go ahead and stop our
captures and we'll take a look all right
we could probably grab either one and
get some traffic let's go ahead and
start with e1 slash 2 that interface and take
a look and let me make the font a little
bit bigger so let's start with a ping
and let's go ahead and do a filter
display filter looking for just icmp
traffic and here we go right here so
here we have traffic from 10.9.0.106
that was the server on the right pinging
10.9.0.103
that's the server on the left but check
it out because we captured the traffic
as it was going over the logical tunnel
look what it did so let's take a look at
what happened here so the ethernet frame
is being sent and this would be coming
from
the mac address associated with the
ethernet one slash two interface on
switch six going to the layer two
address of the next hop router so that's
the outermost ethernet header and inside
of the ethernet header is saying the
next protocol is hexadecimal 800 which
is ipv4 so here in the ipv4 header the
source is the tunnel endpoint address
for switch 6 and the destination is the
tunnel endpoint address for switch 3.
and then the header it says hey the next
protocol is udp and so here's the udp
header that was being used for the
re-encapsulation of this tunnel traffic
and then after the udp header then we
have a vxlan header right here and here
it's identifying the vxlan network
identifier the 6783 and that way when
the receiving switch sees it says okay
great it can de-encapsulate it and then
forward it appropriately to the devices
in the vlans associated with that vxlan
So once switch 3 receives this, it's going to strip off all this information, and what it's going to forward is the original Layer 2 header, with the source MAC address of server 6 on the right-hand side and the destination MAC address of server 3 on the left-hand side. Then this Ethernet header points to the next protocol being IPv4, and the payload for this packet was an ICMP echo request, which is right here. So from the servers' perspective, they don't know that all this encapsulation and de-encapsulation happened. All they think is, hey, we're two devices on the same subnet, the 10.9.0 subnet, and it feels to these devices like we're right next to each other. Now here it's saying the response wasn't found, but it's very possible the response came back on the other path, because there's two Layer 3 paths provided by the spine, so the response could have come back over the other interface. All right, so let's go take a look at SSH. We
also did an SSH session, or we started one, so let's do a display filter for that. I'm going to right-click, go to Follow, and say let's follow the TCP stream; that's going to close that dialog and give us just a filter for that session. So if we pick one of these as being initiated by the server at .106, let's grab this one right here. Everything in these first five entries is based on the encapsulation and sending of the traffic through the tunnel, so all of this is for the benefit of the two endpoints of the tunnel. Then when the receiver, switch 3, gets it, it would de-encapsulate it and simply forward the Layer 2 header, which from the MAC addresses looks like it's sourced from the MAC address associated with server 6, going to the MAC address associated with server 3. Then the next protocol is IP, then Layer 4 is TCP, and then we have the payload of the original request, which was an SSH message. So is it possible to have both a TCP header and a UDP header? The answer is yes: here we have the UDP header that was part of the encapsulation and the tunnel traffic, and after that got stripped off, the actual protocol being used by the client was Layer 4 TCP in association with SSH. So let me go
ahead and close those captures and one
last little test i'd like to do is just
to validate that we're actually doing
load balancing across the two interfaces
as traffic goes from server six over to
server three so here on leaf six we do a
show iprout to the other end of the
tunnel which is 10.10.10.3
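As a sketch of the commands used in this part of the demo (NX-OS syntax assumed, as on a Nexus switch; the quoted string is the counter line we filter on):

```
leaf6# show ip route 10.10.10.3
leaf6# show interface ethernet 1/1 | include "30 seconds output rate"
leaf6# show interface ethernet 1/2 | include "30 seconds output rate"
```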
We have two equal-cost paths. Now let's get a little bit creative: let's do a show interface ethernet 1/1 and pick out a few elements that might be relevant, so we can filter them out with a pipe. How about this: let's filter on the 30-second output rate, right there. I'm going to copy that to my buffer, do a show for that interface, add a pipe, put in an opening quote, paste that in, end quote, and press Enter. And it says, Keith, you might want to put "include" there. I thought it didn't need that, but all right, sounds good. Then we'll do it again, also for ethernet 1/2: show interface ethernet 1/2, looking for just the output that includes that line, press Enter, and we'll also do it for 1/1. Now we can go back and forth and actually see the amount of traffic over the last 30 seconds that's currently being used. So
currently it's about 80 bits per second, not very taxing. So let's bring up our server and do a ping; we'll say the size is 1000 bytes, and we'll ping the target at 10.9.0.103. That should put some load on, and it'll be a continuous ping, so we can look at the output here and see approximately whether it's doing any load balancing. I'll let that run in the background and give it a moment, and let's look at ethernet 1/1: now it's at 2,000 bits per second, and 1/2 is at about 1,168. We'll give it a few more moments and then test again. So both are being used; it's not exactly even, but if we had more clients it's very likely to be more equally spread out. There we go, that's 3,100 and 3,300. Amazing. We'll do it again, and now it's 4,700 and 3,500.
There's 5,200 and 5,500. I also want to verify by stopping my ping: I'm going to do a Ctrl+C to stop the traffic, then give it about 15 to 20 seconds and look at it again, and those numbers should be dropping. So it's been about 30 seconds, and I'm going to look at both of them again: we have 480 bits per second now on 1/1, and on 1/2 we have about 96. Let's just verify that real quick. There we go, they're both coming down.
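The sharing we just observed comes from equal-cost multipath (ECMP): the switch hashes each flow and pins it to one uplink. Real hardware uses a vendor-specific hash, so this is only a toy sketch of the idea:

```python
# Toy sketch of ECMP path selection. Assumption: real switches use a
# proprietary hardware hash over header fields; here we just show that
# a flow's 5-tuple deterministically selects one of the equal-cost paths.
import hashlib

def pick_path(src_ip, dst_ip, proto, src_port, dst_port, num_paths=2):
    """Hash the 5-tuple and map it onto one of num_paths uplinks."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# The same flow always takes the same path (so packets stay in order)...
path_a = pick_path("10.10.10.6", "10.10.10.3", 17, 49152, 4789)
assert path_a == pick_path("10.10.10.6", "10.10.10.3", 17, 49152, 4789)
# ...while different flows can land on different uplinks, which is why
# adding more clients tends to even out the two interface rate counters.
```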
So let's recap. Up here we have switches at the spine layer, and there's full connectivity between each switch at the spine layer and each device at the leaf layer, right here. Also, there are no cross connects: we don't have cross connections between switch 1 and switch 2 at the spine layer, no no no, and we also don't have cross connects horizontally between any of the leaf switches. Everything goes through the spine. We also took a look at the concept of tunneling and using VXLANs, which gives us the ability to take any devices we want, pretty much anywhere in the data center, and logically place them on the same subnet. That's done by tunneling the traffic between the switches, so the actual packets are being routed over the spine, but the logical tunnel goes from one switch over to the other switch. So when considering network designs and network topologies in light of the Cisco CCNA, I'd like you to remember two major families. One is the hierarchical model, the three-tier model (we had a separate video just on that), with the access layer, distribution layer, and core; or, if you want to smash the distribution and core together as a collapsed core, that would be the two-tier hierarchical model. That's really great for something like a campus network. If, however, we have a data center where devices need to communicate very quickly back and forth, even if they're on different hosts, we'd very likely use a spine-leaf model as we've just discussed in this video, and, if you want to, leverage VXLANs on top of that as well. So thanks for joining me in this video, and I'll see you, my friend, in the next live event or next video. Until then, be happy and treat everybody well. I'll see you next time.