Precision Time Protocol Profile for Data Center Applications & Related Network Requirements

Open Compute Project
18 Nov 2021, 22:42

Summary

TL;DR: The talk introduces the PTP (Precision Time Protocol) profile for data centers, aimed at improving time synchronization services. Michel (Meta) and Thomas (NVIDIA) of the OCP Time Appliances Project (TAP) discuss the profile's development, its objectives, and its applications in enhancing distributed databases, network monitoring, and 5G synchronization. They also cover hardware advancements in time synchronization and the profile's technical specifications.

Takeaways

  • 😀 The presentation discusses the PTP (Precision Time Protocol) profile, which is a set of standards and options tailored for data centers.
  • 🔍 Michel from Meta and Thomas from NVIDIA led the effort in developing the PTP profile for data centers, working with many others in the community.
  • 📈 The PTP profile aims to define how to integrate various time synchronization technologies to improve data center applications.
  • 🕒 The core objective of the PTP profile is to enhance time synchronization service in data centers, moving from millisecond precision to microsecond precision with high reliability.
  • 📚 The PTP profile document, developed within the OCP Time Appliances Project (TAP), explains various options and requirements for implementing PTP in data centers, including network topology, time error performance, and clock types.
  • 🔄 The profile addresses applications like distributed database systems, network monitoring, and 5G synchronization, aiming to increase transaction throughput and provide reliable air interface synchronization.
  • 🌐 The PTP profile specifies the use of hardware timestamping to achieve sub-10 nanosecond accuracy and resolution, moving away from the earlier software timestamping approach.
  • 🔄 The profile includes a reference model that decomposes the problem into three layers: time reference, network fabric, and server layers, each with specific roles in time synchronization.
  • 🔄 The profile defines a time error requirement of plus or minus five microseconds between any two servers within a data center, ensuring high precision in time synchronization.
  • 🔄 The PTP profile for data centers initially uses a model with only transparent clocks, but future work will explore a model with boundary clocks for more flexibility.

Q & A

  • What is the primary purpose of the PTP profile?

    -The primary purpose of the PTP (Precision Time Protocol) profile is to define a set of standards and options that can be tailored to meet the needs of data centers, improving time synchronization services and enabling new applications.

  • Who were the key contributors to the PTP profile for data centers?

    -Michel from Meta and Thomas from NVIDIA were the leading figures in the development of the PTP profile for data centers, along with contributions from other people and companies.

  • What is the main objective of the OCP TAP in relation to time synchronization?

    -The main objective of the OCP TAP (Open Compute Project Time Appliances Project) in relation to time synchronization is to define a high-level time synchronization service across data center infrastructure, aiming to improve current applications or enable new ones.

  • What performance improvement is targeted by the PTP profile in data centers?

    -The PTP profile aims to provide two to three orders of magnitude better performance in time synchronization than the network timing protocols currently used in data centers, moving from millisecond to microsecond precision with high reliability.

  • What are some applications that have been discussed within the PTP project group?

    -Applications discussed within the PTP project group include distributed database systems, network monitoring, and 5G synchronization. These applications aim to increase transaction throughput, measure network events more precisely, and provide reliable air interface synchronization.

  • How does the PTP profile help in maintaining the order of transactions in distributed systems?

    -The PTP profile helps in maintaining the order of transactions by ensuring that any committed timestamp is always in the past relative to a reference clock, minimizing clock skew and thus improving the performance of distributed systems.
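
    The "committed timestamp always in the past" guarantee can be made concrete with a small sketch. This is an illustrative, hypothetical example (the function names and the ±2.5 µs per-clock bound, borrowed from the profile's error budget, are our assumptions), not code from the profile itself:

    ```python
    # Hypothetical sketch: ordering reads and writes under bounded clock error.
    # Assumption: every server's clock is within +/-EPS_US of the common
    # reference, so any two clocks differ by at most 2 * EPS_US.

    EPS_US = 2.5  # assumed per-clock error bound, microseconds

    def commit_wait_us(eps_us: float = EPS_US) -> float:
        """How long a writer should wait after picking a commit timestamp
        before acknowledging, so that timestamp is in the past on every clock."""
        return 2 * eps_us

    def commit_is_visible(commit_ts_us: float, reader_now_us: float,
                          eps_us: float = EPS_US) -> bool:
        """A reader can trust data whose commit timestamp is older than its own
        clock minus the worst-case skew between any two clocks."""
        return commit_ts_us <= reader_now_us - 2 * eps_us
    ```

    With the profile's ±2.5 µs budget the wait is only 5 µs; with millisecond-class clock error the same guarantee would cost milliseconds per transaction, which is the throughput argument made in the talk.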

  • What is the significance of hardware timestamping in PTP implementations?

    -Hardware timestamping is significant in PTP implementations as it allows for more accurate time stamping closer to the hardware layer, reducing the impact of system noise, latency, and other factors associated with software timestamping. This leads to better accuracy and resolution in time synchronization.
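
    The timestamps that hardware captures feed the standard IEEE 1588 delay request-response computation. A minimal sketch of that math (nanosecond integers, assuming a symmetric path):

    ```python
    def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[int, int]:
        """IEEE 1588 delay request-response math, in nanoseconds.
        t1: master sends Sync      t2: slave receives Sync
        t3: slave sends Delay_Req  t4: master receives Delay_Req
        Assumes a symmetric path; any asymmetry appears directly as offset error."""
        offset_from_master = ((t2 - t1) - (t4 - t3)) // 2
        mean_path_delay = ((t2 - t1) + (t4 - t3)) // 2
        return offset_from_master, mean_path_delay

    # Example: slave running 1000 ns ahead of the master over a 500 ns path.
    # Sync sent at t1=0 arrives at t2=1500 (slave time); Delay_Req sent at
    # t3=2000 (slave time) arrives at t4=1500 (master time).
    ```

    Hardware timestamping matters because t1 through t4 are taken at the PHY instead of in the kernel, so scheduling jitter never enters these equations.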

  • What are the two models planned for the data center profile in PTP?

    -The two models planned for the data center profile are Model 1, which uses only transparent clocks and relies on network routing for failure recovery, and Model 2, in which each switch runs a boundary clock that terminates and regenerates PTP messages hop by hop.
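
    Model 1's transparent clocks work by accumulating each message's residence time into its correctionField. A simplified sketch (plain nanoseconds for clarity; the on-wire field is actually scaled, nanoseconds × 2^16):

    ```python
    def add_residence_time(correction_ns: int, ingress_ns: int, egress_ns: int) -> int:
        """An end-to-end transparent clock measures how long a Sync message sat
        inside the switch (using its own free-running clock) and adds that
        residence time to the message's correctionField on the way out."""
        return correction_ns + (egress_ns - ingress_ns)

    # A Sync message crossing three transparent-clock switches:
    correction = 0
    for ingress, egress in [(100, 340), (900, 1020), (1500, 1750)]:
        correction = add_residence_time(correction, ingress, egress)
    # The slave subtracts the accumulated correction (240 + 120 + 250 ns here)
    # so queuing delay inside the switches does not pollute its offset estimate.
    ```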

  • What is the time error requirement defined in the PTP profile for data centers?

    -The time error requirement defined in the PTP profile for data centers is that the difference between any two servers' PTP clocks within a data center should be within plus or minus five microseconds.
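
    The ±2.5 µs-per-clock budget described in the talk implies the ±5 µs pairwise requirement. A hypothetical helper (names are ours) to check measured offsets against that budget:

    ```python
    def max_pairwise_error_us(offsets_us: list[float]) -> float:
        """Worst-case difference between any two servers' clocks, given each
        server's measured offset from the common reference (e.g. GNSS)."""
        return max(offsets_us) - min(offsets_us)

    def meets_profile(offsets_us: list[float], budget_us: float = 5.0) -> bool:
        # Keeping every clock within +/- budget/2 of the reference guarantees
        # this, but the pairwise spread is the actual stated requirement.
        return max_pairwise_error_us(offsets_us) <= budget_us
    ```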

  • What is the call to action for the PTP profile development in data centers?

    -The call to action is to invite people to join the work stream number two to develop the second version of the profile, focusing on the use of boundary clocks, security aspects such as authentication and verification of PTP messages, and load balancing of the PTP unicast sessions.

Outlines

00:00

🗓️ Introduction to PTP Profile for Data Centers

The speaker introduces the Precision Time Protocol (PTP) profile, a set of standards and options tailored for data centers. Initially there was no profile specific to data centers, so Michel from Meta and Thomas from NVIDIA led the effort to create one. The PTP profile document outlines the various options for time synchronization and explains what should be avoided. The presentation provides a walk-through of this profile, highlighting its importance in defining time synchronization services for data centers.

05:01

🔍 Applications and Objectives of PTP Profile

This paragraph discusses the applications and objectives of the PTP profile in data centers. The main goal is to improve time synchronization services to enhance current applications or enable new ones. Examples include distributed database systems, network monitoring, and 5G synchronization. The PTP profile aims to provide two to three orders of magnitude better performance than current network timing protocols, moving from millisecond to microsecond precision with high reliability. The speaker also mentions the various work streams and the PTP profile specification.

10:01

🌐 PTP Profile Development and Industry Adoption

The speaker highlights the advancements in time synchronization distribution and how PTP has been adopted across multiple industries. Each industry has developed a PTP profile specification that defines the capabilities required for their specific use cases. However, the data center industry was previously missing a PTP profile, which has now been developed within the OCP Time Appliances Project (TAP). The new data center PTP profile is a comprehensive document covering network topology, time error performance, clock types, communication modes, and more. It was released in September and is available for review.

15:02

📈 Time Error Requirements and Reference Model

This paragraph focuses on the time error requirements and the reference model for the PTP profile. The goal is to ensure that the difference between any two servers' PTP clocks within a data center is within plus or minus five microseconds. The speaker explains that each PTP clock in the servers must be within plus or minus 2.5 microseconds of a common reference, such as GPS or GNSS. The reference model is also discussed, which includes the time reference layer, network fabric layer, and server layer, detailing how time is recovered and passed on to applications.

20:03

🛠️ Hardware and Software Timestamping in PTP

The speaker discusses the evolution of hardware and software timestamping in PTP. Initially, hardware was not capable of capturing timestamps, leading to software timestamping, which had drawbacks due to system noise and latency. Modern implementations now use hardware timestamping, which provides sub-10-nanosecond accuracy and resolution. The speaker also covers the transition from two-step to one-step clock mechanisms, explaining how hardware timestamping allows for more accurate and reliable time synchronization.

🔄 PTP Profile Models and Future Developments

The final paragraph covers the current PTP profile models and future developments. Model 1, which uses only transparent clocks, is currently in use and relies on network routing for failure recovery. The next phase involves exploring Model 2, in which each switch runs a boundary clock and processes PTP messages hop by hop. The speaker also mentions the need for failure mechanisms and the upcoming work on security aspects, authentication, and load balancing of PTP unicast sessions. A call to action is made for participation in the development of the second version of the profile.

Mindmap

Keywords

💡PTP Profile

The PTP (Precision Time Protocol) Profile refers to a set of standards and options tailored for specific applications or environments. In the context of this video, it is a document that explains how to configure PTP for data centers. The PTP Profile is crucial as it defines the options and standards to achieve high-precision time synchronization, which is essential for various applications in data centers.

💡Data Centers

Data centers are facilities that house computer systems and associated components, such as servers, storage devices, and networking equipment. In this video, data centers are the focus as the PTP Profile is being discussed in relation to their needs. The goal is to improve time synchronization within data centers to enhance performance and enable new applications.

💡Time Synchronization

Time synchronization is the process of ensuring that all devices in a network have the same time. This is critical in data centers for coordinating operations and ensuring the reliability of applications. The video discusses how the PTP Profile aims to improve time synchronization to the microsecond level, which is a significant improvement over current standards.

💡OCP TAP

OCP TAP (Open Compute Project Time Appliances Project) is the group within the Open Compute Project that develops open hardware and software for time synchronization, such as the Time Card and the Open Time Server. In the video, Michel and Thomas are leading the effort within OCP TAP to develop the PTP Profile for data centers, indicating the importance of this standard to the industry.

💡Distributed Database Systems

Distributed database systems are databases that are spread across multiple sites, servers, or locations. The video mentions that one of the applications where improved time synchronization is beneficial is in distributed database systems. Better time synchronization can increase the throughput of transactions by ensuring that all parts of the system are in sync.

💡Network Monitoring

Network monitoring involves measuring and analyzing network traffic and performance. The video discusses how precise one-way delay measurements, enabled by better time synchronization, can improve network monitoring. This can provide better visibility into network events and help in troubleshooting and performance optimization.
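
One-way delay is the receiver's timestamp minus the sender's, taken on two different clocks, so the measurement is only as good as the synchronization between them. A hedged sketch (function name is ours) that reports the delay as an interval bounded by the clock error:

```python
def one_way_delay_ns(tx_ns: int, rx_ns: int, sync_error_ns: int) -> tuple[int, int]:
    """One-way delay of a packet timestamped at tx_ns by the sender and at
    rx_ns by the receiver, on two PTP-synchronized clocks. The result is only
    trustworthy down to the synchronization error, so return an interval
    rather than a single number."""
    owd = rx_ns - tx_ns
    return owd - sync_error_ns, owd + sync_error_ns
```

With the profile's ±5 µs pairwise bound, a measured 10 µs delay is really somewhere in [5 µs, 15 µs]; tighter synchronization narrows the interval, which is the "better visibility" argument made in the talk.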

💡5G

5G is the fifth generation of mobile networks, promising faster speeds and lower latency. In the context of this video, 5G is mentioned as an application where reliable air interface synchronization is crucial. The PTP Profile can play a role in ensuring that 5G networks maintain precise timing, which is essential for their operation.

💡Hardware Timestamping

Hardware timestamping refers to the process of capturing time stamps at the hardware level, rather than in software. This method is more accurate and less susceptible to system noise and latency. The video explains that modern implementations of PTP use hardware timestamping to achieve sub-10 nanosecond accuracy and resolution.

💡One-Step Clock

A one-step clock is a mechanism in PTP where the timestamp is written directly into the Sync message as it leaves the port, rather than being carried in a separate Follow_Up message as in the two-step mechanism. The video discusses the advantages of one-step clocks, such as avoiding out-of-order sequences between a Sync and its Follow_Up and ensuring that each message carries its own timestamp, which is crucial for maintaining accurate time synchronization.

💡Boundary Clock

A boundary clock is a multi-port PTP clock that acts as a slave on the port facing its time source and as a master on the ports facing downstream devices. The video mentions a future model for the data center profile in which each switch runs a boundary clock and processes PTP messages hop by hop. This approach allows for more flexibility and can improve the reliability of time synchronization.

💡Best Master Clock Algorithm (BMCA)

The Best Master Clock Algorithm is used in PTP networks to determine the best source of time. In the context of the video, BMCA is mentioned in relation to the use of boundary clocks, where it helps decide which path to use for time synchronization. This algorithm is crucial for maintaining accurate and reliable time distribution in complex networks.
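
The full BMCA in IEEE 1588 also accounts for steps-removed and topology tiebreaks, but its core is a lexicographic comparison of announced clock attributes. A deliberately simplified sketch (the field set and ordering follow the standard's dataset comparison; the class itself is hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Announce:
    """Attributes a candidate master advertises in Announce messages.
    Field order mirrors the comparison order; lower values win."""
    priority1: int
    clock_class: int       # e.g. 6 for a GNSS-locked clock, 248 for default
    clock_accuracy: int    # encoded enum; lower is better
    variance: int
    priority2: int
    clock_identity: bytes  # final tiebreak: lowest identity wins

    def key(self):
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates: list[Announce]) -> Announce:
    # Lexicographic comparison: the first differing field decides.
    return min(candidates, key=Announce.key)
```

A GNSS-locked candidate (clock_class 6) would beat one in holdover or free-run regardless of later fields, which is how boundary-clock networks steer around a failed time source.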

Highlights

Introduction to the PTP profile for data centers, highlighting its importance in tailoring standards and options for time synchronization.

Michel (Meta) and Thomas (NVIDIA) leading the effort within OCP TAP to define the PTP profile for data centers.

The PTP profile document explains various options for time synchronization and what should be avoided.

Objective of the PTP profile is to define how to put together time synchronization components for data centers.

OCP TAB's focus on defining a high-level time synchronization service to improve data center applications.

Precision Time Protocol (PTP) is chosen for its potential to provide two to three orders of magnitude better performance than current network timing protocols.

Discussion on applications like distributed database systems, network monitoring, and 5G synchronization as use cases for PTP.

Example given to illustrate the importance of accurate timestamps in transaction ordering and real-time operations.

Advancements in time and synchronization distribution, including silicon support for PTP and PTP-aware switches and routers.

Different industries have developed PTP profile specifications tailored to their specific needs.

Data centers had been missing a PTP profile, which OCP TAP has addressed by developing a PTP profile for data center applications.

Reference model for PTP in data centers includes time reference layer, network fabric layer, and server layer.

Time error requirement defined as plus or minus five microseconds for any two servers within a data center.

Hardware timestamping versus software timestamping discussed, with hardware providing sub-10-nanosecond accuracy and resolution.

One-step versus two-step clock mechanisms explained, with modern implementations favoring one-step for accuracy.

Introduction of Model 1, using only transparent clocks, for the data center profile, and the upcoming Model 2 with boundary clocks.

Failure-recovery mechanisms in the data center profile: IGP rerouting in the transparent-clock model and the Best Master Clock Algorithm in the boundary-clock model.

Call to action for participation in the development of the second version of the profile, focusing on boundary clocks, security, and load balancing.

Transcripts

play00:04

ready

play00:05

okay so uh next talk is gonna be remote

play00:09

so i will just here like stand here to

play00:12

change the slides

play00:13

uh it's about the ptp profile which

play00:17

basically

play00:18

when it comes to ptp and profile is

play00:19

basically a set of standards set of

play00:22

options that you can take

play00:24

and when we started

play00:27

tap there was no

play00:29

profile pdb profile like tailored for

play00:31

data centers

play00:33

so uh

play00:34

michelle from mera and uh thomas from

play00:38

nvidia they worked plus a lot of

play00:42

other people here but

play00:43

you see these two names like uh they

play00:46

were leading the effort

play00:47

on uh various uh

play00:50

let's say a document the outcome of this

play00:52

was a document that basically explains

play00:55

various options that you can take and

play00:58

tailor it to your needs and also

play01:00

explains like

play01:02

what you shouldn't do perhaps so with no

play01:06

further ado let's

play01:08

get it started so do we have

play01:10

michelle or thomas online

play01:15

yes i am yeah hey michelle thank you

play01:18

okay so let's do it michelle

play01:21

all right uh

play01:23

very good um if we can go to the slide

play01:26

please yeah we are at the agenda slide

play01:31

okay super okay thank you yeah uh thanks

play01:33

a man yeah so um

play01:35

my name is uh michelle from uh i'm here

play01:39

with my colleague thomas to

play01:42

basically give you a quick short walk

play01:44

through of the

play01:47

first ptp profile

play01:49

for the data center

play01:52

industry or community that you know

play01:54

we've been working on within oct tab in

play01:58

the past year we've made a lot of great

play02:00

progress

play02:01

um and this is what we want to share you

play02:03

know with all of you

play02:05

um a lot of things were mentioned in the

play02:07

previous stocks for instance i heard you

play02:10

know

play02:11

gnss

play02:13

open time server time card unicast

play02:16

reliability and all that

play02:18

and the ptp profile like ahmad was

play02:20

saying in the introduction

play02:22

is really a no more than a document that

play02:25

basically defines how you put all of

play02:27

this together

play02:29

okay so this is really uh you know the

play02:31

core of the objective the ptp profile is

play02:34

to define

play02:35

you know how we put all of that together

play02:37

uh could we uh go to the next slide

play02:39

please i'm on

play02:44

so

play02:44

um when you look at the work you know

play02:47

that is being accomplished within the

play02:49

ocd tab one of the main objectives

play02:53

is to define you know very high level

play02:56

this aspect of a time synchronization

play02:58

service

play02:59

across you know the infrastructure of

play03:02

the data center to basically

play03:05

improve either a set of current

play03:07

applications or enable

play03:10

you know a new set of applications and

play03:12

we'll hear you know about some of these

play03:14

applications a little bit uh you know

play03:17

later this morning

play03:18

um

play03:19

right now uh you know within the ocd tab

play03:22

we've converged on using the precision

play03:24

timing protocol

play03:26

you know with with a high level

play03:27

objective to provide two to three orders

play03:30

of magnitude better performance

play03:33

you know when you compare to you know

play03:35

current network timing protocol

play03:37

infrastructure that is used in uh data

play03:40

centers uh

play03:41

you know uh today

play03:43

essentially we want to move from you

play03:46

know milliseconds

play03:48

right which is still a you know a small

play03:50

number in terms of time

play03:51

down to the micro session microseconds

play03:54

you know precision with a fairly high

play03:56

level

play03:57

you know amount of uh reliability

play04:00

and you know to to realize that um we've

play04:03

been working on multiple work streams

play04:05

that were uh presented the

play04:07

uh previously

play04:09

um and one of them relates to the ptp

play04:12

profile specification and this is what

play04:13

we'll introduce uh this morning myself

play04:16

and thomas

play04:18

we cannot see the slides

play04:21

yeah thank you all right

play04:23

so

play04:24

in slide three

play04:26

several applications you know in the

play04:28

past year have been discussed within uh

play04:30

you know uh

play04:32

the tapa project group

play04:34

through various presentations you know

play04:36

given by community members you'll find a

play04:38

link there with you know all of the

play04:40

recorded

play04:41

talks

play04:42

um very high level for instance

play04:45

there were a couple of talks on what we

play04:48

call distributed database systems where

play04:51

the objective there is to increase the

play04:52

throughput you know of transactions via

play04:56

the use of better clocks

play04:58

the second type of application that was

play05:00

discussed is relates to network

play05:03

monitoring which basically the objective

play05:05

there is to

play05:07

try and you know measure network events

play05:10

using what we call one-way delay

play05:12

measurements so

play05:14

if you make your one-way delay

play05:16

measurements more precise

play05:18

then you know in theory that should

play05:21

you know give you better visibility to

play05:23

what is happening you know with the

play05:24

events you know in the network

play05:27

uh there were a few talks also on 5g

play05:30

where the objective there um and this is

play05:33

a well-known you know use case to

play05:34

provide reliable

play05:36

you know air air interface uh

play05:39

synchronization and we'll hear a lot

play05:41

more on this subject in the telco uh

play05:44

stream this afternoon there are a couple

play05:46

of talks on that specific uh subjects

play05:50

um next slide please

play05:55

so let's take a a very high level

play05:57

example for instance to put you know

play05:59

this into uh frankincense context

play06:02

let's say you've got an application a

play06:05

that basically issues a write command

play06:08

you know to a client here called c1

play06:11

basically that client when it receives

play06:13

that request it chooses a time stamp

play06:16

let's say t1 that timestamp might be in

play06:18

the future and then it executes you know

play06:21

that rights to a set of replicas

play06:25

when all of this is completed and you

play06:27

know the right operation has been

play06:28

acknowledged

play06:29

the same application might for instance

play06:31

design you know to do a read but it does

play06:34

so in this example for instance through

play06:37

a different client

play06:39

uh that is called the c2

play06:42

so again c2 chooses a read time stamp

play06:44

that time stamp might be for instance in

play06:47

the future also that time step is you

play06:49

know time t2

play06:50

and it basically reads you know the

play06:53

object from you know one of the replicas

play06:55

in this example here just show uh

play06:58

through r3

play07:00

so here's basically you know a bit the

play07:04

dilemma

play07:05

if this timestamp t2 is greater than t1

play07:09

then the client c2 is going to be

play07:11

reading valid data right

play07:14

i mean it might take a bit of time to

play07:16

read the data

play07:17

right between the difference between

play07:18

where t2 is in comparison to t1

play07:22

but if it's in the future it's going to

play07:23

read you know the proper uh valid data

play07:26

but if t2 for instance is smaller than

play07:29

t1

play07:30

because of things like you know a clock

play07:32

skew

play07:34

the application will basically see

play07:36

um stale you know data

play07:39

even though the right you know what that

play07:42

was done by client c1 completed before

play07:45

you know uh

play07:47

the the read operation began

play07:50

you know when you look at things you

play07:51

know from an ordering or real-time

play07:53

ordering of operations uh or

play07:55

transactions

play07:57

so essentially what you want to make

play07:58

sure here in this example is that any

play08:01

committed timestamp

play08:04

is always in the past relative to a

play08:07

reference block

play08:09

and in some of those database systems

play08:11

maintaining the ordering of these

play08:13

transactions is very important but also

play08:16

making sure that you have very you know

play08:18

the smallest clock skew possible

play08:21

are basically drivers to increase you

play08:23

know the performance of these type of

play08:25

systems

play08:28

next slide please alan

play08:35

yeah thank you yeah so

play08:38

um there's been you know a lot of a

play08:40

significant advancement uh in the

play08:42

distribution of a you know a time and

play08:44

synchronization

play08:46

in the past you know certainly in the

play08:47

past decade

play08:49

primarily around things like we've heard

play08:51

this morning you know a silicon that

play08:52

supports pvp

play08:54

switches and routers also that supports

play08:56

you know that are ptp aware oscillators

play09:00

you know a linux ptp stack for instance

play09:02

test equipment all of which you know uh

play09:04

support uh

play09:06

pdp and because of that multiple

play09:08

industries i've adopted ptp as you know

play09:11

the protocol or the technology of a

play09:13

choice from any use case scenarios

play09:16

um

play09:17

and there's many you know uh ptp

play09:19

networks in operation

play09:21

in each of these

play09:22

industries and we'll go into some

play09:24

details in the next slide each of these

play09:27

industries has developed what we call a

play09:29

ptp profile specification

play09:32

which essentially defines the

play09:33

capabilities required to support a use

play09:36

case

play09:36

you know scenario for their particular

play09:40

uh

play09:41

industry so the profile really provides

play09:43

you information on how you implement

play09:45

things how you configure and how you

play09:47

will basically operate the ptp

play09:51

next slide please

play09:56

so this is basically a you know a table

play09:58

where today there is

play10:01

about half a dozen ptp profiles that

play10:04

exist in the industry today each having

play10:06

a different you know scenario telecom

play10:09

mobile professional uh you know a video

play10:12

power and so on

play10:14

um

play10:15

but one in the in the one industry

play10:18

that you know has been missing out

play10:21

you know from uh you know uh from this

play10:23

was the data center and this is what we

play10:25

did within ocp tab within you know the

play10:28

past year is to basically develop

play10:31

a dtp profile for the purpose of

play10:34

data center applications

play10:36

um

play10:38

the

play10:38

and this is the last row in that table

play10:40

the dtp profile is basically a 20

play10:43

you know plus page

play10:45

document it was contributed by um

play10:48

six different uh companies and it re

play10:52

it

play10:55

basically goes through

play10:57

and contains various requirements that

play11:00

for instance pertain to things like

play11:02

network topology

play11:04

uh what is the expected time error

play11:06

performance

play11:07

what are the type of clocks for instance

play11:09

and we've heard that a little bit

play11:10

earlier from japan this morning right

play11:13

whether it's transparent clocks versus

play11:14

boundary clocks what are the

play11:16

communication mode uh the pdb messages

play11:20

uh the message rates unicast

play11:22

communication and so on all of that is

play11:24

defined in that document and that

play11:26

document was released back in september

play11:28

and is available on the uh

play11:30

ocp uh contributions uh web page uh for

play11:34

anyone to go and

play11:35

read and digest yeah

play11:37

uh next slide please

play11:41

so one of the things we needed to in

play11:43

order to kick off this activity we

play11:44

needed to come up with a reference model

play11:47

so very high level we decompose the the

play11:50

problem statement into three layers

play11:52

what we call the time reference layer

play11:54

which contains you know your gnss your

play11:56

gps

play11:57

your rooftop antennas your

play12:00

open time server your time cards um and

play12:03

then the second layer is what we call

play12:04

the network fabric layer which is

play12:06

basically a large set of switches you

play12:09

know or routers for instance that are uh

play12:12

you know ppp aware uh for example in the

play12:15

first profile these are uh transparent

play12:18

clock

play12:19

capable switches

play12:21

and then the bottom layer is what we

play12:22

call the the server layer or where you

play12:25

have a very large ship of servers that

play12:28

are also pdp aware through what we call

play12:30

a clock that is called the ordinary

play12:33

clock

play12:34

this is the clock that its

play12:36

responsibility is to recover time and

play12:38

then pass it on you know to the

play12:40

application that's where you know the

play12:41

demarcation between the ptp network and

play12:45

the application

play12:46

uh

play12:47

resides

play12:49

next slide please

play12:54

my last

play12:56

slide before i pass it on to thomas

play12:59

a man next slide please

play13:01

okay thank you

play13:03

yeah

play13:04

um the second thing that we did

play13:08

and this is a you know quite important

play13:09

here it was to define the time error

play13:11

requirement okay what are we trying to

play13:14

meet here what is the expected you know

play13:16

our performance

play13:17

if you look in the right hand you know

play13:20

bottom corner of that slide you're going

play13:21

to see this number five microsecond

play13:24

this is what we basically

play13:26

came up with as a requirement

play13:29

essentially that says that if you would

play13:31

pick any servers

play13:33

within a data center and you could

play13:35

measure

play13:36

for instance their ptp clock

play13:39

that the difference between any two of

play13:41

these servers

play13:43

right would be within plus or minus five

play13:45

microseconds okay this is the absolute

play13:47

value here five microseconds within plus

play13:49

or minus five microseconds so one way to

play13:52

implement that requirement

play13:54

is to say

play13:56

that

play13:57

um

play13:58

each ptp clock that exists

play14:01

into for instance the servers

play14:03

has to be within plus or minus 2.5

play14:06

microseconds

play14:08

of a common reference that common

play14:10

reference being that reference that is

play14:12

at the top of that tree topology here

play14:15

for instance gps or gnss as an example

play14:18

so if you take an example for instance

play14:20

you take two machines

play14:22

or two servers

play14:24

you know one server is minus 2.5 and the

play14:27

other one is plus 2.5

play14:29

they have a difference in between the

play14:31

two of five microseconds

play14:33

and vice versa right um any combination

play14:36

there of

play14:38

of you know these values as long as it's

play14:40

within

play14:41

2.5 microsecond of a a common reference

play14:46

will basically satisfy that condition of

play14:49

uh

play14:50

five micro seconds between any two uh

play14:53

servers

So the PTP profile addresses how you put all of that together. It talks a lot about some of those requirements and a lot of these building blocks, and I encourage all of you to go and download that document and read it through. With that, I'm going to pass it to Thomas, who will provide more information on some of the details of what's in the PTP profile.

Quick reminder: you have less than five minutes left.

Okay, thank you, Michelle. So I'm going to try to speed up time and make sure we reverse the clocks, since we've only got a bit of time left.

So how do we actually achieve the target requirements that were mentioned on the previous slide? Well, first of all, we're going to go through a bit of what goes on in the hardware of switches in order to provide accurate timestamping. Historically, when the first generations of the PTP standard came out back in 2002, most of the hardware wasn't capable of taking timestamps, so it was a software model: software timestamping was the way forward at that point in time. But that has a number of drawbacks, because with software timestamping everything is done, obviously, in software, which means it will be impacted by system noise from the operating system: latency through the pipeline, scheduling, and everything else. As you can see, those PTP messages were carried from the interface up through the PHY, the MAC, and the ASIC to the operating system, where the PTP stack itself would reside. So the PTP stack, the so-called virtual disciplined clock, and the timestamping all occurred in user space.

With hardware timestamping, we actually went in the opposite direction, which is what we want: we want to timestamp as close as possible to the hardware layer, i.e. the PHY. So modern implementations will actually do the timestamping itself in the PHY, and that's also where the PHC, the PTP hardware clock, resides. The stack itself, of course, is still running in user space, but this means modern implementations are able to achieve sub-10-nanosecond accuracy and resolution in those environments.
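The effect of OS-induced noise on software timestamps can be shown with a small simulation. The jitter magnitudes below are invented for illustration only and are not figures from the talk or the profile:

```python
import random
import statistics

# Illustrative simulation: software timestamps pick up interrupt and
# scheduling jitter between the wire and the timestamp point, while a
# PHY-level hardware timestamp sees only a tiny, stable error.
random.seed(42)

TRUE_ARRIVAL_NS = 1_000_000  # "true" on-the-wire arrival time of each packet

def sw_timestamp():
    # Wire arrival plus interrupt/scheduling delay (made-up magnitudes)
    return TRUE_ARRIVAL_NS + random.gauss(20_000, 10_000)

def hw_timestamp():
    # PHY timestamp: small fixed latency, single-digit-nanosecond noise
    return TRUE_ARRIVAL_NS + random.gauss(50, 5)

sw_err = [sw_timestamp() - TRUE_ARRIVAL_NS for _ in range(10_000)]
hw_err = [hw_timestamp() - TRUE_ARRIVAL_NS for _ in range(10_000)]

print(f"software jitter stdev: {statistics.stdev(sw_err):,.0f} ns")
print(f"hardware jitter stdev: {statistics.stdev(hw_err):,.0f} ns")
```

On a real Linux host, the equivalent choice is between application-level timestamps and NIC hardware timestamps exposed through the kernel's timestamping interfaces; the simulation only conveys the orders-of-magnitude gap.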

Next slide, please.

Yeah, so the other side of the story, beyond hardware versus software timestamping, is one-step versus two-step clocks. Again, for historical reasons, the original implementations were not capable of taking hardware timestamps, and because of that we were using a two-step mechanism. Since we were doing software implementations, we were using a two-step software environment, which meant that when you send a sync message from your source, you don't have the accuracy to actually write the timestamp into the message itself, so you send what we call a follow-up message. That was also the case because of the interface rates: the time available to encode that timestamp into the message at that speed was not compatible with those implementations. These limitations have gone away; in 2022 we can do hardware timestamping directly at 100 gig and beyond, so this problem has moved away. The other thing with one-step is that it guarantees that each message is linked to its own timestamp. What you want to avoid is the sync message taking one path and the follow-up taking another path, and that could happen if you've got equal-cost multipath across multiple links and you're actually packet-spraying those messages, which means you could have out-of-order sequences.
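Whether the origin timestamp arrives in the sync message itself (one-step) or in a follow-up (two-step), the end node computes its offset from the same four timestamps of the IEEE 1588 two-way exchange. A minimal sketch of that standard calculation (variable names are illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 two-way exchange.

    t1: Sync sent by the master (embedded in the Sync for one-step,
        delivered in the Follow_Up for two-step)
    t2: Sync received by the slave
    t3: Delay_Req sent by the slave
    t4: Delay_Req received by the master (returned in Delay_Resp)
    Assumes a symmetric path; transparent clocks correct for residence time.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Example (times in ns): slave runs 500 ns ahead, one-way delay is 1000 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=1500, t3=3000, t4=3500)
print(offset, delay)  # 500.0 1000.0
```

This also makes the packet-spraying concern concrete: in two-step mode, t1 only becomes usable once the matching Follow_Up arrives, so the two messages must stay associated.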

Next slide. Yeah, thank you.

So, having said that about one-step versus two-step clocks, we also have two models that are planned for the data center profile. Right now, the model Michelle was talking about is model one, which exists and uses only transparent clocks. As we've mentioned, this means that by using the one-step hardware clock we can avoid spraying, and we're actually using network routing for failure recovery: whatever path there is between the OC and the grandmaster, through the chain of switches, your IGP will be sorting it out, making sure those messages arrive from one end to the other.
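In the transparent-clock model, each switch in the chain adds its residence time to the PTP message's correctionField, so the end node can subtract switch queuing delay from the path. A rough sketch of that accumulation follows; this is a simplification for illustration (real devices do this in hardware at the PHY, and the field layout is defined by IEEE 1588, not this dictionary):

```python
def forward_through_transparent_clock(msg, ingress_ts_ns, egress_ts_ns):
    """An end-to-end transparent clock does not terminate the PTP message;
    it adds its own residence time to the accumulated correction."""
    residence = egress_ts_ns - ingress_ts_ns
    msg["correction_ns"] += residence
    return msg

# A Sync message crossing three switches, each with some queuing delay
# (ingress/egress timestamps in ns, made up for the example):
sync = {"origin_ts_ns": 1_000_000, "correction_ns": 0}
for ingress, egress in [(10, 250), (400, 520), (700, 1150)]:
    sync = forward_through_transparent_clock(sync, ingress, egress)

print(sync["correction_ns"])  # 240 + 120 + 450 = 810
```

The end node then subtracts the accumulated correction when it applies the sync timestamp, leaving only the link propagation delays to estimate.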

In the next phase of the development of the data center profile, we're looking at model two, which is currently a suggestion, using boundary clocks. In this case, every single switch device runs a boundary clock, and that boundary clock will be processing messages hop by hop, since it's disciplining its own PHC, and it will become the time source for the device that is downstream. What we're doing is comparing the data sets provided in the PTP Announce messages, and the failover mechanism uses the BMCA, the best master clock algorithm, to decide which path is being used between those devices.
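At its core, the BMCA's data set comparison is an ordered comparison of the fields advertised in Announce messages. A simplified sketch, ignoring the topology tie-breaks and the full IEEE 1588 state machine (field values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AnnounceDataSet:
    # Fields compared in order by the (simplified) BMCA; lower wins.
    priority1: int
    clock_class: int        # e.g. 6 = locked to a primary reference (GNSS)
    clock_accuracy: int
    variance: int
    priority2: int
    clock_identity: bytes   # final tie-break: lowest identity wins

    def rank(self):
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates):
    """Pick the best grandmaster from the received Announce data sets."""
    return min(candidates, key=AnnounceDataSet.rank)

gm_a = AnnounceDataSet(128, 6, 0x21, 100, 128, b"\x00\x01")    # GNSS-locked
gm_b = AnnounceDataSet(128, 248, 0xFE, 200, 128, b"\x00\x02")  # free-running
print(best_master([gm_a, gm_b]) is gm_a)  # True: lower clockClass wins
```

Because every boundary clock runs this selection against what it hears upstream, path failover falls out of the Announce comparison rather than out of the IGP, as described above.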

Again, in earlier implementations of boundary clocks there were issues related to settling time and oscillation. That has been greatly reduced in modern implementations, but it is not as clean-cut as it is with a transparent clock. In this model you can also use one-step or two-step, which gives you flexibility, and this is our next work item for the data center profile.

Thomas, we have less than a minute left.

I'm going as fast as I can, yeah. So, the failover mechanism: it's the same model as we talked about before. We've got the IGP carrying those messages across the transparent clocks, and the end node itself will figure out from the data set which grandmaster it wants to use. So it's pretty straightforward, and it's dictated by the data set.

Next slide, and we're nearly at the end. We have the data center profile, which gives you a recap, which you can read on screen, of the different attributes provided: we know that all devices need to be time-aware, we know we can run over different transports, and we've got default message rates defined, which is what you traditionally have in profiles. We're using IPv6 unicast as the main mechanism here, and we have different levels of accuracy and clock classes defined.

So we'll go to the last slide, which is the call to action, in the ten seconds I've got left. We're looking for people to help us out with work stream number two: developing the second version of the profile with the boundary clock model; looking at security aspects, how we want to add authentication and verification of those PTP messages; and the load balancing of the PTP unicast sessions. So please come and join us in this effort; we're looking forward to driving this forward.

Thank you, Michelle and Thomas.
