DevOps Huddle EP 19 | Measuring GitHub Copilot's Downstream Impact with DORA | Opsera

Opsera
28 Jun 2024 · 30:22

Summary

TL;DR: In this episode of the DevOps Huddle, the discussion focuses on measuring the impact of GitHub Copilot and its effect on downstream processes using DORA metrics. The webinar explores the challenges of aggregating data across tools and explains why these metrics matter for understanding software delivery performance. It introduces a solution that connects GitHub Copilot usage with DORA metrics, offering insights into developer productivity and efficiency. The conversation closes with a 14-day free trial offer for a dashboard that integrates GitHub Copilot and DORA metrics.

Takeaways

  • 😀 The webinar is part of a three-part series on unified insights capabilities, particularly the impact of using GitHub Copilot and how it can be measured with DORA.
  • 🔧 Part one of the series covered GitHub Copilot's developer activity, actual usage, and licensing, while part two discusses the downstream impact after code is committed.
  • 👋 Introductions of the panelists: Ed, a sales engineer with a background in DevOps, and Gilbert, VP of Post Services with experience building DevOps teams.
  • 📊 A poll gauged the audience's use of GitHub Copilot and GitHub, showing an even split between those who are and aren't using GitHub Copilot, while everyone uses GitHub.
  • 📅 An upcoming third part of the series is announced for July 25th, focusing on uniting GitHub Copilot with developer experience and security posture.
  • 🔬 DORA is introduced as a 10-year research and assessment program run by Google, aimed at understanding the capabilities and processes that drive higher delivery performance.
  • 📈 DORA focuses on four core metrics: Lead Time for Change, Deployment Frequency, Change Failure Rate, and Mean Time to Resolution (MTTR).
  • 🛠 Ed explains why teams should not invent their own metrics but instead leverage the established DORA metrics for measuring software delivery performance.
  • 🔀 Gilbert discusses the challenges of setting up DORA metrics, including data aggregation across tools, time pressure, risk management, and updates/maintenance.
  • 🔄 The webinar highlights the importance of distinguishing between the inner loop (developer activity) and the outer loop (system metrics) when choosing metrics for DevOps and development teams.
  • 🔗 The final part of the webinar demonstrates how GitHub Copilot usage can be associated with DORA metrics to show its impact on organizational software delivery performance.

Q & A

  • What is the main focus of the 'DevOps Huddle' episode 19 webinar?

    -The webinar focuses on Opsera's unified insights capabilities, particularly the new GitHub Copilot measurement capabilities and their downstream impact on software development processes.

  • What is GitHub Copilot and what does it aim to improve?

    -GitHub Copilot is an AI pair-programming assistant that aims to improve developer productivity and efficiency by providing code suggestions and automating routine coding tasks.

  • What does part one of the webinar series cover?

    -Part one of the series is about understanding GitHub Copilot, measuring developer activity, actual usage, and licensing to determine how much of the license is being utilized.

  • Who are the panelists introduced in the webinar, and what are their backgrounds?

    -The panelists are Ed, a sales engineer at Opsera with a background in development and DevOps, and Gilbert, VP of Post Services at Opsera, who has experience building DevOps teams and processes.

  • What is DORA, and what does it stand for?

    -DORA stands for DevOps Research and Assessment, a decade-long research program run by Google to understand which capabilities, technologies, and processes drive higher delivery performance in software development.

  • What are the 'DORA Core 4' metrics that Ed discusses in the webinar?

    -The 'DORA Core 4' metrics are lead time for change, deployment frequency, change failure rate, and mean time to resolution (MTTR), which are key performance indicators for measuring software delivery and organizational performance.

  • Why is aggregating data across different tools a challenge when implementing Dora metrics?

    -Aggregating data is challenging because it involves collecting data from the various tools used across different teams, each of which may have a different combination of tools and processes, and then ensuring the data is consistent and valid for accurate DORA metric calculations.

  • What is the significance of the 14-day free trial mentioned in the webinar?

    -The 14-day free trial allows participants to connect their existing tools to the Opsera platform, get started with measuring GitHub Copilot usage and Dora metrics, and evaluate the benefits without any initial commitment.

  • How can GitHub Copilot's impact on an organization be measured?

    -The impact of GitHub Copilot can be measured by associating its usage data with DORA metrics, which provide insights into software delivery performance and help demonstrate the return on investment for using GitHub Copilot.

  • What is the purpose of the third part of the webinar series scheduled for July 25th?

    -The third part of the series will focus on uniting GitHub Copilot with developer experience and ensuring that the security posture of the organization remains safe, while also exploring the satisfaction of developers with their work and the impact on business security.

  • What are the inner loop and outer loop in the context of software development metrics?

    -The inner loop refers to developer-centric metrics focused on activity and efficiency, such as commits, pull requests closed, and uninterrupted focus time. The outer loop refers to system-level productivity metrics, the DORA metrics, which measure the performance of the entire software delivery process, such as lead time for changes and deployment frequency.

Outlines

00:00

😀 Introduction to the DevOps Huddle and GitHub Copilot Discussion

The script opens with a warm welcome to the DevOps Huddle, episode 19, which is the second installment of a three-part series focused on unified insights and GitHub Copilot's impact on development. The host briefly summarizes the content of part one and outlines the agenda for part two, which revolves around measuring the downstream effects of using GitHub Copilot. The host then introduces the panelists, Ed and Gilbert, who share their professional backgrounds in DevOps and development. A poll is conducted to gauge the audience's familiarity with GitHub and GitHub Copilot, revealing a balanced split. Finally, the host teases part three of the series, which will address developer experience and security in the context of GitHub Copilot.

05:01

📊 Delving into DORA and Its Core Metrics for Software Delivery Performance

This paragraph introduces DORA, a decade-long research initiative by Google aimed at identifying the key capabilities, technologies, and processes that enhance delivery performance. The DORA research has culminated in four core metrics known as the 'DORA Core 4', which are critical for assessing software delivery and organizational performance. Ed explains these metrics: lead time for change, deployment frequency, change failure rate, and mean time to resolution (MTTR). The metrics serve as a benchmark for managers to improve software delivery within their organizations. The speaker also mentions the challenges of implementing these metrics, such as aggregating data across various tools and the pressure to deliver quick results without disrupting developers' schedules.
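Once clean deployment and incident records exist, the Core 4 reduce to fairly simple arithmetic. The sketch below is a minimal illustration rather than Opsera's implementation, assuming each deployment record carries the timestamp of its earliest commit and a flag for whether it caused an incident:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median
from typing import List

@dataclass
class Deployment:
    deployed_at: datetime
    first_commit_at: datetime   # earliest commit included in this release
    caused_incident: bool = False

@dataclass
class Incident:
    opened_at: datetime
    resolved_at: datetime

def dora_core_four(deploys: List[Deployment], incidents: List[Incident], window_days: int = 30):
    """Compute the DORA Core 4 over one reporting window (illustrative only)."""
    weeks = window_days / 7

    # Deployment frequency: deployments per week in the window.
    deployment_frequency = len(deploys) / weeks

    # Lead time for change: median time from earliest commit to deployment.
    lead_times = [d.deployed_at - d.first_commit_at for d in deploys]
    lead_time_for_change = median(lead_times) if lead_times else timedelta(0)

    # Change failure rate: share of deployments that caused an incident.
    change_failure_rate = (
        sum(d.caused_incident for d in deploys) / len(deploys) if deploys else 0.0
    )

    # MTTR: mean time from an incident being opened to it being resolved.
    mttr = (
        sum((i.resolved_at - i.opened_at for i in incidents), timedelta(0)) / len(incidents)
        if incidents else timedelta(0)
    )

    return {
        "deployment_frequency_per_week": round(deployment_frequency, 2),
        "lead_time_for_change": lead_time_for_change,
        "change_failure_rate": round(change_failure_rate, 3),
        "mttr": mttr,
    }
```

In practice the hard part is populating records like these consistently from several tools, which is exactly the aggregation problem the next outline entry describes.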

10:02

🔧 The Challenges and Solutions in Implementing DORA Metrics

The speaker acknowledges the difficulties in setting up DORA metrics, such as aggregating data from multiple tools and the pressure for immediate results. They also discuss the risks involved in creating custom metrics and the maintenance challenges that arise. The paragraph emphasizes the importance of using a platform that can aggregate and transform data across different tools to produce valid DORA metrics. The speaker mentions that their organization, Opsera, has sponsored the DORA State of DevOps report and encourages the audience to download the latest report for more insights.
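To make the aggregation challenge concrete, here is a hedged sketch of the kind of normalization layer such a platform needs: every tool reports events in its own shape, and they have to be mapped into one schema before DORA calculations over them are valid. The payload fields and adapter functions below are hypothetical, not any vendor's actual API:

```python
from datetime import datetime
from typing import Callable, Dict, List

def normalized(kind: str, occurred_at: str, team: str, ref: str) -> Dict:
    """One shared event shape for every tool's raw payload (hypothetical schema)."""
    return {
        "kind": kind,            # "commit" | "deploy" | "incident_open" | "incident_close"
        "occurred_at": datetime.fromisoformat(occurred_at),
        "team": team,
        "ref": ref,              # commit SHA, deploy id, or incident id
    }

# Per-tool adapters: each one knows only its own (made-up) payload layout.
def from_github(p: Dict) -> Dict:
    return normalized("commit", p["commit"]["timestamp"], p["repo"]["team"], p["commit"]["sha"])

def from_jenkins(p: Dict) -> Dict:
    return normalized("deploy", p["build"]["finished_at"], p["job"]["team"], str(p["build"]["number"]))

def from_pagerduty(p: Dict) -> Dict:
    kind = "incident_close" if p["status"] == "resolved" else "incident_open"
    return normalized(kind, p["last_status_change_at"], p["service"]["team"], p["id"])

ADAPTERS: Dict[str, Callable[[Dict], Dict]] = {
    "github": from_github,
    "jenkins": from_jenkins,
    "pagerduty": from_pagerduty,
}

def aggregate(raw_events: List[Dict]) -> List[Dict]:
    """Fold raw events from every registered tool into one consistent, time-ordered stream."""
    stream = [ADAPTERS[e["source"]](e["payload"]) for e in raw_events if e["source"] in ADAPTERS]
    return sorted(stream, key=lambda e: e["occurred_at"])
```

The point of the sketch is that every extra tool combination means another adapter to write and maintain, which is why teams that roll their own DORA metrics tend to hit the time, risk, and maintenance pressures described above.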

15:03

🤔 Exploring the Inner Loop and Outer Loop Metrics in Software Development

Gilbert distinguishes between the inner loop, which pertains to developer activity and efficiency metrics, and the outer loop, which involves system productivity metrics like those identified by DORA. He explains that while developers are participants in the outer loop from a downstream perspective, their primary focus is on coding, which is better measured by activity and efficiency metrics. Gilbert also discusses the importance of continuous delivery models and how DORA metrics can help organizations transition from batch releases to a more agile approach. He stresses the need to educate teams about DORA metrics to foster a culture of productivity and efficiency.

20:05

🔗 Connecting GitHub Copilot Usage with DORA Metrics for Organizational Benefits

The script addresses the significance of correlating GitHub Copilot usage with DORA metrics to demonstrate the tool's impact on organizational performance. While GitHub Copilot provides an API for usage data, it lacks the granularity required to link directly with DORA metrics. The speaker introduces their platform's capability to aggregate and transform data from various tools, including GitHub Copilot usage, to generate meaningful DORA metrics. This holistic approach allows for a more accurate assessment of the benefits of using GitHub Copilot in terms of software delivery performance.
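The correlation can be pictured as a simple cohort split: once commits can be attributed to authors who hold active Copilot seats, each DORA calculation can be run twice, once per cohort. The sketch below is an assumption about how such a toggle might work rather than Opsera's actual pipeline; the `copilot_authors` set would come from GitHub's Copilot seat or usage data, or an equivalent export:

```python
from datetime import timedelta
from statistics import median
from typing import Dict, List, Set

def lead_time_by_cohort(
    commits: List[Dict],        # each: {"sha": str, "author": str, "committed_at": datetime}
    deployments: List[Dict],    # each: {"shas": List[str], "deployed_at": datetime}
    copilot_authors: Set[str],  # logins holding an active Copilot seat (assumed input)
) -> Dict[str, timedelta]:
    """Median lead time for change, split into Copilot vs. non-Copilot commits (illustrative)."""
    commit_index = {c["sha"]: c for c in commits}
    buckets: Dict[str, List[timedelta]] = {"copilot": [], "non_copilot": []}

    for deploy in deployments:
        for sha in deploy["shas"]:
            commit = commit_index.get(sha)
            if commit is None:
                continue  # commit came from a repo or window we are not tracking
            lead = deploy["deployed_at"] - commit["committed_at"]
            cohort = "copilot" if commit["author"] in copilot_authors else "non_copilot"
            buckets[cohort].append(lead)

    return {
        cohort: (median(times) if times else timedelta(0))
        for cohort, times in buckets.items()
    }
```

Comparing the two medians over the same window is the 'toggle' shown in the demo; the same split can be applied to deployment frequency and change failure rate.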

25:07

🚀 Getting Started with DORA and GitHub Copilot: A 14-Day Free Trial Offer

The host offers a 14-day free trial for participants to start using the platform to measure GitHub Copilot usage and DORA metrics. They explain that the trial allows users to connect their existing tools and immediately begin assessing their use of GitHub Copilot and its impact on software delivery performance. The host encourages the audience to take advantage of this no-risk opportunity to gain insights into their development processes and to prepare for the upcoming third part of the series, which will focus on developer experience and security.

Keywords

💡DevOps Huddle

DevOps Huddle is the title of the webinar series being discussed in the script. It is a platform where topics related to software development and operations are explored. In the context of this video, it is specifically focused on the impact of GitHub Copilot and how it can be measured using DORA metrics. The series is structured in multiple parts, with the script referring to episode 19 as part two of a three-part series.

💡GitHub Copilot

GitHub Copilot is an AI-powered programming assistant that helps developers write code more efficiently. In the script, it is highlighted as a new technology that organizations are exploring to improve developer productivity. The discussion revolves around measuring the impact of GitHub Copilot on developer activity and its downstream effects, which is a central theme of the video.

💡Unified Insights

Unified Insights refers to the comprehensive understanding of various metrics and capabilities that organizations can gain to improve their software development processes. In the video, it is particularly associated with measuring the effectiveness of GitHub Copilot and understanding its impact on development teams, which is a key part of the discussion in the second part of the series.

💡DORA

DORA, the DevOps Research and Assessment program, is a research initiative by Google that focuses on understanding what drives higher delivery performance in technology teams. The script discusses DORA as a framework for measuring software delivery performance, with its core metrics being key to understanding the impact of tools like GitHub Copilot on organizational performance.

💡Lead Time for Change

Lead Time for Change is one of the core metrics in the DORA framework, measuring the time it takes to go from idea to delivered value. In the script, it is emphasized as a critical indicator of how quickly an organization can deliver software, which is directly related to the discussion on how GitHub Copilot might affect this metric.

💡Deployment Frequency

Deployment Frequency is another key metric in the DORA framework, indicating how often an organization deploys changes to its software. The script uses this metric to discuss the throughput of software delivery, showing how often stories are translated into value, which is a measure of the organization's delivery speed.

💡Change Failure Rate

Change Failure Rate is a metric that measures the frequency of incidents introduced by changes or deployments. In the script, it is discussed as a constraining factor that organizations need to manage, aiming for a low rate to ensure stability while avoiding the introduction of excessive process rigor that could slow down delivery.

💡MTTR (Mean Time to Resolution)

MTTR, or Mean Time to Resolution, measures the time it takes to restore service after an incident. The script highlights MTTR as an important characteristic of a system's stability, showing how quickly an organization can identify and resolve issues, which is crucial for maintaining a healthy software delivery process.

💡Inner Loop

The Inner Loop refers to the developer-centric metrics and activities, focusing on individual developer productivity. In the script, it is contrasted with the Outer Loop, which involves system-level metrics like those measured by DORA. The Inner Loop includes metrics like ticket closures and code commits, which are part of the developer's daily activities.

💡Outer Loop

The Outer Loop refers to the system-level metrics that measure the overall performance of an organization's software delivery process. In the script, it is associated with DORA metrics like Lead Time for Change and MTTR. The Outer Loop is crucial for understanding the broader impact of development activities on the organization's delivery capabilities.
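For teams wiring up their own reporting, the inner/outer distinction can be kept explicit in code so developer-activity signals are not accidentally reported as delivery performance. The tiny sketch below only restates the categorization above; the metric names are illustrative, not an exhaustive taxonomy:

```python
INNER_LOOP = {   # developer activity / efficiency signals
    "tickets_closed", "pull_requests_merged", "copilot_acceptance_rate", "focus_time_hours",
}
OUTER_LOOP = {   # system-level delivery signals: the DORA Core 4
    "lead_time_for_change", "deployment_frequency", "change_failure_rate", "mttr",
}

def loop_for(metric: str) -> str:
    """Tag a metric so inner-loop signals stay with the team and outer-loop ones roll up to leadership."""
    if metric in INNER_LOOP:
        return "inner"
    if metric in OUTER_LOOP:
        return "outer"
    return "unclassified"
```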

💡DevX

DevX is a new framework mentioned in the script, which is part of the broader discussion on different metrics frameworks in the industry. While not detailed extensively in the script, DevX is presented as an alternative or additional framework to DORA, indicating a shift towards more developer-centric metrics and experiences.

Highlights

Introduction to a three-part series focusing on unified insights and GitHub Copilot's capabilities and its downstream impact.

GitHub Copilot's role in measuring developer activity and usage, along with licensing details.

The importance of understanding what happens downstream after code commits and its measurement with DORA.

Introduction of panelists Ed and Gilbert, their experience in DevOps and contribution to the discussion.

A poll to gauge the audience's familiarity with GitHub Copilot and GitHub's usage in their organizations.

The upcoming third part of the series focusing on developer experience and security posture with GitHub Copilot.

Overview of DORA, a 10-year research program by Google, aimed at understanding capabilities for higher delivery performance.

Explanation of DORA's Core 4 metrics: Lead Time for Change, Deployment Frequency, Change Failure Rate, and MTTR.

The practical use of DORA metrics for managers to improve software delivery within their organizations.

Challenges in setting up DORA metrics, including data aggregation across different tools and the pressure for quick results.

The concept of inner loop and outer loop metrics, distinguishing between developer activity and system productivity.

GitHub Copilot's API and its limitations in providing data for DORA metrics.

How the DevOps platform aggregates and transforms data to provide valid DORA metrics despite tool variations.

Demonstration of associating GitHub Copilot usage with DORA metrics to show the tool's impact on organizational performance.

A 14-day free trial offer for the DevOps platform to get started with GitHub Copilot and DORA metrics.

Encouragement for the audience to join the next webinar for further exploration of developer experience and GitHub Copilot's impact.

Final thanks and sign-off from the hosts and panelists, highlighting the value of the insights shared during the webinar.

Transcripts

play00:04

[Music]

play00:12

welcome everybody to the devops Huddle

play00:17

episode 19 now this is part two of a

play00:20

three-part series that we've been

play00:21

running uh about our unified insights

play00:24

capabilities and more specifically about

play00:26

our new uh GitHub co-pilot measuring

play00:29

capabilities um being able to measure

play00:32

GitHub co-pilot as well as the

play00:34

downstream impact of what happens when

play00:36

you use GitHub co-pilot so if you missed

play00:39

part one part one was all about GitHub

play00:41

co-pilot uh what you're measuring

play00:43

developer activity um actual usage and

play00:47

then also um you know licensing how much

play00:50

of your license are being used all those

play00:52

great capabilities we have now this is

play00:54

part two where we're going to take what

play00:56

we learned in part one and apply it to

play00:59

what happens

play01:00

after somebody commits their code right

play01:02

what happens Downstream and being able

play01:04

to measure that impact with Dora now uh

play01:08

if you're not familiar with Dora it's

play01:09

not a big deal we'll cover it later in

play01:11

this webinar we'll cover all about it so

play01:14

you'll be fully informed um but let's

play01:17

start with a nice little

play01:18

introduction so I'd like to say hello to

play01:21

my panelists today Ed and Gilbert uh so

play01:24

I'd love to for them to introduce

play01:26

themselves Ed why don't you go first

play01:28

sure I'm Ed SL I'm a sales engineer here

play01:30

at Opsera I spent most of my career as a

play01:33

developer up until about maybe seven

play01:35

years ago when I got into devops uh I

play01:38

worked at gitlab for four years I had

play01:40

four roles in customer success over

play01:42

those four years and I've been at Opsera

play01:44

for the past 14 months happy to be here

play01:47

awesome so glad to have you Ed and I'd

play01:49

also like to introduce Gilbert Gilbert

play01:51

why don't you introduce yourself hey

play01:53

guys um I'm a VP of post Services here

play01:57

at Opsera um before Opsera I was a VP of

play02:01

devops and Cloud operations building

play02:04

devops teams and building uh devops you

play02:07

know uh processes and infrastructure so

play02:11

so happy to be here thanks for having me

play02:14

great and and great to have two people

play02:15

who understand what it's like to both

play02:18

run and lead and be uh contributing

play02:20

members of development teams and also

play02:24

understand what it means to um improve

play02:26

the performance of those teams so I I

play02:29

think you guys are great panelists and

play02:31

uh we're going to get started here but

play02:33

first we're going to talk a little bit

play02:35

we're going to do a little poll so uh

play02:38

I'm gonna invite um adid to run a poll

play02:41

for us and uh why don't you go for it

play02:45

thank you just

play02:46

gonna ask you guys just a couple of

play02:50

questions um you should be able to see

play02:53

them is your organization currently

play02:55

using GitHub co-pilot and is your

play02:58

organization currently using GitHub

play03:02

yeah and to clarify right if you're

play03:03

using GitHub co-pilot that would be for

play03:05

AI pair programming assistance but if

play03:07

you're using GitHub that would be for

play03:09

you know source code management or

play03:11

something like GitHub

play03:13

actions all right another second for

play03:17

everyone to answer and it looks like

play03:20

we've got a pretty even split between

play03:22

yes and no for everyone using uh GitHub

play03:26

co-pilot and for GitHub GitHub everyone

play03:29

is

play03:31

using and that makes sense to

play03:33

me um if we think about it right GitHub

play03:37

co-pilot is pretty new to the market but

play03:39

everybody at least heard about it maybe

play03:40

they're not using it yet maybe you're

play03:42

not uh you know totally on board with it

play03:44

yet but GitHub is a leader in in um

play03:47

source code management so it makes sense

play03:49

for everybody to be quite on board with

play03:51

using GitHub um so great it's it's nice

play03:54

to know sort of where everybody stands

play03:57

uh before we get rolling with this

play03:58

webinar so so like I said this is part

play04:02

two of a three-part series and so in

play04:04

order for us to sort of close the loop

play04:07

on what it means to use GitHub copilot

play04:09

and what it means for your business we

play04:11

have a part three coming up in July it

play04:14

will be July 25th at 11:00 a.m. Pacific

play04:17

and it will be all about uniting GitHub

play04:20

co-pilot with developer experience and

play04:23

making sure your security posture is

play04:26

safe so maybe you you learned yes you're

play04:29

using GitHub co-pilot to improve

play04:30

developer productivity efficiency um you

play04:34

know those those commits are being

play04:35

accepted into production what does that

play04:38

mean for the developer are developers

play04:40

actually achieving you know greater

play04:42

satisfaction with their work are they

play04:44

doing things better are they feeling

play04:46

positively about how GitHub co-pilot is

play04:49

helping them and also is your business

play04:52

taking a hit to security by using this

play04:54

new technology and how do you know how

play04:56

do you measure it so it's all part three

play04:59

and we love for you to join us for that

play05:00

as well definitely go ahead and scan

play05:02

this QR code for a little landing page

play05:04

that'll give you um a place to sign up

play05:06

we'll also flash this at the end so that

play05:09

you have another chance to sign up if

play05:10

you miss it

play05:12

now okay so great now we get into the

play05:15

meat of this business today um so for

play05:18

everybody who maybe has heard about

play05:20

GitHub co-pilot or is using it but maybe

play05:22

you're not really familiar with Dora and

play05:24

why we're on the call today we're going

play05:26

to give you a little introduction to

play05:27

what is Dora so Dora is a 10 year long

play05:31

uh research and Assessment program the

play05:33

devops research and Assessment program

play05:35

run by Google so they've been running it

play05:37

a long time uh it's bunch of really

play05:40

great people who are really interested

play05:42

in understanding what capabilities what

play05:45

technologies and what processes actually

play05:48

drive higher delivery performance right

play05:51

they're really interested in boiling

play05:54

down all of the different things that

play05:57

actually lead to better organizational

play05:59

performance

play06:00

they have spent years really

play06:02

understanding from code to

play06:04

commit what do you need to be successful

play06:08

as a business as a team as a personal

play06:10

developer and so they they've they've

play06:13

interviewed and surveyed thousands of

play06:16

developers and thousands of

play06:18

organizations across these 10 years and

play06:21

have uh issued really awesome

play06:23

information over this over the course of

play06:25

this time um Opsera was fortunate to be

play06:28

able to be a uh sponsor of the

play06:31

2023 um devops state of state of devops

play06:35

report Dora state of devops report um so

play06:37

you can definitely get that at this link

play06:39

uh but we're also really pleased to be a

play06:42

sponsor of the 2024 upcoming report so

play06:45

if you sign up for this report from 2023

play06:48

which you can download for free now we

play06:50

will also uh inform you when the 2024

play06:52

report comes out because that'll be the

play06:54

newest and greatest information really

play06:56

excited to be able to have been sponsors

play06:58

of last year and sponsors of this year

play07:00

as well um so yes awesome info um and

play07:03

and I'm going to hand it over to Ed now

play07:05

who's going to take you into the Dora

play07:07

core 4 that is what we're really going

play07:09

to focus on today which are the core

play07:11

four metrics that Dora focuses on take

play07:14

it away

play07:15

Ed okay thank you okay so like Anna said

play07:20

Google put a lot of effort and Research

play07:22

into high performing technology teams

play07:24

they started 10 years ago they spent a

play07:26

lot of resources on this and what

play07:28

trickled out from all of that that

play07:29

research is these big four kpis so after

play07:33

all was said and done they said hey if

play07:35

you measure these four things you're

play07:37

going to have your arms around what's

play07:39

happening with respect to software

play07:40

delivery and your

play07:42

organization so we'll walk through these

play07:44

quick um let's say I'd like to start

play07:46

with lead time for change the bottom

play07:48

left here this is how long it takes to

play07:50

go from idea to delivered value to your

play07:53

customers whatever that means for your

play07:55

organization so you want to be able to

play07:57

do this quickly this is how you you hear

play07:59

about you know deliver software better

play08:01

faster this is the faster piece of that

play08:04

um above that is deployment frequency

play08:07

how fast can you run that play you know

play08:09

how often are you getting these stories

play08:11

that you're able to translate into value

play08:14

so these two together come together to

play08:16

give you total delivery speed this is

play08:19

your throughput this is how fast your

play08:20

organization is able to go uh now on the

play08:24

right we kind of have the constraining

play08:25

pieces so on the bottom right change

play08:28

failure rate how often when you

play08:30

introduce a change or when you deployed

play08:32

a prod did you introduce trouble or some

play08:35

incident that has to be resolved so

play08:38

that's your change failure rate you want

play08:39

that to be low you want it to Trend

play08:41

lower but you would never strive to make

play08:44

that zero because to do that your lead

play08:47

time for change would have to go very

play08:49

high you'd have to put so much rigor in

play08:51

testing and approvals into your process

play08:54

that basically your throughput would

play08:56

grind to a halt so you want your change

play08:58

failure rate to be low but um you

play09:00

would never try to make it to go to zero

play09:02

there's going to be problems and that

play09:03

gets us into the fourth uh metric which

play09:06

is mttr mean time to resolution or time

play09:08

to restore so when something bad happens

play09:11

and Things become unstable how long does

play09:13

it take you to identify that problem and

play09:16

back it out you know get back to steady

play09:18

state get back to things working a very

play09:20

important characteristic for a system so

play09:24

really really quick um we see Dora

play09:27

coming in and being useful for two main

play09:29

reasons one is maybe I'm a manager and

play09:33

I'm in charge of making software

play09:35

delivery better for my

play09:36

organization the first thing I have to

play09:38

figure out is what does that even mean

play09:41

right you know how how am I going to

play09:42

represent that at the end of the quarter

play09:45

I'm going to put some things in place

play09:46

I'm going to figure some things out but

play09:48

at the end of the quarter I'm going to

play09:49

want to show hey this is software

play09:51

delivery before and this is software

play09:53

delivery after all this great stuff that

play09:55

I did what am I going to show in those

play09:57

two slides well the answer to that

play09:59

question is Dora you know a lot of

play10:01

organizations and a lot of people try to

play10:04

try to go roll their own and they figure

play10:05

it out and I'm going to measure commits

play10:07

hit in the server and I'm going to

play10:08

measure this or that but what what came

play10:10

out of all of the resources that Google

play10:12

put into this research is these four

play10:14

kpis the answer to that question is

play10:17

these four kpis no need to to roll your

play10:20

own um one of the uh the stories I like

play10:23

to tell here is I used to be when I was

play10:25

a developer I used to be very proud of

play10:28

my ability to automate testing you

play10:30

know whatever the the the the gnarly

play10:32

problem was or application was I would

play10:34

figure out a way to go in there and

play10:36

build out some tests that can be run as

play10:37

part of the pipeline and then you know

play10:39

you can't get back into main or you

play10:41

can't get back into Dev until you pass

play10:42

this this test and I I thought that was

play10:44

fantastic and it and it was easy to sell

play10:46

to my management but what ended up

play10:49

happening is those tests sometimes would

play10:51

be so complicated and cumbersome that I

play10:54

was the only person that could maintain

play10:56

them and all of a sudden I introduced uh

play10:59

bottleneck into our process so lead time

play11:01

for change would go down actually so um

play11:05

and and the the other problem there was

play11:07

that the my colleagues had trouble

play11:10

making that point if you don't have Dora

play11:12

they had trouble arguing against these

play11:14

tests that I was saying we're so great

play11:16

but if you have Dora and you trust that

play11:18

Dora you can say all right all right Ed

play11:21

let's let's watch um change failure rate

play11:24

and let's see if it changes let's back

play11:26

out that test and replace that big

play11:27

gnarly test with these three little

play11:29

simple test and see how change failure

play11:32

rate is affected and if it's not

play11:33

affected poorly let's let's make a

play11:35

decision to cut out that big test so

play11:38

another way that if you have Dora that

play11:40

you trust this is the way that you can

play11:42

leverage it going

play11:44

forward um let's see uh support slas the

play11:47

other piece I want to show here is you

play11:49

know to show how these things kind of

play11:50

play together is meantime to restore

play11:53

what would happen if I said hey I know

play11:56

we've been running 247 support but I

play11:59

don't think problems really happen in

play12:01

off work hours and um and I think we

play12:03

would be okay just supporting during

play12:05

working hours you know what you can do

play12:08

if you have mttr that you trust you

play12:09

could test that you can run some

play12:12

application with that SLA for a couple

play12:14

of weeks watch your mttr and see if it

play12:17

gets hammered if mttr doesn't change

play12:19

much you might find that 24/7 support

play12:22

isn't worth the squeeze so another kind

play12:25

of instance where you can use Dora to

play12:27

make things better uh in your

play12:30

organization okay uh next

play12:35

slide all right so I sold you now on Dora

play12:38

Dora is fantastic and you want to do it

play12:41

and I explained how simple those metrics

play12:43

are so maybe you want to go create these

play12:46

things yourself you know all I have to

play12:47

do is start the clock here and end it

play12:49

here and and and run this calculation

play12:51

and I have Dora right so I'll go do this

play12:53

myself there are some challenges with

play12:55

setting this up we see this all the time

play12:57

with organizations they come out of the

play12:59

Gates they're going to do this

play13:00

themselves and and these are some of the

play13:01

things that they hit so the first thing

play13:03

is aggregating data a lot of these kpis

play13:06

they do span tools so you have to you

play13:09

have to harvest the data from the

play13:10

different tools you have to aggregate it

play13:12

across it and make sure things make

play13:14

sense otherwise you start to get Dora

play13:16

metrics that aren't valid that's not

play13:18

consistent with what's really happening

play13:20

so that's a challenge and that challenge

play13:21

is exacerbated by the fact that you

play13:23

probably have a couple different

play13:25

combinations of tools even inside your

play13:27

organization across your vertical

play13:29

right this team isn't doing it this way

play13:31

they're using these tools this other

play13:32

team has a whole different concept of

play13:34

what it means to deploy so now you have

play13:37

to aggregate but you have to do it a

play13:38

couple different ways according to these

play13:40

combinations of tools the next piece

play13:43

that comes in here is you've sold Dora

play13:45

to your leadership you know everybody

play13:47

agrees this is a good thing we need to

play13:49

have it and now there's pressure we want

play13:51

it now we don't we don't want to wait

play13:52

till next quarter to get this what what

play13:54

can you show us you know as soon as

play13:56

possible so here's this pressure to

play13:58

produce quickly

play13:59

um but you know don't don't affect my

play14:02

developers you know these developers are

play14:04

working on these other problems and

play14:06

their schedules aren't to be changed you

play14:08

know so there's you know another

play14:09

competing thing that that happen um two

play14:12

more things risk right what what what is

play14:15

there out there what are the unknown

play14:17

unknowns that you're getting yourself

play14:19

into here when you try to break this off

play14:20

for yourself and then finally uh updates

play14:23

and maintenance the idea fairy comes out

play14:26

immediately you know it's great that we

play14:28

have Dora and we have these kpis but but

play14:31

can you give me side by-side comparisons

play14:33

between team a and Team B or can you

play14:35

give me side-by-side comparisons between

play14:37

this Sprint and that Sprint last time

play14:40

you know these kind of these kind of

play14:41

requests are going to come trickling in

play14:43

and they will be U they will be

play14:44

problematic so um so I'm going to pause

play14:47

there for a second I mentioned earlier

play14:49

about Downstream pieces associated with

play14:51

lead time for delivery and to talk more

play14:54

about that um Gilbert would be good

play15:00

all right all right well thanks thanks

play15:02

guys um so what are what are Downstream

play15:04

metrics

play15:06

um I think yeah so so I'm I'm going to

play15:09

take a little bit of a step back and

play15:11

really talk about what what the industry

play15:14

calls you know um metrics right and I'm

play15:17

going to talk about um I think three

play15:18

different Frameworks out there in the

play15:20

industry which could be very confusing

play15:22

to to all of us right like um we've been

play15:25

talking about Dora and how Dora you know

play15:27

has four metrics that Google has led the

play15:30

industry and now has been been a

play15:32

standard you know lead time for changes

play15:35

um you know mttr you know which is now

play15:39

and has a new name called failed

play15:41

deployment recovery time um you have

play15:43

change failure rate and then you have

play15:45

deployment frequency right so these are

play15:47

all like system productivity uh metrics

play15:50

right so so I just want to kind of um

play15:53

clear up uh the maybe a little bit of

play15:55

the confusion just on looking at

play15:57

Frameworks right there's three different

play15:59

Frameworks which is Dora SPACE and now a

play16:02

new framework called devx right so um it

play16:05

could be super confusing it could be

play16:07

like what do I use why is Dora so

play16:10

important right and um I know we don't

play16:12

have a lot of time because that could be

play16:14

a whole huddle by itself so so I'm

play16:17

really just going to kind of give you

play16:19

examples of what we see out there that

play16:21

works the best for D metrics and then um

play16:24

I'll talk a little bit about the

play16:25

demystifying the inner loop and outer loop

play16:28

and where Dora fits versus where it

play16:30

doesn't fit right so um let's just you

play16:33

know talk about the the inner loop

play16:36

really quick right the inner loop here

play16:39

is all about developer key metrics right

play16:43

and don't confuse these with um with Dora

play16:46

metric because you know it's very easy

play16:50

to um try to fit developer productivity

play16:54

into Dora right like kind of like what

play16:56

Ed just mentioned about tying his test

play16:59

um and then downstreaming to you know

play17:02

the devops um what I call the outer loop

play17:05

which is all the system metrics right

play17:07

when it comes to key measurements on the

play17:09

developer side the developers really

play17:12

don't um necessarily fit in that I'll

play17:15

call it you know Dora metric space they

play17:18

they're participants of that from a

play17:20

downstream perspective but they you know

play17:23

developers really want to code right so

play17:27

you really have to think of developer

play17:28

productivity as um you know two

play17:31

buckets I I put them in into activity

play17:33

metrics and efficiency metrics activity

play17:36

metrics are things like hey closing uh

play17:39

tickets closing pull requests you know

play17:43

um deployments um shipping code and and

play17:46

an opportunity to to make those flows

play17:49

right where uh where efficiency from a

play17:53

from an inner loop right and efficiency

play17:55

becomes to like do does my engineer feel

play17:59

productive um is he more productive with

play18:02

having less meetings is there a no

play18:04

meeting day is there um a you know two

play18:07

hours of focus times of uninterrupted

play18:10

you know um time that the developer can

play18:12

code right it's not about how many times

play18:15

he committed the code or or how many

play18:17

pull requests he created those are a

play18:19

little bit more activity metrics but

play18:20

those that's that's what I call the

play18:22

inner loop right so let's now look at

play18:25

the outer loop which which I think this

play18:27

is what the session is about it's about

play18:29

Dora metrics and what what we see in

play18:32

the industry and what we see the

play18:34

organizations be very um I'll call it uh

play18:37

devops trans transformative and

play18:40

very um very efficient I'll call it or

play18:43

very effective I will is when you take

play18:46

the context of moving you know the

play18:49

industry has been moving from I'll call

play18:50

it a batch releases to a continuous

play18:52

delivery model right that's where Dora

play18:55

comes in where you can start actually

play18:57

seeing the delivery and how fast of the

play19:01

delivery you're doing from moving from a

play19:03

batch or call it a monolith application

play19:05

to a microservices application and then

play19:08

how fast did that needle move again Dora

play19:11

metrics are also an education right it's

play19:14

a it's a muscle memory like you you also

play19:17

have to educate your community about

play19:19

Dora metrics you know um because it's

play19:22

not going to be day one but it's really

play19:24

up to your management team to really

play19:27

clearly identify

play19:29

what are your you know uh what does

play19:32

productivity mean within your

play19:33

organization right so you're able to

play19:36

understand the metric and then bubble

play19:37

that up to the executive team and

play19:39

Leadership so let me let me pause there

play19:42

um and I'll you know again I'll I'll

play19:45

just pause there and and let you know

play19:47

give it back to

play19:50

Ed okay thank you

play19:52

Gilbert and I'm going to go ahead and

play19:54

share my screen

play20:00

all right so we we said earlier in the

play20:02

agenda that we're why GitHub co-pilot

play20:05

and Dora this is part of a co-pilot

play20:07

series why are we talking about Dora now

play20:10

and this is something I'm actually

play20:11

really excited about so um GitHub

play20:14

co-pilot came out and it's definitely

play20:16

promising productivity and and things

play20:18

that the organizations care about it's

play20:20

wildly successful and it's being used like

play20:23

crazy right now but really what does it

play20:25

do for the organization you really want

play20:27

to be able to show a return on that on

play20:31

that investment um so so what what can

play20:34

we measure to get that done that's where

play20:36

Dora comes in we don't want to we don't

play20:38

want to start now trying to create our

play20:40

own metrics to prove the return on

play20:43

copilot we don't have to Google already

play20:46

made that effort we can just leverage

play20:48

that uh that investment so that's where

play20:50

co-pilot plus Dora comes in and it's

play20:53

something I'm really excited about uh

play20:55

let's see

play20:57

here so but what data is available and

play21:00

this is kind of where the rub is so

play21:02

co-pilot uh GitHub co-pilot came out

play21:05

recently with an API that provides usage

play21:07

data but that usage data is is kind of

play21:10

at a level it's at that inner loop level

play21:12

and it's not necessarily something that

play21:14

the organization uh sees and cares about

play21:17

immediately you know when you talk about

play21:19

return if you want to pitch or if you

play21:20

want to show your leadership that

play21:22

co-pilot is working out if you talk

play21:24

about you know hit rates as far as

play21:27

suggestions that are accepted that's not

play21:29

going to resonate with them you really

play21:31

want to get down to the Dora level so

play21:33

you at that point you want to be able to

play21:35

associate the co-pilot usage with the

play21:39

Dora metrics but that data that

play21:42

comes out of GitHub isn't at that level

play21:44

it doesn't give you down to the user or

play21:46

even the projects so drawing that

play21:49

correlation can be

play21:50

challenging so with that I I do want to

play21:52

get in and show you our solution to

play21:54

these problems and we'll kind of show

play21:56

what what we're doing on that front

play21:59

to start the demo here just quickly I

play22:01

want to show you I want to set this

play22:02

Foundation we are devops platform we do

play22:05

a lot of different things but right now

play22:07

we're looking at our tool registry so on

play22:10

as an onboarding function you come in

play22:12

here to our platform and you start to

play22:13

register the tools that you're already

play22:15

using and many of those tools publish

play22:18

metrics and when they do we Harvest

play22:20

those metrics on your behalf so that we

play22:23

can do things and we can aggregate that

play22:25

data and make insights from that data

play22:27

including the co-pilot uh usage data so

play22:31

um and Dora so we mentioned early that

play22:33

Dora spans these different tools because

play22:36

we have this registry of all these tools

play22:39

we're able to aggregate that data

play22:41

transform that data and give you valid

play22:44

uh Dora metrics so I mentioned one of the

play22:46

challenges is that aggregation it's also

play22:48

all those different tool combinations we

play22:50

have eight patents approved nine patents

play22:52

pending we some of those patents do

play22:54

apply to our data transformation and

play22:56

aggregation and that results in our

play22:58

ability to do Dora in a valid way against

play23:01

many different combinations of tools so

play23:04

we're looking at our Dora dashboard

play23:05

right here um we can focus this on

play23:08

particular organizations and things like

play23:10

that we have deployment frequency at the

play23:12

top left change failure rate underneath

play23:15

it lead time for changes and meantime to

play23:17

resolve uh I'll show you we'll dig into

play23:20

we can click into any of these the first

play23:21

thing I'll show is change failure rate

play23:23

if you click in one of these kpis we

play23:25

start to give you a graphical

play23:27

representation of the data we also give

play23:29

you a table of all the data points that

play23:31

got rolled into that representation and

play23:33

into that um that top number for that

play23:35

kpi so we give you all of that there's a

play23:37

lot of different ways you can

play23:38

investigate the data um but I just

play23:40

wanted to point that out um the next

play23:43

piece I want to show is taking it back

play23:45

to co-pilot so here is our co-pilot

play23:48

dashboard and this is where we're taking

play23:50

that usage data um from the API and

play23:53

we're we're showing it to you here which

play23:55

is very useful you can see adoption rate

play23:57

how many of the users that have access

play23:59

to co-pilot are using it we can see

play24:01

acceptance rate and quality suggestions

play24:03

this is you know when when copilot

play24:05

returns something are they tab

play24:07

completing and accepting that suggestion

play24:09

or they typing over it and rejecting

play24:11

these suggestions so that's all here

play24:14

useful stuff but it's not how healthy

play24:17

your organization is with respect to

play24:19

software delivery that's Dora this is

play24:21

kind of like is the model working are

play24:23

your people using it I I kind of think

play24:25

of it as a tachometer in a car and maybe

play24:28

temperature gauges here important but

play24:31

it's not what you want to report out so

play24:33

what you really want to do is you want

play24:34

to associate your co-pilot usage with

play24:38

your Dora metrics and unfortunately these

play24:42

um the the API published by GitHub

play24:45

doesn't give you that granularity

play24:47

doesn't give you that resolution but

play24:49

because we are a platform and we have

play24:51

hooks into all these different tools we

play24:53

have this holistic approach about what's

play24:55

happening in your organization we're

play24:57

able to suss out which commits are are

play25:01

are done by um with use of co-pilot

play25:05

versus which aren't and once we can

play25:07

start to associate commits with co-pilot

play25:10

usage all of a sudden we can trickle

play25:12

that information into these different

play25:15

kpis that are important to us so for

play25:17

instance lead time for changes does have

play25:19

a dependence on commits when commits are

play25:22

hitting the server and because of that

play25:24

now we can start to add elements to

play25:26

these interfaces like I'm showing

play25:28

here when I uh tab over this icon I'm

play25:31

showing this is all of the um all of the

play25:33

lead time for this organization right

play25:36

but if I come over here because it does

play25:38

have a dependence on commits and because

play25:40

we can do that association with commits

play25:42

to co-pilot usage now I have a toggle

play25:44

and I can say give me this Dora metric

play25:47

give me lead time for changes just for

play25:49

co-pilot uses or users right and

play25:52

similarly or inversely give me lead time

play25:54

for changes for non- co-pilot users so

play25:57

in this way we can connect co-pilot to

play26:00

actual uh actual benefits to the

play26:03

organization that the leadership

play26:04

recognizes which is

play26:07

Dora so let me let me stop here um are

play26:10

there any questions coming

play26:15

in there is one question I see here um

play26:21

someone's just wondering where do you

play26:23

start or how can you get

play26:26

started I I How about if I start that

play26:28

one um so where to get started basically

play26:32

where you're at right so you definitely

play26:34

want to start to get metrics on the

play26:35

tools that you're using it doesn't take

play26:37

many tools to start to get some kind of

play26:40

Dora out um so start where you're at

play26:43

start collecting metrics and um Anna can

play26:45

you I think we have a program going

play26:48

where we can help start to put that

play26:54

together absolutely I'm excited to share

play26:56

this with you so um in addition to a uh

play27:03

the GitHub co-pilots dashboard that um

play27:06

Ed shared with you we also have a Dora

play27:10

dashboard so the what what Ed shared

play27:13

with you we have a 14-day free trial

play27:15

just for you to get started and and see

play27:17

it so um if you sign up here at this uh

play27:21

QR code you'll bring four of your tools

play27:24

you'll connect them in under an hour

play27:25

we'll connect them for you in under an

play27:26

hour and you'll be able to see you know

play27:29

your GitHub co-pilot metrics you'll see

play27:32

what we showed you and then you'll be

play27:35

able to uh understand how it's being

play27:37

used um who's using the licensing all

play27:41

that good stuff and then as part of our

play27:43

GitHub insights bundle we have the Dora

play27:45

dashboard so it's a really really easy

play27:48

way for you to just get started with a

play27:50

tool that is right out of the box um you

play27:53

know there's there's obviously for like

play27:57

uh Gilbert said earlier for a cultural

play27:59

shift you have to decide what does um

play28:02

what does success look like for you

play28:04

Dora's done all that work right they've

play28:06

done 10 years worth of research to

play28:09

decide what is successful why don't you

play28:11

just go out of the box and try it right

play28:13

so that's what we're trying to do here

play28:14

for you is make it an easy way for you

play28:16

to bring the tools that you're already

play28:17

using bring the teams to the table if

play28:20

you've got a 100 developers or more you

play28:22

can get a free trial today so totally

play28:25

recommend um adid put the link in the

play28:27

chat as well if if you need to use it um

play28:30

but yeah this is this is an easy way to

play28:31

get started and and we're we're really

play28:33

excited to be able to bring it to you

play28:35

one of the only ones on the market for

play28:36

you so you might no risk way of just

play28:39

getting started right now um so yeah I I

play28:42

wanted to say very much thank you to Ed

play28:45

and Gilbert for your time today on this

play28:47

webinar because you brought a lot of

play28:49

really good information to us I know it

play28:51

can be sort of overwhelming especially

play28:53

that innerloop and outerloop activity

play28:56

information and so I I will will

play28:58

recommend again that you join us uh for

play29:01

the next episode next month where we'll

play29:04

explore more of that developer

play29:06

experience um idea we'll we'll dive

play29:09

further into what it means to take

play29:11

GitHub co-pilot to take Dora to take you

play29:14

know your security posture how is it

play29:16

actually impacting your developer

play29:18

performance and what they're you know

play29:19

enjoying about their jobs so highly

play29:22

recommend you join us for that as well

play29:24

um I've put the the QR code here for you

play29:26

for a quick um quick way to sign up and

play29:30

then I will also uh Flash the 14-day

play29:32

free trial again for the remainder of

play29:34

the webinar but I wanted to say thanks

play29:36

everyone for attending thank you to Ed

play29:38

and Gilbert uh for for being our

play29:40

panelist today and thank you to adid for

play29:42

monitoring the

play29:44

chat yeah thank you guys thank you for

play29:46

having us and we look forward to uh you

play29:48

know hearing uh great more questions and

play29:51

looking forward to how you how get how

play29:54

you use GitHub copilot and how has it

play29:56

been with you with your experience

play29:58

absolutely can't wait all right thank

play30:00

you thanks everybody thanks all bye guys

play30:03

thank you

