DevOps Huddle EP 19 | Measuring GitHub Copilot's Downstream Impact with DORA | Opsera
Summary
TL;DR: In this episode of the DevOps Huddle, the discussion focuses on measuring the impact of GitHub Copilot and its effect on downstream processes using DORA metrics. The webinar explores the challenges of aggregating data across tools and the importance of these metrics for understanding software delivery performance. It introduces a solution that connects GitHub Copilot usage with DORA metrics, offering insights into developer productivity and efficiency. The conversation also includes a 14-day free trial offer for a dashboard that integrates GitHub Copilot and DORA metrics.
Takeaways
- 😀 The webinar is part of a three-part series focusing on unified insights capabilities, particularly the impact of using GitHub Copilot and how it can be measured with Dora.
- 🔧 Part one of the series covered GitHub Copilot's developer activity, actual usage, and licensing, while part two discusses the downstream impact after code commits.
- 👋 Introductions of the panelists: Ed, a sales engineer with a background in devops, and Gilbert, VP of Post Services with experience in building devops teams.
- 📊 A poll was conducted to gauge the audience's use of GitHub Copilot and GitHub, showing an even split between those who are and aren't using GitHub Copilot.
- 📅 An upcoming third part of the series is announced for July 25th, focusing on uniting GitHub Copilot with developer experience and security posture.
- 🔬 Dora is introduced as a 10-year research and assessment program run by Google, aimed at understanding capabilities and processes that drive higher delivery performance.
- 📈 Dora focuses on four core metrics: Lead Time for Change, Deployment Frequency, Change Failure Rate, and Mean Time to Resolution (MTTR).
- 🛠 Ed explains the importance of not creating your own metrics but leveraging the established Dora metrics for measuring software delivery performance.
- 🔀 Gilbert discusses the challenges of setting up Dora metrics, including data aggregation across tools, time pressure, risk management, and updates/maintenance.
- 🔄 The webinar highlights the importance of distinguishing between inner loop (developer activity) and outer loop (system metrics) when considering metrics for devops and development teams.
- 🔗 The final part of the webinar demonstrates how GitHub Copilot usage can be associated with Dora metrics to show the impact on organizational software delivery performance.
Q & A
What is the main focus of the 'devops Huddle, episode 19' webinar?
-The webinar focuses on the unified insights capabilities, particularly the new GitHub Copilot measuring capabilities, and its downstream impact on software development processes.
What is GitHub Copilot and what does it aim to improve?
-GitHub Copilot is an AI programming assistance tool that aims to improve developer productivity and efficiency by providing code suggestions and automating certain coding tasks.
What does part one of the webinar series cover?
-Part one of the series is about understanding GitHub Copilot, measuring developer activity, actual usage, and licensing to determine how much of the license is being utilized.
Who are the panelists introduced in the webinar, and what are their backgrounds?
-The panelists are Ed, a sales engineer at Opsera who has a background in development and devops, and Gilbert, VP of Post Services at Opsera, who has experience in building devops teams and processes.
What is Dora, and what does it stand for?
-Dora stands for the DevOps Research and Assessment program, a 10-year-long research initiative run by Google to understand what capabilities, technologies, and processes drive higher delivery performance in software development.
What are the 'Dora Core 4' metrics that Ed discusses in the webinar?
-The 'Dora Core 4' metrics are lead time for change, deployment frequency, change failure rate, and mean time to resolution (MTTR), which are key performance indicators for measuring software delivery and organizational performance.
Why is aggregating data across different tools a challenge when implementing Dora metrics?
-Aggregating data is challenging because it involves collecting data from various tools used across different teams, which may have different combinations of tools and processes, and then ensuring the data is consistent and valid for accurate Dora metric calculations.
What is the significance of the 14-day free trial mentioned in the webinar?
-The 14-day free trial allows participants to connect their existing tools to the Opsera platform, get started with measuring GitHub Copilot usage and Dora metrics, and evaluate the benefits without any initial commitment.
How can GitHub Copilot's impact on an organization be measured?
-The impact of GitHub Copilot can be measured by associating its usage data with Dora metrics, which provide insights into software delivery performance and help demonstrate the return on investment for using GitHub Copilot.
What is the purpose of the third part of the webinar series scheduled for July 25th?
-The third part of the series will focus on uniting GitHub Copilot with developer experience and ensuring that the security posture of the organization remains safe, while also exploring the satisfaction of developers with their work and the impact on business security.
What are the inner loop and outer loop in the context of software development metrics?
-The inner loop refers to developer-centric metrics focusing on activity and efficiency, such as coding activity, focus time, and time spent in meetings. The outer loop, or Dora metrics, refers to system productivity metrics that measure the performance of the entire software delivery process, such as lead time for changes and deployment frequency.
Outlines
😀 Introduction to the DevOps Huddle and GitHub Copilot Discussion
The script opens with a warm welcome to the DevOps Huddle, episode 19, which is the second installment of a three-part series focused on unified insights and GitHub Copilot's impact on development. The host briefly summarizes the content of part one and outlines the agenda for part two, which revolves around measuring the downstream effects of using GitHub Copilot. The host then introduces the panelists, Ed and Gilbert, who share their professional backgrounds in devops and development. A poll is conducted to gauge the audience's familiarity with GitHub and GitHub Copilot, revealing a balanced split. Finally, the host teases part three of the series, which will address developer experience and security in the context of GitHub Copilot.
📊 Delving into Dora and Its Core Metrics for Software Delivery Performance
This paragraph introduces Dora, a decade-long research initiative by Google, aimed at identifying the key capabilities, technologies, and processes that enhance delivery performance. The Dora research has culminated in four core metrics known as the 'Dora Core 4', which are critical for assessing software delivery and organizational performance. Ed explains these metrics: lead time for change, deployment frequency, change failure rate, and mean time to resolution (MTTR). The metrics serve as a benchmark for managers to improve software delivery within their organizations. The speaker also mentions the challenges of implementing these metrics, such as aggregating data across various tools and the pressure to deliver quick results without disrupting developers' schedules.
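To make the "Dora Core 4" concrete, here is a minimal sketch of how the four metrics could be computed from a handful of deployment records. All record fields, dates, and values are invented for illustration; this is not Opsera's or Google's implementation, just the arithmetic behind the definitions above.

```python
from datetime import datetime, timedelta

# Invented deployment records: commit time, deploy time, whether the
# deployment caused an incident, and when service was restored.
deployments = [
    {"committed": datetime(2024, 6, 3, 9), "deployed": datetime(2024, 6, 4, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 6, 5, 10), "deployed": datetime(2024, 6, 6, 11),
     "failed": True, "restored": datetime(2024, 6, 6, 14)},
    {"committed": datetime(2024, 6, 10, 8), "deployed": datetime(2024, 6, 11, 9),
     "failed": False, "restored": None},
]
days_observed = 14  # length of the observation window, in days

# Lead time for change: average time from commit to deploy.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deploys per day over the window.
deploy_frequency = len(deployments) / days_observed

# Change failure rate: share of deployments that caused an incident.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# MTTR: average time from a failed deploy to restored service.
mttr = sum((d["restored"] - d["deployed"] for d in failures), timedelta()) / len(failures)

print(lead_time, deploy_frequency, change_failure_rate, mttr)
```

In practice the hard part is not this arithmetic but, as the next section notes, getting consistent commit, deploy, and incident timestamps out of every tool in the chain.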
🔧 The Challenges and Solutions in Implementing Dora Metrics
The speaker acknowledges the difficulties in setting up Dora metrics, such as aggregating data from multiple tools and the pressure for immediate results. They also discuss the risks involved in creating custom metrics and the maintenance challenges that arise. The paragraph emphasizes the importance of using a platform that can aggregate and transform data across different tools to provide valid Dora metrics. The speaker mentions that their organization, Opsera, has sponsored the DORA State of DevOps report and encourages the audience to access the latest report for more insights.
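One way to picture the aggregation problem described above: each tool reports deployments in its own shape, so a thin adapter per tool can map everything onto one shared schema before any Dora math runs. The payload field names below are made up for this sketch, not taken from any real vendor's API.

```python
from datetime import datetime, timezone

# Illustrative payloads from two different CI/CD tools; all field
# names here are invented, not a real vendor schema.
tool_a_events = [{"pipeline": "payments", "status": "passed",
                  "finished_at": "2024-06-04T15:00:00+00:00"}]
tool_b_events = [{"app": "payments", "ok": 1, "ts": 1717507200}]

def from_tool_a(e):
    # Adapter: tool A reports ISO-8601 timestamps and a status string.
    return {"service": e["pipeline"],
            "success": e["status"] == "passed",
            "deployed_at": datetime.fromisoformat(e["finished_at"])}

def from_tool_b(e):
    # Adapter: tool B reports Unix epoch seconds and a 0/1 flag.
    return {"service": e["app"],
            "success": bool(e["ok"]),
            "deployed_at": datetime.fromtimestamp(e["ts"], tz=timezone.utc)}

# One normalized stream: downstream Dora calculations only ever see
# this schema, no matter which tools a given team deploys with.
events = ([from_tool_a(e) for e in tool_a_events] +
          [from_tool_b(e) for e in tool_b_events])
```

The design point is that each new tool combination only costs one adapter; the metric definitions themselves never change.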
🤔 Exploring the Inner Loop and Outer Loop Metrics in Software Development
Gilbert distinguishes between the inner loop, which pertains to developer activity and efficiency metrics, and the outer loop, which involves system productivity metrics like those identified by Dora. He explains that while developers are participants in the outer loop from a downstream perspective, their primary focus is on coding, which is better measured by activity and efficiency metrics. Gilbert also discusses the importance of continuous delivery models and how Dora metrics can help organizations transition from batch releases to a more agile approach. He stresses the need for educating teams about Dora metrics to foster a culture of productivity and efficiency.
🔗 Connecting GitHub Copilot Usage with Dora Metrics for Organizational Benefits
The script addresses the significance of correlating GitHub Copilot usage with Dora metrics to demonstrate the tool's impact on organizational performance. While GitHub Copilot provides an API for usage data, it lacks the granularity required to directly link with Dora metrics. The speaker introduces their platform's capability to aggregate and transform data from various tools, including GitHub Copilot usage, to generate meaningful Dora metrics. This holistic approach allows for a more accurate assessment of the benefits of using GitHub Copilot in terms of software delivery performance.
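The correlation step described above can be pictured as a simple join of per-team Copilot adoption against per-team Dora figures, so usage and delivery outcomes can be read side by side. The team names and numbers below are invented for illustration and are not output of any real API.

```python
# Invented per-team figures for illustration only.
copilot_adoption = {"team-a": 0.80, "team-b": 0.10}  # share of devs actively using Copilot
dora_by_team = {
    "team-a": {"lead_time_hours": 20, "deploys_per_week": 12},
    "team-b": {"lead_time_hours": 55, "deploys_per_week": 3},
}

# Join on team name so adoption sits next to delivery outcomes.
report = [{"team": t, "copilot_adoption": copilot_adoption[t], **dora_by_team[t]}
          for t in sorted(dora_by_team)]

for row in report:
    print(row)
```

A table like this is what lets a manager argue about return on investment: correlation across teams and over time, rather than raw suggestion-acceptance counts.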
🚀 Getting Started with Dora and GitHub Copilot: A 14-Day Free Trial Offer
The host offers a 14-day free trial for participants to start using their platform to measure GitHub Copilot usage and Dora metrics. They explain that the trial allows users to connect their existing tools and immediately begin assessing their use of GitHub Copilot and its impact on software delivery performance. The host encourages the audience to take advantage of this no-risk opportunity to gain insights into their development processes and to prepare for the upcoming third part of the series, which will focus on developer experience and security.
Keywords
💡DevOps Huddle
💡GitHub Copilot
💡Unified Insights
💡Dora
💡Lead Time for Change
💡Deployment Frequency
💡Change Failure Rate
💡MTTR (Mean Time to Resolution)
💡Inner Loop
💡Outer Loop
💡DevX
Highlights
Introduction to a three-part series focusing on unified insights and GitHub Copilot's capabilities and its downstream impact.
GitHub Copilot's role in measuring developer activity and usage, along with licensing details.
The importance of understanding what happens downstream after code commits and its measurement with DORA.
Introduction of panelists Ed and Gilbert, their experience in devops and contribution to the discussion.
A poll to gauge the audience's familiarity with GitHub Copilot and GitHub usage in their organizations.
The upcoming third part of the series focusing on developer experience and security posture with GitHub Copilot.
Overview of DORA, a 10-year research program by Google, aimed at understanding capabilities for higher delivery performance.
Explanation of DORA's Core 4 metrics: Lead Time for Change, Deployment Frequency, Change Failure Rate, and MTTR.
The practical use of DORA metrics for managers to improve software delivery within their organizations.
Challenges in setting up DORA metrics, including data aggregation across different tools and the pressure for quick results.
The concept of inner loop and outer loop metrics, distinguishing between developer activity and system productivity.
GitHub Copilot's API and its limitations in providing data for DORA metrics.
How the devops platform aggregates and transforms data to provide valid DORA metrics despite tool variations.
Demonstration of associating GitHub Copilot usage with DORA metrics to show the tool's impact on organizational performance.
A 14-day free trial offer for the devops platform to get started with GitHub Copilot and DORA metrics.
Encouragement for the audience to join the next webinar for further exploration of developer experience and GitHub Copilot's impact.
Final thanks and sign-off from the hosts and panelists, highlighting the value of the insights shared during the webinar.
Transcripts
welcome everybody to the devops Huddle
episode 19 now this is part two of a
three-part series that we've been
running uh about our unified insights
capabilities and more specifically about
our new uh GitHub co-pilot measuring
capabilities um being able to measure
GitHub co-pilot as well as the
downstream impact of what happens when
you use GitHub co-pilot so if you missed
part one part one was all about GitHub
co-pilot uh what you're measuring
developer activity um actual usage and
then also um you know licensing how much
of your licenses are being used all those
great capabilities we have now this is
part two where we're going to take what
we learned in part one and apply it to
what happens
after somebody commits their code right
what happens Downstream and being able
to measure that impact with Dora now uh
if you're not familiar with Dora it's
not a big deal we'll cover it later in
this webinar we'll cover all about it so
you'll be fully informed um but let's
start with a nice little
introduction so I'd like to say hello to
my panelists today Ed and Gilbert uh so
I'd love to for them to introduce
themselves Ed why don't you go first
sure I'm Ed SL I'm a sales engineer here
at Opsera I spent most of my career as a
developer up until about maybe seven
years ago when I got into devops uh I
worked at gitlab for four years I had
four roles in customer success over
those four years and I've been at Opsera
for the past 14 months happy to be here
awesome so glad to have you Ed and I'd
also like to introduce Gilbert Gilbert
why don't you introduce yourself hey
guys um I'm a VP of post Services here
at Opsera um before that I was a VP of
devops and Cloud operations building
devops teams and building uh devops you
know uh processes and infrastructure so
so happy to be here thanks for having me
great and and great to have two people
who understand what it's like to both
run and lead and be uh contributing
members of development teams and also
understand what it means to um improve
the performance of those teams so I I
think you guys are great panelists and
uh we're going to get started here but
first we're going to talk a little bit
we're going to do a little poll so uh
I'm gonna invite um adid to run a poll
for us and uh why don't you go for it
thank you just
gonna ask you guys just a couple of
questions um you should be able to see
them is your organization currently
using GitHub co-pilot and is your
organization currently using GitHub
yeah and to clarify right if you're
using GitHub co-pilot that would be for
AI pair programming assistance but if
you're using GitHub that would be for
you know source code management or
something like GitHub
actions all right another second for
everyone to answer and it looks like
we've got a pretty even split between
yes and no for everyone using uh GitHub
co-pilot and for GitHub GitHub everyone
is
using and that makes sense to
me um if we think about it right GitHub
co-pilot is pretty new to the market but
everybody at least heard about it maybe
they're not using it yet maybe you're
not uh you know totally on board with it
yet but GitHub is a leader in in um
source code management so it makes sense
for everybody to be quite on board with
using GitHub um so great it's it's nice
to know sort of where everybody stands
uh before we get rolling with this
webinar so so like I said this is part
two of a three-part series and so in
order for us to sort of close the loop
on what it means to use GitHub copilot
and what it means for your business we
have a part three coming up in July it
will be July 25th at 11:00 a.m. Pacific
and it will be all about uniting GitHub
co-pilot with developer experience and
making sure your security posture is
safe so maybe you you learned yes you're
using GitHub co-pilot to improve
developer productivity efficiency um you
know those those commits are being
accepted into production what does that
mean for the developer are developers
actually achieving you know greater
satisfaction with their work are they
doing things better are they feeling
positively about how GitHub co-pilot is
helping them and also is your business
taking a hit to security by using this
new technology and how do you know how
do you measure it so it's all part three
and we love for you to join us for that
as well definitely go ahead and scan
this QR code for a little landing page
that'll give you um a place to sign up
we'll also flash this at the end so that
you have another chance to sign up if
you miss it
now okay so great now we get into the
meat of this business today um so for
everybody who maybe has heard about
GitHub co-pilot or is using it but maybe
you're not really familiar with Dora and
why we're on the call today we're going
to give you a little introduction to
what is Dora so Dora is a 10 year long
uh research and Assessment program the
devops research and Assessment program
run by Google so they've been running it
a long time uh it's bunch of really
great people who are really interested
in understanding what capabilities what
technologies and what processes actually
drive higher delivery performance right
they're really interested in boiling
down all of the different things that
actually lead to better organizational
performance
they have spent years really
understanding from code to
commit what do you need to be successful
as a business as a team as a personal
developer and so they they've they've
interviewed and surveyed thousands of
developers and thousands of
organizations across these 10 years and
have uh issued really awesome
information over this over the course of
this time um Opsera was fortunate to be
able to be a uh sponsor of the
2023 Dora State of DevOps report um so
you can definitely get that at this link
uh but we're also really pleased to be a
sponsor of the 2024 upcoming report so
if you sign up for this report from 2023
which you can download for free now we
will also uh inform you when the 2024
report comes out because that'll be the
newest and greatest information really
excited to be able to have been sponsors
of last year and sponsors of this year
as well um so yes awesome info um and
and I'm going to hand it over to Ed now
who's going to take you into the Dora
core 4 that is what we're really going
to focus on today which are the core
four metrics that Dora focuses on take
it away
Ed okay thank you okay so like Anna said
Google put a lot of effort and Research
into high performing technology teams
they started 10 years ago they spent a
lot of resources on this and what
trickled out from all of that that
research is these big four kpis so after
all was said and done they said hey if
you measure these four things you're
going to have your arms around what's
happening with respect to software
delivery and your
organization so we'll walk through these
quick um let's say I'd like to start
with lead time for change the bottom
left here this is how long it takes to
go from idea to delivered value to your
customers whatever that means for your
organization so you want to be able to
do this quickly this is how you you hear
about you know deliver software better
faster this is the faster piece of that
um above that is deployment frequency
how fast can you run that play you know
how often are you getting these stories
that you're able to translate into value
so these two together come together to
give you total delivery speed this is
your throughput this is how fast your
organization is able to go uh now on the
right we kind of have the constraining
pieces so on the bottom right change
failure rate how often when you
introduce a change or when you deployed
a prod did you introduce trouble or some
incident that has to be resolved so
that's your change failure rate you want
that to be low you want it to Trend
lower but you would never strive to make
that zero because to do that your lead
time for change would have to go very
high you'd have to put so much rigor in
testing and approvals into your process
that basically your throughput would
grind to a halt so you want your change
failure rate to be low but um you
would never try to make it to go to zero
there's going to be problems and that
gets us into the fourth uh metric which
is mttr mean time to resolution or time
to restore so when something bad happens
and Things become unstable how long does
it take you to identify that problem and
back it out you know get back to steady
state get back to things working a very
important characteristic for a system so
really really quick um we see Dora
coming in and being useful for two main
reasons one is maybe I'm a manager and
I'm in charge of making software
delivery better for my
organization the first thing I have to
figure out is what does that even mean
right you know how how am I going to
represent that at the end of the quarter
I'm going to put some things in place
I'm going to figure some things out but
at the end of the quarter I'm going to
want to show hey this is software
delivery before and this is software
delivery after all this great stuff that
I did what am I going to show in those
two slides well the answer to that
question is Dora you know a lot of
organizations and a lot of people try to
try to go roll their own and they figure
it out and I'm going to measure commits
hit in the server and I'm going to
measure this or that but what what came
out of all of the resources that Google
put into this research is these four
kpis the answer to that question is
these four kpis no need to to roll your
own um one of the uh the stories I like
to tell here is I used to be when I was
a developer I used to be very proud of
my ability to a automate testing you
know whatever the the the the gnarly
problem was or application was I would
figure out a way to go in there and
build out some tests that can be run as
part of the pipeline and then you know
you can't get back into main or you
can't get back into Dev until you pass
this this test and I I thought that was
fantastic and it and it was easy to sell
to my management but what ended up
happening is those tests sometimes would
be so complicated and cumbersome that I
was the only person that could maintain
them and all of a sudden I introduced uh
bottleneck into our process so lead time
for change would go up actually so um
and and the the other problem there was
that the my colleagues had trouble
making that point if you don't have Dora
they had trouble arguing against these
tests that I was saying we're so great
but if you have Dora and you trust that
Dora you can say all right all right Ed
let's let's watch um change failure rate
and let's see if it changes let's back
out that test and replace that big
gnarly test with these three little
simple test and see how change failure
rate is affected and if it's not
affected poorly let's let's make a
decision to cut out that big test so
another way that if you have Dora that
you trust this is the way that you can
leverage it going
forward um let's see uh support slas the
other piece I want to show here is you
know to show how these things kind of
play together is meantime to restore
what would happen if I said hey I know
we've been running 247 support but I
don't think problems really happen in
off work hours and um and I think we
would be okay just supporting during
working hours you know what you can do
if you have mttr that you trust you
could test that you can run some
application with that SLA for a couple
of weeks watch your mttr and see if it
gets hammered if mttr doesn't change
much you might find that 247 support
isn't worth the squeeze so another kind
of instance where you can use Dora to
make things better uh in your
organization okay uh next
slide all right so I sold you now on Dora
Dora is fantastic and you want to do it
and I explained how simple those metrics
are so maybe you want to go create these
things yourself you know all I have to
do is start the clock here and end it
here and and and run this calculation
and I have Dora right so I'll go do this
myself there are some challenges with
setting this up we see this all the time
with organizations they come out of the
Gates they're going to do this
themselves and and these are some of the
things that they hit so the first thing
is aggregating data a lot of these kpis
they do span tools so you have to you
have to harvest the data from the
different tools you have to aggregate it
across it and make sure things make
sense otherwise you start to get Dora
metrics that aren't valid that's not
consistent with what's really happening
so that's a challenge and that challenge
is exacerbated by the fact that you
probably have a couple different
combinations of tools even inside your
organization across your vertical
right this team isn't doing it this way
they're using these tools this other
team has a whole different concept of
what it means to deploy so now you have
to aggregate but you have to do it a
couple different ways according to these
combinations of tools the next piece
that comes in here is you've sold Dora
to your leadership you know everybody
agrees this is a good thing we need to
have it and now there's pressure we want
it now we don't we don't want to wait
till next quarter to get this what what
can you show us you know as soon as
possible so here's this pressure to
produce quickly
um but you know don't don't affect my
developers you know these developers are
working on these other problems and
their schedules aren't to be changed you
know so there's you know another
competing thing that that happen um two
more things risk right what what what is
there out there what are the unknown
unknowns that you're getting yourself
into here when you try to break this off
for yourself and then finally uh updates
and maintenance the new ideas come out
immediately you know it's great that we
have Dora and we have these kpis but but
can you give me side by-side comparisons
between team a and Team B or can you
give me side-by-side comparisons between
this Sprint and that Sprint last time
you know these kind of these kind of
requests are going to come trickling in
and they will be U they will be
problematic so um so I'm going to pause
there for a second I mentioned earlier
about Downstream pieces associated with
lead time for delivery and to talk more
about that um we be good
all right all right well thanks thanks
guys um so what are what are Downstream
metrics
um I think yeah so so I'm I'm going to
take a little bit of a step back and
really talk about what what the industry
calls you know um metrics right and I'm
going to talk about um I think three
different Frameworks out there in the
industry which could be very confusing
to to all of us right like um we've been
talking about Dora and how Dora you know
has four metrics that Google has led the
industry and now has been been a
standard you know lead time for changes
um you know mttr you know which is now
and has a new name called failed
deployment recovery time um you have
change failure rate and then you have
deployment frequency right so these are
all like system productivity uh metrics
right so so I just want to kind of um
clear up uh the maybe a little bit of
the confusion just on looking at
Frameworks right there's three different
Frameworks which is Dora SPACE and now a
new framework called DevX right so um it
could be super confusing it could be
like what do I use why is Dora so
important right and um I know we don't
have a lot of time because that could be
a whole huddle by itself so so I'm
really just going to kind of give you
examples of what we see out there that
works the best for D metrics and then um
I'll talk a little bit about the
demystifying the inner loop and outer loop
and where Dora fits versus where it
doesn't fit right so um let's just you
know talk about the the inner loop
really quick right the inner loop here
is all about developer key metrics right
and don't confuse these with um with Dora
metrics because you know it's very easy
to um try to fit developer productivity
into Dora right like kind of like what
Ed just mentioned about tying his test
um and then downstreaming to you know
the devops um what I call the outer loop
which is all the system metrics right
when it comes to key measurements on the
developer side the developers really
don't um necessarily fit in that I'll
call it you know Dora metric space they
they're participants of that from a
downstream perspective but they you know
developers really want to code right so
you really have to think of developer
productivity as um you know two
buckets I I put them in into activity
metrics and efficiency metrics activity
metrics are things like hey closing uh
tickets closing pull requests you know
um deployments um shipping code and and
an opportunity to to make those flows
right where uh where efficiency from a
from an inner loop right and efficiency
becomes to like do does my engineer feel
productive um is he more productive with
having less meetings is there a no
meeting day is there um a you know two
hours of focus times of uninterrupted
you know um time that the developer can
code right it's not about how many times
he committed the code or or how many
pull requests he created those are a
little bit more activity metrics but
those that's that's what I call the
inner loop right so let's now look at
the outer loop which which I think this
is what the session is about it's about
Dora metrics and what what we see in
the industry and what we see the
organizations be very um I'll call it uh
devops trans transformative and
very um very efficient I'll call it or
very effective I will is when you take
the context of moving you know the
industry has been moving from I'll call
it a batch releases to a continuous
delivery model right that's where Dora
comes in where you can start actually
seeing the delivery and how fast of the
delivery you're doing from moving from a
batch or call it a monolith application
to a microservices application and then
how fast did that needle move again Dora
metrics are also an education right it's
a it's a muscle memory like you you also
have to educate your community about
Dora metrics you know um because it's
not going to be day one but it's really
up to your management team to really
clearly identify
what are your you know uh what does
productivity mean within your
organization right so you're able to
understand the metric and then bubble
that up to the executive team and
Leadership so let me let me pause there
um and I'll you know again I'll I'll
just pause there and and let you know
give it back to
ad okay thank you
Gilbert and I'm going to go ahead and
share my screen
all right so we we said earlier in the
agenda why GitHub co-pilot
and Dora this is part of a co-pilot
series why are we talking about Dora now
and this is something I'm actually
really excited about so um GitHub
co-pilot came out and it's definitely
promising productivity and and things
that the organizations care about it's
wildly successful and it's being used like
crazy right now but really what does it
do for the organization you really want
to be able to show a return on that on
that investment um so so what what can
we measure to get that done that's where
Dora comes in we don't want to we don't
want to start now trying to create our
own metrics to prove the return on
copilot we don't have to Google already
made that effort we can just leverage
that uh that investment so that's where
co-pilot plus Dora comes in and it's
something I'm really excited about uh
let's see
here so but what data is available and
this is kind of where the rub is so
co-pilot uh GitHub co-pilot came out
recently with an API that provides usage
data but that usage data is is kind of
at a level it's at that inner loop level
and it's not necessarily something that
the organization uh sees and cares about
immediately you know when you talk about
return if you want to pitch or if you
want to show your leadership that
co-pilot is working out if you talk
about you know hit rates as far as
suggestions that are accepted that's not
going to resonate with them you really
want to get down to the door level so
you at that point you want to be able to
associate the co-pilot usage with the
dur metrics but that door that data that
comes out of GitHub isn't at that level
it doesn't give you down to the user or
even the projects so drawing that
correlation can be
challenging so with that I I do want to
get in and show you our solution to
these problems and we'll kind of show
what what we're doing on that front
to start the demo here just quickly I
want to show you I want to set this
Foundation we are devops platform we do
a lot of different things but right now
we're looking at our tool registry so on
as an onboarding function you come in
here to our platform and you start to
register the tools that you're already
using and many of those tools publish
metrics and when they do we Harvest
those metrics on your behalf so that we
can do things and we can aggregate that
data and make insights from that data
including the co-pilot uh usage data so
um and Dora so we mentioned early that
Dora spans these different tools because
we have this registry of all these tools
we're able to aggregate that data
transform that data and give you valid
uh door metric so I mentioned one of the
challenges is that aggregation it's also
all those different tool combinations we
have eight patents approved nine patents
pending we some of those patents do
apply to our data transformation and
aggregation and that results in our
ability to do D in a valid way against
many different combinations of tools so
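To make the aggregation idea concrete, here is a minimal sketch (not Opsera's actual implementation) of computing two of the four DORA metrics from deployment events that have already been normalized across tools. The record shape and values are invented for illustration.

```python
from datetime import date

# Invented, already-normalized deployment events; in practice these would be
# harvested from many different CI/CD tools and transformed into one shape.
deployments = [
    {"day": date(2024, 6, 1), "failed": False},
    {"day": date(2024, 6, 1), "failed": True},
    {"day": date(2024, 6, 3), "failed": False},
    {"day": date(2024, 6, 5), "failed": False},
]

def deployment_frequency(events, window_days):
    """Deployments per day over the reporting window."""
    return len(events) / window_days

def change_failure_rate(events):
    """Share of deployments that caused a failure."""
    failures = sum(1 for e in events if e["failed"])
    return failures / len(events)

print(deployment_frequency(deployments, window_days=7))  # 4 deploys / 7 days
print(change_failure_rate(deployments))                  # 1 failure / 4 deploys = 0.25
```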
So we're looking at our DORA dashboard right here. We can focus this on particular organizations and things like that. We have deployment frequency at the top left, change failure rate underneath it, lead time for changes, and mean time to resolve. We can click into any of these; the first thing I'll show is change failure rate. If you click into one of these KPIs, we start to give you a graphical representation of the data, and we also give you a table of all the data points that got rolled into that representation and into the top number for that KPI. So we give you all of that; there are a lot of different ways you can investigate the data, but I just wanted to point that out. The next piece I want to show is taking it back to Copilot. Here is our Copilot dashboard, and this is where we're taking that usage data from the API and showing it to you, which is very useful. You can see adoption rate: how many of the users that have access to Copilot are using it. We can see acceptance rate and the quality of suggestions: when Copilot returns something, are they tab-completing and accepting that suggestion, or are they typing over it and rejecting these suggestions?
So that's all here, and it's useful stuff, but it's not how healthy your organization is with respect to software delivery; that's DORA. This is kind of like: is the model working, are your people using it? I think of it as the tachometer in a car, or maybe the temperature gauge: important, but not what you want to report out. What you really want to do is associate your Copilot usage with your DORA metrics, and unfortunately the API published by GitHub doesn't give you that granularity, that resolution. But because we are a platform and we have hooks into all these different tools, we have this holistic view of what's happening in your organization, and we're able to suss out which commits are done with the use of Copilot versus which aren't. Once we can start to associate commits with Copilot usage, all of a sudden we can trickle that information into these different KPIs that are important to us. For instance, lead time for changes does have a dependence on commits, on when commits are hitting the server, and because of that we can now start to add elements to these interfaces, like I'm showing here. When I hover over this icon, I'm showing all of the lead time for this organization. But if I come over here, because it does have a dependence on commits, and because we can do that association of commits to Copilot usage, I now have a toggle and I can say: give me this DORA metric, give me lead time for changes, just for Copilot users. And similarly, or inversely: give me lead time for changes for non-Copilot users. In this way we can connect Copilot to actual benefits to the organization that the leadership recognizes, which is DORA. So let me stop here. Are
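The toggle demoed here amounts to segmenting one DORA metric by a per-commit flag. Below is a minimal sketch of that idea, assuming commits have already been tagged as Copilot-assisted or not; the tagging itself is the hard, platform-specific part, and the records shown are invented.

```python
from datetime import datetime
from statistics import mean

# Invented commit records: commit time, deploy time, and a flag the platform
# has already derived by associating commits with Copilot usage.
commits = [
    {"committed": datetime(2024, 6, 1, 9), "deployed": datetime(2024, 6, 1, 17), "copilot": True},
    {"committed": datetime(2024, 6, 2, 9), "deployed": datetime(2024, 6, 3, 9),  "copilot": False},
    {"committed": datetime(2024, 6, 4, 9), "deployed": datetime(2024, 6, 4, 15), "copilot": True},
]

def lead_time_hours(records, copilot):
    """Mean lead time for changes (commit -> deploy), filtered by the flag."""
    deltas = [(r["deployed"] - r["committed"]).total_seconds() / 3600
              for r in records if r["copilot"] == copilot]
    return mean(deltas)

print(lead_time_hours(commits, copilot=True))   # (8 + 6) / 2 = 7.0 hours
print(lead_time_hours(commits, copilot=False))  # 24.0 hours
```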
there any questions coming in?

There is one question I see here: someone's wondering where do you start, or how can you get started?

How about if I start that one? Where to get started: basically, where you're at. You definitely want to start to get metrics on the tools that you're using, and it doesn't take many tools to start to get some kind of DORA out. So start where you're at, start collecting metrics. And Anna, I think we have a program going where we can help start to put that together?

Absolutely, I'm excited to share
this with you. In addition to the GitHub Copilot dashboard that Ed shared with you, we also have a DORA dashboard. For what Ed shared with you, we have a 14-day free trial, just for you to get started and see it. If you sign up here at this QR code, you'll bring four of your tools and we'll connect them for you in under an hour, and you'll be able to see your GitHub Copilot metrics. You'll see what we showed you, and you'll be able to understand how it's being used, who's using the licensing, all that good stuff. And then, as part of our GitHub Insights bundle, we have the DORA dashboard. So it's a really easy way for you to just get started with a tool that works right out of the box. Obviously, as Gilbert said earlier, for a cultural shift you have to decide what success looks like for you. DORA has done all that work: they've done ten years' worth of research to decide what is successful, so why not just try it out of the box? That's what we're trying to do here: make it an easy way for you to bring the tools that you're already using and bring the teams to the table. If you've got a hundred developers or more, you can get a free trial today, so I totally recommend it. Adid put the link in the chat as well if you need to use it. But yeah, this is an easy way to get started, and we're really excited to be able to bring it to you, one of the only ones on the market, a no-risk way of just getting started right now. So yeah, I
wanted to say a very big thank you to Ed and Gilbert for your time today on this webinar, because you brought a lot of really good information to us. I know it can be sort of overwhelming, especially that inner-loop and outer-loop activity information, so I will recommend again that you join us for the next episode next month, where we'll explore more of that developer-experience idea. We'll dive further into what it means to take GitHub Copilot, to take DORA, to take your security posture, and ask how it is actually impacting your developers' performance and what they're enjoying about their jobs. So I highly recommend you join us for that as well. I've put the QR code here for a quick way to sign up, and I will also flash the 14-day free trial again for the remainder of the webinar. But I wanted to say thanks to everyone for attending, thank you to Ed and Gilbert for being our panelists today, and thank you to Adid for monitoring the
chat.

Yeah, thank you, and thank you for having us. We look forward to hearing more great questions, and we're looking forward to how you use GitHub Copilot and what your experience with it has been.

Absolutely, can't wait. All right, thank you. Thanks, everybody. Thanks, all. Bye, everyone. Thank you.