July 8, 2025 · 5 min read

🧑‍🚀 Operational Readiness & Resilience III: DORA

Luke Curtis

Engineering Leader

What

Spearheaded by Google Cloud, the DevOps Research and Assessment (DORA) framework provides a standardized way to measure engineering throughput and system reliability — helping teams understand how effectively they're delivering software. It tracks the following four metrics:

- Deployment Frequency: how often you successfully release to production
- Lead Time for Change: how long it takes a commit to get into production
- Change Failure Rate: the percentage of deployments that cause a failure in production
- Time to Restore Service: how long it takes to recover from a failure in production
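
To make these concrete, here's a minimal sketch of how each metric falls out of raw delivery data. The record shapes (deploys, incidents) are hypothetical stand-ins for whatever your CI/CD and incident tooling actually emits:

```python
# A minimal sketch deriving the four DORA metrics from raw delivery records.
# The data shapes here are hypothetical; in practice they'd come from your
# CI/CD pipeline and incident tooling.
from datetime import datetime, timedelta

deploys = [
    # (deployed_at, first_commit_at, caused_failure)
    (datetime(2025, 7, 1, 10), datetime(2025, 6, 28, 9), False),
    (datetime(2025, 7, 2, 15), datetime(2025, 7, 1, 11), True),
    (datetime(2025, 7, 4, 12), datetime(2025, 7, 3, 16), False),
]
incidents = [
    # (started_at, resolved_at)
    (datetime(2025, 7, 2, 15), datetime(2025, 7, 2, 18)),
]

window_days = 7

# Deployment frequency: deploys per day over the window.
deployment_frequency = len(deploys) / window_days

# Lead time for change: average time from first commit to deploy.
lead_times = [deployed - committed for deployed, committed, _ in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that caused a production failure.
change_failure_rate = sum(1 for _, _, failed in deploys if failed) / len(deploys)

# Time to restore service: average incident duration.
restore_times = [resolved - started for started, resolved in incidents]
avg_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deploys/day: {deployment_frequency:.2f}")
print(f"Avg lead time: {avg_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Avg time to restore: {avg_time_to_restore}")
```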

Interpreting the Metrics

With these numbers in hand, you can identify teams that are delivering with high throughput or modelling good practices that can serve as a template for other teams (and perhaps even feed into your engineering readiness criteria).

The results may also highlight where you need to introduce additional checks and balances to improve the reliability of delivery. For example, you may realise that lead time for change is high because of flaky pipelines in CI/CD.

However, it's important to caveat that, as an engineering leader, I usually frame these numbers as the beginning of a conversation rather than something to blindly optimise towards the "best" possible value.

You can imagine, for example, that it would be relatively easy to "game" deployment frequency by deploying multiple times with marginal diffs, which would not be an ideal outcome. Instead, focus on instilling a culture of iteratively delivering impact on the products you own, and ensure your deployment frequency stays at (or increases to) a reliable rate that everyone is happy with.

How

How you measure these will vary depending on the size of your org.

For smaller teams, something as light-touch as Jira Control Charts may be enough for things like deployment frequency, provided you're tracking your tickets appropriately and have integrated your version control platform of choice into Jira.
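
If you'd rather pull the raw numbers yourself, a hedged sketch against Jira's REST search endpoint (the same data Control Charts draw on) might look like this. The domain, project key (PLAT), and credentials are placeholders; adjust the JQL to match your own workflow:

```python
# A hedged sketch: pull resolved issues from Jira's REST search API and
# compute an average cycle time. Domain, project key, and env vars are
# placeholders for your own setup.
import os
from datetime import datetime, timedelta

import requests

JIRA_BASE = "https://your-domain.atlassian.net"  # placeholder domain

resp = requests.get(
    f"{JIRA_BASE}/rest/api/2/search",
    params={
        "jql": "project = PLAT AND statusCategory = Done AND resolved >= -30d",
        "fields": "created,resolutiondate",
        "maxResults": 100,
    },
    auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
    timeout=30,
)
resp.raise_for_status()

def parse(ts: str) -> datetime:
    # Jira timestamps look like 2025-07-01T10:15:30.000+0000
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

cycle_times = []
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    if fields.get("resolutiondate"):
        cycle_times.append(parse(fields["resolutiondate"]) - parse(fields["created"]))

if cycle_times:
    avg = sum(cycle_times, timedelta()) / len(cycle_times)
    print(f"{len(cycle_times)} issues resolved in 30d, avg cycle time: {avg}")
```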

For tracking change failure rate and time to restore service, tools like incident.io offer really good insights, along with exports that let you pull this data into your usual workflows.
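
As a rough illustration, here's a minimal sketch for turning an incident export (e.g. a CSV from incident.io) into failure-rate and restore-time numbers. The file name and column names are assumptions; map them to whatever your export actually contains:

```python
# A minimal sketch: compute change failure rate and MTTR from an exported
# incident CSV. The file name and column names (reported_at, resolved_at)
# are assumed; adapt them to your actual export.
import csv
from datetime import datetime, timedelta

restore_times = []
with open("incidents_export.csv", newline="") as fh:  # hypothetical export file
    for row in csv.DictReader(fh):
        if row.get("resolved_at"):
            restore_times.append(
                datetime.fromisoformat(row["resolved_at"])
                - datetime.fromisoformat(row["reported_at"])
            )

deploys_in_period = 42  # pull this from your CI/CD tooling for the same window

change_failure_rate = len(restore_times) / deploys_in_period
mttr = sum(restore_times, timedelta()) / len(restore_times)
print(f"Change failure rate: {change_failure_rate:.0%}, MTTR: {mttr}")
```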

For a more robust setup in a medium or large org, DevLake is easily one of the most fully-fledged platforms for understanding these numbers in detail. There is a lot of documentation on how to set it up, so expect it to take some time and wider business coordination.

Making this data actionable

With all this data to hand, there should ideally be a period of reflection: look at the trends and check whether you're consistently hitting similar numbers over time. If not, dig into where delivery is inconsistent for the team.
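
One lightweight way to eyeball that stability is to bucket deploys by week and flag high variance. A small sketch, with made-up dates and a judgment-call threshold:

```python
# A small sketch of the "are these numbers stable?" check: bucket deploys
# by ISO week and flag high week-to-week variance. Dates are made up for
# the demo; in a real analysis, also fill in weeks with zero deploys.
from collections import Counter
from datetime import date
from statistics import mean, pstdev

deploy_dates = [
    date(2025, 6, 2), date(2025, 6, 4), date(2025, 6, 11),
    date(2025, 6, 12), date(2025, 6, 13), date(2025, 6, 25),
]

weekly = Counter(d.isocalendar()[1] for d in deploy_dates)  # ISO week number
counts = list(weekly.values())

avg, spread = mean(counts), pstdev(counts)
cv = spread / avg if avg else 0.0  # coefficient of variation

print(f"Deploys per week: {dict(weekly)}")
print(f"Mean {avg:.1f}, stdev {spread:.1f}, CV {cv:.2f}")
if cv > 0.5:  # threshold is a judgment call, not a DORA standard
    print("Delivery looks inconsistent week to week; worth digging into.")
```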

Once you're happy with the stability of the numbers, it's time to start thinking about what "good" looks like here. For each pillar, agree definitions of ok/good/great with your team and build towards them.

As an example:

| Metric | OK (Baseline) | Good | Great |
| --- | --- | --- | --- |
| Deployment Frequency | Weekly | Several times per week | On-demand (multiple per day) |
| Lead Time for Change | 1 week | 1–5 days | Less than 1 day |
| Change Failure Rate | 5–10% | 1–5% | Under 1% |
| Time to Restore Service | 1 day | A few hours | Under 1 hour |

You can tweak the actual ranges depending on your team's maturity and risk appetite; these example numbers are deliberately quite lax, but they give a clean, aspirational baseline to build toward.
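
If you want to make the bands operational, a minimal sketch that encodes two of them (thresholds copied from the example table above, not an official DORA standard) could look like this:

```python
# A minimal sketch that encodes two of the bands from the table above so a
# team's current numbers can be graded automatically. Thresholds mirror the
# example table; tune them to your own maturity and risk appetite.
def grade_lead_time(days: float) -> str:
    if days < 1:
        return "great"
    if days <= 5:
        return "good"
    if days <= 7:
        return "ok"
    return "below baseline"

def grade_change_failure_rate(rate: float) -> str:
    if rate < 0.01:
        return "great"
    if rate <= 0.05:
        return "good"
    if rate <= 0.10:
        return "ok"
    return "below baseline"

print(grade_lead_time(3))               # -> good
print(grade_change_failure_rate(0.02))  # -> good
```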

DORA metrics shouldn’t be used as a scoreboard, but as a lens into your system’s health. When used well, they help leaders identify where to invest in tooling, automation, and process to support better engineering outcomes. Start small, share the data openly, and evolve your approach with the team over time.

Luke Curtis

Engineering Leader with over 10 years of experience in building and leading high-performing teams. Passionate about transforming organizations through technical excellence and empowered engineering cultures.
