A few years ago, growth was about hiring more people. Today, growth is about efficiency: doing more with less. As an Engineering Leader, you must align your teams to achieve business goals while improving efficiency. Not every organization has the luxury of hiring more people, so what do we do? We identify each source of inefficiency and strive to remove it.
After working with several organizations over the years, we have noticed five common pitfalls that slow teams down. More importantly, we have prepared recommendations to help you avoid them.
Many engineers think they’re productive when they’re busy. Instead of waiting for a review, they send their pull request to a colleague and start a new work item. The colleague only reviews the pull request once they finish their own task, so the pull request sits idle, delaying the delivery of value. The result? Pull requests accumulate, review time increases, and value takes longer to reach customers.
You can measure how long it takes for a change to go from commit to deployment. The metric is called Lead Time for Changes and is one of the four key DORA metrics in DevOps. You can split the lifecycle into stages and spot trends: coding time (from a commit to its PR opening), pickup time (from the opening of a PR to its first review interaction), review time (from the first interaction until the PR is merged), and deployment time (from merge until the change is deployed to production).
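As an illustration, here is a minimal sketch of that breakdown, assuming you can export the relevant timestamps from your Git hosting and deployment tooling (the field names below are hypothetical):

```python
from datetime import datetime

# Hypothetical timestamps exported from your Git hosting and deployment tooling.
pr = {
    "first_commit_at": datetime(2024, 5, 6, 9, 0),
    "opened_at": datetime(2024, 5, 6, 15, 30),
    "first_review_at": datetime(2024, 5, 7, 11, 0),
    "merged_at": datetime(2024, 5, 8, 10, 0),
    "deployed_at": datetime(2024, 5, 9, 16, 0),
}

# The four stages of the lifecycle described above.
stages = {
    "coding_time": pr["opened_at"] - pr["first_commit_at"],
    "pickup_time": pr["first_review_at"] - pr["opened_at"],
    "review_time": pr["merged_at"] - pr["first_review_at"],
    "deployment_time": pr["deployed_at"] - pr["merged_at"],
}
lead_time = pr["deployed_at"] - pr["first_commit_at"]

for name, delta in stages.items():
    print(f"{name}: {delta}")
print(f"lead time for changes: {lead_time}")
```

Aggregating these durations per team and per week is usually enough to see which stage dominates.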
Splitting the lifecycle into stages makes it easier to find the bottleneck. A typical pattern we observe is an increasing review time: a sign that engineers open more pull requests without prioritizing reviews, or that the PRs are too big, so engineers avoid reviewing them.
In this situation, there are some options to consider: make reviewing open pull requests the priority over starting new work, keep pull requests small enough to review quickly, and agree as a team on an expected pickup time.
Bringing a change to a product takes time. The longer it takes, the more costly the change is to introduce. The more costly it is, the fewer failures you can afford, so you spend more time planning to avoid failure. The reality is that every feature and user story is a bet; no one can know in advance whether it will work as expected, even if you’re convinced it will. And planning bigger items has a ripple effect: more requirements get added as you go.
Measuring cycle time over time is an excellent way to observe this trend: a longer cycle time indicates that items take longer to complete, often because of their size. Check whether the team’s cycle time is stable or varies across its deliverables (e.g., features), issues (e.g., user stories), and pull requests. Look at a long enough period (e.g., 90 days) and watch for large variations (e.g., ±15%).
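Here is a rough sketch of that check; the ±15% threshold comes from above, while the data shape and window size are assumptions for illustration:

```python
from statistics import median

# Hypothetical cycle times (in days) for pull requests completed
# in each 30-day window of a 90-day period.
windows = {
    "days 1-30":  [1.5, 2.0, 1.0, 3.0, 2.5],
    "days 31-60": [2.0, 4.5, 3.0, 5.0, 2.5],
    "days 61-90": [6.0, 4.0, 7.5, 5.0, 6.5],
}

# Baseline: the median cycle time over the whole 90-day period.
baseline = median(ct for cts in windows.values() for ct in cts)

for window, cycle_times in windows.items():
    m = median(cycle_times)
    variation = (m - baseline) / baseline
    flag = " <-- investigate" if abs(variation) > 0.15 else ""
    print(f"{window}: median {m:.1f}d ({variation:+.0%} vs baseline){flag}")
```

The same comparison works at the deliverable, issue, or pull request level.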
“It can always be smaller”. That’s a motto we keep repeating. Smaller items tend to flow through the system faster, and you can apply the motto to deliverables, issues, and pull requests alike: slice user stories thinner, split large pull requests into a sequence of small ones, and break deliverables into independently shippable increments.
Quality assurance and quality control are different: quality assurance builds quality into your process, while quality control verifies the product. A lack of test automation creates a greater need for quality control after development, increasing the time to bring a change to market. Likewise, a lack of mocks for external systems forces teams to test in an integrated environment, creating dependencies between teams, more errors, and a longer time to market. Not to mention that when another team corrupts the environment, nobody can test or deploy to production anymore.
A long deployment time, i.e., the time between when a pull request is merged and when it’s deployed to production, is a sign that you have many control gates or environments to go through. Elite teams have a Lead Time for Changes under 24 hours, so if deployment time alone takes several days, it likely indicates a lack of automation.
It’s time to focus on shift-left testing. There are a few things you can work on: increase test automation so less manual quality control is needed after development, mock external systems so teams can test without a shared integrated environment (see the sketch below), and automate the control gates in your deployment pipeline.
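On the mocking point, here is a minimal sketch using Python’s built-in unittest.mock; the payment client and checkout function are hypothetical stand-ins for your own external dependency:

```python
from unittest.mock import Mock

# Hypothetical service under test: charges a customer via an external payment API.
def checkout(payment_client, customer_id: str, amount_cents: int) -> str:
    response = payment_client.charge(customer_id, amount_cents)
    if response["status"] != "succeeded":
        raise RuntimeError("payment failed")
    return response["receipt_id"]

def test_checkout_returns_receipt():
    # Replace the external payment system with a mock: no shared environment needed.
    payment_client = Mock()
    payment_client.charge.return_value = {"status": "succeeded", "receipt_id": "r-123"}

    assert checkout(payment_client, "cust-42", 1999) == "r-123"
    payment_client.charge.assert_called_once_with("cust-42", 1999)
```

Tests like this run in seconds on every commit, which is exactly what moves quality control earlier in the process.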
Aligning efforts to achieve business goals while operating efficiently is a challenging mission for an engineering leader. You must balance creating new value, keeping the lights on, and improving how you work. Do you spend too much time on bugs or infrastructure work? Is there shadow work that inflates the time spent keeping the lights on, at the expense of your priorities?
You can inspect how your team spends its time. Significant variations in the time spent on bugs versus new value can be a symptom of a big batch of changes or a “bug-only sprint.” Focusing too much on new value can also indicate that the team is accumulating technical debt without addressing it, which results in a big-bang refactor later.
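As a sketch of that inspection, assuming you can export completed issues tagged by work type (the categories and format below are assumptions):

```python
from collections import Counter

# Hypothetical export: one record per completed issue, tagged by work type.
issues = [
    {"sprint": "S1", "type": "new_value"}, {"sprint": "S1", "type": "bug"},
    {"sprint": "S1", "type": "new_value"}, {"sprint": "S2", "type": "bug"},
    {"sprint": "S2", "type": "bug"},       {"sprint": "S2", "type": "tech_debt"},
]

# Tally work types per sprint.
by_sprint: dict[str, Counter] = {}
for issue in issues:
    by_sprint.setdefault(issue["sprint"], Counter())[issue["type"]] += 1

# Print each sprint's time allocation as percentages; watch for big swings.
for sprint, counts in by_sprint.items():
    total = sum(counts.values())
    shares = ", ".join(f"{t}: {n / total:.0%}" for t, n in counts.items())
    print(f"{sprint}: {shares}")
```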
When everyone is busy, we think we’re going faster when we’re actually going slower. Everything progresses slowly, and the customer receives value later, which increases the cost of delay. When WIP is high, items sit waiting, creating more handoffs and a longer time to market.
Follow the team’s WIP and compare it to the number of contributors. Is the trend stable? Do we tend to start more things over time, or fewer?
You can also follow the stability of your workflow with a cumulative flow diagram. The mantra is “Stop starting, start finishing.” Check whether you start significantly more items than you complete. Do you finish everything at the end of the sprint in a big batch, or does work feel more like a continuous flow?
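A cumulative flow diagram plots these running totals; as a minimal sketch, you can compute the same signal from per-day counts of started and finished items (the numbers below are hypothetical):

```python
# Hypothetical per-day counts of items started and finished by the team.
days = [
    {"day": "Mon", "started": 5, "finished": 1},
    {"day": "Tue", "started": 4, "finished": 2},
    {"day": "Wed", "started": 3, "finished": 2},
    {"day": "Thu", "started": 1, "finished": 3},
    {"day": "Fri", "started": 0, "finished": 5},
]

wip = 0
for d in days:
    wip += d["started"] - d["finished"]
    print(f'{d["day"]}: started {d["started"]}, finished {d["finished"]}, WIP {wip}')

# A rising WIP means the team starts more than it finishes; a big drop only
# on the last day suggests a big-batch finish rather than continuous flow.
```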
The first step is to start measuring yourself. The first set of metrics I suggest gathering is the DORA metrics. This gives you an initial picture of the team’s delivery performance and shows which teams stand out from the pack and need more attention.
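Even before adopting a tool, a crude script over your deployment log can give a first picture of two of the four metrics; this sketch assumes a hypothetical export format:

```python
from datetime import date

# Hypothetical deployment log: date and whether the deployment caused a failure.
deployments = [
    {"date": date(2024, 5, 1), "caused_failure": False},
    {"date": date(2024, 5, 3), "caused_failure": True},
    {"date": date(2024, 5, 6), "caused_failure": False},
    {"date": date(2024, 5, 8), "caused_failure": False},
]

period_days = (max(d["date"] for d in deployments)
               - min(d["date"] for d in deployments)).days or 1
frequency = len(deployments) / period_days  # deployments per day
failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

print(f"Deployment Frequency: {frequency:.2f}/day")
print(f"Change Failure Rate: {failure_rate:.0%}")
```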
Tools such as Axify integrate seamlessly with your tech stack to collect accurate data at all phases of development. Our DORA metrics dashboard tracks Deployment Frequency, Lead Time for Changes, Change Failure Rate and Failed Deployment Recovery Time. It allows teams to compare their performance with industry benchmarks, past performance, and other teams in the same organization to identify areas for improvement and celebrate successes.
Our team insights let you visualize DORA metrics for each team. They offer the advantage of comparing apples to apples on two important engineering efficiency factors: speed and stability. You can quickly see which team could benefit from more attention and which could share its best practices for better performance.
Transform how your team sets and achieves goals with our objective and key results tracking tool. See immediately the evolution of your performance indicators and implement initiatives that support the continuous improvement of your development team.
Contact us for more information on how we help development teams measure DORA KPIs and improve their engineering efficiency.