What do the teams at Stack Overflow, DataStax and Reprise have in common?
First, they’ve all built amazing organizations powered by amazing developers.
In a previous article, we looked at sprint velocity best practices. But how do we measure velocity so we know how much work fits into each sprint and can plan ahead?
In this guide, we’re going to dive into:
This week we have another episode from INTERACT, the 2021 engineering leadership conference. In this live conversation, I interview Henrik Gütle, GM of Azure for Microsoft Canada.
Henrik joins the podcast to break down the results and key takeaways of Microsoft’s research into the impact of remote work on developer velocity - and what engineering leaders can learn from it.
Despite decades-long efforts of the whole agile community—books, blogs, conferences, webinars, videos, meetups, you name it—we are still confronted in many supposedly agile organizations with output-metric driven reporting systems. At the heart of these reporting systems, stuck in the industrial age when management believed it needed to protect the organization from slacking workers, there is typically a performance metric: velocity.
In the hands of an experienced team, velocity might be useful as a team-internal metric. But, when combined with some managers’ wrong interpretation of commitment, it becomes a tool of oppression. So when did it all go so wrong?
Many people dislike estimating work items because estimates supposedly open the path to managers misusing velocity, reintroducing Taylorism, micromanagement, and excessive reporting through the backdoor. To these critics (the proponents of #noestimates, for example), estimates conflict with basic ideas of agile product development such as self-management, becoming outcome-focused, and leaving the feature factory for good.
I like to suggest a different, less ideological approach: estimates are useful at the team level; just ditch the numbers. How so? Estimating work items is a fast way for a Scrum team to figure out whether all team members are on the same page regarding the why, the what, and the how of the upcoming work. The numbers are a mere side effect, though they may still usefully inform the team. (Indeed, the numbers are not intended to be used beyond the team level.)
Informing someone that you want to “measure” them is not a great way to start a conversation. Software developers, like all people, tend to look unfavorably upon having their performance closely measured. But measuring developers is one of the hottest trends for companies around the globe. So is it tyranny to measure people?
People are quick to note that numbers don't tell the whole story, and they can become defensive at the notion that their productivity should somehow be quantified. This resistance can become even more entrenched when teams are pitted against each other.
Velocity in agile development measures the quantity of work a team can accomplish in a sprint. It can be measured in story points, hours, or days. The higher a team's velocity, the more features it delivers and the more value it brings to customers. Sprint velocity is a useful measure in sprint project management for evaluating and estimating team productivity.
The measure of velocity is based on multiple factors: the continuous integration (CI) process, the time to qualify code changes, to run regression tests, to perform security checks, to deliver, and so on.
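To make the idea concrete, here is a minimal sketch of computing velocity as a rolling average of completed story points and using it to forecast a backlog. The sprint figures and the three-sprint window are hypothetical illustrations, not values from the article.

```python
import math

def velocity(completed_points, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_needed(backlog_points, avg_velocity):
    """Rough forecast: sprints required to burn down the backlog."""
    return math.ceil(backlog_points / avg_velocity)

# Hypothetical points completed in each past sprint.
completed = [21, 25, 23, 27, 24]

v = velocity(completed)            # (23 + 27 + 24) / 3 ≈ 24.67
print(sprints_needed(100, v))      # forecast for a 100-point backlog → 5
```

Averaging over only the most recent sprints keeps the forecast responsive to changes in team composition or process, at the cost of more noise than a longer window.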
One of my greatest privileges building Stepsize has been hearing from hundreds of the best engineering teams in the world about how they ship software at pace while maintaining a healthy codebase.
That's right: these teams go faster because they manage technical debt properly. We're so used to the quality vs. cost trade-off that this statement sounds like a lie: surely you can't be fast and maintain a healthy codebase at the same time.
Productivity in software development is notoriously tricky to measure. Is it how fast your team is doing something? It has been proven time and again that lines of code are a poor measure; is the number of modules an indicator? The degree of module reuse within a project, or from previous projects?
This post is the second article in our Tactical Guide to a Shorter Cycle Time five-part series. Read the previous post here.
You discover your engineering team has a long Cycle Time compared to the rest of the organization or compared to the industry’s top performers. Now what?
To understand the current and future state of DevSecOps, we gathered insights from 29 IT professionals in 27 companies. We asked them, "What problems are solved by DevSecOps – where is the greatest value realized?" Here's what they told us:
In complex and uncertain environments, more is unknown than known, and what we do know is subject to change. Only what we have achieved is known (unless we prefer to cover it up). Progress lies in what we have done more than in what we plan to do. Our plans are assumptions that need validation through emerging actions and decisions. We make and incrementally change decisions based on what is known.
In Scrum, it is considered a good idea for teams to know about the progress they have been making. It is one parameter (of several) to take into account when considering the inherently uncertain future.
One common misconception of Agile is that it simply allows you to get everything done faster. This is simply not true. Agile allows us to plan a much smaller scope of work, working iteratively and incrementally to deliver the least amount of scope needed to solve the problem or capture the opportunity. The speed comes from delivering only what the customer needed. This is in contrast to how we used to scope a release, when we delivered everything we thought they might want.
This is stakeholder debt. I define stakeholder debt as everything that was scoped for the release minus what the customer actually uses.
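That definition is effectively a set difference, which can be sketched in a few lines. The feature names below are hypothetical examples, not anything from the article.

```python
# Stakeholder debt: features scoped for the release minus features
# the customer actually uses (all names are illustrative).
scoped = {"export_csv", "dark_mode", "sso", "bulk_edit", "audit_log"}
used = {"export_csv", "sso"}

stakeholder_debt = scoped - used  # built, shipped, but never used
print(sorted(stakeholder_debt))   # → ['audit_log', 'bulk_edit', 'dark_mode']
```

Tracking this difference over several releases is one way to quantify how much of each release was speculative scope rather than validated customer need.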
Debugging can lead engineers into the dark corners of a codebase, giving them time to gain a deep understanding of some section of code, but this vital information — the bug and the fix — often stays in the brain of the engineer who worked on the issue.
What happens if a similar bug appears in the future? Do you task another engineer with shipping a fix or is it best to pull the original engineer away from their existing work? In this blog, we’ll dive into best practices for sharing knowledge within the engineering organization for startups.