Your team’s velocity just dropped 30% over two sprints. The pattern is unmistakable in the chart. Stakeholders have started asking. The PM is looking at you. You have planning tomorrow.
This is one of the most stressful moments in engineering management. It’s also one of the most consistently misdiagnosed.
Here’s the actual playbook — what to check first, the seven real causes (in order of probability), and what not to do under pressure.
First: don’t panic, and don’t blame the team
Before any diagnosis, two things to internalise:
A 1-2 sprint dip is noise, not signal. Velocity is naturally noisy — illness, holidays, one nasty ticket. If your last sprint was 32 points and the previous five were 38-42, that’s variance, not a trend. Don’t react.
A 3-sprint drop is signal. If three consecutive sprints are 25-30% below the previous baseline, something has changed. Investigate.
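The noise-vs-signal rule above is mechanical enough to automate. A minimal sketch — the function name and the 3-sprint/25% cutoffs are just the rule of thumb from this section, not a standard; tune them for your team:

```python
def classify_drop(velocities, window=5, threshold=0.25):
    """Classify recent velocity as noise or signal.

    velocities: completed points per sprint, oldest first.
    Compares the last 3 sprints against the mean of the `window`
    sprints before them.
    """
    if len(velocities) < window + 3:
        return "not enough history"
    baseline = sum(velocities[-(window + 3):-3]) / window
    recent = velocities[-3:]
    # Signal only if *every* one of the last 3 sprints is below the
    # threshold; a single bad sprint is variance, not a trend.
    if all(v < baseline * (1 - threshold) for v in recent):
        return "signal"
    return "noise"

classify_drop([40, 38, 42, 39, 41, 28, 27, 29])  # "signal"
classify_drop([40, 38, 42, 39, 41, 32, 40, 41])  # "noise"
```

The point of the `all(...)` check is the same as the prose: one sprint at 32 against a baseline of 40 doesn't trigger anything; three consecutive sprints below the cutoff do.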
The biggest failure mode at this stage isn’t a wrong diagnosis — it’s a manager who reacts to noise as if it were signal, and starts pushing harder. That’s how you turn a temporary dip into a real one.
What it almost never is
Before listing the real causes, name the wrong ones. These are the explanations managers reach for first, and they are almost always wrong:
- “The team is slacking.” No. Engineers who were shipping at velocity X don’t suddenly stop caring. Something changed in the work or the environment, not in their motivation.
- “They need more discipline.” No. If your process was working at velocity X, it’s still working. Adding ceremonies makes things worse.
- “They need a kick.” No. The team can already see the chart. They feel the same pressure you do. Adding pressure compounds the problem.
If you reach for these, slow down. The actual cause is almost always structural.
The seven real causes (in order of probability)
1. Scope quality changed
This is the #1 cause and it’s invisible in the chart.
What it looks like: tickets are getting bigger, harder, or fuzzier than they used to be. The team picks up “5 points” of work, opens it up, and realises it’s actually 13. They finish the work — but it’s not the same work. The chart shows a velocity drop. In reality, the points/work calibration drifted.
How to check: pull the last 5 “5-point” tickets. Read them. Are they roughly comparable to the 5-point tickets from 8 sprints ago? If the new ones are gnarlier (more dependencies, more unknowns, more legacy code), you’re not slowing down — you’re working on harder problems with the same point scale.
The fix: stop comparing recent velocity to older velocity. Reset the baseline. Or — better — fix scope. Push for clearer tickets, smaller stories, fewer unknowns hitting sprint planning.
2. People changed
Someone left. Someone joined. Someone is now doing 30% PM work. Someone’s on parental leave for 4 sprints.
How to check: pull team capacity for the last 3 sprints vs the previous 5. Use focus factor, not headcount — a senior engineer doing 70% interviewing isn’t 1.0 capacity, they’re 0.3.
The fix: if it’s a temporary capacity drop, name it explicitly in planning. “We’re at 0.7 capacity this sprint, so target should be 28 points not 40.” Stop pretending the team is at full capacity when it isn’t.
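The focus-factor arithmetic is simple enough to make explicit. A sketch, assuming the baseline velocity was set at full headcount (the function and numbers are illustrative, not a standard formula):

```python
def sprint_capacity(baseline_velocity, focus_factors):
    """Scale baseline velocity by actual availability.

    focus_factors: fraction of a normal sprint each person can give
    (1.0 = fully available, 0.3 = mostly interviewing, 0.0 = on leave).
    Assumes baseline_velocity was achieved at full headcount.
    """
    full = len(focus_factors)          # headcount the baseline assumes
    available = sum(focus_factors)     # effective people this sprint
    return round(baseline_velocity * available / full)

# One engineer on leave, one at half capacity: 0.7 of the team
sprint_capacity(40, [1.0, 1.0, 1.0, 0.5, 0.0])  # 28
```

This is exactly the “0.7 capacity, so target 28 not 40” conversation from the fix above, as a number you can put on the planning board.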
3. Tech debt has crossed a threshold
Velocity drops gradually as a codebase ages, then sometimes drops sharply when a critical piece of infrastructure starts buckling. A test suite that was 8 minutes is now 45 minutes. A deploy that was 5 minutes is now 30. CI is flaky. The build breaks twice a day.
How to check: measure cycle time for a “trivial” ticket (a one-line copy change, a version bump, etc). If a 1-point ticket now takes 2 days to ship, the bottleneck isn’t the work — it’s the system around it.
The fix: dedicate 20% of next sprint to the specific bottleneck (test speed, deploy reliability, build flakiness). This is the highest-leverage intervention available and the most under-applied one. A team that can’t ship a 1-point ticket in under 4 hours has a problem that compounds.
4. Dependencies / external blockers
The team’s work now depends on another team’s work. They’re waiting on API access, design specs, security review, legal review. Tickets get into “in progress” and sit there.
How to check: count tickets in “in progress” / “in review” with no movement for 3+ days. Read why. If “blocked by other team” comes up more than once, that’s the cause.
The fix: name dependencies before sprint planning, not during the sprint. If a ticket needs design that won’t exist for a week, don’t plan it this sprint. Move it. Plan only work the team can actually finish.
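The stalled-ticket count from the check above is also easy to script. A sketch — status strings and field names (`key`, `status`, `last_moved`) are placeholders for whatever your board export uses:

```python
from datetime import datetime, timedelta

def stalled_tickets(tickets, now, stall_days=3):
    """Tickets in progress/review with no movement for stall_days+ days."""
    cutoff = now - timedelta(days=stall_days)
    return [
        t["key"]
        for t in tickets
        if t["status"] in ("in progress", "in review")
        and t["last_moved"] <= cutoff
    ]

board = [
    {"key": "PAY-41", "status": "in progress", "last_moved": datetime(2024, 3, 4)},
    {"key": "PAY-44", "status": "in review", "last_moved": datetime(2024, 3, 10)},
    {"key": "PAY-39", "status": "done", "last_moved": datetime(2024, 3, 1)},
]
stalled_tickets(board, now=datetime(2024, 3, 11))  # ["PAY-41"]
```

The list itself isn’t the diagnosis — the next step is reading each stalled ticket and asking why, as the check describes.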
5. Story point inflation reversed
If your team has been generously sizing tickets (rounding 3s up to 5s, 5s up to 8s) for several sprints, the velocity chart is artificially inflated. When estimation calibrates back to reality — usually after a refinement push or a new tech lead joins — velocity “drops” without anyone working any less.
How to check: ask the team if estimation has felt more honest recently, or if a new person has joined who’s pushing back on inflated estimates. If yes, the drop is calibration, not regression.
The fix: explain it to stakeholders. Reset the baseline. Compare next sprint to next sprint, not to the inflated past.
6. The work has shifted from feature to refactor / infrastructure / discovery
A team that was shipping features hits a phase where they’re doing a major rewrite, infrastructure migration, or pre-feature discovery. This work is real and necessary, but it doesn’t fit cleanly into “story points per sprint.”
How to check: look at ticket types. If 60% of last sprint was infrastructure / refactor / spike work, the velocity number is misleading. The team is working as hard, just on harder-to-size work.
The fix: track infrastructure work separately from feature velocity. Two metrics: feature points and “investment” hours. Don’t conflate them.
7. Process drift / new ceremonies
Someone added a new meeting. Someone added a security review step. Someone introduced a longer code review SLA. Each individual change is small. The cumulative effect is real.
How to check: list every new process change in the last quarter. Add up the per-person hours. If each person has lost 3 hours of actual coding time per week out of roughly 25 focused hours, that alone is about a 12% capacity hit — a large share of a 30% drop.
The fix: question every recent process addition. What does it actually catch that wasn’t being caught before? Is the catch worth the cost?
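The 12% figure is just a ratio of hours. A sketch, assuming ~25 hours of genuinely focused coding time per person-week (a rough planning assumption, not a standard):

```python
def process_tax(weekly_coding_hours, new_process_hours):
    """Fraction of coding capacity consumed by new process overhead.

    weekly_coding_hours: actual focused coding hours per person per
    week before the changes (~25 is a rough assumption, not a norm).
    """
    return new_process_hours / weekly_coding_hours

process_tax(25, 3)  # 0.12 — a 12% capacity hit
```

Run it per person if focus hours differ wildly across roles; the team-level average hides a tech lead who lost 8 hours to new reviews.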
What to do tomorrow in planning
You have planning in the morning. The chart looks bad. Here’s the one-meeting playbook:
- Open with honesty. “Velocity has dropped over the last 3 sprints. I want us to plan based on actual capacity, not the old number. We’ll diagnose properly in retro — for now, let’s plan honestly.”
- Use the most recent 3 sprints to set capacity, not the rolling 5. If recent velocity is 30, plan for 30. Resist the temptation to “stretch.”
- Cut scope, don’t pad estimates. If you have 40 points of “must-have” but capacity is 30, cut 10 points of must-have. Don’t try to estimate it down.
- Identify one structural blocker for the team to address. Test speed, dependency tax, ticket clarity — pick one. Reserve 20% of capacity for it. This is the lever, not raw effort.
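The four bullets above reduce to a few lines of arithmetic. A sketch of tomorrow's planning numbers — the function and the 20% reserve mirror this playbook, nothing more:

```python
def plan_sprint(recent_velocities, investment_share=0.2):
    """Split next sprint's target between feature work and one
    structural fix, using only the last 3 sprints as capacity."""
    capacity = round(sum(recent_velocities[-3:]) / 3)
    investment = round(capacity * investment_share)
    return {
        "capacity": capacity,
        "feature": capacity - investment,   # the honest commitment
        "investment": investment,           # reserved for the one blocker
    }

plan_sprint([29, 31, 30])  # capacity 30: 24 feature points, 6 reserved
```

If the must-have list is 40 points against a feature budget of 24, that’s 16 points to cut — before the sprint starts, not during it.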
This single meeting — held honestly — restores more confidence than any “we’ll catch up next sprint” promise. Stakeholders prefer a real number to an aspirational one.
What NOT to do
In rough order of damage caused:
- Don’t add ceremonies to “increase visibility.” More standups, more status reports, more sync-ups — all of these reduce the time available for the work that’s already running short.
- Don’t ask the team to “commit harder.” Velocity isn’t a function of commitment, it’s a function of actual capacity and work clarity. “Commit harder” sounds like leadership; it lands like blame.
- Don’t reset story points to “make the chart look better.” Inflating estimates to match historical velocity is fraud. It also makes future planning impossible.
- Don’t measure individuals. “Alice’s velocity dropped” is the worst possible diagnostic. Story points are a team metric.
- Don’t assume it’s permanent. Most velocity drops have a specific cause that, once found, has a specific fix. Treat it as a puzzle, not a verdict.
The retro after the dip
Once you’ve planned the next sprint honestly, run a focused retro on the dip. The format:
- What changed. Make a timeline of the last 12 weeks. Departures, new joiners, process changes, project starts, infrastructure incidents. Name everything.
- What we noticed. Have each person write the one thing that’s been most painful in the last month. Read silently first, then discuss.
- What we’ll change. Pick one structural fix. Not three. One. Run it for 3 sprints, then re-evaluate.
A team that has just dropped velocity is fragile. Don’t pile changes on top. Pick the one fix that will move the needle most, ship it, see what changes.
The honest summary
Velocity drops happen to every team. They’re rarely about effort, almost always about scope, capacity, or system health. Diagnose calmly, plan honestly, fix one thing structurally, and resist the urge to add pressure.
The teams that recover quickly aren’t the ones that worked hardest after the dip — they’re the ones that named the cause clearly and addressed it without adding ceremony. That’s the actual work of leadership in this moment.
Velocity is a feedback signal. When the signal moves, it’s telling you something. Listen to it instead of trying to silence it.
Tools to help:
- Sprint Velocity Calculator — track velocity over the last 5 sprints, free, no signup.
- Sprint Capacity Calculator — set realistic capacity using focus factor.
- Sprint Health Check — quick diagnostic across goal-hit-rate, scope creep, blockers, retro action follow-through.
- Sprint Maturity Self-Assessment — 10 questions across 6 dimensions, with three concrete next-sprint actions per tier.
SprintFlint surfaces velocity, cycle time, and goal-hit-rate trend automatically. When the chart moves, you see why before stakeholders do. Free for the first 300 tickets — no card.