Burndown charts are the most photogenic agile artefact and the most misread. A perfect-looking burndown often hides a broken sprint; a “messy” burndown often signals a healthy one. This post is the honest read: when burndowns help, when they lie, and what to actually do with one.
The chart, briefly
A sprint burndown plots remaining work (in story points or hours) against time. The “ideal line” runs from total points on day zero to zero on the last day. The “actual line” is what the team really has remaining, day by day.
The point of the chart is to show, visually, whether the sprint is on track relative to a constant-rate ideal.
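Both lines fall out of simple arithmetic on daily snapshots. A minimal sketch in Python, assuming you record points remaining once per day (the function name and inputs are illustrative, not tied to any particular tracker):

```python
def ideal_line(total_points: float, sprint_days: int) -> list[float]:
    """Constant-rate line: total_points on day 0, zero on the last day."""
    step = total_points / sprint_days
    return [total_points - step * day for day in range(sprint_days + 1)]


# A 10-day sprint with 30 committed points:
ideal = ideal_line(30, 10)   # 30.0, 27.0, 24.0, ... 0.0
# The actual line is just the raw snapshots, one value per day:
actual = [30, 30, 27, 25, 25, 20, 18, 18, 17, 9, 0]
```

Whether `actual` sits above or below `ideal` on a given day is the entire content of the chart.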
When burndowns help
Three situations where the chart genuinely tells you something useful.
1. Mid-sprint check-ins
By day 5 of a 10-day sprint, you should have burned roughly half. Looking at the chart in standup or a mid-sprint check-in answers a useful question in 3 seconds: are we obviously off track, or roughly on?
If the actual line is dramatically above ideal at the midpoint, something’s wrong: you over-committed, you’ve been blocked on something, or the work is bigger than estimated. The chart doesn’t tell you which, but it surfaces the fact that the conversation needs to happen now, not on day 9.
2. Cross-sprint pattern recognition
A single burndown is noisy. Five sprints’ worth, viewed together, reveals patterns:
- Burndown that’s flat for the first 5 days then crashes: classic sign of work being done but not closed. Tickets sit at 80% complete.
- Burndown that drops smoothly mid-sprint then plateaus: sprint scope was wrong. Team was on track, then either ran out of work or hit a blocker.
- Burndown that mirrors the ideal almost exactly across multiple sprints: this is suspicious, not impressive. Real work is messier than that. The team is probably managing the chart, closing tickets on a schedule that tracks the line rather than when the work actually finishes.
If you can’t see the chart for the past 5 sprints, you’re missing the most useful view of all.
3. Spotting late-sprint cliff dives
Healthy burndowns descend roughly steadily — with realistic noise, but trending down. A chart that’s flat for 8 days then drops sharply on day 9 means your team finished work in the last 24 hours that should have been closed throughout the sprint. That pattern is one of the strongest signals that:
- The team is afraid to close tickets early (no one wants to be “done” before the deadline)
- Or, more commonly, “done” requires a code review that takes 2 days, so things finish in batches
- Or the team is rushing at the end and quality is suffering
The cliff dive deserves a retro item every time.
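The flat-then-cliff shape is easy enough to flag automatically. A rough heuristic, assuming a list of daily remaining-points values; the thresholds are arbitrary starting points, not calibrated values:

```python
def is_cliff_dive(remaining: list[float], flat_fraction: float = 0.6) -> bool:
    """Flag a burndown that stays flat, then crashes in the final days.

    remaining holds points remaining per day, day 0 through the last day.
    "Flat": under 20% of the total burn happened in the first
    flat_fraction of the sprint. "Cliff": the last two days account
    for at least half of the total burn.
    """
    total = remaining[0] - remaining[-1]
    if total <= 0 or len(remaining) < 4:
        return False
    cutoff = int(len(remaining) * flat_fraction)
    burned_early = remaining[0] - remaining[cutoff]
    burned_late = remaining[-3] - remaining[-1]
    return burned_early / total < 0.2 and burned_late / total >= 0.5
```

Run it over the last few sprints; any sprint it flags is a retro item by the rule above.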
When burndowns lie
Most of the time. Here are the four most common ways.
1. The “in review” black hole
Burndowns track a binary: ticket open vs ticket done. They don’t track progress. A team can have 8 of 10 tickets at “in review” — meaning they’re 95% done — and the burndown looks identical to the team having 8 of 10 tickets at “todo,” meaning 0% done.
This is the single biggest reason burndowns flatline mid-sprint. Real work is happening, but it’s stuck waiting on review.
The fix: don’t try to fix it inside the burndown. Track cycle time on tickets in review separately. If your average ticket spends 3 days in review on a 10-day sprint, you have a review-bottleneck problem, not a velocity problem.
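Tracking review dwell time is straightforward if your tracker exposes status-change timestamps. A sketch with illustrative field names (entered_review and left_review are assumptions, not any real tracker’s API):

```python
from datetime import datetime


def avg_review_days(tickets: list[dict]) -> float:
    """Average days tickets spend in review, given status timestamps."""
    durations = [
        (t["left_review"] - t["entered_review"]).total_seconds() / 86400
        for t in tickets
        if t.get("entered_review") and t.get("left_review")
    ]
    return sum(durations) / len(durations) if durations else 0.0


# Two tickets: one spent 2 days in review, one spent 4.
tickets = [
    {"entered_review": datetime(2024, 1, 1), "left_review": datetime(2024, 1, 3)},
    {"entered_review": datetime(2024, 1, 1), "left_review": datetime(2024, 1, 5)},
]
# avg_review_days(tickets) -> 3.0
```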
2. Re-estimating mid-sprint inflates and deflates the line
If the team adds 2 points of unplanned scope on day 4, the actual line jumps up. If the team realises a ticket is bigger than estimated and re-points it, the line jumps again. The chart’s vertical position becomes a function of bookkeeping, not work.
The fix: don’t re-estimate during a sprint. The estimate at start-of-sprint is the contract. If new scope is added, log it but don’t change the chart — write a separate note. The “spillover” total at end-of-sprint is more useful than a clean-looking burndown.
3. Story-point inflation hides slowdowns
If a team’s velocity is dropping and they don’t want to admit it, the easiest unconscious fix is to inflate point estimates. Same throughput, more points, velocity looks fine. The burndown line drops at the same rate.
This isn’t malicious; it’s a normal psychological response to looking bad on a chart. But it makes the chart actively misleading.
The fix: spot-check estimates against past similar tickets. If a 2-point ticket from 3 sprints ago looks structurally similar to a 5-point ticket today, ask why.
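A cruder complementary check is to watch average points per completed ticket across sprints. It isn’t proof (work can genuinely get harder), but a steady upward drift with flat ticket counts is worth a question. A sketch with illustrative field names:

```python
def points_per_ticket(sprints: list[dict]) -> list[float]:
    """Average story points per completed ticket, one value per sprint.

    Each dict needs 'completed_points' and 'completed_tickets'
    (illustrative names; pull the equivalents from your tracker).
    """
    return [s["completed_points"] / s["completed_tickets"] for s in sprints]


# Same ticket throughput, rising points: a hint estimates are drifting up.
history = [
    {"completed_points": 20, "completed_tickets": 10},
    {"completed_points": 26, "completed_tickets": 10},
    {"completed_points": 32, "completed_tickets": 10},
]
# points_per_ticket(history) -> [2.0, 2.6, 3.2]
```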
4. Day-of-the-week noise
Real teams don’t burn down at a constant rate: Mondays are slower than Wednesdays, Fridays are slower than Mondays, and so on. Plotting against an “ideal line” that assumes constant burndown is a useful summary, but it makes Monday mornings look bad even when the sprint is fine.
The fix: don’t read the chart day-by-day. Read it on a 2-day rolling average, or at standard checkpoints (mid-sprint, end-sprint).
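The rolling average is a one-liner’s worth of smoothing. A sketch, assuming daily remaining-points values:

```python
def rolling_average(remaining: list[float], window: int = 2) -> list[float]:
    """Trailing rolling average to damp day-of-week noise."""
    out = []
    for i in range(len(remaining)):
        chunk = remaining[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out


# A slow Monday (day 2) looks less alarming once smoothed:
# rolling_average([30, 30, 24, 20], 2) -> [30.0, 30.0, 27.0, 22.0]
```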
What burndowns are not for
Three uses we see all the time that don’t actually work.
Performance management
Burndowns are team-level artefacts. Trying to derive individual performance from them is statistically meaningless and culturally toxic. Don’t do it.
Forecasting next sprint
Velocity (a 3-sprint average) is the right tool for forecasting. The current sprint’s burndown tells you about this sprint, not the next one. Use the Sprint Forecaster for projections — it’s built on velocity, not burndown.
Justifying scope cuts to stakeholders
A stakeholder pointing at a burndown chart and saying “you’re behind, cut feature X” is not engaging in honest planning. Don’t enable that. The right tool for stakeholder conversations is the sprint goal — is it at risk or not? — not the burndown.
A useful complementary chart
Burndown alone is incomplete. Pair it with one of:
Cumulative flow — shows how many tickets are in each status (todo / in-progress / in-review / done) over time. Reveals the “in-review black hole” instantly. If “in-review” is a fat band on the chart for days, that’s your problem.
Cycle time per ticket — average days from “in-progress” to “done.” Reveals process latency that burndown can’t.
We surface both inside SprintFlint by default — the burndown is the marketing-friendly chart, but cumulative flow is what the team actually uses to debug.
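Both views come from the same raw material: per-ticket status history. The cumulative-flow bands are just daily counts per status. A sketch, assuming you can snapshot every ticket’s status once a day (the status names are the ones used above):

```python
from collections import Counter

STATUSES = ["todo", "in-progress", "in-review", "done"]


def cumulative_flow(daily_statuses: list[list[str]]) -> dict[str, list[int]]:
    """Count of tickets in each status, per day.

    daily_statuses[d] lists every ticket's status on day d. A band that
    stays wide for "in-review" is the black hole the burndown hides.
    """
    bands = {s: [] for s in STATUSES}
    for day in daily_statuses:
        counts = Counter(day)
        for s in STATUSES:
            bands[s].append(counts.get(s, 0))
    return bands
```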
The honest playbook
Read your burndown like this:
- Day 1: ignore it. Real work hasn’t happened.
- Mid-sprint: glance for 5 seconds. If the actual line is dramatically above ideal, ask why. If it’s roughly tracking, move on.
- Day before sprint end: look for a flat-then-cliff pattern. If you see one, that’s a retro item.
- End-of-sprint: ignore the chart. Look instead at whether you hit the sprint goal, what spilled over, and why.
- Once a quarter: stack the last 6–8 burndowns side by side. Look for the pattern signals above.
Don’t run standup off the burndown. Don’t make decisions from a single sprint’s chart. Don’t show it to stakeholders to justify decisions. Use it as one signal among several, and treat suspiciously clean ones as suspicious.
Why we still ship them
If burndowns lie this often, why are they everywhere? Three reasons:
- They’re glance-able. A team can absorb a burndown in 3 seconds. That’s not nothing.
- They make missing work visible. Even a misleading flat line draws attention. Drawing attention is half the battle.
- They’re a shared reference. “Look at the chart” is a conversation starter that “look at our cumulative flow diagram” isn’t, in most cultures.
So the chart isn’t useless — but its job is to prompt the right conversation, not to answer the right question. Read it that way.
What we ship in SprintFlint
- Burndown chart per sprint, no setup
- Cumulative flow alongside it
- Cycle-time per ticket and per status
- 6-sprint history view that stacks past burndowns
The point isn’t a prettier burndown. The point is the surrounding context that lets you actually read it.
SprintFlint gives engineering teams burndown, cumulative flow, cycle time, and velocity — without spreadsheets, without Power-Ups, without a 2-hour setup. £5/user/month flat. Start free — 300 tickets, no card.