Most “agile improvement” posts hand you a single tip — try this format, run this meeting, track this metric — and call it a workflow. A real sprint workflow has five distinct stages, each with its own failure mode and its own fix.
This is the hub. Five free toolkits, mapped to the five places sprints actually break: planning, story management, forecasting, retrospectives, and metrics. Pick the one matching what’s broken right now.
Stage 1: Sprint planning
What it is: the meeting where the team commits to what ships next sprint.
How it breaks: committing to too much (capacity ignored), no goal beyond the ticket list (motivation drifts), or the meeting runs three hours because refinement is happening inside planning instead of before it.
The fix:
- Read the 60-minute planning agenda — what each block does and what should never happen in it.
- Use the Sprint Capacity Calculator to bound capacity by available days, not optimism.
- Use the Sprint Goal Generator to anchor the sprint on outcome, not output.
- Forecast with the Sprint Forecaster at p85 — defensible commitment, not wishful thinking.
The whole bundle: /sprint-planning ← start here if planning meetings run long, miss commitments, or drift into ticket triage.
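The “bound capacity by available days, not optimism” idea is easy to sketch in code. This is an illustrative model, not the Sprint Capacity Calculator’s actual formula; the function name, the focus factor, and the points-per-day rate are all assumptions:

```python
def sprint_capacity(team_days_available: float,
                    focus_factor: float = 0.7,
                    points_per_day: float = 1.0) -> float:
    """Bound the sprint commitment by days the team is actually present.

    team_days_available: sum of each person's working days in the sprint,
        minus leave, holidays, and known interruptions (on-call, training).
    focus_factor: fraction of a day spent on sprint work, not meetings.
    points_per_day: historical points delivered per focused day.
    """
    return team_days_available * focus_factor * points_per_day

# 5 people x 9 working days, minus 3 days of leave = 42 available days.
capacity = sprint_capacity(42, focus_factor=0.7, points_per_day=0.5)
print(round(capacity, 1))  # 14.7
```

The point of the model: the commitment ceiling falls out of three observable numbers, none of which is “how much do we feel like doing”.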
Stage 2: Story management
What it is: writing stories that finish in a sprint without surprise scope.
How it breaks: stories too big to finish (no split applied), acceptance criteria that hide scope (“yes but also”), estimates anchored on hours not relative size, or DoR ignored so unready stories enter the sprint.
The fix:
- Read INVEST applied (with examples) — bad → good rewrites for each criterion.
- Use the Story Splitter when a story is too big to finish in one sprint.
- Use the Story Points Estimator for relative sizing without planning-poker overhead.
- Read acceptance criteria formats to catch hidden-scope patterns.
The whole bundle: /story-management ← start here if stories carry over, scope creeps mid-sprint, or estimates feel meaningless.
Stage 3: Forecasting
What it is: telling stakeholders when the work will land, with numbers that hold up.
How it breaks: forecasting on the mean cycle time (which lies), reporting one date with no confidence interval, or recomputing the forecast every sprint and getting a different answer.
The fix:
- Read Forecasting at p85 — why the mean lies, the math, the stakeholder message.
- Use the Cycle Time Calculator to analyse your team’s distribution (p50/p85/p95, histogram, tail-shape diagnosis).
- Use the Sprint Forecaster for Monte Carlo at p85.
- Use the Burndown Generator once committed, to track in-flight.
The whole bundle: /forecasting ← start here if dates miss, stakeholders don’t trust the forecasts, or you can’t explain why.
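Monte Carlo at p85 is less exotic than it sounds: sample past throughput until the backlog empties, repeat many times, take the 85th percentile of the outcomes. A minimal sketch, with invented throughput numbers (the Sprint Forecaster’s actual method may differ in the details):

```python
import random

def forecast_p85(throughput_history: list[int],
                 backlog_items: int,
                 trials: int = 10_000,
                 seed: int = 42) -> int:
    """Monte Carlo forecast: how many sprints to finish the backlog,
    answered at the 85th percentile rather than the average."""
    rng = random.Random(seed)  # seeded, so the forecast is reproducible
    outcomes = []
    for _ in range(trials):
        done, sprints = 0, 0
        while done < backlog_items:
            done += rng.choice(throughput_history)  # resample a past sprint
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    return outcomes[int(trials * 0.85)]  # 85% of simulated futures finish by here

history = [6, 9, 4, 8, 7, 5]  # items completed in each of the last six sprints
print(forecast_p85(history, backlog_items=30))
```

The stakeholder message writes itself: “85% of simulated futures finish by sprint N”, which is a confidence statement, not a point estimate.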
Stage 4: Retrospectives
What it is: the meeting that’s supposed to make next sprint better than this one.
How it breaks: same format every sprint (signal dulls), actions get logged but nobody owns them, complaints repeat because nothing changed last time, or the retro disappears into the calendar void.
The fix:
- Read why retros die after 3 sprints — five structural patterns and the fix for each.
- Use the Retro Template Generator to vary format (start/stop/continue, 4Ls, sailboat, mad-sad-glad, KALM).
- Download the Retrospective Action Tracker — owner per action, due-by sprint, carry-forward.
- Read sprint review without demo theatre for the sister meeting.
The whole bundle: /retrospectives ← start here if retros feel pointless, actions don’t ship, or the team has stopped engaging.
Stage 5: Metrics
What it is: measuring whether sprint work is actually getting better.
How it breaks: tracking velocity in isolation (Goodhart’s law: it stops meaning anything once it’s a target), reporting only mean cycle time (hides the tail), or tracking commit-vs-complete with no second metric.
The fix:
- Read agile metrics that actually matter — four useful, four to skip.
- Use the Velocity Calculator for a capacity sanity check (rolling 6-sprint average + planning range).
- Use the Cycle Time Calculator for flow (p50/p85/p95).
- Read velocity dropped — what to do when something shifts.
The whole bundle: /metrics ← start here if dashboards lie, leadership trusts the wrong numbers, or velocity has become a target.
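The “rolling 6-sprint average + planning range” idea from the velocity bullet above fits in a few lines. A sketch under the assumption that the range is simply the window’s min and max (the Velocity Calculator may compute it differently):

```python
def velocity_summary(recent: list[float], window: int = 6) -> dict:
    """Rolling average over the last `window` sprints, plus a planning
    range: plan inside the range, never to the single best sprint."""
    w = recent[-window:]  # most recent sprints only; old data goes stale
    return {"average": sum(w) / len(w),
            "range": (min(w), max(w))}

# Seven sprints of completed points; only the last six count.
print(velocity_summary([21, 18, 25, 19, 23, 17, 22]))
```

A range resists Goodhart’s law better than a single number: there is no one figure to game, and a commitment near the top of the range is visibly optimistic.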
Which one first?
If you don’t know where to start, here’s a triage:
- “We commit to too much and miss” → /sprint-planning + /forecasting (capacity + p85)
- “Stories carry over every sprint” → /story-management + /sprint-planning (split + plan tighter)
- “Stakeholders don’t trust dates” → /forecasting (p85 with confidence intervals, not mean point estimates)
- “Retros are dead” → /retrospectives (action tracker + format rotation)
- “Dashboards say good but it feels bad” → /metrics (four useful, four vanity, Goodhart trap)
- “Sprints feel chaotic but I can’t pin why” → /metrics first (measure to find the leak), then the matching toolkit
Each toolkit is free, no signup, runs in your browser. SprintFlint has all five built into the product if you want them in your sprint workflow rather than as standalone tools — but the tools work fine on their own.
What this whole stack assumes
Three things, worth being explicit about:
- Cycle times have long tails. Forecasting at the mean lies. Most agile advice predates this realisation; everything in these toolkits is built around it.
- Goodhart’s law applies to every metric. Track multiple, never optimise for one. Velocity in isolation rots.
- Process problems are usually structural, not behavioural. “Try harder” doesn’t fix retros that die or stories that carry over. The fix is usually a small structural change (an action tracker, a DoR, a different statistic).
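The first assumption is worth seeing in numbers. With an invented but typical long-tailed set of cycle times, the mean sits comfortably below the value a commitment actually has to cover:

```python
# Invented cycle times in days: mostly quick, with a long tail of slow items.
times = [1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6, 7, 8, 10, 14, 20, 30]

mean = sum(times) / len(times)
p85 = sorted(times)[int(len(times) * 0.85)]  # simple index-based percentile

print(mean, p85)  # 6.8 vs 14: the mean underestimates what a commitment must cover
```

Forecasting at the mean here would promise roughly 7 days; 85% confidence requires 14. That gap is the whole argument for p85.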
Build the stack from these assumptions and sprints stop being a Sisyphean ritual and start being a tool that compounds.