Engineering Management Guide

How to Identify Engineering Bottlenecks with Jira and GitHub

11 min read · Amy Wightman, Co-founder · Updated April 2026

Engineering bottlenecks rarely announce themselves. By the time a manager notices one, it's usually been slowing the team down for weeks. Jira and GitHub contain most of the signals — the trick is knowing where to look and what the patterns mean.

What engineering bottlenecks actually look like

A bottleneck is any point in your development process where work queues up faster than it gets processed. In manufacturing, bottlenecks are obvious — you can see the pile on the conveyor belt. In software, work is invisible, which is why bottlenecks are so easy to miss.

Common forms engineering bottlenecks take:

Code review bottleneck

PRs are open for days before getting reviews. Work piles up in "In Review" or gets merged without adequate review.

Single-person dependency

One engineer is on the critical path for too many decisions, reviews, or pieces of code. When they're busy, everything else waits.

Testing bottleneck

QA or testing cycles take longer than development. Features complete coding but sit for a week before being testable.

Deployment bottleneck

Code is ready but the deployment process (approvals, environments, scheduling) creates delays.

Requirements bottleneck

Engineers finish work but can't start the next thing because requirements aren't ready. Jira shows tickets with no work started for weeks.

Context-switching bottleneck

Engineers are spread across too many initiatives and can't complete anything before moving to the next priority.

Why bottlenecks are hard to see in time

Engineers often don't report bottlenecks directly. There are a few reasons.

First, bottlenecks are often embarrassing to name. Saying "I'm blocked waiting for Alex to review my PR" feels like throwing a colleague under the bus. Engineers find workarounds — picking up other work, refactoring, writing tests — rather than surfacing the actual problem.

Second, bottlenecks develop slowly. No single day feels like a crisis. The PR that's been open for four days doesn't look alarming. The fourth PR open for four days — that's a pattern. But you need to be looking at the aggregate to see it.

Third, the person causing a bottleneck often doesn't know they are. The architect who reviews all security-sensitive PRs isn't trying to slow things down — they're trying to maintain quality. The manager's job is to surface the structural problem without making it personal.

What to look for in Jira

Jira records the lifecycle of every ticket. Bottlenecks leave specific signatures in that data.

Cycle time by stage

Measure how long tickets spend in each status column. If "In Review" averages 4 days and "In Development" averages 2 days, review is your bottleneck — not development.

How to find it in Jira:

Jira's built-in Control Chart (Scrum boards → Reports → Control Chart) shows cycle time by issue. Filter by status transition to see where time is accumulating.
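If you prefer to compute per-stage cycle time outside Jira, the same number falls out of each issue's status changelog. A minimal Python sketch with hypothetical transition data (Jira's actual changelog API returns a more nested shape, so treat the input format here as an assumption):

```python
from collections import defaultdict
from datetime import datetime

def time_in_status(transitions):
    """Given (timestamp, from_status, to_status) tuples for one issue,
    return total days spent in each status that was entered."""
    totals = defaultdict(float)
    ts = sorted(transitions)
    # Time in a status = gap between entering it and the next transition.
    for (t1, _, entered), (t2, _, _) in zip(ts, ts[1:]):
        totals[entered] += (t2 - t1).total_seconds() / 86400
    return dict(totals)

# Hypothetical lifecycle of a single ticket.
transitions = [
    (datetime(2026, 4, 1), "To Do", "In Development"),
    (datetime(2026, 4, 3), "In Development", "In Review"),
    (datetime(2026, 4, 7), "In Review", "Done"),
]
print(time_in_status(transitions))
# {'In Development': 2.0, 'In Review': 4.0}
```

Averaged over a month of tickets, this is exactly the "review takes twice as long as development" comparison described above.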

Work in progress (WIP) by person

A ticket assigned to someone for two weeks that hasn't moved might mean they're blocked, overloaded, or the ticket is too large. Multiple such tickets by the same person confirm the pattern.

What to watch for:

Any engineer with more than 3–4 in-progress tickets simultaneously is likely context-switching rather than delivering. One or two tickets "in progress" for more than a sprint is a sign of a hidden blocker.
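The WIP check is a simple count once you have an export of assignees and statuses. A sketch with made-up names (the threshold and status label mirror the guideline above):

```python
from collections import Counter

def wip_flags(tickets, limit=4):
    """Count in-progress tickets per assignee and flag anyone over the
    limit. `tickets` is a list of (assignee, status) pairs."""
    wip = Counter(a for a, s in tickets if s == "In Progress")
    return {a: n for a, n in wip.items() if n > limit}

# Hypothetical sprint snapshot.
tickets = [
    ("dana", "In Progress"), ("dana", "In Progress"), ("dana", "In Progress"),
    ("dana", "In Progress"), ("dana", "In Progress"),
    ("sam", "In Progress"), ("sam", "Done"),
]
print(wip_flags(tickets))  # {'dana': 5}
```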

Tickets with old status dates

Sort your active sprint by "last status change." Tickets that haven't moved in more than 3 days are candidates for investigation. Tickets stuck for more than a week are almost certainly bottlenecked.
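The same triage can be scripted against an export of last-status-change timestamps, using the 3-day threshold above. A small sketch with hypothetical ticket keys and dates:

```python
from datetime import datetime, timedelta

def stale_tickets(tickets, now, days=3):
    """Return ticket keys whose status hasn't changed in `days` days.
    `tickets` maps key -> timestamp of the last status change."""
    cutoff = now - timedelta(days=days)
    return sorted(k for k, changed in tickets.items() if changed < cutoff)

now = datetime(2026, 4, 10)
tickets = {"ENG-101": datetime(2026, 4, 9), "ENG-102": datetime(2026, 4, 2)}
print(stale_tickets(tickets, now))  # ['ENG-102']
```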

Backlog aging

A growing backlog isn't always a bottleneck — it might just be that you're capturing more work than before. But a backlog where items stay for multiple sprints without being picked up often signals that the team is perpetually full and work is queuing.

Sprint completion rate

If the team consistently finishes 60–70% of committed sprint work and the rollover is concentrated in specific ticket types or assignees, that's a bottleneck signal rather than just poor estimation.
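The distinction between poor estimation and a bottleneck is visible in where the rollover lands. A sketch that computes both numbers from a hypothetical list of committed tickets and whether each finished:

```python
from collections import Counter

def rollover_concentration(sprint):
    """Return (completion rate, rollover counts by assignee).
    `sprint` is a list of (assignee, done) pairs for committed tickets."""
    done = sum(1 for _, d in sprint if d)
    rolled = Counter(a for a, d in sprint if not d)
    return done / len(sprint), rolled.most_common()

sprint = [("dana", False), ("dana", False), ("sam", True),
          ("lee", True), ("lee", True)]
rate, rolled = rollover_concentration(sprint)
print(round(rate, 2), rolled)  # 0.6 [('dana', 2)]
```

A 60% completion rate with all rollover on one assignee points at a bottleneck around that person's work, not at team-wide estimation.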

What to look for in GitHub

GitHub data is some of the richest signal for engineering bottlenecks because it reflects actual work output, not just ticket status.

PR age

How old are your open PRs? Sort them by creation date. Any PR open more than 3 business days without a review is a signal. PRs open for a week or more indicate a systemic review bottleneck.

Healthy benchmarks:

  • First review within 1 business day: excellent
  • First review within 2–3 business days: acceptable
  • First review after 4+ days: bottleneck
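Applying those benchmarks is straightforward once you have each open PR's creation date (GitHub's `GET /repos/{owner}/{repo}/pulls` endpoint returns this as `created_at`). A sketch with hypothetical PR numbers and dates, counting weekdays only:

```python
from datetime import date, timedelta

def business_days_open(opened, today):
    """Count weekdays between when a PR was opened and today."""
    days, d = 0, opened
    while d < today:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            days += 1
    return days

def bottleneck_prs(prs, today, threshold=3):
    """Flag PRs open more than `threshold` business days.
    `prs` maps PR number -> date opened."""
    return {n: business_days_open(o, today) for n, o in prs.items()
            if business_days_open(o, today) > threshold}

# Hypothetical: #101 opened Monday Apr 6, #102 opened Thursday Apr 9.
prs = {101: date(2026, 4, 6), 102: date(2026, 4, 9)}
print(bottleneck_prs(prs, date(2026, 4, 10)))  # {101: 4}
```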

Review concentration

Pull the last 30 days of PR reviews and count who reviewed what. If one or two people account for 60%+ of reviews — especially if those people are in high-demand roles like tech lead or security — you have a single-point-of-failure bottleneck.

This pattern is particularly dangerous because the bottleneck person is often your most senior engineer, which means their time is doubly expensive.
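Measuring the concentration takes one counter once you have the reviewer list (each PR's reviews are available via GitHub's `GET /repos/{owner}/{repo}/pulls/{pull_number}/reviews` endpoint). The logins below are made up:

```python
from collections import Counter

def review_concentration(reviews):
    """Share of reviews performed by the top two reviewers over a window.
    `reviews` is a list of reviewer logins, one entry per review."""
    counts = Counter(reviews)
    top_two = sum(n for _, n in counts.most_common(2))
    return top_two / len(reviews)

# Hypothetical 30-day review log.
reviews = ["alex", "alex", "alex", "alex", "priya", "priya", "sam", "lee",
           "alex", "priya"]
print(review_concentration(reviews))  # 0.8
```

A value at or above 0.6 is the single-point-of-failure signature described above.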

PR size

Large PRs take longer to review and create review bottlenecks even when reviewers are available. A PR with 500+ changed lines typically gets a cursory review or sits in queue while smaller PRs jump ahead.

What to look for:

Track your average PR size over time. If average PR size is growing, it's often because engineers feel they can't get reviews, so they batch more work into each PR. This is a bottleneck symptom that makes the bottleneck worse.
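A rolling average over merged PRs, oldest first, makes the trend visible. A sketch with hypothetical line counts:

```python
def average_size_trend(sizes, window=5):
    """Rolling average of PR size (lines changed), oldest first."""
    return [round(sum(sizes[i:i + window]) / window, 1)
            for i in range(len(sizes) - window + 1)]

# Hypothetical lines-changed per merged PR, in merge order.
sizes = [120, 150, 200, 180, 250, 300, 420, 510]
print(average_size_trend(sizes))  # [180.0, 216.0, 270.0, 332.0]
```

A monotonically rising series like this one is the batching symptom: each window's average is larger than the last.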

PR iteration count

PRs that go through 5+ rounds of review before merge often indicate an upstream problem — unclear requirements, missing design decisions, or reviewers who change their minds. High iteration PRs from the same area of the codebase often point to tech debt that needs architectural attention.

Commit frequency gaps

When an engineer who commits daily suddenly goes 5 days without a commit, it's almost always a sign they're blocked — by a dependency, a confusing problem, or a personal situation. It's one of the most useful early signals in GitHub data.
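Gaps are easy to detect from a per-author list of commit dates (retrievable via GitHub's `GET /repos/{owner}/{repo}/commits` endpoint with the `author` parameter). A sketch with hypothetical dates:

```python
from datetime import date

def commit_gaps(commit_dates, threshold=5):
    """Find gaps of `threshold`+ days between consecutive commits for one
    author. `commit_dates` is a sorted, non-empty list of dates."""
    return [(a, b) for a, b in zip(commit_dates, commit_dates[1:])
            if (b - a).days >= threshold]

dates = [date(2026, 4, 1), date(2026, 4, 2), date(2026, 4, 3),
         date(2026, 4, 9)]
print(commit_gaps(dates))  # flags the 6-day gap after April 3
```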

What standup data adds

Jira and GitHub show you what's happening. Standup updates tell you why. The two data sources together are more powerful than either alone.

Repeated blockers

When the same blocker appears in standup updates three days in a row, it's not getting resolved. This is the clearest async signal that something needs manager intervention. Tools that detect blocker patterns automatically save managers from having to read every standup line by line.
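The detection logic itself is simple string matching over consecutive updates. A sketch assuming one blocker line per day has already been extracted from each standup (real tools do fuzzier matching; exact-match after normalization is the minimal version):

```python
def repeated_blockers(updates, days=3):
    """Flag blocker lines that appear `days` days in a row.
    `updates` is a list of per-day blocker strings ("" = no blocker)."""
    flagged, streak, last = set(), 0, None
    for blocker in (u.strip().lower() for u in updates):
        # Extend the streak only when today's blocker matches yesterday's.
        streak = streak + 1 if blocker and blocker == last else (1 if blocker else 0)
        last = blocker
        if streak >= days:
            flagged.add(blocker)
    return flagged

# Hypothetical four days of standup blocker lines.
updates = ["waiting on data team schema", "waiting on data team schema",
           "waiting on data team schema", ""]
print(repeated_blockers(updates))  # {'waiting on data team schema'}
```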

Vague progress reports

"Still working on the auth refactor" for five days in a row, paired with no commits in GitHub, is a hidden blocker. Engineers often don't say "I'm blocked" when they're stuck — they describe their intended work as if it's still in progress.

Named dependencies

"Waiting on the data team for the schema changes" names the bottleneck directly. Cross-referencing these named dependencies with Jira ticket status often confirms that the dependency is real and untracked.

Common bottleneck patterns and their causes

| What you see in the data | Likely cause | Root fix |
| --- | --- | --- |
| PRs age 4+ days; review concentration in 1–2 people | Insufficient reviewer capacity | Expand reviewer pool; define review SLAs |
| Tickets sit in "In Progress" for 2+ weeks with no commits | Hidden blocker or unclear requirements | 1:1 investigation; break ticket into smaller pieces |
| Sprint velocity declining; backlog growing | Team overloaded or context-switching | Reduce WIP limits; prioritize ruthlessly |
| PRs with 500+ lines; high iteration count | Engineers batching work due to slow reviews | Fix review speed; set PR size guidelines |
| Same blocker in standup 3+ days; no resolution | Cross-team dependency not being escalated | Manager escalation; formal dependency tracking |
| One engineer commits; others stall | Bus factor; shared code owned by one person | Pairing; documentation; intentional knowledge spread |

What to do once you find a bottleneck

Name it without assigning blame

Bottlenecks are almost always structural, not personal. "We have a review bottleneck" is a process problem. "Alex reviews too slowly" is a personal accusation that misses the point. Start with the pattern, not the person.

Trace it upstream

The visible bottleneck is often not the root cause. PRs aging isn't just a reviewer availability problem — it might be that reviewers are overloaded because the team is too small, or that large PRs take too long, or that the codebase is too complex. Follow the chain.

Fix the process, not the person

The right fix is almost always structural: a new process, a new tool, a redistribution of work, or an org change. Individual coaching can help people work faster within a broken system, but it doesn't fix the system.

Measure before and after

When you implement a fix, track the same Jira and GitHub metrics to see if it worked. Average PR age, cycle time by stage, and sprint completion rate are the clearest indicators. Give it two sprints before declaring success or failure.

Preventing bottlenecks from forming

The best time to address a bottleneck is before it becomes one. A few practices that make bottlenecks visible early:

Weekly Jira review

15 minutes per week reviewing ticket ages, WIP by person, and sprint progress. Most bottlenecks are obvious once you're looking for them.

PR review SLAs

A team norm that all PRs get at least one review within one business day prevents review backlogs from forming. It also forces the conversation about reviewer capacity before it becomes a crisis.

Async standup with explicit blocker tracking

When your standup tool flags repeated blockers automatically, you catch them in 2–3 days instead of 2–3 weeks. That gap is often the difference between a minor slowdown and a missed deadline.

WIP limits

Explicit limits on work in progress prevent the context-switching bottleneck from forming. When engineers can't pick up new work until something is done, they surface blockers instead of starting something else.

Knowledge spreading

The single-point-of-failure bottleneck is preventable with intentional pairing, documentation, and rotation through code ownership. Track areas of the codebase where only one engineer has meaningful context.

Surface Bottlenecks Before They Become Crises

The Engineering Manager Toolkit connects your Slack standups with Jira and GitHub so blocker patterns surface automatically — before they hit your delivery dates.