Jira was meant to track work, but as teams scale it often becomes a fragile reporting layer that requires constant maintenance and manual intervention. Over-customized workflows, brittle automations, and sprawling integrations create a coordination tax that pulls engineering leaders back into the weeds. The core problem is that Jira reflects what gets updated, not what is actually happening in delivery, making it a lagging and unreliable signal for flow health. High-performing teams keep Jira simple and rely on flow metrics and real-time execution signals to surface stalls before leadership has to chase them manually.

Most teams don't choose Jira. They inherit it.
Someone set it up years ago, probably with good intentions. Workflows were customized to match "the process." Automation rules were added to reduce manual work. Integrations were connected to create a single source of truth.
Now, two years later, your team is spending real engineering hours maintaining it. Tickets pile up in states nobody remembers creating. Automation rules fire in the wrong order or stop firing entirely. The board that was supposed to give you confidence in delivery mostly shows that people have been busy updating Jira.
At Fraction, we've worked with over 150 engineering teams, managing delivery across startups, scale-ups, and enterprises. This is one of the most predictable patterns we see in scaling organizations. The promise was simple: configure Jira correctly and you get visibility into delivery. The reality is that most Jira instances become reporting layers that require constant feeding but never quite reflect what is actually happening. And the coordination tax it generates lands squarely on you, the CTO or engineering lead who now spends mornings in Slack and standups trying to get the signal Jira was supposed to provide.
Jira tracks work. It does that well when it stays simple. Basic workflows, minimal fields, straightforward states. Tickets move naturally as work progresses.
The problems start when Jira gets promoted to something it was never built for: the system of record for delivery visibility. I've watched this happen dozens of times, and it almost always follows the same trajectory. The team grows past 15 or 20 engineers. Leadership wants dashboards. Product wants burnup charts. You want accurate sprint velocity across multiple squads. So workflows get customized. Fields multiply. Automation layers build up to keep everything in sync.
What nobody accounts for is the maintenance cost that follows. Every custom field adds a decision point. Every workflow state becomes another place tickets can stall. Every automation rule becomes something that can break when the underlying data changes. I've inherited Jira instances where even the most senior engineer on the team couldn't explain why half the workflow states existed, because the person who built them left eighteen months ago.
The gap between "track work" and "monitor delivery health" is where the maintenance trap lives. And as your team scales, the trap gets deeper. More squads means more workflows. More workflows means more automation. More automation means more things breaking quietly while you're trying to focus on architecture, hiring, and the actual technical problems your team was built to solve.
Ask anyone on a Jira-heavy team to explain the workflow. If they need to pull up a diagram or open settings to remember what comes after "In Review," that's a signal.
Workflows are meant to reflect how work actually moves. In practice, they reflect how someone thought work should move months ago, before reality intervened.
The pattern I see across teams is consistent: the more workflow states you add, the more tickets get stuck in ambiguous states and the less reliably engineers update them. The tipping point tends to be around five or six states. Beyond that, complexity doesn't improve visibility. It creates more places for work to idle without anyone noticing. And the most common driver of that complexity is designing workflows to answer reporting questions instead of serving the people doing the work.
Leadership wants insight into review time, so a "Code Review" state gets added. Product wants clarity between ready for QA and in QA, so both appear. Compliance needs proof of security checks, so another mandatory state is inserted.
Each change makes sense on its own. Together, they turn the workflow into an obstacle course. Engineers stop updating tickets promptly because it's unclear which state applies. Your tech leads start moving tickets manually just to keep boards accurate. The workflow meant to create clarity becomes a source of noise, and you lose trust in the board entirely. Once that happens, you're back to asking people directly, which is exactly the coordination tax you were trying to eliminate.
Jira automation is genuinely useful for repetitive tasks. Auto-assigning tickets, posting Slack notifications, linking subtasks to epics. But automation is easy to add and hard to remove, and rules accumulate quickly.
Someone leaves the company and their auto-assignment rules keep firing. A field gets renamed and multiple automations silently stop working. A new integration is added and two systems start updating the same ticket, creating conflicts that go unnoticed until a sprint review. In a growing engineering org, this compounds fast. Every new squad, every reorg, every process tweak adds potential failure points to an automation layer that nobody owns holistically.
There's a principle from DORA's research on deployment automation that I think about often, even though it's talking about CI/CD pipelines rather than Jira: automating a complex, fragile manual process produces a complex, fragile automated process. The domain is different but the dynamic is identical. If your Jira workflow is convoluted, layering automation rules on top doesn't simplify it. It just makes the failure modes harder to trace.
Another failure I see constantly: using automation to enforce process instead of reducing friction. Automatically marking tickets as blocked after inactivity sounds proactive. But if nobody addresses why work is stalling, the automation just hides the root cause behind a status label. Automation works when it removes steps the team already performs reliably. It fails when it tries to manufacture discipline that doesn't exist yet.
Jira integrations promise simplicity. Connect everything and see the whole picture in one place.
I've set up these stacks myself, many times. Code hosting for activity. Slack for team awareness. Confluence for specs. CI pipelines for deployment status. On paper, it creates a unified view. In practice, it introduces fragility that nobody budgets for.
APIs change. Webhooks fail silently. Sync rules drift. You open Jira and see a ticket marked "In Progress" while the pull request merged three days ago and nobody noticed the connection broke. I've watched teams lose entire weeks debugging integration failures that nobody knew existed until a delivery commitment slipped and no one could explain why.
Each integration adds another dependency and another failure mode. Teams end up checking multiple systems anyway, defeating the original goal. The most reliable integrations I've seen are one-directional. Pulling context into Jira helps. Trying to keep multiple systems perfectly synchronized usually doesn't. And as your team scales, the number of integration points grows faster than anyone's ability to monitor them.
Here's the core problem. Jira shows what people update in Jira, which is not the same as what is actually happening in delivery.
A ticket marked "In Progress" might be moving smoothly. Or it might be blocked and forgotten. Jira can't tell the difference unless someone updates it manually. And under delivery pressure, manual updates are the first thing that slips. Burndown charts show work being completed. They don't show whether that work is landing in production, piling up in review, or being reopened repeatedly due to defects.
Jira is a lagging indicator. By the time a problem appears in ticket status, it has usually existed for days. For a CTO or Head of Engineering, this means the signals you're relying on to make resourcing, prioritization, and commitment decisions are always slightly behind reality. And when you can't trust the board, you compensate the only way you know how: more standups, more Slack check-ins, more time spent personally chasing the signal. That's coordination tax. It's the reason you're in six syncs a week instead of three, and why your calendar looks like it belongs to a project manager rather than a technical leader.
This is why teams build layers of dashboards and reports on top of Jira. It helps somewhat, but it adds more maintenance. Every workflow change breaks queries. Every field update requires fixing reports. Jira tracks tasks, not flow, and it can't show stalled work, growing queues, or emerging bottlenecks without significant manual effort.
The teams I've seen use Jira well all do something counterintuitive: they keep it simple and they don't rely on it for delivery health. Here's what that actually looks like in practice.
A minimal workflow that holds. The highest-functioning teams I've worked with use three or four states at most. To Do, In Progress, In Review, Done. That's it. Some teams collapse Review into In Progress and run with three. The key insight is that every state should represent a meaningful change in who is responsible for the work. To Do means nobody has picked it up. In Progress means an engineer owns it. In Review means someone else needs to act. Done means it shipped. If a state doesn't change who needs to do something, it's not a workflow state. It's a reporting label, and it belongs in a field, not on your board. I've watched teams cut their workflow from twelve states to four and see ticket hygiene improve within a sprint, because engineers could actually remember where things go.
Flow metrics instead of task completion. Over a decade of DORA research points to the same conclusion: high-performing teams measure delivery through flow-level signals, not through task completion in a project management tool. But what does that mean day to day? It means instead of asking "how many story points did we close this sprint," you're watching lead time, which is how long it takes from when work is committed to when it reaches production. Cycle time, which is how long active work takes once someone picks it up. Queue depth, which is how many items are waiting at each stage of your workflow. And review latency, which is how long PRs sit before someone looks at them. These four metrics tell you more about delivery health than any burndown chart. When lead time starts creeping up, you know something is slowing down before tickets start piling up visibly. When review latency spikes, you know your bottleneck is in handoffs, not in engineering capacity. When queue depth grows in one stage, you know exactly where work is idling.
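To make the arithmetic concrete, here is a minimal sketch with invented ticket data and field names (nothing here is a real Jira schema): it turns the timestamps a clean workflow already records into median lead and cycle times.

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket records: the three timestamps a clean workflow captures.
# Field names are illustrative, not a real Jira export format.
tickets = [
    {"committed": "2024-03-01", "started": "2024-03-02", "done": "2024-03-06"},
    {"committed": "2024-03-01", "started": "2024-03-04", "done": "2024-03-11"},
    {"committed": "2024-03-03", "started": "2024-03-05", "done": "2024-03-08"},
]

def days(a, b):
    """Whole days between two ISO dates."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

# Lead time: committed to production. Cycle time: picked up to done.
lead_times = [days(t["committed"], t["done"]) for t in tickets]
cycle_times = [days(t["started"], t["done"]) for t in tickets]

print("median lead time:", median(lead_times), "days")   # → 5 days
print("median cycle time:", median(cycle_times), "days") # → 4 days
```

Review latency falls out of the same pattern: record when a PR opens and when the first review lands, and take the median gap.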
Setting up lead time tracking without buying anything. Jira already has the timestamps you need if your workflow is clean. When a ticket moves to In Progress, Jira records it. When it moves to Done, Jira records that too. The difference is your cycle time. If you add the timestamp from when the ticket was created or committed to a sprint, you have lead time. Jira's built-in control chart shows this, but most teams never look at it because it's buried and their workflows are too messy for the data to mean anything. With a clean three- or four-state workflow, Jira's native reporting becomes usable. Export your ticket transition history to a spreadsheet once a week, calculate the median lead time and cycle time per squad, and trend them over four to six weeks. That alone will tell you more about your delivery trajectory than any dashboard someone spent a month building. If you want something slightly more structured, Jira's cumulative flow diagram shows queue buildup by stage over time. Most teams I've worked with have never opened it. Once they do, they immediately see where work is pooling.
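The cumulative flow view can be approximated from the same export. A sketch, again with illustrative ticket keys and columns rather than a real export format, that counts how many tickets occupy each stage on a given day:

```python
from datetime import date

# Illustrative transition history: (ticket, state, entered_on).
# A real Jira export has more columns; only these three matter here.
transitions = [
    ("ENG-101", "In Progress", date(2024, 3, 1)),
    ("ENG-101", "In Review",   date(2024, 3, 4)),
    ("ENG-102", "In Progress", date(2024, 3, 2)),
    ("ENG-103", "In Progress", date(2024, 3, 3)),
    ("ENG-103", "In Review",   date(2024, 3, 5)),
    ("ENG-103", "Done",        date(2024, 3, 6)),
]

def queue_depth(transitions, as_of):
    """Latest state of each ticket as of a date, counted per state."""
    latest = {}
    for ticket, state, entered in sorted(transitions, key=lambda t: t[2]):
        if entered <= as_of:
            latest[ticket] = state
    depth = {}
    for state in latest.values():
        depth[state] = depth.get(state, 0) + 1
    return depth

print(queue_depth(transitions, date(2024, 3, 5)))
# → {'In Review': 2, 'In Progress': 1}
```

Run this for each day of a sprint and you have a rough cumulative flow diagram: a stage whose count keeps growing is where work is pooling.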
What to do Monday morning. If your Jira is already a mess, you don't need to fix everything at once. Start here. Audit your workflow states. If you have more than six, pick the four that represent real handoffs and collapse the rest. Run a quick filter for tickets that haven't been updated in more than a week to see where work is quietly stalling. Pull up Jira's control chart or export your transition history and calculate your median cycle time for the last month. Share the number with your leads. Most teams are shocked by what they find, and that shock is what creates the willingness to simplify.
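The stalled-work filter in that checklist is easy to prototype outside Jira as well. A minimal sketch with invented ticket data, flagging anything untouched for more than a week:

```python
from datetime import datetime, timedelta

# Illustrative tickets with their last-updated timestamps.
tickets = [
    {"key": "ENG-201", "status": "In Progress", "updated": datetime(2024, 3, 1)},
    {"key": "ENG-202", "status": "In Review",   "updated": datetime(2024, 3, 9)},
    {"key": "ENG-203", "status": "In Progress", "updated": datetime(2024, 2, 20)},
]

def stalled(tickets, now, max_idle=timedelta(days=7)):
    """Open tickets that have not been touched within max_idle."""
    return [t["key"] for t in tickets
            if t["status"] != "Done" and now - t["updated"] > max_idle]

print(stalled(tickets, now=datetime(2024, 3, 10)))
# → ['ENG-201', 'ENG-203']
```

Inside Jira itself, the equivalent is a saved JQL filter along the lines of `status != Done AND updated <= -1w`.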
This matters more as you scale. A five-person team can get by with standups and gut feel. Once you're past 15 or 20 engineers across multiple squads, the coordination complexity grows faster than your ability to manage it through meetings. The teams that handle this transition well are the ones that shift from "the CTO knows what's happening" to "the system surfaces what needs attention." Flow metrics are the mechanism for that shift. They replace the need for you to be in every room.
Most teams never plan for Jira maintenance. It quietly becomes someone's job. According to Wellingtone's State of Project Management research, 50 percent of project professionals spend a day or more each month manually collating project status information, and 47 percent don't have access to real-time KPIs for their projects.
I've seen this across engineering orgs of every size. Fixing broken automation. Updating dashboards. Cleaning unused fields that can't be removed because legacy reports depend on them. Explaining obscure workflow states to new hires. In a scaling team, this work usually falls on tech leads or senior engineers, exactly the people whose time is most expensive and whose attention should be on architecture, mentoring, and shipping. When your best people are spending hours a week keeping the delivery tool alive instead of delivering, the cost is invisible on any dashboard but very real in your velocity.
The teams that struggle most are the ones who inherit heavily customized Jira instances built over years by people who are no longer there. At that point, the options are aggressive simplification or accepting Jira maintenance as a permanent tax. Ignoring the problem is the most expensive option of all.
Jira automation can reduce repetitive work. Workflows can standardize ticket movement. Integrations can add context. None of that is the problem.
The problem is asking Jira to tell you whether delivery is healthy. It can't. Not reliably.
Healthy delivery requires understanding flow: seeing stalled work, rising review times, and growing bottlenecks across version control, CI pipelines, and communication channels. Those signals exist in your stack right now. They're just not visible in Jira without constant manual effort.
Jira is flexible enough to track almost anything, but flexibility brings complexity. The more it's customized, the more it requires upkeep. The more it's relied on for visibility, the more teams depend on perfect human updates under pressure. Under pressure is exactly when those updates stop happening.
Teams get the most from Jira when they use it for what it does well and look elsewhere for delivery health. But that creates a structural problem most teams never solve cleanly: someone still has to watch the signals across Jira, code repos, and Slack as a connected system, follow up when things stall, and escalate when work drifts. In practice, that someone is you. The CTO or engineering lead who ends up as a human routing engine, doing management dirty work in between everything else. The signals exist. They just don't trigger action on their own.
DevHawk monitors execution signals across your delivery stack in real time. When a ticket sits in "In Progress" with no commits for 48 hours, when PRs age past review thresholds, when work stalls across a handoff between time zones, it identifies the stall and triggers a follow-up to the right owner. If it still doesn't move, it escalates based on rules your team defines.
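DevHawk's internal rules aren't shown here. Purely to make the shape of a stall rule concrete, a hypothetical check with invented names and thresholds might look like this:

```python
from datetime import datetime, timedelta

# Invented rule shape; names and thresholds are illustrative, not DevHawk's API.
FOLLOW_UP_AFTER = timedelta(hours=48)
ESCALATE_AFTER = timedelta(hours=96)

def stall_action(status, last_commit, now):
    """Decide what a follow-up loop should do for one ticket."""
    if status != "In Progress":
        return "none"
    idle = now - last_commit
    if idle > ESCALATE_AFTER:
        return "escalate"    # route to the escalation owner
    if idle > FOLLOW_UP_AFTER:
        return "follow_up"   # nudge the ticket owner
    return "none"

now = datetime(2024, 3, 10, 9, 0)
print(stall_action("In Progress", datetime(2024, 3, 7, 9, 0), now))
# → follow_up
```

The point of the rule shape is that the thresholds and escalation targets are configuration, defined by the team, not hardcoded judgment calls made in Slack at 9am.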
This is not a replacement for Jira. Jira tracks the work. DevHawk watches whether the work is actually moving, and prompts action when it isn't.
For CTOs and engineering leads who've become the human escalation layer, DevHawk takes the management dirty work off your plate. Instead of starting your morning in Slack reconstructing what happened overnight, the system surfaces what needs attention and routes it to the right person. You get your signal without the coordination tax. Your leads get their deep work time back. The follow-up loops that used to depend on your personal attention run on their own.
One thing I learned the hard way: DevHawk works best when ownership is clear. If tickets don't have real owners, or if "done" means different things across the team, automation amplifies confusion rather than resolving it. The prerequisite for any follow-up loop is knowing who is accountable for what.
If you want to see how real-time delivery signals work across a Jira and Slack stack, DevHawk is built for that.
Why does Jira automation break so often?
Rules accumulate faster than they get maintained. When someone leaves, renames a field, or changes a workflow state, existing rules silently stop working. In scaling teams, this compounds because more squads means more automation with nobody owning the full picture.
How many Jira workflow states should a team actually use?
Teams start losing adoption past five or six states. Every state you add is a place tickets can idle without anyone noticing. Keep workflows minimal and use flow metrics like lead time and cycle time for the visibility that extra states were trying to provide.
At what team size does Jira maintenance become a real problem?
Most teams hit the tipping point around 15 to 20 engineers across multiple squads. That's when workflow complexity, automation sprawl, and integration fragility start compounding faster than anyone can maintain them manually.