Your Jira Board Is Lying to Leadership

Everyone Knows Except Leadership

I've sat in hundreds of sprint reviews. Across agencies, DTC brands, and enterprises. Across North America, Europe, and Asia.

And I can tell you exactly when the meeting stops being useful. It's the moment someone pulls up the velocity chart.

120 points completed. Green arrows. Trend lines going up and to the right. Leadership nods. "Great velocity this sprint."

Meanwhile every engineer in the room is staring at the same screen thinking the same thing.

That number is fiction.

Not because anyone lied. Because the system was never designed to tell the truth about what engineering actually does. And the moment someone put a velocity chart in a leadership deck, a workflow tool became a performance scorecard. And performance scorecards get gamed. Always. Without exception. Not out of malice. Out of survival.

The Invisible Tax

Here's what the board didn't show in my last sprint review.

An engineer spent six hours creating tickets for distributed team members. Detailed acceptance criteria. Context they'd need because they're eight time zones away and can't tap anyone on the shoulder at 2 PM. Screenshots. Links to the relevant services. Edge cases that would've become bugs without the upfront investment.

Six hours. Zero points. On the board, that engineer looks less productive than the person who mass-created five one-line tickets and assigned them without context. But the first engineer's tickets shipped clean. The second engineer's tickets came back three times with questions.

Jira doesn't capture that. Jira captures the points.

A story got split into four tickets because it was "too big for one sprint." Was it too big? Maybe. Or maybe the team learned that smaller tickets look better in the velocity chart. That one 8-point story that ships as a cohesive feature became four 2-point stories that each technically pass QA independently but don't actually deliver value until all four merge. The board shows 8 points of progress across the sprint. The customer got nothing usable until day 10.

A 3-point bug fix took two days. Not because the fix was hard. Because the root cause was in a system nobody documented. The engineer spent a day and a half reading code written by someone who left two years ago, understanding why it was built that way, confirming the fix wouldn't break three downstream services, and then writing the actual fix in 45 minutes. The board says 3 points. The reality was an archaeology expedition that prevented a production incident.

An architect did zero points of "work" this sprint. Literally zero tickets closed. But they spent three hours in a whiteboard session that unblocked three teams. A design decision that resolved a dependency conflict those teams had been dancing around for a month. No ticket. No points. No evidence it happened except that suddenly three teams started shipping again.

The board doesn't know any of this. Leadership doesn't know any of this. The only people who know are the engineers who lived it.

How We Got Here

Nobody woke up one morning and decided to use Jira as a performance measurement system. It crept in the way most bad practices do. Slowly. Reasonably. One decision at a time.

It started with a fair question. Leadership wanted to know if engineering was on track. That's not unreasonable. They're accountable for outcomes. They need some signal that work is progressing.

So someone built a dashboard. Velocity over time. Sprint burndown. Tickets completed per engineer. All pulled directly from Jira because the data was already there and the alternative was asking engineers to write status reports, which everyone agreed was a waste of time.

The dashboard looked clean. The numbers went up. Leadership felt informed.

And then the numbers became the goal.

Not explicitly. Nobody sent an email saying "your performance will be measured by Jira velocity." But engineers are pattern matchers. They watched what leadership paid attention to. They watched which teams got praise in all-hands meetings. They watched what questions got asked in quarterly reviews.

The pattern was clear. High velocity equals good. Low velocity equals questions.

So the system adapted. Because systems always adapt to measurement pressure. Every time. In every organization. This is not a Jira problem. This is Goodhart's Law playing out exactly the way it always does. When a measure becomes a target, it ceases to be a good measure.

The Behaviors Nobody Talks About

Once velocity becomes the signal leadership watches, a cascade of behaviors follows. All of them rational. All of them destructive.

Ticket Inflation

Engineers learn that more tickets equals more points equals better optics. A feature that should be one story with clear acceptance criteria becomes four stories, each representing a horizontal slice that doesn't deliver value independently. The board looks busier. The customer gets the same thing they would have gotten, just tracked in a way that flatters the chart.

I've watched teams double their velocity on paper without changing their actual output by one line of code. They just got better at writing tickets.

Now let me be clear. I'm not saying small tickets are the problem. Championship-level teams work in thin vertical slices. Ship the bread and butter first. Then add the ham. Then the cheese. Each slice delivers real value independently and is small enough for a meaningful code review. That's how high-performing teams operate.

The problem is when tickets get small for the dashboard instead of for the delivery. There's a difference between slicing work so each piece ships value to the customer and slicing work so each piece ships points to the chart. The first is engineering discipline. The second is theater.

Avoidance of Hard Work

When your performance signal is tickets closed, you optimize for closeable tickets. That means avoiding the work that matters most but tracks worst. Refactoring a critical service? That's one ticket that takes two weeks. Building five small features? That's five tickets in the same two weeks. The refactoring prevents six months of production incidents. The five features add marginal value. But the five features look better on the dashboard.

The hardest and most valuable engineering work is almost always the work that looks worst on a Jira board. Untangling a dependency that's been slowing every team for a quarter. Writing documentation that saves 200 hours of onboarding over the next year. Having the conversation with the PM about why the thing they want built shouldn't be built at all.

None of that shows up in velocity.

The Jira Hygiene Tax

Here's the one that kills me. I've watched engineers spend 30 to 45 minutes a day managing their board. Moving tickets. Writing updates. Splitting stories. Adding comments so the status is clear. Not for themselves. Not for their team. For the dashboard that leadership checks every Monday morning.

That's two and a half to nearly four hours a week. Per engineer. On a team of 10, that's 25 to 38 hours a week of engineering capacity spent making a dashboard accurate. Close to an entire engineer's worth of time. Not building software. Not solving problems. Managing the system that's supposed to track the software they're building.

If your tool costs you a full engineer in overhead, the tool isn't working for you. You're working for the tool.
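The arithmetic behind that claim is easy to check. A quick sketch using the figures above; the five-day workweek and team size of 10 are the same assumptions as in the text:

```python
# Back-of-the-envelope cost of daily Jira hygiene, using the figures above.
minutes_per_day = (30, 45)   # per engineer, low and high estimate
workdays_per_week = 5
team_size = 10

# Convert the daily range into weekly hours, per engineer and for the team.
weekly_hours = [m * workdays_per_week / 60 for m in minutes_per_day]
team_hours = [h * team_size for h in weekly_hours]

print(f"Per engineer: {weekly_hours[0]:.1f}-{weekly_hours[1]:.2f} hours/week")
print(f"Team of {team_size}: {team_hours[0]:.0f}-{team_hours[1]:.1f} hours/week")
```

At the high end, the team-wide overhead approaches a full 40-hour engineering week.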

What Velocity Actually Measures

I want to be precise about this because the problem isn't that velocity is useless. The problem is that it's being used for something it was never designed to measure.

Velocity is useful for one thing. Helping a team forecast how much work they can take on in the next sprint based on how much they completed in previous sprints. That's it. It's a planning tool for the team. A way to avoid overcommitting. A signal to say "we consistently complete about 40 points of work per sprint, so let's not plan for 80."

When you use velocity for planning within a team, it works. Imperfectly. With caveats. But well enough to be useful.
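As a sketch of what that planning use looks like in practice, with a hypothetical sprint history (the numbers are illustrative, not from any real team):

```python
# Velocity as a team-internal forecasting tool, nothing more.
# Hypothetical example data: points completed in the last five sprints.
recent_sprints = [38, 42, 40, 36, 44]

# Plan around the average, and treat the worst recent sprint as a sanity check.
# Committing past the average means betting on a better-than-typical sprint.
average = sum(recent_sprints) / len(recent_sprints)
floor = min(recent_sprints)

print(f"Plan for about {average:.0f} points; anything past {floor} is a bet.")
```

That is the entire legitimate job of the number: helping the team avoid overcommitting to itself.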

When you use velocity to compare teams, it breaks. Because points aren't standardized. My team's 5 is not your team's 5. When you use velocity to measure individual performance, it breaks worse. Because it incentivizes exactly the behaviors I described above. When you use velocity to report progress to leadership, it breaks completely. Because it conflates activity with value.

120 points completed tells you nothing about whether the customer's experience improved. Nothing about whether revenue moved. Nothing about whether the platform is more stable or the team is healthier or the technical debt is lower.

It tells you that engineers moved tickets from one column to another. That's all.

The Architect Problem

I want to stay on the architect example because it exposes the deepest flaw in ticket-based measurement.

The most valuable engineering work often has no ticket. Design decisions that prevent months of rework. Code reviews that catch a security vulnerability before it ships. A conversation in a Slack thread that redirects a team away from an approach that would have failed. A 20-minute whiteboard session that resolves a disagreement between two teams and unblocks both of them.

I've seen architects who generated zero points per sprint while being the single most valuable person in the engineering org. Their impact wasn't in tickets. It was in the decisions they shaped, the mistakes they prevented, and the teams they unblocked.

Try putting "prevented a six-figure production incident by catching a race condition in a code review" in a Jira ticket. What's the point value? Three? Eight? It doesn't fit the model because the model wasn't built for this.

And here's where it gets dangerous. If the measurement system can't capture this kind of work, engineers learn not to do this kind of work. They learn that the stuff that gets measured is the stuff that gets rewarded. So they close tickets. They ship features. They stop having the conversation that would have prevented the problem because prevention doesn't have a ticket.

You get a beautiful velocity chart and a codebase that's slowly rotting underneath it.

What I've Learned Works Instead

I've been getting this wrong for years. I used to build the dashboards. I used to report velocity to leadership. I used to believe the numbers meant something about the health of my team.

They didn't. They meant something about the health of our ticket management process. Those are very different things.

Here's what I do now.

Jira Stays with the Team

It's a coordination tool. Engineers use it to track their own work, manage dependencies, and communicate status to each other. That's what it's good at. That's all I ask it to do.

Leadership Gets Outcomes

When I report to a VP, I don't show them a velocity chart. I show them what shipped, what it impacted, and what's coming next. Features delivered that customers are using. Revenue impact where we can measure it. Risks reduced. Stability improvements. Time to market changes.

These are harder to produce than a Jira dashboard screenshot. That's the point. The easy report tells you nothing. The hard report tells you what matters.

Are We Hitting Our Commitments?

Here's the question that should replace the velocity chart entirely. Did the team do what it said it would do?

I track quarterly commitments made at 95% confidence across my distributed teams. Not points. Not tickets closed. Whether we delivered the outcomes we committed to, in the timeframe we committed to.

If a team is consistently hitting their goals and shipping what they promised, who cares what the velocity chart says? The work is landing. The commitments are being met. The business is getting what it asked for.

Velocity tells you how busy the team looks. Commitment tracking tells you if the team delivers. One of those matters to the business. The other is a vanity metric dressed up in a dashboard.
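Commitment tracking doesn't need tooling beyond a list. A minimal sketch, where the team names, outcomes, and the commitment_hit_rate helper are all hypothetical illustrations, not any real tool's API:

```python
# Hypothetical quarterly commitment log: what each team promised vs. delivered.
commitments = [
    {"team": "checkout", "outcome": "one-click reorder shipped", "delivered": True},
    {"team": "platform", "outcome": "p99 latency under 300ms", "delivered": True},
    {"team": "search", "outcome": "typo-tolerant queries", "delivered": False},
    {"team": "checkout", "outcome": "fraud-rule migration", "delivered": True},
]

def commitment_hit_rate(records):
    """Share of committed outcomes that actually landed, regardless of points."""
    delivered = sum(1 for r in records if r["delivered"])
    return delivered / len(records)

print(f"Hit rate this quarter: {commitment_hit_rate(commitments):.0%}")
```

Notice what's absent: there isn't a point value anywhere in the record. Only what was promised and whether it landed.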

The 15-Minute Rule

If an engineer on my team spends more than 15 minutes a day on Jira hygiene, something is broken. The process is too heavy, the ticket structure is too granular, or someone upstream is using the board for something it wasn't meant to do. I track this informally. When I hear someone complain about ticket management in a retro or a one-on-one, that's the signal.

Pay Attention to Questions

The best signal of engineering health isn't what numbers the board shows. It's what questions the team asks. Are they asking about customer impact? Are they pushing back on scope? Are they raising risks early? Are they challenging whether the work is worth doing before they start building?

A team asking good questions will outperform a team with high velocity every single time. Because the first team is thinking about value. The second team is thinking about throughput. And throughput without direction is just organized motion.

The Conversation Nobody Wants to Have

Here's the uncomfortable part. The part that makes this harder than just switching from velocity to outcomes.

Leadership wants velocity charts because velocity charts are easy to read. Green good. Red bad. Up good. Down bad. You can glance at it in 30 seconds between meetings and feel informed.

Outcome-based reporting is harder. It requires context. It requires understanding the work. It requires asking follow-up questions. It requires trusting that the engineering leader is telling you the truth when they say "we shipped less this sprint but what we shipped matters more."

That's a trust problem. And you can't solve a trust problem with a dashboard.

The organizations I've seen do this well have something in common. Engineering leadership and business leadership have a relationship built on direct communication, not mediated by a project management tool. The engineering leader says "here's what we delivered, here's what it means, here's what's at risk." The business leader asks questions. They go back and forth. They reach a shared understanding that no dashboard could have produced.

That takes more time than checking a velocity chart. It also produces better decisions than any velocity chart ever has.

The Meta Problem

There's one more layer to this that I think about a lot.

The Jira measurement problem is a symptom of a bigger issue. The assumption that engineering work can be quantified by tracking tasks.

Engineering isn't manufacturing. We don't produce units. We don't have a consistent cost per widget. A feature that takes three days might be worth ten million dollars in revenue. A feature that takes three months might be worth nothing. The time spent has almost no correlation to the value produced.

But we keep trying to measure engineering the way we measure factories. Throughput. Utilization. Cycle time. Points per sprint. All borrowed from contexts where the work is repetitive and the output is standardized.

Engineering work is neither of those things. The most important thing an engineer does in a given week might be a conversation that changes the direction of a project. It might be a decision not to build something. It might be deleting code. None of which fits in a ticket.

I don't have a clean answer for how to measure engineering perfectly. Nobody does. But I know that measuring it badly and pretending the numbers mean something is worse than admitting the measurement is hard and doing the work to report on actual outcomes.

Your Jira board is a tool. A good one, when used for what it was built for. The moment it becomes the source of truth about whether engineering is delivering value, it stops telling the truth.

And everyone on your team already knows that.

It's time leadership heard it too.