How to Design Better KPIs Before They Mislead You

The Soviet government once set a KPI for nail factories: the number of nails produced. Factories responded by making millions of tiny, useless nails. Spotting the problem, the government switched the metric to kilograms. Factories responded this time by fabricating a few giant, useless nails. The actual goal, a useful variety of nail sizes for a functioning economy, never appeared in the formula.
This is not a story about Soviet central planning. It is a story about what happens when a metric becomes the goal — and why knowing how to design better KPIs matters more than most dashboards suggest. I have seen it happen in modern corporate dashboards, built by competent people, approved by management, and trusted for months: green dashboards with invisible problems.
The dashboard that told the truth and nothing else
Some time ago, I worked with a company that, among its many KPIs, tracked outsourced personnel as a percentage of total headcount. For management, the logic was straightforward: keep the ratio below a threshold, control costs. The formula was clean.
% Outsourced = Outsourced Headcount / Total Headcount
Most of the time, this card was green: the ratio sat comfortably below target, quarter after quarter.
What the report did not show was the rotation rate of the outsourced workforce. Contractors were churning at high frequency; each replacement meant a new hire, a full onboarding cycle, weeks before the replacement reached productive speed. The cost of that cycle, repeated dozens of times per year, was real. Despite its importance, it appeared nowhere in the report.
The KPI was not wrong. It was incomplete. And incomplete, in this case, meant misleading.
Why KPI design fails: formula incompleteness
A KPI is an indicator, not a goal — and that distinction is where better KPI design begins. The distinction sounds obvious until you watch a management team spend forty minutes in a review discussing whether the metric is 0.3 points above or below a threshold, without once asking what the metric is actually trying to indicate.
Business Performance Management literature has documented this failure for decades. One failure mode has a name most analysts know: Goodhart’s Law — when a measure becomes a target, it ceases to be a good measure. That version is about gaming. But this post is about a quieter problem: formula incompleteness. Nobody designed the metric to be gamed, and nobody gamed it. It was simply built to answer one question, and over time that question became the only question anyone asked. The goal had other dimensions. The formula did not.
The Soviet nail factory is an extreme version. Corporate KPIs are a quieter version, but the result is the same: numbers look good, the underlying business reality does not, and the gap between them grows until something breaks.
Before and after
Here is what the workforce dashboard looked like before anyone asked the harder questions.
One formula, one card, one answer to the wrong question. The report told you the ratio. It did not tell you what was happening underneath it.
% Outsourced =
DIVIDE ( [Outsourced HC], [Total HC] )
After the conversation changed, three measures replaced one.
% Outsourced =
DIVIDE ( [Outsourced HC], [Total HC] )
Outsourced Rotation Rate =
DIVIDE ( [New Outsourced Starts in Period], [Outsourced HC] )
Est. Retraining Cost =
[Outsourced Rotation Rate] * [Avg Onboarding Cost CHF] * [Outsourced HC]
Same data source. Three questions instead of one.
The original KPI did not change: still 14%, still on target. What changed was the context. Rotation at 68% per year meant the outsourced workforce was turning over entirely every eighteen months. An estimated CHF 240K per year in onboarding cost had been invisible, not because the data did not exist, but because nobody had built the measure.
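The retraining-cost estimate is just the rotation measure composed with a unit cost, and the arithmetic is worth seeing once. A quick sketch in Python, where headcount, yearly starts, and per-cycle onboarding cost are all illustrative assumptions rather than the client's actual figures:

```python
# Back-of-envelope check of the Est. Retraining Cost logic.
# All inputs are illustrative assumptions, not the figures from the engagement.
outsourced_hc = 100              # assumed outsourced headcount
new_starts_per_year = 68         # assumed replacement hires in a year
avg_onboarding_cost_chf = 3_500  # assumed cost of one onboarding cycle

# Mirrors the DAX measures: rotation rate, then rotation * unit cost * headcount.
rotation_rate = new_starts_per_year / outsourced_hc
retraining_cost = rotation_rate * avg_onboarding_cost_chf * outsourced_hc

print(f"Rotation rate: {rotation_rate:.0%}")                           # 68%
print(f"Full turnover roughly every {12 / rotation_rate:.0f} months")  # ~18 months
print(f"Est. retraining cost: CHF {retraining_cost:,.0f}")
```

Note that the rotation rate alone implies the turnover horizon: at 68% per year, the whole outsourced bench is replaced in roughly eighteen months, which is exactly the reading management had missed.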
Every KPI has a blind spot: the gap between what the formula measures and what the goal requires. The question is whether you have named it.
How to design better KPIs: four questions before you go live
The problem is not that people build bad metrics — most analysts who want to design better KPIs are already asking the right questions. It is that the stress-testing happens too rarely, and usually only after something breaks. Here is the check to run before any KPI card goes live.
The four questions, unpacked
1. What is the actual goal this number is supposed to indicate? Write it out. Not the metric name — the business outcome. “Control workforce cost” is a goal. “% Outsourced below 20%” is a metric. They are not the same thing.
2. Can this formula be satisfied while the goal is being missed? This is the adversarial test. Assume someone wants to keep the number green while the underlying situation deteriorates. How would they do it? If you can answer that easily, you have found your blind spot.
3. What is this formula not measuring that could matter? A metric always has edges. Name them explicitly. For the headcount ratio, the edges were rotation rate and onboarding cost. Neither appeared in the original report.
4. What complementary measure makes the blind spot visible? Once you have named the gap, build the measure. A blind spot you have named is just a gap. A blind spot you have not named is a risk.
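One way to make the checklist operational is to record the answers as plain data before the card ships. The structure and field names below are my own illustrative sketch, not a standard framework; the answers are the ones this post works through for the headcount ratio:

```python
# The four-question check, recorded as data for one KPI before it goes live.
# Structure and field names are an illustrative sketch, not a formal standard.
kpi_review = {
    "metric": "% Outsourced",
    "actual_goal": "Control total workforce cost, not just the headcount mix",
    "green_while_failing": "Fast replacements keep the ratio stable despite heavy churn",
    "blind_spots": ["rotation rate", "onboarding cost"],
    "complementary_measures": ["Outsourced Rotation Rate", "Est. Retraining Cost"],
}

# A card is ready to ship only when every question has a non-empty answer.
assert all(kpi_review.values())
```

The point is not the data structure. It is that an empty field is visible in a way an unasked question never is.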
The second question is the hardest and the most important. For the outsourced headcount ratio, the answer was obvious in hindsight: as long as replacements arrived fast enough, the ratio stayed stable regardless of contractor churn. The ratio never signals the cost of those replacements. The formula rewarded a stable number, not a stable workforce.
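That failure mode is easy to demonstrate with made-up data: two scenarios with the identical headline ratio but very different churn. Every number here, and the helper function itself, is a hypothetical illustration:

```python
# Adversarial test sketch: same headline KPI, very different realities.
# All headcounts, starts, and costs are hypothetical.
def kpi_view(total_hc, outsourced_hc, new_starts, onboarding_cost):
    """Return (% outsourced, rotation rate, hidden onboarding cost)."""
    pct_outsourced = outsourced_hc / total_hc
    rotation = new_starts / outsourced_hc
    hidden_cost = new_starts * onboarding_cost
    return pct_outsourced, rotation, hidden_cost

stable   = kpi_view(total_hc=500, outsourced_hc=70, new_starts=7,  onboarding_cost=3_500)
churning = kpi_view(total_hc=500, outsourced_hc=70, new_starts=48, onboarding_cost=3_500)

# The headline KPI is identical in both worlds: 14%, green either way...
assert stable[0] == churning[0]
# ...while rotation and the invisible cost differ by roughly a factor of seven.
print(f"Stable:   rotation {stable[1]:.0%}, hidden cost CHF {stable[2]:,}")
print(f"Churning: rotation {churning[1]:.0%}, hidden cost CHF {churning[2]:,}")
```

The assert is the whole argument in one line: the card cannot distinguish the two scenarios, so by construction it can stay green while the goal is being missed.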
A KPI is a window
A KPI is a window into your business. The danger is not that the window is dirty. It is that you have pointed it at the wrong wall, and the dashboard is quietly confirming, in green, that everything you chose to look at is fine.
The goal was never to hit the target. The target was supposed to tell you whether you were hitting the goal. When those two things come apart, the report stops being useful and starts being comfortable.
Build the complementary measure. Name the blind spot before someone else finds it.