
The spreadsheet showed customer acquisition cost (CAC) at forty-seven dollars. Then someone changed how we allocated event marketing spend, and suddenly it was seventy-three. Then we tried a blended attribution model, and it landed somewhere in between. Three methods, three numbers, all defensible. We spent two hours in that conference room arguing about which was “right.”

Nobody could win that argument because the question itself was wrong. We were choosing a frame, not measuring truth.
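
To make that concrete, here is a minimal sketch of how the frame choice plays out, with invented figures chosen to mirror the numbers above. The variable names and dollar amounts are hypothetical, not our actual spreadsheet.

```python
# Hypothetical figures: same channel, same customers, three defensible
# CAC values depending on how shared event-marketing spend is allocated.

new_customers = 100
direct_spend = 4_700   # spend unambiguously tied to this channel
event_spend = 2_600    # shared spend whose allocation is a choice

# Frame 1: ignore the shared event spend entirely.
cac_direct_only = direct_spend / new_customers                      # $47.00

# Frame 2: charge all event spend to this channel.
cac_full_allocation = (direct_spend + event_spend) / new_customers  # $73.00

# Frame 3: blended model, splitting event spend 50/50 with another channel.
cac_blended = (direct_spend + 0.5 * event_spend) / new_customers    # $60.00

for label, cac in [("direct only", cac_direct_only),
                   ("full allocation", cac_full_allocation),
                   ("blended", cac_blended)]:
    print(f"{label:>16}: ${cac:.2f}")
```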

The same framing problem surfaced later, with a marketing campaign that posted a record-low CAC from an email blast. The team celebrated. Leaders nodded. We had proof of efficiency, duly documented in the quarterly review. What nobody mentioned in that moment: the email list had been built by an expensive direct-mail effort months earlier. The number was technically accurate within its frame, but the story it told was incomplete.

Numbers don’t lie, exactly. They crop the picture to what fits inside the boundary you’ve drawn. And the boundary is a choice.
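
A second sketch, again with hypothetical numbers, shows the cropping at work: the email blast’s CAC is honest inside its frame, and very different once the upstream list-building cost is pulled inside the boundary.

```python
# Hypothetical numbers: an email blast looks nearly free until the
# direct-mail cost of building the list is included in the boundary.

email_spend = 500               # cost of sending the blast
list_building_cost = 20_000     # earlier direct-mail effort that built the list
customers_from_blast = 250

cac_in_frame = email_spend / customers_from_blast                               # $2.00
cac_wider_boundary = (email_spend + list_building_cost) / customers_from_blast  # $82.00

print(f"in-frame CAC:       ${cac_in_frame:.2f}")
print(f"wider-boundary CAC: ${cac_wider_boundary:.2f}")
```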

The most useful thing I’ve learned about metrics came from watching a company almost repeat itself. A new VP joined and proposed a campaign strategy. It sounded sharp: well-reasoned and backed by current market thinking. Yet someone in the room pulled up an old dashboard. That exact approach had been tried eighteen months earlier. It hadn’t worked. The data was right there, tracked consistently enough that the pattern was unmistakable.

Without that record, we would have repeated the experiment. With it, we could skip ahead to the next question. The metric wasn’t perfect. It couldn’t explain why the campaign had failed or whether the new market would be different. But it preserved institutional memory when the people who carried the original context had moved on.

That’s what consistency gives you. A number tracked the same way for eighteen months tells you more than a “perfect” metric constantly redefined. The trend becomes the memory, more reliable than anyone’s recollection of what happened or why.

But metrics turn dangerous when you mistake the measurement for the thing itself. Lifetime value recast as guaranteed future cash instead of an educated guess. Growth rate trumpeted while unit economics quietly deteriorate. Gross margin inching upward only because expenses were cut, resurfacing later as churn. The number goes up. The system gets worse.

Some things resist quantification entirely, and forcing numbers onto them usually flattens them. Culture surveys that measure engagement without predicting retention. Satisfaction scores that show a button got clicked, not whether someone would recommend you to a colleague. Early signs of burnout, frustration that hasn’t calcified into resignation, the quality of silence in a meeting when someone proposes a questionable plan.

Better to admit what can’t be reduced. Build other ways of sensing: talk to customers, walk the floor, notice who has stopped speaking up. Not everything needs to be dashboarded.

I’ve come to think of metrics less as scorecards and more as stethoscopes: a way to hear what’s happening inside systems too complex to see directly. They won’t tell you what to do. They can’t diagnose root causes or guarantee outcomes. What they can do is hold steady long enough for judgment to form, give you something to return to when context shifts, preserve memory when people leave and markets change.

Good metrics make problems visible. They don’t solve them. But visibility is its own kind of infrastructure. The number on the wall doesn’t tell you where to go. It reminds you where you’ve been. So you don’t end up there again by accident.
