Note · 7 June 2023

In which exploration requires a permit, and the permit is clarity.

Earning the Right to Explore

The last strategy session ended the same way the last three had: a whiteboard full of arrows, half the team excited, half already exhausted. Someone had drawn a quadrant. Someone else had added sticky notes, color-coded by urgency, which meant we now had a taxonomy of our confusion. Ninety minutes later, I watched a group invent problems to avoid the ones already in the room.

We had no shortage of ideas. What we didn't have: any real sense of whether the last new thing was working. Or the one before that. Someone floated a new data pipeline. Someone else pitched a rewrite of the tagging schema. Both ideas were probably fine. Both would require trusting numbers that nobody in the room fully trusted, built on definitions that had drifted since whoever originally wrote them left the company two roles ago.

I sat there nodding along, knowing all of this, and said nothing useful. I was as excited by the new stuff as anyone. Going back to reconcile three dashboards that each tell a slightly different story about the same quarter is the kind of work that makes you want to pitch a rewrite of the tagging schema instead. That's how the loop starts: exploration that can't prove itself, followed by more exploration to compensate.

Standards get a terrible reputation, partly because they sound like bureaucracy and partly because the people who advocate loudest for them are sometimes using rigor as a hiding place. I've been that person. There's a specific comfort in tightening a naming convention when the real question is whether the product direction makes sense. You can spend months improving data hygiene while the market shifts underneath you, and it feels productive the entire time, because it is productive. Just not in the way that matters most.

The more common failure, though, runs in the other direction. A new initiative gets a kickoff meeting, a Slack channel, maybe a deck. Standards get a Confluence page nobody bookmarks and a quarterly audit that everyone finds a reason to skip.

I once worked on a team that ran what felt like a perpetual pilot program. Every quarter brought a new framework, a new vendor evaluation, a new proof of concept. The team was smart and genuinely curious, which made the pattern harder to see. Each individual experiment was defensible. The aggregate effect was chaos: four competing data sources, three half-built integrations, and a Jira board so layered with abandoned workstreams that it had become a kind of archaeological site. You could carbon-date the team's strategic pivots by scrolling back through the backlog.

What they lacked: shared definitions for the metrics they kept trying to reinvent. "Engagement" meant one thing in the Tuesday standup and something slightly different in the Friday review, and nobody had reconciled the gap because nobody owned it. When a number looked wrong, there was no one specific to ask whether it was actually wrong or just measured differently than you expected. What was missing, in other words, was the scaffolding that makes progress legible.

Without that scaffolding, everything takes longer. Data can't be trusted. Fixes don't stick. Onboarding a new teammate feels like teaching a dialect that only exists inside your own head. Even the simplest experiments collapse under ambiguity, because you can't tell whether the experiment failed or whether the inputs were wrong from the start.
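
For what it's worth, the fix doesn't have to be heavy. Here is a minimal sketch, in Python, of what a shared definition with an owner might look like; every name in it is invented for illustration:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MetricDefinition:
        name: str        # the canonical name everyone uses
        definition: str  # plain-language meaning, written down once
        source: str      # the single table or query it's computed from
        owner: str       # the specific person to ask when the number looks wrong

    # Hypothetical example: one "engagement", not one per meeting.
    ENGAGEMENT = MetricDefinition(
        name="engagement",
        definition="Distinct users with at least one qualifying action in a 7-day window",
        source="warehouse.daily_user_actions",
        owner="ana@example.com",
    )

Whether it lives in code, a schema registry, or a single well-guarded page matters less than the properties: one definition, one source, one name attached to it.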

Standards ↔ Exploration

Not rules for their own sake — scaffolding that makes innovation legible

Standards

Make work legible
  • Shared naming & definitions
  • Single source of truth
  • Comparable metrics over time
Reduce friction
  • Clear ownership & handoffs
  • Onboarding that scales
  • Fewer one-off exceptions
Stabilize systems
  • Predictable inputs/outputs
  • Faster debugging & recovery
  • Trustworthy data surfaces

Exploration

Evaluate new approaches
  • Decide up front whether each is worth pursuing
Benefits
  • Long-term cost savings
  • Step-change improvement
Trade-offs
  • More flexibility, at the cost of increased complexity
Potential
  • Innovation
  • Competitive edge

Outcomes: faster learning, fewer surprises, higher leverage.

The diagram flattens something that feels more tangled in practice. When something new arrives (a tool, a framework, an idea someone brought back from a conference), the instinct is to evaluate it on its own merits. Could this solve the thing we've been struggling with? The answer is almost always yes, theoretically, if you squint. New approaches are appealing precisely because they haven't yet been tested against your specific mess.

The harder question is whether your team can evaluate the new thing clearly, given the state of what you already have. If your existing systems are illegible, every assessment of the new thing inherits that illegibility. You end up comparing a shiny unknown against a poorly understood known, which is how teams adopt tools that solve problems they don't actually have while the real problems persist underneath.

And when teams do choose to explore, they habitually skip the part that matters most: clear go/no-go criteria, defined before they start. In my experience, "deeply worth it" gets defined retroactively, once the team has already committed, which is less a decision than a rationalization.
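
Here is a sketch of what "defined before they start" can look like, with entirely invented criteria and thresholds; the point is the shape, not the specifics:

    # Go/no-go criteria pinned down before the experiment runs.
    GO_CRITERIA = {
        "cost_reduction":    lambda r: r >= 0.20,  # at least 20% cheaper to run
        "numbers_reconcile": lambda r: bool(r),    # the dashboards finally agree
        "migration_weeks":   lambda r: r <= 6,     # effort stays bounded
    }

    def decide(results):
        """Evaluate results against criteria fixed before the experiment began."""
        failed = [name for name, check in GO_CRITERIA.items()
                  if not check(results[name])]
        return ("no-go", failed) if failed else ("go", [])

    # decide({"cost_reduction": 0.12, "numbers_reconcile": True, "migration_weeks": 5})
    # -> ("no-go", ["cost_reduction"])

The mechanism is trivial. The discipline is writing it down while you can still be wrong.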

I don't think the answer is always "standardize first." Sometimes the existing system is so fundamentally broken that tightening the bolts on a collapsing structure is its own waste of time. I've seen that, too: teams polishing processes for a product that the market has already moved past, deriving a quiet satisfaction from the internal consistency of something nobody outside the building cares about anymore.

But that failure mode is rarer than the other one. More often, the team that can't get its standards right is the same team that keeps proposing new things, because novelty feels like forward motion even when it isn't. The whiteboard stays full. The energy stays high. The underlying friction stays exactly where it was.

If you want to build something new, you have to be honest about what you already have. Standardize enough that the next step forward isn't guesswork. Enough that when the experiment doesn't work, you can tell whether it failed on its own terms or whether it never had a fair chance against the accumulated ambiguity of everything that came before.

Otherwise, somewhere in all of this, the new person starts their second week still trying to figure out which dashboard to believe.


Reply

I’d welcome your thoughts on this essay. Send me a note →
