How product experts create consistent value

The Product Experts from the Split Decisions Conference take us to school

Jack Moore
The Startup

--

In a graciously appointed conference room in the JW Marriott, just off Union Square in San Francisco, some of the greatest mad scientists in the product universe gathered to discuss how to consistently build awesome products.

This was the Decisions Conference, sponsored by Split. Punctuated by a steady stream of music from the Top Gun soundtrack, the day featured speakers with expertise in product management, data science, and organizational leadership sharing best practices for building organizations that quickly and consistently churn out meaningful products that users love.

The discussions of the day ultimately centered on one key challenge, well put by Adil Aijaz, CEO of Split Software, in his introductory keynote:

“What matters is not the speed of delivering releases, but the speed of delivering value.”

We need to move beyond releasing quickly and releasing often, and understand the importance of ascribing value to those releases. This requires knowing which metrics are important, understanding how to track those metrics, and having the discipline to release only what positively affects them.

These themes lined up with the tracks of the conference, and they were repeated and emphasized by speaker after speaker.

They seek out outcomes, rather than outputs

In order to deliver value to our customers and users, we must understand the difference between the products we build and the outcomes that those products achieve.

Outputs are built. They’re the things that a team builds in an effort to make customers happy.

Outcomes are achieved. They’re the results of the products we ship (and the ones we don’t), and of how our users actually use those products.

If our outputs don’t produce quality outcomes, then all we’ve done is build a product that isn’t useful. We measure outcomes through real analysis of how our features are used.

For a moment, imagine the greatest book you’ve ever read. It brought you value, but was it the book itself that brought that value? If the book had been printed in a language you couldn’t read, it’s still the same book with the same message, but it suddenly becomes less valuable to you, the user.

The distinction here is important because you can never know for sure what the impact of any output is until you look. Nothing is valuable in a vacuum. It is the contextual use of a given output, how it changes habits and impacts lives, that determines whether a feature is valuable or not.

They experiment to reduce their “Feature Debt”

In order to achieve the outcomes that we want, we need to understand how to ascribe value to the features we’re putting out.

It’s easy to talk about achieving outcomes; understanding the investment required to actually measure outcomes at scale is another step entirely.

According to Ronny Kohavi, VP of Experimentation and Analysis at Microsoft, great organizations realize that most product improvements do not positively impact their intended metric. Kohavi leads the experimentation team at Bing, where he’s seen a roughly equal distribution of features with positive, neutral, and negative outcomes; only the first group gets shipped.
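
To make that concrete, here’s a minimal sketch in Python, with hypothetical numbers and an illustrative 5% significance threshold (this is not Bing’s tooling), of the kind of ship/no-ship call an experiment supports: compare treatment against control on one target metric and classify the result.

```python
# Compare a feature's treatment group against control on one target
# metric. All numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions out of users exposed to each arm.
control_conv, control_users = 1_120, 24_000
treatment_conv, treatment_users = 1_310, 24_000

z_stat, p_value = proportions_ztest(
    count=[treatment_conv, control_conv],
    nobs=[treatment_users, control_users],
)

lift = treatment_conv / treatment_users - control_conv / control_users

if p_value >= 0.05:      # no detectable effect either way
    verdict = "neutral -> don't ship"
elif lift > 0:           # statistically significant improvement
    verdict = "positive -> ship"
else:                    # statistically significant regression
    verdict = "negative -> don't ship"

print(f"z={z_stat:.2f}, p={p_value:.4f}, lift={lift:+.2%}: {verdict}")
```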

This is how you avoid Feature Debt

Whereas technical debt refers to implementation workarounds that reduce the overall quality of your code base, feature debt refers to the things you implement, regardless of code quality, whose impacts are not understood.

We contribute to Feature Debt every time we ship features that we “know” are going to be valuable to users without measuring how users respond. Similarly, when we build something because a user’s boss’s boss, the HiPPO, insists it will be valuable, without validating that value, that’s feature debt too.

Side note: I was excited to hear that the term HiPPO, the “Highest Paid Person’s Opinion,” was actually coined by Kohavi. What a legend!

If our users aren’t getting value from our products, then we have not succeeded as product teams. Therefore, any product that we cannot prove provides value to our users contributes to our Feature Debt.

Ultimately, it doesn’t matter how genius we think the product we’re shipping is. Our users decide how good our product is, and they show their support through their actions.

By shipping only the features that demonstrate a positive metric impact, you keep a lean code base that supports only those workflows with proven, well-defined value.
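
In practice, teams often enforce this gate with a feature flag: the new code path ships dark and becomes the default only once the experiment proves it out. Here’s a rough sketch following the pattern from Split’s Python SDK documentation; the SDK key, flag name, and render functions are illustrative assumptions.

```python
# Flag-gated shipping, sketched with Split's Python SDK.
from splitio import get_factory

factory = get_factory("YOUR_SDK_KEY")  # placeholder key
try:
    factory.block_until_ready(5)  # wait for flag definitions to load
except Exception:
    pass  # if the SDK isn't ready, get_treatment returns "control"

split_client = factory.client()

def render_new_checkout(user_id):  # hypothetical flow under test
    return "new checkout"

def render_old_checkout(user_id):  # current production flow
    return "old checkout"

def render_checkout(user_id):
    # The experiment decides who sees the new flow; the old flow stays
    # the default until the new one proves a positive metric impact.
    treatment = split_client.get_treatment(user_id, "new_checkout_flow")
    if treatment == "on":
        return render_new_checkout(user_id)
    return render_old_checkout(user_id)  # "off" or "control"
```

If the experiment comes back positive, the flag ramps to 100% and gets deleted; if not, the new code path is removed rather than left behind as feature debt.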

They only call a feature “done” when they understand its impact

Nobody sets out to start a feature factory. In Spotify’s famous engineering culture presentation, you can see a great example of how tech firms typically picture their teams contributing towards a larger vision of success.

Despite this vision, many firms find themselves stuck in a land of low alignment, low autonomy, or both.

Teams with low autonomy often have leaders who agree on the goals they think are important, but who have decided that, without proof that any given idea works, decisions about what should (and should not) be built must ultimately filter through leadership.

Teams with low alignment struggle to understand which efforts produce good business results, often because they can’t measure them. Everyone reports their excitement about the various things they’re working on, but nobody knows which results are good results.

So, if everyone is trying to figure out how to let their teams build towards a unified vision of success, why is it that so many have failed to find this nirvana of alignment and autonomy?

Simply put, they don’t have the processes, control, and discipline to ship only those products they know are valuable.

Towards solving this problem, John Cutler, Product Evangelist at Amplitude, proposed that teams define a feature as “done” only when its impact on business metrics has been clearly understood, documented, and otherwise communicated.

Typical definitions of “done” for a feature often fall somewhere between “ready to ship” and “shipped,” with monitoring serving as a more holistic effort on the part of product teams, rather than a feature-specific one.

We rack up feature debt when we ship features that are not monitored, and again when we roll out a feature without testing whether it is an improvement or a detriment compared to the world that existed last release.
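
As a toy illustration, here’s what that expanded definition of “done” might look like if encoded directly; the field names are hypothetical, not anything Cutler prescribed.

```python
# A feature isn't "done" at "shipped"; only once its metric impact is
# measured and written down. Fields here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feature:
    name: str
    shipped: bool = False
    metric_impact: Optional[float] = None  # measured lift on the target metric
    impact_documented: bool = False        # written up and shared with the team

    @property
    def done(self) -> bool:
        # "Done" requires shipped code AND a measured, communicated impact.
        return self.shipped and self.metric_impact is not None and self.impact_documented

feat = Feature("new_checkout_flow", shipped=True)
assert not feat.done  # shipped, but impact unknown: still feature debt
```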

Ultimately, product experts recognize that what gets measured gets managed, and unproven features are a liability to any product team. Teams that are dedicated to ascribing outcomes to their outputs have the best opportunity to put out simple, elegant products that deliver consistent value to their users.

Thanks for reading! I’m Jack Moore, and I love writing about product management.
