As a PM, You Probably Shouldn’t Be as Confident as You Are (The Risk/Confidence Matrix)
In the realm of product management, creating remarkable and impactful products requires a profound understanding of (and empathy for) the end customer and their goals/problems.
A while ago, I wrote a post titled “Begin With the End [Customer] in Mind,” where I reframed Stephen Covey’s second habit, “Begin with the end in mind.” And it’s obvious that Product Managers (or those performing that function in their company, like the CEO of a startup) need to be deeply empathetic with their customers. But there’s always so much to do, so how do you know when you need to carve out the time to work on deepening your customer understanding and customer empathy, how do you do it, and how much time should you spend on it?
I believe that artifacts such as personas, empathy maps, customer journeys, etc. are excellent ways to create and communicate customer understanding and develop customer empathy. Your need for these will evolve over time and depend on your problem space, your confidence in your existing understanding of your customers and your hypotheses about their behavior, and how risky it is if those hypotheses are wrong. As I mentioned in my article about Riskiest Assumption Testing (“I smell a RAT, or why you’re doing MVPs wrong”), building on work from Scott Sehlhorst and others, that risk is the thing to focus on. Layered on top of that (and the reason you should test your riskiest assumptions) is your confidence level in the assumptions underlying the risk.
So let’s talk about confidence for a bit. Most people put a lot of faith in domain or subject-matter “experts,” and feel confident in the things they say, especially when they say them with confidence. You may feel that you are a subject matter expert yourself. But experts are wrong all the time. In fact, according to research that Eric Barker references in his article “How accurate are the experts on TV,” they’re only slightly better than random guessing, not as accurate as statistical models, and they’re more likely to be right when making predictions about things outside their field of “expertise.” (The study is summarized in the book Everything Is Obvious* Once You Know the Answer.) Bringing some humility to the table might be just what is needed.
So in the land of confidence, I want to emphasize that expert opinion (aka “domain knowledge”) is actually quite low on the list of reasons to feel confident about your hypotheses, and the best way to be confident is to see a thing actually happen. Experimental data is always the best — I thought that if I did X, Y would happen, and Y really did happen. There are varying degrees of confidence between these two extremes (domain expert opinion and experimental test results), of course, as evidenced by the Confidence Scale (H/T Scott Sehlhorst):
Using this as a framework to guide your relative confidence in your assumptions helps a lot. It allows you to see where you are making risky assumptions and to dig in and do something to move up the confidence ladder if necessary. But how do you know if it is necessary to do something to increase your confidence here?
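To make the ordering concrete, here’s a minimal Python sketch of the idea: sources of evidence ranked from weakest to strongest. The specific rungs and their labels are my illustrative paraphrase of the concept, not Sehlhorst’s exact scale.

```python
# Illustrative only: evidence types ordered from lowest to highest confidence.
# The rung names are paraphrases, not the canonical Confidence Scale wording.
CONFIDENCE_SCALE = [
    "wild guess",
    "expert opinion (domain knowledge)",
    "analogous evidence from a similar product or market",
    "customer interviews / stated intent",
    "observed real-world behavior (experimental data)",
]

def confidence_rank(evidence: str) -> int:
    """Return the rung of the scale; higher means more confidence."""
    return CONFIDENCE_SCALE.index(evidence)
```

The point the ranking encodes: expert opinion sits near the bottom, and nothing beats watching the behavior actually happen.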
The second part of this is risk. How dangerous is it if you are wrong in your hypotheses? What are the consequences of being wrong? Generally the ways you can be wrong can be summarized as either “Wrong Problem” or “Wrong Solution.” And you want to find out before you incur the costs of being wrong — what is the type of risk you are facing? The types of things that can be “risks” can vary with the hypothesis you are making:
- Financial/effort — how much does it cost to build this thing?
- Reputation — if we are wrong, is it a big PR hit for us?
- Execution — how hard is it to build? Can we do this well?
- Technology — Does the tech exist to build this and do we have access to it?
- Market — what if customers just don’t want this (or won’t want it by the time you can deliver)?
- Competitive — is this really a differentiator? Can our competition easily copy it?
- Legal/Compliance — are there privacy concerns or legal concerns here?
You can try to assess the scale of the risk and plot it on a graph with the confidence like so (a Risk/Confidence Matrix, you could call it):
And based on how risky it is if you’re wrong, you can then decide what to do. Do you “Pay-to-Learn” by running experiments, building prototypes, or doing some other form of more extensive research to move up the confidence scale (aka Riskiest Assumption Tests, as I mentioned before)? Or is the risk low enough that you just ship it and see what happens (which will also definitely move you up the confidence scale, since you’ll see real-world behavior and all)?
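As a toy illustration of that decision, here’s a sketch that maps a (risk, confidence) pair onto one quadrant of the Risk/Confidence Matrix. The numeric scores and the 0.5 cutoffs are arbitrary placeholders I’ve invented for the example; as discussed, the real thresholds depend entirely on your situation and risk tolerance.

```python
def recommend(risk: float, confidence: float, threshold: float = 0.5) -> str:
    """Toy quadrant rule for the Risk/Confidence Matrix.

    risk and confidence are scores in [0, 1]. The threshold is an
    arbitrary placeholder, not a real decision rule.
    """
    if risk >= threshold and confidence < threshold:
        # High risk, low confidence: run a Riskiest Assumption Test first.
        return "pay-to-learn"
    if risk >= threshold:
        # High risk but high confidence: proceed, but keep watching.
        return "proceed and monitor"
    # Low risk: shipping is itself the cheapest way to learn.
    return "just ship it"
```

For example, a high-risk assumption backed only by expert opinion lands in the “pay-to-learn” quadrant, while a low-risk one can simply ship.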
What you decide to do likely has a lot to do with the type and size of the risk, as well as your own (and your company’s) risk tolerance. I can’t put forth a simple rule. This is all situationally dependent but the key is to think about it, and consider:
a) that you might be wrong, and
b) what would happen if you were wrong.