Assumptions are NOT the mother of all f#&K ups

Actually, in almost all situations, assumptions are all we’ve got. We just need to create the right experiments to validate (or falsify) them…

I’ve been re-reading “The Lean Startup” by Eric Ries. The book contains very important ideas on how to grow your business using validated learning, innovation accounting and the value and growth hypotheses, and on the way startups should handle all these things to be successful. While re-reading it, I kept wondering how some of these ideas might apply to normal day-to-day business in a non-startup situation.

For me the crucial part lies in the Build-Measure-Learn feedback loop. Even more so in the reverse order in which it is initiated. To quote Ries:

“Although we write the feedback loop as Build-Measure-Learn because the activities happen in that order, our planning really works in the reverse order. We figure out what we need to learn, use innovation accounting to figure out what we need to measure to know if we are gaining validated learning, and then we figure out what product we need to build to run that experiment and get that measurement.” Page 78

I see that in “mature” companies, teams create new products or features in an existing product based on assumptions made by the business or by the teams themselves (and sometimes on customer research). Those assumptions are rarely made explicit: nobody knows exactly how they are going to measure the outcome or prove that the feature actually adds value. Management is frustrated because they “don’t see the value”; teams are frustrated because management doesn’t understand what they are doing; and so on.

Assumptions are not the problem here. The problem is that we do not test them in a proper ‘scientific’ way. We need to Build-Measure-Learn, or even better, we need to make the reverse order of that feedback loop very explicit!

  1. Make the assumptions explicit. “I think feature X helps to solve problem Y.” 
  2. Add the (potential) value it will (hopefully) bring (added revenue, time saved, performance improved by some amount). “I think feature X helps to solve problem Y with amount Z.”

Let’s pause here. 

You are still in assumption mode. Of course you want to make sure it is a good assumption, and you can spend a lot of time doing the math, but “feature X will bring value Z” is still an assumption, or, put in scientific terms, a hypothesis.

  3. Make the way to measure the outcome explicit: “I think feature X helps to solve problem Y with amount Z, and we measure this with Q.” (Q can be new or something already in place)
  4. Build the feature to test your assumptions. Nothing more! Just build that which is needed to validate your hypothesis.
  5. Measure the outcome.
  6. Share the outcome and make a decision if needed (rolling out to all users instead of a subset, making it a bit prettier if it adds value, etc.).
  7. Learn what to do next (improve further, pivot, something new). A minimal code sketch of these steps follows below.
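
To make this concrete, here is a minimal sketch of steps 1-7 in code. Everything in it is made up for illustration: the “saved searches” feature, the Hypothesis record, the measure() stub and all the numbers are my own assumptions, not a real implementation.

```python
from dataclasses import dataclass
import random

# Steps 1-3: write X, Y, Z and Q down explicitly, before building anything.
@dataclass
class Hypothesis:
    feature: str            # X: what we think we should build
    problem: str            # Y: the problem it should solve
    expected_effect: float  # Z: the value we assume it adds
    metric: str             # Q: how we measure the outcome

h = Hypothesis(
    feature="saved_searches",  # hypothetical feature
    problem="users repeat the same query on every visit",
    expected_effect=-10.0,     # assumed: 10 seconds faster to first result
    metric="median_seconds_to_first_result",
)

# Step 4: "build" only what the experiment needs. This stub fakes
# collecting the metric for a user cohort; in reality it would read
# from your instrumentation or analytics.
def measure(cohort: str, n: int = 500) -> float:
    baseline = 42.0
    effect = -8.0 if cohort == "treatment" else 0.0
    samples = sorted(baseline + effect + random.gauss(0, 5) for _ in range(n))
    return samples[n // 2]  # approximate median

# Step 5: measure the outcome for both cohorts.
control = measure("control")
treatment = measure("treatment")
observed = treatment - control

# Step 6: share the outcome and decide against the assumed value Z.
print(f"{h.metric}: control={control:.1f}s, treatment={treatment:.1f}s")
print(f"observed effect: {observed:.1f}s, assumed (Z): {h.expected_effect:.1f}s")
if observed <= h.expected_effect / 2:  # arbitrary decision threshold
    print("validated well enough: consider rolling out to all users")
else:
    # Step 7: learn what to do next (a new experiment, not this one).
    print("not validated: improve the feature, pivot, or try something new")
```

The point is not the code itself, but that X, Y, Z and Q are written down before anything gets built, and that the decision in step 6 is taken against the assumed value instead of gut feeling.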

A feature is only done when the full B-M-L loop has been completed. For people working with product backlogs, that means steps 1-6 should be part of your product backlog item (PBI). Step 7 is the start of a new PBI.
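
As a (made-up) example, a PBI written this way could look like:

```
PBI: saved-searches experiment
  1. Assumption: saved searches (X) solve repeated querying (Y)
  2. Expected value (Z): median time to first result drops by 10 seconds
  3. Measurement (Q): median_seconds_to_first_result, per cohort
  4. Build: a minimal saved-search button, behind a feature flag
  5. Measure: compare the control and treatment cohorts
  6. Decide: roll out, polish or drop, based on the data
Done = all six checked. “Learn what to do next” becomes a new PBI.
```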

And

“The truth is that none of these activities by itself is of paramount importance. Instead we need to focus our energies in minimising the total time through this feedback loop.” Page 76

For instance: don’t go into analysis paralysis on step 2. Don’t make step 4 too fancy at first. Be data-driven in steps 5 and 6, and don’t over-analyse again. And improve every step along the way, so you learn better and faster.