There's nothing more exciting than trying something new. You feel the opening-night jitters: a mixture of fear and anticipation that flutters the heart and the stomach alike. In theatre, opening night is the culmination of months of hard work designed to reduce the risk of that first official performance for the public. Casting, rehearsals, understudies, set design, dress rehearsals, and finally private performances help ensure that the only question remaining on opening night is viability: will buzz and word-of-mouth keep the paying public coming back for a long theatrical run? Opening night is a pilot; all the important experiments around feasibility and desirability have already been run.
Such is your project. I hear a lot of people using the word "experiment" when, in fact, they are really piloting. Piloting is a timeless tactic of corporate America. Its aim is usually "proof of concept". The typical metric for a pilot is customer uptake. Sometimes internal systems' ability to manage the new product is also measured. I can't imagine anything riskier. Why? Because you are rehearsing in front of a paying audience. If you fail, it will be headline news tomorrow. It's also expensive, and you don't learn much when all you are measuring is the complex behavior of purchase.
Don't get me wrong. There's a time for pilots, but it's only after you have proven a compelling customer-problem-solution fit. And that's rarely at the beginning. Start with the desirability piece of the puzzle. Is there an early-adopter customer out there who is already trying to solve the problem you have identified? What can you learn from those customers? Could you adapt their solution and offer it to a small group of similar customers? Then, do they use it? Why or why not?
Experiments vs. Pilots
Experiments
- Test specific behaviors or assumptions
- Small scale, low cost
- Quick to implement
- Focused on learning and validation
- Easy to pivot based on results
- Low visibility if they fail
Pilots
- Test entire solutions or systems
- Larger scale, higher cost
- Substantial preparation required
- Focused on proving the concept works
- More difficult to change course
- High visibility and stakeholder attention
Next, adjust your prototype and offer it to a larger group of people, say 25 or so. What can you learn about desirability in this setting? What can you learn about feasibility: is it even possible to offer this at scale? When you have those answers, it's time to start thinking about viability. Why think about it now? Because what you learned about desirability may be so compelling that the answer to "what should we be involved in?" may change with new information. Likewise with feasibility: if you can show a jaw-dropping market opportunity, resources can magically appear to make the seemingly impossible happen.
The Discount Program Example
Let's say you are going to test a discount program with your customers. You ran some surveys, and some people said that such a program would be helpful.
Do you: (a) contract with a vendor, send an invite to a state's worth of customers, and inform retailers of the program; or (b) talk to a group of high-value customers about what they shop for and how they do it?
Correct, (b). Remember, before Angie's List was a website, it was a stapled list that Angie schlepped all over town. She learned a ton by doing it, and it lit the path to subsequent success in the scaled online world.
Brainstorm the biggest risky assumptions facing your project. Think in terms of desirability, feasibility, and viability. Rank them. Hint: desirability risks are usually first. Then feasibility. Then viability. Now test your top risk in the simplest way possible. Remember, these experiments aren't testing your solution; they're testing the behaviors that must be present for your customers to derive value from a solution. That's why pilots aren't right at this time; measuring sales doesn't tell you a thing about the why or why not of the purchase.
Looking at the Discount Program, after brainstorming risky assumptions, you came up with a couple of doozies. First, "customer will accept a discount/coupon". Second, "customer will use the coupon to buy something".
Now experiment. Send out your coupons. Track email open rates. Then measure redemptions. Talk to people who didn't open the email. Talk to people who didn't redeem. Talk to people who did redeem. Learn everything that you can. Then re-evaluate the offer and discuss viability in the context of the customer response. Given that response, should you be doing this? Is it worth the reputational risks? Could your company be good at it? Is it a good use of finite company resources?
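If it helps to see the funnel arithmetic, here is a minimal sketch in Python with made-up numbers; the counts, segment labels, and print-out are purely hypothetical illustrations of how you might summarize the coupon experiment before the interviews.

```python
# Hypothetical illustration only: the counts and segment names below are made up
# to show one way of summarizing a small coupon experiment before the interviews.

def rate(part: int, whole: int) -> float:
    """Return part/whole as a percentage, guarding against division by zero."""
    return 100.0 * part / whole if whole else 0.0

sent = 200      # coupons emailed to the small test group
opened = 88     # tracked email opens
redeemed = 23   # coupons actually used at checkout

print(f"Open rate:       {rate(opened, sent):.1f}%")      # did the offer get attention?
print(f"Redemption rate: {rate(redeemed, opened):.1f}%")  # did attention turn into use?

# Segment the list for follow-up conversations; each group answers a different "why".
segments = {
    "didn't open the email": sent - opened,
    "opened but didn't redeem": opened - redeemed,
    "redeemed": redeemed,
}
for group, count in segments.items():
    print(f"Interview some of the {count} customers who {group}.")
```

The numbers only tell you whom to go talk to; the learning happens in the conversations.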
This is just the beginning. But there is an ending. One of two things will happen. You'll be unable to generate compelling evidence to move forward, or you will get good evidence. If customers don't want it, that's great—do something else with those resources! The only failures occur when we charge ahead without evidence. The important thing is that you took the risk out of your offer long before the bright lights of opening night.
Ready to design effective experiments that reduce risk in your innovation projects?
Schedule a Coaching Session