Disruptive ideas usually don’t see the light of day at large companies. And when they do, it’s not for long.
This is because traditional financial metrics such as net present value (NPV), internal rate of return (IRR) and return on net assets (RoNA) are used as measures of success. In addition, the allocation of capital to new ideas hinges on business cases that ask us to forecast indicators such as the target market, the market size, the payback period and the aforementioned financial metrics - all factors that can’t be reliably predicted for disruptive innovation, which is inherently uncertain and chaotic.
In the odd case that capital is allocated to a potentially disruptive idea, it’s usually pulled prematurely because the financial metrics or assumptions at the crux of the business case turn out to be faulty. That’s to be expected for almost any truly disruptive idea: if you can reliably estimate what every variable looks like from day one, you’re either a prophet, have a crystal ball, or are really pursuing incremental innovation, where the opportunities - and the underlying assumptions - are far more visible and obvious.
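To appreciate just how sensitive these metrics are to their inputs, here’s a minimal sketch (all cash flows, rates and growth figures below are hypothetical, not from any real business case) showing how modest changes to an assumed growth rate swing a project’s NPV from underwater to attractive:

```python
# Hypothetical 5-year project: $500k upfront, first-year cash flow of $100k,
# growing at an assumed annual rate. How much does the growth assumption matter?
def npv(rate, cashflows):
    """Net present value, where cashflows[0] occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

initial_investment = -500_000
first_year_cash = 100_000
discount_rate = 0.10

for growth in (0.05, 0.20, 0.35):  # three plausible-looking forecasts
    flows = [initial_investment] + [
        first_year_cash * (1 + growth) ** t for t in range(5)
    ]
    print(f"assumed growth {growth:.0%}: NPV = ${npv(discount_rate, flows):+,.0f}")
    # -> roughly -$85k, +$45k and +$214k respectively
```

Three defensible-sounding forecasts, three very different answers - which is exactly why a number like NPV is a shaky basis for backing (or killing) a Horizon 3 idea.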
There are a number of different approaches to corporate innovation programs. The focus of this post is one of them: an alternative investment pathway for disruptive ideas at large companies.
First up, we need a process for determining whether an idea is disruptive, merely incremental or somewhere in between.
So, a quick refresher on McKinsey’s three horizons (or types) of innovation:
• Horizon 1: incremental innovation that optimises and defends the existing core business
• Horizon 2: adjacent innovation that extends core capabilities into new markets, customers or offerings
• Horizon 3: transformational innovation that creates entirely new businesses
Disruptive innovation falls into the Horizon 3 bucket and was characterised by Mr. Disruption Theory himself, Clayton Christensen, as innovation that takes root in small, emerging or low-end markets with simpler, cheaper products that initially underperform on the dimensions mainstream customers care about.
Again, it’s easy to appreciate why large companies don’t support disruptive ideas in their infancy.
If a listed company made $1B last year and analysts are forecasting 5% growth, then it needs to find an additional $50M this year or suffer a decline in share price and value.
Small markets, with products that aren’t yet good enough for the mainstream, are probably not the best place to find that extra coin.
Airbnb might be a US$30B company today, but it made just $200 a week in its first year. The founders struggled so much that they came up with their own line of topical cereal simply to pay their credit card bills.
These are the negative attributes of disruptive innovations in their infancy that prevent large companies from taking an interest until, oftentimes, it’s too late (hello Blockbuster...or, erm, goodbye I guess?).
So what other factors might define disruptive innovation that we could use to determine which investment pathway to funnel an idea into - a traditional business case or our alternative?
The disruptive innovation litmus test, first put forward in Christensen’s The Innovator’s Dilemma, proposes the following questions for new-market and low-end disruptive innovations.
New Market Disruption (innovation that creates a new market, eg. iPad)
• Is there a large population of people who historically have not had the money, equipment, or skill to do this thing for themselves, and as a result have gone without it altogether or have needed to pay someone with more expertise to do it for them?
• To use the product or service, do customers need to go to an inconvenient, centralized location?
Low-End Disruption (innovation is better or cheaper than what came before it, eg. Netflix)
• Are there customers at the low-end of the market who would be happy to purchase a product with less (but good enough) performance if they could get it at a lower price; and
• Can we create a business model that enables us to earn attractive profits at the discount prices required to win the business of the over-served customers at the low end?

Once an innovation passes the new-market or low-end disruption test, there is still a third critical question to answer affirmatively:
• Is the innovation disruptive to all of the significant incumbent firms in the industry? If it appears to be sustaining to one or more significant players in the industry, then the odds will be stacked in that firm’s favor, and the entrant is unlikely to win.
The typical horizon for low-end disruptive innovations, Source: Clayton Christensen
Horizon 2, or adjacent innovation, which leverages core assets and combines them with emerging market demand (eg. Uber Eats), can lean towards H1 or H3, depending on how strongly it leverages the core and just how certain and predictable the business model, and everything that flows from it, is. The greater the uncertainty, the higher the likelihood it should go into the H3 funnel.
If an idea falls under the banner of Horizon 1 innovation, then congratulations - most large companies are already built for this and a business case is an almost perfect vehicle for it.
But if an idea proposed by an employee meets this definition of disruptive innovation, then it can be funnelled into our alternative approach.
Given that the risk profile of disruptive ideas is much higher than that of incremental innovation because of the variables we discussed earlier (remember, small markets and small margins?), the failure rate is very high. As such, organisations need to get more accustomed to taking lots of small bets - as venture capitalists do - rather than a few large ones, as is often the case with big Horizon 1 programs.
Fortunately, most of said failure (96% for startups, 80% for new ideas at large companies) is market failure - that is, building things nobody wants.
Testing whether or not people want to pay for what we’re proposing, before we spend hundreds of thousands of dollars and countless months designing, developing, marketing and selling the thing, doesn’t need to cost a lot.
As with startups, the risk and uncertainty associated with disruptive innovation is greatest at the start of its lifecycle. We simply don’t have enough information to draw any logical conclusions with any degree of certainty about the problem, the solution, the target customer segment, the marketing and distribution channels, the revenue and pricing model and so on.
The cone of uncertainty, a concept that has gained widespread popularity in agile management circles, best encapsulates this:
With greater uncertainty comes greater risk.
And that’s also why taking the time up front to place many small bets to determine what those variables are is the most prudent course of action and the one that yields the most reward in the long run.
For emerging businesses, startups and new corporate ventures, there is a tendency to jump to conclusions with a product and try to optimise things like marketing channels and ad campaigns. That might bear a little fruit, but when you’re selling a half-baked solution to a poorly defined problem, you might steal a few unsuspecting bases - you won’t hit any home runs.
What if there was a way to test and optimise our assumed problem and solution before investing in subsequent development?
Customer discovery philosophies like the lean startup have opened people’s eyes to this possibility, especially in a time of ubiquitous internet access, when building, testing and iterating can be done at warp speed.
Below is what Ash Maurya, author of Running Lean and creator of the lean canvas, essentially describes as the new product development lifecycle:
When startups seek investment, they do so based on different factors, from different people, at different stages of their lifecycle.
Note: today it’s not uncommon for founders to secure significant investment off the back of an idea alone, especially where they have a previous track record to point to.
*Refer to notes at end of article for explanation of CPA v LTV and other metrics.
A venture capitalist’s primary job is to invest in early-stage disruptive innovation, so they cast a wide net given the risk profile of startups, striking gold, on average, only once in every ten investments, with a couple of moderate wins to boot.
Now, if even VCs, whose full-time job this is, get it right only a few times out of ten, what chance does a large listed company - with a culture characterised by certainty, appeasing short-term shareholder demands, risk mitigation by analysis, change advisory boards, steering committees and getting things ‘perfect’ before engaging customers - have of getting it right every time?
Yet our current systems are set up to do - or rather, to expect - exactly that.
What Gets Measured Gets Managed
Recall from our startup investment example above that each stage in the product development lifecycle has different metrics used to determine investment.
So we need to define similar metrics for our investment into disruptive ideas at a large company.
Effective metrics are characterised by the following:
Well, if we take the startup investment model above and map it to the product development lifecycle, we might land on something like this to back disruptive innovation at a large company.
*Refer to notes at end of article for explanation of CPA v LTV and other metrics.
Experimentation doesn’t need to cost much.
You’ll note the tiny funding amounts of as little as $500 for problem validation (depending on the idea, this could in fact be much less than $500).
That’s because with nominal amounts like these, some smarts and an internet connection, we can test lots of our assumptions fast and generate enough data to determine whether or not the idea should receive a subsequent capital injection.
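To make the staged approach concrete, here’s a minimal sketch of what that gating logic might look like in code. The stage names, thresholds and tranche sizes below are hypothetical illustrations, not a prescribed standard:

```python
# Hypothetical stage-gated funding model: each tranche funds one stage
# and is released only if every earlier stage cleared its threshold.
STAGES = [
    # (stage name, metric tested, pass threshold, tranche funding this stage in $)
    ("problem validation",   "ad click-through rate",    0.02, 500),
    ("solution validation",  "landing-page signup rate", 0.05, 5_000),
    ("MVP / business model", "paying-customer rate",     0.01, 50_000),
]

def next_tranche(stage_results: dict) -> int:
    """Return the next tranche to release, or 0 if a gate failed
    (time to pivot or park the idea) or all stages are complete."""
    for name, metric, threshold, tranche in STAGES:
        if name not in stage_results:
            return tranche  # stage not yet run: fund it
        if stage_results[name] < threshold:
            return 0        # gate failed: stop funding
    return 0                # all gates passed: graduate the idea

print(next_tranche({}))                             # -> 500 (seed problem validation)
print(next_tranche({"problem validation": 0.031}))  # -> 5000 (fund solution validation)
```

The point isn’t the specific numbers - it’s that capital is drip-fed against observed evidence rather than committed up front against a forecast.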
Remember, we’re looking to test problem and solution in the early stages of an idea, which doesn’t require functional, working software or hardware.
Make no mistake, this is way lighter than your traditional ‘proof of concept’, which is still all about some kind of functional offering that takes months to get out of the building.
Still not sure? Check out the case study at the end of this post for an example of this in action or read this article on how GE managed to decrease time to market for new ideas by 80% using a similar approach.
You might also notice some alignment between this model and the following:
More and more people in the corporate innovation space are wising up to the fact that design thinking, while incredibly powerful when it comes to testing for problem-solution fit, is simply not, and never will be, enough when it comes to testing business models and product-market fit.
As with anything, there are going to be pitfalls, such as incorrectly defining metrics, focusing on one metric at the expense of all others (eg. acquisition versus retention), false positives, not collecting enough data and, of course, the 110 cognitive biases that plague our decision making.
Don’t pull the plug early!
Remember, typical VCs expect a return on their investment in 7-10 years, so if you’re really dabbling in disruptive innovation, can you really expect a massive pay-off in six months?
The following is a sanitised real-world case study.
Introducing Mary.
She works for a large life insurance company.
Her idea: a holistic health management app that satisfies the disruptive innovation litmus test we covered earlier.
She’s received $1,000 seed funding to explore the problem and solution.
She’s decided to use a combination of the following to test the problem and solution:
Method: lean startup
Tools: lean canvas, online ads and a landing page
Metrics: acquisition rate and activation rate
For simplicity’s sake, the canvas above focuses only on the starting blocks - problem, solution and customer segment.
We can see that Mary has made a number of assumptions around the problem and customer segment that need to be tested.
She is first going to test the assumption that people don’t have time to take care of themselves.
She will test this across the different customer segments identified to see if it resonates with any particular group more than others.
What we want to measure early on in the innovation lifecycle is learning. In order to validate or invalidate the assumptions underpinning an idea or business model, we must experiment, learn and iterate relentlessly to move closer to product-market fit.
You might use Pirate Metrics (because, “aarrr!”) to map out metrics at each stage of the funnel below. Initially, you might simply want to test, under acquisition, that the percentage of target customers who click on a Facebook ad speaking to the problem assumption is above X%.
Example for an online store
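To make the funnel arithmetic concrete, here’s a minimal sketch using made-up counts for a hypothetical online store (not Mary’s actual numbers), computing the conversion rate at each AARRR stage:

```python
# Made-up funnel counts for a hypothetical online store (AARRR stages).
funnel = [
    ("Acquisition", 10_000),  # e.g. visitors who clicked the ad
    ("Activation",     800),  # e.g. signed up / had a good first visit
    ("Retention",      320),  # e.g. came back within 30 days
    ("Revenue",         64),  # e.g. made a purchase
    ("Referral",        16),  # e.g. referred a friend
]

# Each stage's conversion rate relative to the stage before it.
for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: {count / prev:.1%} of the previous stage")
```

Whichever stage leaks the most is where the next experiment should focus.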
Got acquisition nailed but finding that people drop like flies once they land on your offer (eg. your website)? Then it’s time to optimise for activation.
If you’re falling short of your target metric, then perhaps tweak the problem definition, the target customer segment or even the ad imagery. This is all part of the rapid experimentation and iteration that underpins any ‘overnight success’.
Testing Acquisition in practice with a Facebook Ad
Target customer segment: Hospitality workers
Problem tested: People have no time to take care of themselves
Metric: % of targeted people who click on Learn More in ad
In this case she’s spent between $68 and $337 testing assumptions across the respective customer segments, averaging about $10 per 1,000 impressions (or ad views) and about $1.24 per click.
Her reach exceeded 140,000 impressions and generated almost 1,400 site visits in under two weeks.
Already we can see that the problem appears to resonate more with blue collar workers, who clicked on the ad at a rate of 3.17%, versus 0.54-1.27% for the other segments.
Note: these are actual results from a similar campaign
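If you want to sanity-check figures like these yourself, here’s a minimal sketch of the arithmetic. The per-segment numbers below are illustrative placeholders consistent with the averages above, not the actual campaign data:

```python
# Illustrative per-segment ad stats: (impressions, clicks, spend in $).
segments = {
    "blue collar":    (10_000, 317, 100.0),
    "hospitality":    (30_000, 381, 337.0),
    "office workers": (25_000, 135, 250.0),
}

for name, (impressions, clicks, spend) in segments.items():
    ctr = clicks / impressions           # click-through rate
    cpc = spend / clicks                 # cost per click
    cpm = spend / impressions * 1_000    # cost per 1,000 impressions
    print(f"{name}: CTR {ctr:.2%} | CPC ${cpc:.2f} | CPM ${cpm:.2f}")
    # blue collar -> CTR 3.17% | CPC $0.32 | CPM $10.00
```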
Testing Activation in practice with a Landing Page
Target customer segment: Hospitality workers
Solution tested: online stress management and resilience training
Metric: % of visitors from ad who click on Get Early Access
Note the targeted imagery.
Again, the blue collar customer segment comes out way in front in terms of receptiveness to the proposed solution, with 7.4% of website visitors from this customer segment signing up for further information. It’s worth noting that the others also performed reasonably well, given that average conversion rates for successful online platforms hover at around 2-3%.
Step five: she updates her ads and landing page to reflect updated assumptions based on these learnings.
At this point, she might again make some tweaks and perform some further testing.
She might reach out to the 25 blue collar users who signed up for more information and engage them with problem and solution interviews to better understand their problem and home in on a solution that would truly resonate.
She might start exploring elements of the business model with them too such as pricing model, which she could subsequently test online using the approach we’ve provided above.
Whatever the case, Mary is racing ahead, learning a lot about the proposed concept and its associated customer segments, problem and solution, and can present tangible, measurable, real data back to stakeholders.
She might be ready to ask for additional funding to build out a minimum viable product and test the business model, which might require only an additional $5,000 or so with the help of clone scripts and some offshore support.
When I built my first startup, I did so using clone scripts and a developer I found on Freelancer.com. It cost me little more than $2,500 to build a functional minimum viable product and secure a subsequent round of $156,000 in funding.
Again, experimentation doesn’t need to be expensive, and it’s much cheaper than racing ahead regardless - or simply not exploring disruptive innovation at all.
However, Mary might also suggest a different path based on unforeseen learnings (a pivot) or decide that the data doesn’t support going any further.
In the latter event, it’s important to optimise the return on failure (ROF) for the organisation by sharing learnings and decreasing the risk of duplicated effort.
For more on the ROF framework, click here.
Not only does this approach empower our people to take many small bets across lots of potentially disruptive ideas and prepare our organisations to tackle growing volatility and ambiguity, it also mitigates the risk of over-investing in the wrong idea by forcing us to validate the problem, the solution and market appetite early.
Remember, it’s always much more expensive not to experiment than it is to experiment.
The flow-on benefits are numerous, but above all, empowered people doing purposeful work translates into a much happier, more engaged and more productive workforce.
And while nothing is certain, by empowering your organisation to take many small bets, you’ll be far more likely to succeed.
---
The Net Promoter Score is an index ranging from -100 to 100 that measures the willingness of customers to recommend a company's products or services to others.
How it works: ask your customers “how likely are you, on a scale of 0-10, to recommend this to a friend?”
0-6 = detractor
7-8 = passive (neutral)
9-10 = promoter
Simply subtract the percentage of respondents who were detractors from the percentage who were promoters (passives are ignored as they are essentially fence-sitting responses).
If all of the customers are detractors then NPS would be -100.
If all of the customers were promoters then NPS would be 100.
Anything above zero is technically good while anything above 50 is excellent.
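Here’s that calculation as a minimal code sketch (the survey scores are made up for illustration):

```python
# Made-up survey responses on the 0-10 "would you recommend us?" scale.
scores = [10, 9, 9, 8, 7, 6, 5, 9, 10, 3]

promoters  = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)

# NPS = % promoters - % detractors; passives (7-8) are ignored.
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:.0f}")  # 5 promoters, 3 detractors out of 10 -> NPS 20
```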
CPA (cost per acquisition) = the average cost of acquiring a customer
LTV (lifetime value) = the average lifetime value of a customer
Of course, if LTV is higher than CPA then you’re onto something; if not, then…
ARPA (average revenue per account) is a measure of the revenue generated per account, typically per month or year.
MRR (monthly recurring revenue) is income that a company can reliably anticipate every 30 days.
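Tying these metrics together, here’s a minimal sketch of the unit-economics check. The figures, and the simple churn-based LTV approximation, are illustrative assumptions rather than a universal formula:

```python
# Illustrative unit economics for a hypothetical subscription product.
paying_accounts = 200
mrr_per_account = 30.0   # ARPA per month, in $
monthly_churn   = 0.05   # 5% of accounts cancel each month
gross_margin    = 0.80   # fraction of revenue kept after serving costs
cpa             = 250.0  # average cost to acquire a customer, in $

mrr = paying_accounts * mrr_per_account                # monthly recurring revenue
ltv = mrr_per_account * gross_margin / monthly_churn   # common simple LTV approximation

print(f"MRR: ${mrr:,.0f}")                       # $6,000
print(f"LTV: ${ltv:,.0f} vs CPA: ${cpa:,.0f}")   # $480 vs $250
print("Healthy!" if ltv > cpa else "Back to the drawing board...")
```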
To help you avoid stepping into these all-too-common pitfalls, we’ve reflected on our five years as an organization working on corporate innovation programs across the globe, and have prepared 100 DOs and DON’Ts.