by Shayna Stewart
In the digital industry we all aim toward the aspirational goal of innovation. Yet true innovation is increasingly hard to come by, and we believe this is due to the way roadmaps are prioritized. The frameworks product leaders use for prioritization tend to overemphasize the manipulation of features that are already in place. The metrics used in those frameworks, Level of Effort (LOE), Scale, and Customer Value, do not favor injecting new concepts; they favor optimizing existing ones.
Recently, when working with a client to prioritize ideas within their roadmap, we created what we call the Innovation Index. In this case, our client and our team brainstormed all of the possible ideas for the second phase of a recently built app.
The problem we ran into when ranking the ideas was a lack of data to support one feature over another: the first iteration of the app had yet to launch, so no data had been captured, and we were too early in the vetting process to gather consumer feedback. From our need to evaluate ideas objectively in the absence of usable data, the Innovation Index was born.
To create this index, we first did some research toward an objective definition of innovation. We landed on two important pillars: an innovative idea is distinct (it does not already exist in the market), and it solves a problem more efficiently than existing approaches.
From there, we took the list of ideas generated by our client and our team and researched whether any of them already existed. If an idea didn't exist, we gave it a score of 5. If it did exist, we ranked it with a score of 1-4, heuristically representing market saturation. So a 5 was a completely distinct idea, and a 1 was an idea with high market saturation.
Second, we ranked the ideas on whether the proposal was a more efficient way of solving the problem. If an idea scored a 5 on the market saturation scale (meaning a completely distinct idea), it automatically got a score of 5 for efficiency. Otherwise, we evaluated whether the idea offered a more efficient approach to a problem that someone else had already solved.
That's it: the Innovation Index is made up of two scales, Market Saturation and Efficiency. These scales map back to the pillars of innovation, which helped focus our next steps on differentiation rather than directly competing with existing products. In the end, this exercise focused our product planning on helping an underserved audience of about 18.6M people.
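The scoring rules above can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not the client deliverable: the function names are mine, and the article doesn't specify how the two scale scores are combined into a single index, so I've assumed a simple sum for ranking purposes.

```python
# Hypothetical sketch of the Innovation Index scoring described above.
# Assumption (not from the article): the two 1-5 scale scores are summed
# into a single index used to rank ideas.

def market_saturation_score(already_exists, saturation_estimate=None):
    """5 = completely distinct idea; 1-4 = heuristic market saturation
    for existing ideas (1 = highly saturated)."""
    if not already_exists:
        return 5
    if saturation_estimate is None or not 1 <= saturation_estimate <= 4:
        raise ValueError("existing ideas need a 1-4 saturation estimate")
    return saturation_estimate

def efficiency_score(saturation, efficiency_estimate=None):
    """A completely distinct idea (saturation 5) automatically scores 5
    for efficiency; otherwise use the team's 1-5 efficiency estimate."""
    if saturation == 5:
        return 5
    return efficiency_estimate

def innovation_index(already_exists, saturation_estimate=None,
                     efficiency_estimate=None):
    s = market_saturation_score(already_exists, saturation_estimate)
    e = efficiency_score(s, efficiency_estimate)
    return s + e  # assumption: unweighted sum; weighting is up to the team

# A completely new idea maxes out both scales:
assert innovation_index(already_exists=False) == 10
# An existing idea in a fairly saturated market, with a decent efficiency gain:
assert innovation_index(True, saturation_estimate=2, efficiency_estimate=4) == 6
```

Ideas can then be ranked by index, highest first, e.g. with `sorted(ideas, key=score_fn, reverse=True)`.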
After using the Innovation Index on a roadmap with no data, I started to think about how it would apply to a roadmap with data already in place. I discovered that without the Innovation Index, the metrics used to prioritize a roadmap (LOE, Scale, and Customer Value) were actually working against innovative ideas. Here is how that plays out for each metric:
Level of Effort
LOE asks how easy it is to bring an idea to market. The reasons behind an 'easy' LOE estimate differ slightly from team to team, but they all result in the prioritization of ideas that already exist.
If the idea is deemed easy by the technology team, it means they have either done this before or there is significant documentation already published. It also indicates that all data elements are readily accessible, and data elements that are readily accessible are typically already in use; in other words, the idea is an optimization of an existing feature.
If the easy LOE estimate comes from the design team, it means a proposed feature will likely have a small impact on the ecosystem. Designers spend less time when they do not have to think through the way users move through the experience, and when they do not have to think through the experience, it means they are manipulating or adding something to an existing page. This type of one-dimensional change is typically not symptomatic of building a new, innovative idea. Therefore an LOE of Easy, or even Medium, is deleterious to innovation.
Scale
Scale measures the potential number of users an idea could reach. If it's high, the idea gets prioritized over a niche solution. However, when you think about some of the most innovative brands today, like Amazon, PayPal, Etsy, and Tesla, they all started by servicing niche markets. Often, when innovative technologies and ideas are first created, the full breadth of implicated use cases is still unknown. PayPal, for example, was born from the insight that it was very challenging for auction houses (a small but extremely active part of eBay's user base) to collect mobile payments. Ten years later, it's rare to find a retailer that does not support purchases through PayPal.
Augmented reality is another recent technology that hasn't yet benefited from publicly scaled use cases, but we see companies like Google making significant investments in it. There's value in testing early and learning fast: if you encounter an idea that may be small in scale but is potentially innovative, I recommend prioritizing it and positioning it to leadership as a learning opportunity for the team.
Customer Value
Customer Value (CV) is our most vague metric, as it has the potential to be defined differently across indexes and use cases. We can't completely rule out that in some cases it aligns with innovation, but in most cases I see that it does not. CV is typically discovered through user validation or market research, and it often works against innovation for two reasons:
First, you may not be talking to the right group of people. In a recent study, we identified a target persona that clearly did not want certain innovative ideas because those ideas weren't geared toward that particular persona. This is also a factor when you have extremely small sample sizes: the users you are talking to just may not see the value of the idea.

Second, I see studies that outwardly ask users what they want and what could be improved. This is important for finding major usability issues, but it won't get you to focus on innovation. It's better to build a prototype of an innovative idea that solves a problem users didn't know they had, and then present the prototype to potential users (provided you feel confident in your sample) to get feedback on usability. This methodology is better than asking users what they want as a means of prioritizing what you build.
Steve Jobs epitomized this very thought when he opined: