Johannes Habel warns against following your hunches, which may be both wrong and costly.
When we believe that a particular theory or course of action is the right way to think or act, we tend to come up with examples to support it. This tendency is known as confirmation bias. And once we have started moving in a given direction, we often prefer to keep going, even when doubts creep in and we begin to wonder where we might end up. This tendency is known as escalation of commitment. In business, either tendency can turn out to be costly. At the same time, even a positive, 100-percent-sure gut feeling can prove just as costly.
Uncertainties, and the wrong decisions made in the attempt to resolve them, arise in all kinds of business contexts: Would a sales force be more productive if the fixed share of its compensation were changed from 30 to 70 percent? Would adding a new product category attract more customers and raise overall profit? Would digitalizing both baggage check-in and the retrieval of lost baggage breed resentment and drive customers to switch airlines? Or would the opposite be true, with customers reading digitalization as a sign that an airline is keeping up with the times?
In each case, those in charge may have hunches, or even stronger feelings: the unshakable conviction that what they believe is right. If it works, no one will be able to say plausibly why; if it doesn’t, same story.
Take the case of Ron Johnson, who left Apple to become the CEO of JCPenney. Instead of staying with JCPenney’s traditional coupons and clearance racks, he and his team innovated. They filled the stores with brand names and boutiques, and technology replaced cashiers, cash registers, and checkout counters. It did not take long before sales nosedived, losses soared, and Johnson left JCPenney.
Before any of this happened, the changes should have been tested. Testing means implementing something new for a randomly selected group of customers, employees, or stores, while withholding the change from a control group. By comparing the results of the test group with those of the control group, we can discover what actually works, rather than what we merely imagine would work.
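In practice, that comparison often comes down to asking whether the gap between the two groups is bigger than chance alone would explain. The short Python sketch below illustrates the idea with a standard two-proportion z-test; the function name and the customer figures are hypothetical, chosen purely for illustration and not taken from any case in this article.

```python
from statistics import NormalDist


def two_proportion_z_test(conv_test, n_test, conv_control, n_control):
    """Compare the conversion rate of a test group against a control group.

    Returns the observed lift (test rate minus control rate) and a
    two-sided p-value for the null hypothesis that both groups
    convert at the same underlying rate.
    """
    p_test = conv_test / n_test
    p_control = conv_control / n_control
    # Pooled rate under the null hypothesis of no difference
    p_pooled = (conv_test + conv_control) / (n_test + n_control)
    se = (p_pooled * (1 - p_pooled) * (1 / n_test + 1 / n_control)) ** 0.5
    z = (p_test - p_control) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_test - p_control, p_value


# Hypothetical example: 1,200 of 20,000 test customers bought the new
# product versus 1,000 of 20,000 control customers.
lift, p = two_proportion_z_test(1200, 20000, 1000, 20000)
print(f"Observed lift: {lift:.2%}, p-value: {p:.4f}")
```

If the p-value is small, the difference is unlikely to be a fluke of random sampling; if it is large, the "effect" may be exactly the kind of fantasy the article warns about.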
Testing is therefore useful, but not in every case. It makes sense when the results are measurable, not for large-scale strategy. It will not tell you whether you should enter an emerging market, nor will it help with other major strategic decisions. Instead, it is a method for checking the effects of innovations or changes: how your customers will react to a new product, a new taste, or a new technology, for example, or what the reaction will be if you add high-end products to your low-budget portfolio.
Depending on the industry and the results you are after, small experiments might even suffice. Yahoo! typically runs 20 or so experiments at any one time, manipulating things such as colors, the placement of advertisements, and the location of text and buttons. These little experiments can have big effects, like the one showing that simply moving the search box from the side to the center of the home page increased the click-through rate enough to bring in about $20 million more in advertising revenue a year.