Sometimes good things come in small packages. In business-to-business (B2B) sales, big data tools and techniques are digging up veritable truffles in the form of micromarkets.
Companies are tapping and analyzing data from multiple sources to segment their geographic markets into finer slices. Running the right algorithms on a variety of data makes it possible to refocus sales efforts on the market slices with the most potential—often with dramatic results.
Management consulting firm McKinsey & Co. surveyed 120 B2B companies to learn how businesses are using data analytics to retool the sales process for micromarkets. McKinsey researchers described opportunity maps in an article in the July/August issue of Harvard Business Review. “The first step in pursuing a micromarket strategy is to create an ‘opportunity map’ of potentially lucrative hotspots,” write Manish Goyal, Maryanne Q. Hancock and Homayoun Hatami. “The map taps internal and external data sets from a variety of sources and uses sophisticated analytics to build a picture of the future opportunity, not the historical reality.”
One of the most successful examples McKinsey uncovered is a chemicals and services company that divided its seven US regions into 70 micromarkets. The company created sales plays for the micromarkets with the greatest potential and realigned its sales force to devote resources to those hotspots. “Within a year the sales growth rate doubled—without any increase in marketing or sales costs,” the authors wrote.
Another example shows the value of correlating internal and external data. A technology company set up real-time monitoring of social media and matched comments about products with internal purchasing data to identify prospects, said Hatami in a webinar about the research. “They were able to pinpoint specific hunting opportunities,” he said. “These leads were so accurate that the company had an 80 percent hit rate, which is a fantastic hit rate.”
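The matching step Hatami describes can be illustrated with a minimal sketch: flag accounts that mention a product in social channels but show no purchase of it in internal records. All account and product names here are invented for illustration; a real pipeline would add entity resolution and real-time ingestion.

```python
# Toy sketch: match product mentions from a social feed against internal
# purchase records to surface accounts discussing a product they have
# not yet bought. Names and records are hypothetical.

def find_prospects(mentions, purchases):
    """mentions: list of (account, product) pairs from social monitoring.
    purchases: set of (account, product) pairs from internal sales data.
    Returns accounts mentioning a product they have not purchased."""
    return sorted({acct for acct, prod in mentions
                   if (acct, prod) not in purchases})

mentions = [("acme", "router-x"), ("globex", "router-x"), ("acme", "switch-9")]
purchases = {("acme", "router-x")}  # Acme already owns Router X

print(find_prospects(mentions, purchases))  # ['acme', 'globex']
```

The hard part in practice is not the join itself but reconciling identities, since social handles rarely match CRM account names cleanly.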
A second high-tech company showed the benefits of simply combining and analyzing data tucked away in different corners of the company, including contract databases, customer service, and R&D. The data sets were “sitting in different places and nobody had put them together to think about how they can drive growth,” said Hatami.
The company pulled the data sets into a single database and hired statisticians to run several types of algorithms. Some algorithms encoded intuitions from the sales force; others were purely statistical. The algorithms were aimed at determining which customers, in which microsegments, the company should target.
The algorithms also determined what would happen if the company were to offer a particular value proposition to a customer. The algorithms “described four or five different simple value propositions,” Hatami said. For example, “buy hardware from us, and also buy service from us because then the maintenance cost will be lower,” he said. The algorithms “were able to predict the win rates that they would get from different value propositions.”
This enabled more effective deployment of sales and marketing resources, said Hatami. “The outcome was really phenomenal.” The company raised its renewal bookings rate, the rate at which customers renew service contracts, by 20 percent. This increased revenue by more than $100 million, Hatami said.
Three Keys to Making the Market Analysis Work
Despite the focus on data—where to find it, how to analyze it—the main lesson McKinsey consultants learned from their study was the importance of knowing what to do after a company divides its market into finer slices. There were three key takeaways:
- Companies should bring salespeople in on the analytics process, said Hatami. “Have a quality dialogue between the front-line salespeople and their management on what this data all means and what do we do about it.”
- Companies should understand the forces at work in each micromarket and identify which lever to pull, Goyal said during the webinar on the research. Levers include realigning the sales force, increasing marketing, or pursuing partnerships, he said. “The kind of lever that you need to pull really is different based on the local market you’re looking at.”
- Companies should package the analytics output to make it accessible and relevant to salespeople to assure sales force buy-in, said Goyal. “The analytics by themselves are not sufficient,” he said. “You really need to make this real for the frontline. It can’t be presented to them as a black box.”
The Need to Focus on Consumer Behavior
Companies should be wary of taking the micromarkets strategy too far, said Peter Fader, a marketing professor at The Wharton School of the University of Pennsylvania and co-director of the Wharton Customer Analytics Initiative. Returns diminish severely once you get past a couple of dozen segments, he said; at that point the costs can outweigh the benefits.
Companies should also avoid relying on demographics, said Fader. “I don’t like ZIP code or geography in general as a segmentation criterion,” he said. “There’s a vast heterogeneity within a particular ZIP code. I feel the same way about any kind of demographic.”
The critical variable for segmentation is behavior, said Fader. The goal should be dividing people into groups based on what they’ve done, when they’ve done it and how much of it they’ve done, Fader said. “If we’re just using non-behavioral criteria, then finer slices just create more noise.”
Behavior within a company is also critical. Just as salespeople need to be sold on the analytics process, the analytics team needs to incorporate the sales team’s experience and the company’s business goals into the analytics. “The spark comes when you have a data-oriented person who meets with an intuitive salesperson,” said McKinsey’s Hatami.
This kind of cross-functionality ensures that the data is crunched in the right context and with the right goals. Simply processing lots of data, even if it’s from a variety of internal and external sources, doesn’t guarantee an improvement to the bottom line.
A lot of newer companies that are rich in data have become intellectually lazy, said Fader. “There’s this feeling that if we have more data we don’t need to think about it—we don’t need to form hypotheses or theories about what drives sales because we’ll have everything at our fingertips,” he said. “That’s a really naïve worldview,” Fader said. “It’s more important than ever to be able to separate wheat from chaff and to know which types of measures to focus on even before you start building models.”
Learning from the Direct Mail Industry
Business data science largely originated in industries that have worked with data for a long time, even when data was hard to come by, said Fader. Direct marketing firms, for example, “were born and raised around the kinds of data structures that people are obsessing about today,” he said. “They’ve been thinking a lot about combining data sources, picking the right measures and building predictive models at a granular level.”
Another example is packaged goods firms, said Fader. For decades, these companies have thought carefully about squeezing the most value from syndicated data sources like bimonthly Nielsen reports. “That they had such limited data actually made them smarter, because it really forced them to think about the behavioral story underlying the limited data,” Fader said.
Many of the patterns that were handed to us from the direct marketers and the consumer packaged goods firms—repeat purchasing, conversion—are just as relevant today as they were 40 years ago, said Fader. “It’s those old industries that often provide the best practices.”
Eric Smalley is a freelance writer in Boston. He is a regular contributor to Wired.com. Follow him on Twitter at @ericsmalley.