Too often, companies take a very inward-looking view of the criteria their customers use to define satisfaction and to compare them to the competition. In addition, when those same companies serve different markets, they sometimes fall into the trap of assuming the drivers of customer satisfaction across those markets are very similar, if not identical.
This lack of clarity affects not only customer retention and loyalty but also how the company invests capital, deploys technologies, structures itself, measures performance, and decides which skills to hire.
Let me share three real-life examples with you:
For years, the first company took pride in its ability to quickly provide clients with responses to requests for proposals (RFPs). Based on its labor costs and the volume of RFPs generated each year, the RFP process was costing the company over $650,000 annually, the equivalent of 1% of its revenues.
When they lost a bid they worked harder and faster to generate the next one. “Be faster than the competition at responding” had become their driving mantra.
Upon redesigning their customer feedback process, they applied a method called Critical to Quality (CTQ). They listed 12 factors they believed to be key to customer satisfaction. In their feedback survey, they also allowed customers to add items not on the list. Next, customers were asked to assign a level of importance to each factor and then rate the company's performance on each one.
Guess what? RFP turnaround time ranked dead last. The two most important factors were accuracy of the first quote and effective project management after the sale, and on both of these the company received its lowest ratings.
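The CTQ exercise above amounts to a simple importance-versus-performance gap analysis: factors that customers rate as highly important but where the company scores poorly deserve the most attention. Here is a minimal sketch of that ranking logic; the factor names echo the example, but the scores and the weighting formula are illustrative assumptions, not the company's actual survey data.

```python
# Sketch of a CTQ gap analysis: rank factors by weighted gap,
# where weighted gap = importance x (max_score - performance).
# Scores below are made up for illustration only.

MAX_SCORE = 10  # assumed 1-10 rating scale


def ctq_gaps(factors):
    """Return factors sorted from largest to smallest weighted gap."""
    return sorted(
        factors,
        key=lambda f: f["importance"] * (MAX_SCORE - f["performance"]),
        reverse=True,
    )


survey = [
    {"factor": "Accuracy of first quote",      "importance": 10, "performance": 4},
    {"factor": "Effective project management", "importance": 9,  "performance": 3},
    {"factor": "RFP turnaround time",          "importance": 2,  "performance": 9},
]

for f in ctq_gaps(survey):
    gap = f["importance"] * (MAX_SCORE - f["performance"])
    print(f'{f["factor"]}: weighted gap {gap}')
```

With these illustrative numbers, RFP turnaround time falls to the bottom of the list despite its high performance score, mirroring the surprise the company encountered.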
As a result, the company redesigned its RFP process to "get it right the first time," focused on project management training and candidate selection, and hired a Director of Project Management with deep industry experience.
The second company made core high-technology products that were similar in nature but used in three different markets: automotive, aerospace, and medical devices. Its traditional focus for all three was design for manufacturability, reliability, and speed to market.
When they analyzed their customer feedback (also using CTQ), they found key differences across the three markets: automotive wanted high reliability within the limits of unit cost; aerospace wanted high reliability combined with low cost of ownership and maintenance over the product's lifespan; medical devices wanted intensive lab and field testing, even at the price of higher engineering and testing costs or delays in FDA approval, in order to minimize product failures that could lead to litigation and penalties if a product liability issue arose.
The third company had diligently conducted annual customer satisfaction surveys, used CTQ, carefully analyzed the data, and translated the findings into actions. For several consecutive years, its overall satisfaction ratings were 8.5 on a scale of 10. The company felt good about the results and allocated resources to continuously improving the lower-rated items while sustaining the higher ratings.
But then a troubling trend emerged: customers who had previously sole-sourced began going with multiple suppliers; others stopped committing to the same volumes in spite of their growth; still others moved away from multi-year agreements; and a few implemented performance measures tied to contract terms.
When they engaged in a different type of dialogue with their customers to determine what was happening, they discovered that both new competitors and increasingly aggressive existing competitors had been receiving ratings of 9.0 or higher, particularly on the more important CTQs. Their annual survey had been inwardly focused, with no comparative data about the competition.