Today’s marketers have access to a growing set of tools to measure satisfaction, driven by online innovation, and are applying them widely as the need to demonstrate a return intensifies. Consequently, customers are increasingly exposed to invitations to provide feedback, and are naturally becoming savvy to the methods employed and selective in their decisions to respond.
This has implications both for achieving robust results in terms of the volume and profile of response, and for ensuring that attempts to measure customer satisfaction aren’t themselves having a negative impact. In addressing both, some rules of thumb emerge:
Make it brief but flexible: Many customers will be discouraged by a lengthy process, so surveys should be short and customers made aware of this from the start. Some customers, however, will want to provide more feedback, so include optional additional questions and room for free-text comments – otherwise it’s a wasted opportunity to gain a more in-depth understanding, and those customers will feel let down if there’s insufficient opportunity to express themselves.
Explain the process and benefits: “Please share your experience of our service…” provides little motivation to act. But demonstrating a willingness to make changes, and explaining the potential benefits that could follow, will improve engagement.
Use a multi-channel platform: Making initial contact via a range of channels, and providing a choice of response mechanisms, will enhance both the customer experience and the reliability of results. Profiling responses by channel enables programme adjustments to improve the representativeness of the sample, and/or the application of weightings to account for response bias.
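As a rough illustration of the weighting idea, the sketch below re-balances a satisfaction score so each channel counts in proportion to its share of the customer base rather than its share of responses. All figures and channel names are invented for the example.

```python
# Hypothetical figures: channel mix of the customer base vs. survey responses received.
population_share = {"email": 0.50, "sms": 0.30, "post": 0.20}
responses = {"email": 400, "sms": 60, "post": 40}   # responses per channel
satisfied = {"email": 320, "sms": 42, "post": 24}   # of which rated themselves satisfied

total_responses = sum(responses.values())

# Unweighted satisfaction simply pools all responses,
# so the over-represented email channel dominates.
unweighted = sum(satisfied.values()) / total_responses

# Weighted satisfaction scales each channel's rate by its
# share of the customer base, correcting the response bias.
weighted = sum((satisfied[ch] / responses[ch]) * population_share[ch] for ch in responses)

print(f"unweighted: {unweighted:.3f}")  # 0.772
print(f"weighted:   {weighted:.3f}")    # 0.730
```

Here email customers answered far more readily, so the raw figure overstates overall satisfaction; the weighted figure is the fairer read of the whole base.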
Use incentives with caution: By all means follow up a programme with a gift and a ‘thank you’, but avoid the pitfall of giving any impression the customer will be disproportionately rewarded according to the nature of their response.
Provide feedback: When customers respond on the understanding that their views will change things, they should be informed when subsequent changes are made. If they make a genuine connection between their expressed views and their subsequent brand experience, greater possibilities open up for brand advocacy and engagement in ongoing survey programmes.
Integrate it: When a brand asks for feedback, in the customer’s mind it’s just another communication from that organisation. The entirety of that customer’s brand interactions must therefore be taken into account – a ‘single customer view’ means just that, no exceptions.
When in doubt, test: CRM, when applied correctly, provides for measurability. Any variable – alternative question texts or communication channels, for instance – can, and should, be tested against a control, providing genuine evidence to support or reject any preconceived views.
In a well-designed customer satisfaction programme there is no conflict between meeting the requirements of the brand and gaining willing customer participation. The two are interdependent, and should be brought together as one piece in the jigsaw of an integrated CRM strategy.
Simon Steel is insight and digital director at Eclipse Marketing