SaferPak: Food Packaging Safety, Food Safety, Business Improvement and Quality Management

Planning the project

How many customers should I interview?
How many responses do I need for a reliable result?
The sample should be large enough to be reliable and representative of your customer base.  As a general rule of thumb, 200 interviews would be enough for a homogeneous customer base, but more might be required to adequately represent a diverse one.  In particular, if you wish to analyse the results for subgroups, make sure that you have enough interviews to be reliable at subgroup level (at least 50 per subgroup is recommended).

The Leadership Factor has developed an empirically-based formula that will predict the confidence interval within which a given sample size will allow you to estimate the population Satisfaction Index™.  This tells you, for example, that a sample size of 500 will give you a confidence interval of around +/-1.1% (dependent on the exact variance of your survey scores).  In other words if your Satisfaction Index was 80% then you could be 95% sure that the true population score was in the range 78.9% - 81.1%.
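The Leadership Factor’s exact formula is proprietary, but a standard normal-approximation confidence interval gives figures in the same ballpark.  A minimal sketch, assuming a 95% confidence level; the standard deviation of 12.5 percentage points is a hypothetical value chosen to reproduce the +/-1.1% margin quoted above:

```python
import math

def satisfaction_ci(index, n, std_dev, z=1.96):
    """95% confidence interval (normal approximation) for a Satisfaction
    Index, given sample size n and the standard deviation of respondents'
    individual index scores, in percentage points."""
    margin = z * std_dev / math.sqrt(n)
    return index - margin, index + margin

# Hypothetical figures: index of 80%, sample of 500, std. dev. 12.5 points
low, high = satisfaction_ci(80.0, 500, 12.5)   # roughly 78.9% - 81.1%
```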

How do I decide which customers to interview?
To be reliable a sample should be randomly selected and representative of the population.  Random or probability sampling means that every population unit (customer) has an equal chance of being interviewed.  In practice the most common method is systematic random sampling.  For example, if you had 2000 customers and wanted a sample of 200 (i.e. 1 in 10 customers), you would select every 10th customer from a random starting point (e.g. 2nd, 12th, 22nd…) for interview.  To ensure that the sample includes adequate representation from various subgroups, or gives greater weight to high-value customers, you can use stratified sampling – essentially, this involves sampling within each subgroup as if it were a population in itself.
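The systematic sampling procedure above can be sketched as follows (the function and customer names are illustrative, not part of the original):

```python
import random

def systematic_sample(customers, sample_size):
    """Select every k-th customer from a random starting point, where
    k is the sampling interval (population size / sample size)."""
    k = len(customers) // sample_size
    start = random.randrange(k)          # random start within the first interval
    return customers[start::k][:sample_size]

# 2000 customers, sample of 200 -> every 10th customer from a random start
population = [f"customer_{i}" for i in range(1, 2001)]
sample = systematic_sample(population, 200)
```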

Should I use an agency or conduct the survey myself?
What benefits does using an outside agency bring to my survey?
Agencies offer a number of advantages, particularly for organisations without their own research departments.  An agency will have more experience, greater statistical knowledge, specialist software and resources in the form of field or telephone interviewers and data entry mechanisms.  They may also be able to benchmark your data against other organisations.  Moreover, an agency will provide your survey with the important assurance of independence, and should guarantee the confidentiality of your respondents by adhering to the MRS Code of Conduct.  These factors can make all the difference to the credibility of the survey internally and externally.

The trade-off is one of cost in terms of outlay and cost in terms of internal resources.  The first thing to consider is how much of the work you could realistically undertake yourself – if you’re sending out 20,000 postal questionnaires do you have the facilities to input the data quickly and accurately?  You need to make sure that you have an accurate idea of what undertaking the survey yourself will require (in terms of knowledge and resources) before assessing the cost of an outside agency.

How much will it cost me to use an agency?
There are too many variable costs to give an accurate guide as to how much using an agency for a survey will cost.  For a bespoke survey tailored to your organisation you could expect to pay a minimum of £5,000, rising according to length of questionnaire, number of interviews and interview method.

How can I be sure what my customers’ requirements are?
The best way to understand your customers’ requirements is to ask them by conducting exploratory research.  In a consumer environment this will consist of focus groups, while in a business-to-business market individual depth interviews are the norm.

How do I introduce the survey?
If you are planning to conduct telephone or face-to-face interviews, prepare respondents to expect your call by sending them an introductory letter detailing the purpose of the research and reassuring them that their confidentiality will be protected.

For postal questionnaires it is usually sufficient to include a letter with the questionnaire, though it is always beneficial to promote the survey more intensively if this is economically and politically feasible.

How do I persuade people to take part?
How can I maximise response rates?
Fundamental to achieving a good response rate is a good database with up-to-date names and job titles, particularly in business-to-business markets.  For postal surveys a reply-paid envelope is essential, and response will be boosted by a series of follow-up reminders and duplicate questionnaires.  The key part of a postal questionnaire mailing is the introductory letter that accompanies the questionnaire, which should focus on the benefits to the respondent of completing the survey (in terms of improved service etc.).  Other tactics that have been shown to have a significant impact on response rates are pre-notification (ideally by telephone, although this is very expensive), the design of the questionnaire (it should be attractive and, above all, look easy to complete) and the offer of a financial incentive for every respondent (not a prize draw).  Other tactics tend to be expensive and much less cost-effective.  The most cost-effective process for maximising response rates would be:

• Notify customers to expect the questionnaire
• Mail attractive questionnaire with reply-paid envelope and intro letter selling benefits
• After two weeks, follow-up letter with duplicate questionnaire to non-respondents
• After a further week telephone reminder to non-respondents
• After a further two weeks a second duplicate questionnaire and letter

However the best and most reliable way to maximise response rates is to use telephone interviews, where again a reliable database is necessary and pre-notification is advantageous.

Should I use an incentive to encourage people to respond?
Incentives should be chosen very carefully.  It has been shown that free gifts, coupons and donations to charity have no favourable impact on response rates.  Prize draws are also ineffective unless the prize is very large.  The inclusion of money with the questionnaire (even modest sums, such as $1 in the USA) has been shown to have a beneficial impact, but other tactics for boosting response rates should be used first, as they tend to be more cost-effective.  See ‘How do I persuade people to take part’.

What is a good response rate?
How many responses should I expect?
Response rates vary greatly depending on your type of customers and their level of involvement with you.  For CSM, postal questionnaires typically attain response rates of around 20%-30% with a single mailing in a business-to-business market.  You should aim for at least 50% to minimise the danger of non-response bias, while over 65% would be preferable.  With telephone interviews using multiple call-backs, response rates of 80%+ can be obtained.

Do I need to conduct exploratory research?
To make sure that you have an accurate picture of customer requirements you should conduct exploratory research every three years.  There is a key difference between surveys that use exploratory research and those that are compiled as a result of opinions within the company.  This difference has been described as the ‘lens of the customer’ as opposed to the ‘lens of the organisation’.  The ‘lens of the organisation’ tends to be product rather than benefit focused, and with the best of intentions can sidestep or under-represent issues that are vital to customers.

How do I take regional/national/social character differences into account?
It is likely that social and geographical differences will have a part to play in your research if you have a wide customer base.  This can be addressed by segmenting your respondents along these lines.  Questions that can be used as a valid benchmark are a good way to assess differences in how easy customers are to satisfy – for example, within the UK one could ask respondents how satisfied they are with the BBC, as something that is universally familiar.  In modelling, segmentation may be key to discovering the different drivers of satisfaction for different segments, a field where Latent Class Regression comes into its own.

Does changing CSM methodology make historical data obsolete?
Changing survey methodology can be a tricky process, and doing so can threaten the comparability of results if they are being tracked over time.  Where, for instance, bonuses depend on satisfaction scores, this is a critical issue to handle well and transparently.  A common example is a company wanting to change from a five-point verbal scale to a ten-point numerical scale.  Where money is no object, the best approach is to run both scales simultaneously for a time to allow the conversion of scores according to a linear regression equation.  At minimal cost, one question (e.g. overall satisfaction) can be repeated at opposing ends of the interview using the two different scales.  Although many people may advance persuasive arguments against change, one has to consider to what extent the old methodology is flawed – tradition is not a good reason to stick to something that is wrong.

Exploratory research

Should I use focus groups or depth interviews?
As a rule focus groups are appropriate in a consumer market, while individual depth interviews should be used in a business-to-business environment.  However, with particularly sensitive areas (for example if you are a manufacturer of personal hygiene products) focus groups may not be appropriate.

How do you facilitate a focus group?
It is crucial to encourage all participants to take part, and prevent one or two members of the group from dominating.  For CSM, the first half of the group should be an open discussion using a variety of projective techniques and stimuli (such as theme boards).  Past instances of particularly good or poor performance are often a good way to get respondents talking about what’s important to them.  The second half of the group should be more structured, asking the respondents to help generate and then individually score a list of requirements for relative importance.  The 15 or 20 most important requirements as judged by respondents from all groups should be carried forward to the main survey.

How do you conduct a depth interview?
For CSM, an interview would often start by asking the customer to imagine that they had no supplier of [Widgets] and describe the sequence of events from the first suggestion that a supplier was needed to the selection and then evaluation of a supplier.  This should cover who is involved at every stage, what role they play and what things they are looking for from the supplier.  From this discussion a list of (neutrally worded) requirements should be generated and then scored in order to assess their relative importance.  The 15 or 20 most important requirements across all respondents would typically be included on the questionnaire for the main survey, as this will cover customer priorities without being too lengthy.

Is exploratory research unreliable due to the small sample sizes?
Exploratory research is qualitative in nature, so reliability in a statistical sense is not necessary.  The aim is to understand customers’ requirements fully so that the main survey measures satisfaction with the right things.  The interpretation of qualitative data is by necessity subjective to a degree.  If it is felt necessary, a second, quantitative stage of exploratory research can be conducted with a statistically reliable sample prior to the main survey in order to score a full list (perhaps 50 or 60 requirements) for importance.

Designing the questionnaire

How should I decide what questions to ask on my questionnaire?
Satisfaction and importance requirements should reflect customer priorities as determined through exploratory research.  Other questions should only be introduced if they are related to the main thrust of the survey and add value to the research, for example classification questions such as social grade or age.  Unrelated questions added for their own sake detract from the quality of the survey and absorb valuable time.

How many questions should I have on my questionnaire?
A general rule is that there should not be more than fifty questions on a questionnaire.  Typically this would comprise around 20 requirements scored for both importance and satisfaction plus a few classification questions.  In terms of interview time – around 10 minutes is the limit in consumer work, while in business-to-business markets 15 minutes is acceptable.  In general the higher the level of involvement you have with your customers the more of their time they will be willing to devote to your survey.

Research Methodology

Should I use telephone interviews or self-completion questionnaires?
On balance, self-completion questionnaires will be very cost-effective only if a good response rate can be achieved.  This will depend on the level of involvement that your customers have with your organisation – will they feel strongly enough about their relationship with you to bother filling in a questionnaire?  Only organisations with a large customer base will be able to guarantee a sufficient number of responses from a postal survey.  For most organisations that are not in a particularly high involvement product area, and don’t have a large customer base, telephone interviews are the most cost-effective option.  They also provide a wealth of qualitative information that postal surveys cannot.

How long should an interview take?
Around 10 minutes is the limit in consumer work, while in business-to-business markets over 15 minutes might be acceptable.  In general the higher the level of involvement you have with your customers the more of their time they will be willing to devote to your survey.

What kind of scale should I use?
For CSM work the best scale is a ten-point numerical scale with endpoint anchors, as this has interval properties, is easy to understand and administer, and provides a good amount of sensitivity.

How do I make sure respondents match my quota?
Controlling a quota can be extremely difficult if it is a complex one.  For most CSM work it is sufficient to make sure that all the major sub-groups are adequately represented.  Filtering questions should be used at the start of the interview to make sure that respondents fit the quota remaining to be filled.  Quotas are much easier to control from a call centre where details of interviews completed can be updated on a rolling basis.  For self-completion questionnaires it is virtually impossible to meet quotas exactly, but if it is felt necessary to do so, responses can be weighted prior to analysis.
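Weighting responses to match a quota can be sketched as below: divide each subgroup’s target share of the customer base by the share it actually achieved in the responses.  The subgroup names and figures here are hypothetical:

```python
def quota_weights(achieved_counts, target_shares):
    """Per-respondent weights so that each subgroup's weighted share of
    the responses matches its target share of the customer base."""
    total = sum(achieved_counts.values())
    return {group: (target_shares[group] * total) / n
            for group, n in achieved_counts.items()}

# Hypothetical: large accounts over-responded, small accounts under-responded
achieved = {"large": 120, "small": 80}       # 200 responses received
targets  = {"large": 0.40, "small": 0.60}    # true customer-base shares
weights = quota_weights(achieved, targets)   # large ~0.67, small 1.5
```

Multiplying each respondent’s scores by their subgroup weight before analysis restores the target proportions.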

What is the value of a ‘mirror’ survey?
A mirror survey allows you to get an accurate idea of how employees view customers’ requirements and perceptions.  Comparing these with customer scores allows you to identify ‘understanding gaps’ where employees have an inaccurate view of customer priorities, or where they are under or over estimating the organisation’s performance.

Analysing the data

How do I calculate a Satisfaction Index?
A Satisfaction Index is the sum of the weighted satisfaction scores for each requirement expressed as a percentage.  To weight each satisfaction score you divide the importance score for that requirement by the sum of all the importance scores and multiply the satisfaction score by this figure.

Which software should I use?
For data entry it is unlikely to be cost-effective to invest in specialist software (or hardware such as scanners) unless you have very large numbers of questionnaires to process on a regular basis.  The best solution may be to get an external supplier to enter the data for you, as this is likely to be faster and more cost-effective for an occasional survey.  However, with this approach accuracy must always be a consideration, as experience has shown that data-entry houses tend to be less reliable than doing it yourself.  Ideally, manually entered data should be keyed in twice and the two files compared for discrepancies.

Again, unless you undertake a lot of survey work it is unlikely to be cost-effective to invest in a specialist analysis package such as SPSS, SAS or Statistica (although it is worth checking whether your organisation already has one of these – for example, SAS is used in some accounting applications).  Spreadsheet software such as Microsoft Excel is capable of performing the analyses that you will need, and add-ins such as the Analysis ToolPak allow even more powerful techniques to be used, although they tend to be less intuitive than a specialist package.  Spreadsheet software is also excellent for producing charts ready to transfer to a word processing or presentation package for reports and presentations.

What value is benchmarking the results?
There are two types of benchmarking: against other organisations generally and against your immediate competitors.  The former represents the way that customers form judgements about you – for example they will not rate your speed of service only against other high-street widget shops, but against all suppliers where speed of service is a pertinent factor.  Benchmarking against your immediate competition gives you important information about your relative standing in your market.

How do I know if my result is ‘good’?
Satisfaction tends to be a relative judgement, and so it is only by benchmarking that you will really know how you perform relative to other organisations.  However, as a general guide, on a ten point scale a mean score of eight or above could be regarded as good, while mean scores of over nine are excellent.

If the questions change is it still valid to make comparisons?
The benefit of calculating a Satisfaction Index is that (assuming the questionnaire was formulated based on customer requirements) it is an accurate measure of overall customer satisfaction, and so can be compared from year to year.  However the important question to ask is ‘why are the questions being changed?’ – if it is because a new piece of exploratory research has suggested that customer requirements have evolved then this is a valid course of action, otherwise it may be an entirely arbitrary decision by the organisation and would mean that the Satisfaction Index was no longer representative of customer satisfaction.

What’s better, mean scores or top-box scores?
Mean scores reported with standard deviations give much more information than top-box scores.  Top-box scores are an adequate measure of the number of very satisfied customers, but they take no account of the dissatisfied customers or of the consistency of service delivery – something that lies at the heart of Six Sigma.  For example, consider two ranges of scores from ten respondents:


[Table: ten scores each for Factor A and Factor B, with their mean, standard deviation and top-two-box figures – the data did not survive in this copy]

The mean and standard deviation give you complete information, and a much more accurate picture of your true performance, where the top two box score only reveals part of the story.
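The original table’s figures have not survived, but the point can be illustrated with hypothetical data: two factors with identical top-two-box scores can have very different means and standard deviations.

```python
from statistics import mean, stdev

def top_two_box(scores):
    """Share of respondents giving 9 or 10 on a ten-point scale."""
    return sum(1 for s in scores if s >= 9) / len(scores)

# Hypothetical scores from ten respondents for two factors
factor_a = [9, 9, 8, 8, 8, 8, 8, 8, 8, 8]   # consistent service delivery
factor_b = [9, 9, 2, 3, 8, 8, 8, 8, 8, 8]   # two very dissatisfied customers

# Identical top-two-box (20% each), but factor B's lower mean and much
# larger standard deviation reveal the dissatisfied customers it hides.
assert top_two_box(factor_a) == top_two_box(factor_b) == 0.2
```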

Post survey

How often should I conduct surveys?
The main survey should be updated annually, or more often in fast-changing markets.  Between surveys it is a good idea to monitor progress on key areas that have been targeted for improvement or conduct surveys gathering more qualitative data on why certain areas are problematic.  Exploratory research need only be conducted every three years to monitor the evolution of customer requirements.

What should I tell my customers about the results?
When feeding back the results it is crucial to act quickly, ideally with full communication of the findings of the survey together with intended actions.  If possible the feedback should be able to detail some process improvements that have already been put into place as result of the survey.

What’s the best way to feed back information to customers?
The media to be used will depend on the type and number of customers you have.  If you have a small number of very high value customers a personal presentation will be appropriate, other options include the use of feedback reports, the use of existing channels such as newsletters and advertising.

Which customers should I tell the results to?
It is recommended that the results should be communicated to all customers if feasible, although this may not always be possible if you have a very large customer base.  One of the key parts of improving customer satisfaction is to let customers know what you’re doing (or better what you’ve done) to address any problems.  The best tactic for feeding back the results (as long as the organisation is committed to actioning the results) is to give a brief summary of all the key findings in a feedback report (or other media such as newsletters/advertising) together with an action plan, and if possible a summary of some actions already implemented.

What should I tell my employees?
Customer satisfaction has been shown to relate strongly to employee satisfaction (a two-way relationship).  One of the main findings that often emerges from a poor employee survey result is that employees feel that customers are unhappy with the service they provide.  Involving employees at all levels with a programme of improvement will often generate good ideas for addressing particular issues (carrying out brainstorming workshops is one method), and will help to ensure their commitment to the programme.  To maximise employee interest in and commitment to the improvement of customer satisfaction it is crucial that the findings are communicated fully so that they understand them, and that there is visible senior management commitment to actioning the results.

How do I feed back a ‘bad’ result?
When reporting a disappointing result it is important not to try to gloss over it or fudge the issue.  It is crucial that, prior to the meeting when the result has to be communicated, you have an absolute understanding of the methodology of the survey and can defend it, since this is always the first thing the audience will attack if they don’t like what they hear.  It will be very helpful to be able to make recommendations about how the areas of concern should be addressed, so that the meeting can focus more on ‘where do we go from here?’ rather than ‘what went wrong?’.  It is also a good idea to find some positives to communicate in order to soften the blow, especially if they may shed some light on a possible solution – for example ‘our speed of service is terrible in the South-East, but the result for Nottingham is good – what are they doing differently?’.

How should I communicate the results to senior management?
As a rule senior management are not interested in the nitty-gritty of a detailed report.  The best tactic is usually to give them a very short, punchy summary of the results in a small report (perhaps only 2-6 pages) and presentation – they can always come back to you for more detail if they want it.

What would be a sensible target to set for improvement?
Organisations need a target to aim for, however it is important not to aim too high in terms of satisfaction improvement, as it tends to be a slow process.  The higher satisfaction levels are in the first place the harder they are to improve.  For example, it is very difficult to make significant improvements on a Satisfaction Index that is already above 90%.  By contrast, a Satisfaction Index below 60% suggests a level of performance that could and should be improved substantially, perhaps by 5% in the first year.  An average Satisfaction Index of 75-80% can realistically be improved by 1% to 1.5% per annum.


Why is CSM better than mystery shopping?
CSM and mystery shopping should not be regarded as alternatives.  A mystery shopping programme is a valid technique for monitoring performance standards against internal criteria, but it does not measure customer satisfaction.  The differences between the two include:

• Mystery shoppers score service on criteria set by the company, not on how satisfied this level of service would make a customer
• Mystery shoppers score according to a preset agenda, not on what customers define as important – they use the ‘lens of the organisation’ not the ‘lens of the customer’
• Mystery shoppers are rational and objective; customers make judgements emotionally and irrationally
• Customer satisfaction is seen as more credible, as employees believe they can spot the mystery shoppers

Mystery shopping identifies specific problems in specific outlets at specific times, but it does not give the statistically reliable overview that CSM does, and so does not allow you to build strategic plans based on sound company-wide knowledge.

Is CSM relevant to my organisation?
All organisations with customers need to know how satisfied their customers are – even where a virtual monopoly exists at the moment, there is always the danger that a competitor will appear.  In any case, satisfied and loyal customers come back from choice more often than dissatisfied customers come back when they have no option.  Unless you are in the rare situation of having only one or two customers (and indeed only a few customer contacts), the only reliable way to understand how satisfied your customers are is by carrying out a full CSM survey.

How does CSM fit into the new ISO9000 standard?
The new ISO 9000:2000 standard recognises, like other approaches such as Six Sigma, that customer satisfaction is a key metric in a continual improvement quality system.  Without reference to customer requirements any performance criteria are arbitrary, and frequently more product than benefit oriented.  The new standard requires organisations to monitor levels of customer satisfaction.

Why should I measure customer satisfaction?
The short answer is that having satisfied customers will make you money.  Customer satisfaction has been shown to link to customer retention and commitment.  The cost of customer acquisition (estimated at 5-20 times more than retaining an existing customer) speaks for itself.  Beyond this, it has been shown that long-term customers show beneficial behaviours such as complaining less and returning goods less often.  Committed customers recommend you, buy more products, buy more often and shop around less.  To drive up your levels of customer satisfaction you must monitor what is important to your customers and where you are performing well or poorly, in order to target improvements effectively.  The only reliable way to do this is with a customer satisfaction measurement survey.

Why should I measure employee satisfaction?
Employee satisfaction and commitment have been shown to have a strong influence on levels of service and customer satisfaction.  They also tend to result in employees taking fewer sick days and being more prepared to go ‘the extra mile’ to get the job done.  You should also expect to find improved levels of employee retention with higher levels of satisfaction – meaning more consistency and a growth of experience as far as the customer is concerned.  As with customer satisfaction, the only way to properly understand how satisfied your employees are is by asking them what is important and how well they think you are performing.  A crucial hurdle to overcome is the ‘yes, but nothing ever happens’ attitude.  It is therefore essential to demonstrate that the organisation is prepared to take action as a result of its employee survey.


© 2006 SaferPak Ltd.