We have not had any customer complaints!
By David Powley, DNV Certification Ltd.
An important mandatory element of a management
system is the regime for processing customer complaints. “We have
not had any customer complaints” is a claim that is occasionally
heard during a third-party audit.
Traditionally (i.e. pre-ISO 9001:2000) this claim elicited one of two reactions from the auditor. One was unconditional acceptance of this apparently grand state of customer contentment, followed by congratulation of the company and a swift move on to audit another part of the management system. Alternatively, the auditor might be sceptical and suspect that the company would not recognise a customer complaint if it was staring them in the face. This latter reaction might be expressed, in the auditor’s mind, as the question “Is the management system capable of detecting customer dissatisfaction?” However, it seemed that the third-party auditor had to accept that the absence of dissent meant assent.
A complaint is an ‘utterance of grievance’ and to complain is to ‘express dissatisfaction’. Should the organisation (and those who audit it) be concerned only with complaints, or should more be done, such as paying attention to customer opinion? There is a difference. By the time a customer complains he or she has reached an extreme state of mind and is inclined toward action: taking the trouble to write (usually) and express dissatisfaction with the poor quality of a product or service. A condition has thus been reached where the relationship between the organisation and its customer has already deteriorated. That is if the organisation is fortunate – many customers will not even bother to complain and will quietly ‘vote with their feet’. It is therefore in the interests of any organisation to continually manage its relationships with customers so that complaints can be forestalled, or at least anticipated, and the circumstances leading to them either do not arise or are at least controlled. Forewarned is forearmed – this is basic quality risk management. So the opinion of customers is important, even if they have not complained.
The 1994 version of ISO 9001/2 gave the impression that the only necessary dialogue with the customer relating to performance was that arising from a complaint, and auditors were not able to gauge the true opinion the customer held of the organisation’s performance. This situation has changed considerably for organisations and auditors with the publication of ISO 9001:2000. We are now in the era of taking cognizance of customer opinion and using that opinion to direct quality management systems. With this revision of the standard an organisation must actually solicit the opinion of its customer base and react to that opinion. Irrespective of what standards say, is it not prudent for an organisation to know what its strengths and weaknesses are in the eyes of the people who keep it in business? Considering customer opinion obviously gives the opportunity to improve on weak aspects and exploit the strong ones with a view to gaining competitive advantage.
How is it that ISO 9001:2000 demands such deference to customer opinion? There are some important linkages in ISO 9001:2000 which support this.
Clause 8.5.1 (Continual improvement) requires the organisation to ‘… continually improve the effectiveness of the quality management system through the use of … analysis of data …’.
Clause 8.4 (Analysis of data) says that ‘The organisation shall
determine, collect and analyse appropriate data …to evaluate where
continual improvement of the effectiveness of the quality management system
can be made’. Later in the same clause ‘customer satisfaction
(see 8.2.1)’ is given as an item for inclusion in the analysis of
data.
Clause 8.2.1 (Customer satisfaction) states ‘As one of the measurements
of the performance of the quality management system, the organisation
shall monitor information relating to customer perception as to whether
the organisation has met customer requirements. The methods for obtaining
and using this information shall be determined’.
A simple depiction of this is:
[Figure: a cycle of monitoring customer satisfaction, analysing the data, making continual improvements and then re-measuring customer satisfaction.]
The methods used to estimate customer satisfaction are very much the preference of the organisation. The certification bodies would not expect organisations to develop skills in survey science overnight. On the other hand, it is expected that more be done than merely sending out a few questionnaires containing meaningless questions, only to have the few that are returned sit in a file without being analysed and acted upon.
Surveying is an acquired skill and if a thorough job were required the
services of a professional and capable organisation should be sought.
For those not requiring such expertise, a little common sense and thoughtful effort in estimating the representative opinion of the customer base can prove successful.
In an article such as this, the following can only serve as simple, helpful advice and a stimulus to discussion on gaining and using customer feedback. Fuller and more learned discussions on the subject are available.
The gathering of customer satisfaction feedback can be considered in the following framework:
• What should be asked?
• How should the questions be asked?
• Who should ask and be asked?
• What should be done with the answers?
1. What should be asked?
The first action is to develop a question set or a basis for gauging opinion.
This will vary depending on the product, service, industry sector and
type of organisation. Above all, the basis should be the quality critical aspects. These are the performance issues relating to the product / service
that affect the relationship (good or bad) with the customer base. Some
examples are:
• Punctuality and comfort in a passenger transport service, or delivery times for a courier or haulage service.
• Response times to enquiries or queries for a computer hardware company.
• Value for money from local authority services.
• Courtesy, appearance and professionalism of visiting service staff of a facilities management company.
• Manner and attitude of call handlers in a call centre.
• Waiting times and comfort at a hospital outpatients unit.
• Consistency of quality and safety of packaging of a chemicals distributor.
• Clarity and timeliness of invoicing from any company.
The list is almost endless. The important thing is to
decide on the quality critical aspects and then develop the question set
around them. In some cases it may be appropriate for the questions to be as open as possible, in order to allow freedom of comment rather than restriction to a particular agenda. In other instances specific feedback may be required, which tends to call for more specific questions that demand a narrow range of possible answers.
A rating system may be used to structure the answers, but here again the scale should be almost non-restrictive so that all shades of opinion are catered for. This can include scales such as excellent to very poor, 20% of the time to 80% of the time, or 1 to 10, and so on. Some questions may admit only a numerical answer, such as ‘What is the average time from call-up to delivery of our product to your site?’; this can be followed by a question seeking an opinion on, or the level of acceptance of, that time. In general, questions should be framed so that their answers allow some sort of classification, which makes it easier to see strengths and weaknesses.
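By way of illustration only, the minimal sketch below (in Python) shows one way a question set built around quality critical aspects might be held, together with a classification of the rating answers. The aspect names, the 1-to-10 scale and the thresholds are assumptions made for the example, not anything required by the standard.

# Illustrative sketch only: a question set built around assumed quality
# critical aspects, with a helper that classifies a 1-to-10 rating so that
# strengths and weaknesses are easy to see. Names and thresholds are invented.

QUESTION_SET = {
    "delivery_punctuality": "How would you rate the punctuality of our deliveries? (1-10)",
    "query_response": "How would you rate our response times to enquiries? (1-10)",
    "invoice_clarity": "How clear and timely is our invoicing? (1-10)",
}

def classify(score: int) -> str:
    """Classify a 1-10 rating into strength / acceptable / weakness."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 8:
        return "strength"
    if score >= 5:
        return "acceptable"
    return "weakness"

# Example: a single returned answer for one aspect.
print(classify(7))  # -> acceptable

Whatever form the question set takes, the point is that each answer can be tied back to a named quality critical aspect and placed on a scale, rather than left as free-floating comment.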
2. How should the questions be asked?
Putting the questions on a questionnaire may seem a fair, independent and easy option. The trouble is that returns from questionnaires are usually low, even with constant follow-up. The people who return questionnaires are those who want to, yet valuable opinion may be held by those who do not return them. If not handled properly, therefore, the use of questionnaires amounts to a self-selecting survey whose results are likely to be misleading.
In order to get meaningful results a representative sample must be used
and this sample must respond with independence to the same question set.
This is best done by way of a ‘captive audience’, such as by visits or by telephone. If the company uses a field or sales force this
may be the best vehicle to communicate the questions and answers. However
caution may be required as this group of personnel may not be totally
independent with regard to some or all of the quality critical aspects.
The questions should be put over in as short a time as possible. More than 10 minutes is too long, so whoever (see below) is putting the questions should be efficient in delivery and also deft and accurate in noting the answers.
3. Who should ask and be asked?
For gathering feedback and answers it is a good idea to use personnel within the organisation who are independent of the activities creating the quality critical aspects. For telephone canvassing, a clear, articulate and even voice that does not sound as if it is reading from a question sheet will make for a more relaxed and possibly more fruitful dialogue.
Who or which companies to sample is important. The sampling can be on
the basis of revenue – there are many situations where the majority
of revenue comes from a few customers. There are many other bases for
selection such as geographical, by industry sector and others. Survey scientists conduct their work on the basis of classifications such as age, sex and socio-economic group, and this has particular value for the retail sector.
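As one simple illustration of sampling on the basis of revenue, the sketch below draws a small sample of customers with the chance of selection weighted by the revenue each represents. The customer names and figures are invented, and this is only one of many possible selection methods.

# Illustrative sketch only: drawing a revenue-weighted sample of customers
# to canvass. Names and revenue figures are invented for the example.
import random

customers = {
    "Customer A": 500_000,
    "Customer B": 120_000,
    "Customer C": 80_000,
    "Customer D": 20_000,
}

def revenue_weighted_sample(revenue_by_customer, k, seed=None):
    """Pick k distinct customers, weighting the chance of selection by revenue."""
    rng = random.Random(seed)
    remaining = dict(revenue_by_customer)
    chosen = []
    for _ in range(min(k, len(remaining))):
        names = list(remaining)
        weights = [remaining[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]  # avoid asking the same customer twice
    return chosen

print(revenue_weighted_sample(customers, k=3, seed=1))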
It is important to put the questions to the right person within that company.
The customer’s main procurement person may be the first port of
call but it may be worthwhile to ensure that the end user’s voice
is heard.
4. What should be done with the answers?
The feedback took some effort to obtain so good use should be made of
it. As mentioned earlier, it is not a good idea to have completed questionnaires collecting dust. The data should be used to analyse the strengths and weaknesses of the organisation, to forestall any developing adverse situations and, of course, to spot opportunities for competitive advantage. Thus there will always be a need to invoke improvements. A third-party auditor has no particular agenda as to what should be done with the data, except that it is used to continually improve the performance of the management system. The true indicator of the performance of the management system is the meeting of customer requirements. So, as the figure above depicts, the re-measurement of customer satisfaction is necessary to check the success of any applied improvements – thus the cycle repeats.
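Purely as an illustration of turning the answers into a view of strengths and weaknesses, the sketch below averages the ratings returned for each quality critical aspect and compares them with the previous survey, so the effect of any improvement can be checked when satisfaction is re-measured. The aspect names, figures and thresholds are invented for the example.

# Illustrative sketch only: average the returned 1-10 ratings per quality
# critical aspect, flag strengths and weaknesses, and compare against the
# previous survey to check the effect of improvements on re-measurement.
from statistics import mean

previous_averages = {"delivery_punctuality": 5.1, "invoice_clarity": 7.8}
current_responses = {
    "delivery_punctuality": [6, 7, 5, 8],
    "invoice_clarity": [8, 9, 7, 8],
}

for aspect, scores in current_responses.items():
    avg = mean(scores)
    change = avg - previous_averages.get(aspect, avg)
    status = "weakness" if avg < 5 else "strength" if avg >= 8 else "acceptable"
    print(f"{aspect}: average {avg:.1f} ({status}), change since last survey {change:+.1f}")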
The above framework is not particularly sophisticated or difficult to understand or work to, and, as mentioned earlier, there are appropriate reference works available on this subject. The third-party auditor has no particular preconception other than to expect that some logical and rational methodology be employed and that the results of the exercise are put to use.
Finally it is worth putting into context the importance of including the
gauging of customer opinion within a management system standard.
Certification bodies obviously value their clients, but there are other interested parties: the customers of those clients, who rely on the fact that they are dealing with organisations that hold certification to a management system standard and that the management system concerned has customers in mind. In this sense the certification bodies assume a great responsibility. Customers would like to feel that their opinions and experiences are being noted, and when that opinion is processed and responded to with continual improvement within the management system this can only be good. Continual improvement is, in fact, what quality management by management system was meant to achieve. It will now be even less likely for poorly performing organisations to achieve ISO 9000 certification, if that were ever possible. This should lend ISO 9000 certification even more credibility with industry and the general public.
David Powley is a Principal Lead Integrated Management
Systems Auditor for DNV Certification Ltd. He is a Chartered Chemist
and Member of the Royal Society of Chemistry, Member of the Institution
of Occupational Safety and Health, a Principal Environmental Auditor
with the Institute of Environmental Management & Assessment, a
registered Lead Auditor with the International Register for Certificated
Auditors scheme for quality management systems and Lead Verifier for
EMAS. David has produced many published articles on management systems
for quality, environment and health & safety and their integration,
being regarded as a pioneer on the subject of integration. He is currently
finalising an experienced-based book on the subject of integrated
management systems. David can be contacted on dave.powley@dnv.com