The Future of Statistics in
Quality Engineering and Management
By Professor Tony Bendell
1. Does statistics
have a future in supporting quality improvement?
Although statisticians themselves are clear on
the contributions that their discipline has made historically in various
aspects of engineering and industry, others are less sure and statisticians
themselves continue to be concerned that engineers do not take statistical
literacy sufficiently seriously.
Is this fair?
The birth of the Royal Statistical Society owed much to the need to understand,
estimate and control variation in industry. The application of experimental
design methodology and analysis of variance, as a single example, has
been crucial to understanding, optimising and controlling complex industrial
processes. Statistical quality control and process control through the
contributions of Shewhart, Deming and others have provided the basis for
stable controlled industrial processes, and the development of statistical
forecasting methods has provided an ability to forecast future sales,
profits and difficulties. And there is so much more.
But somehow it has not all worked.
The current state of industrial application of statistics does not live
up to its glorious creative past. We have statistical methodology but
not the clarity of purpose or the market image that facilitates its use.
When statistically based Quality and Reliability methods are used they
are not always used correctly.
Taking stock of the current and future state of the application of statistics
in quality improvement in industry and commerce, the first question must
be: do the old needs for statistics still exist? Have the questions changed?
Technological change and a change of mind set have indeed clearly made
their mark. Just as for most purposes we no longer need tables of logarithms,
computing power and on-line instrumentation have converted some of the
analysis, forecasting and control issues of the past into history. Of
course, change has also created new problems, as old methodologies prove
inadequate and new needs are revealed. But the problems are more complex,
more complicated and more academic – and therein lies the real problem.
From being a well-founded routine analysis process, employing relatively
large numbers of statistical assistants to carry out the laborious calculations
of numbers according to an assured well-defined unchanging enumerative
framework, statistics has evolved into an elitist, remote, obtuse –
and for many in industry and commerce, unnecessary – set of approaches
and people. There are two dimensions of this change. Firstly, the work
of statistics has changed: computational power together with the switch
in statistical emphasis from enumerative to analytic studies, which draw
inferences about future but currently badly defined processes, means that
statistics has become less automatic and hence less accessible. Secondly,
statisticians have not adequately tackled the consequent public relations
challenge and, by some of their more “academically interesting”
work, have added to the “bad press”. For example, O’Connor
(1991), in arguing that statistical methods for quality and reliability
prediction and measurement are counter-productive and should be discarded
in favour of a return to traditional engineering and quality values, stated
“They lead to over-emphasis on expensive, bureaucratic and esoteric
approaches to quality and reliability. Many successful equipment designers
and manufacturers generate highly complex yet reliable products without
recourse to these methods.”
Similar points were made by some respondents in the recent study by the
Engineering Quality Forum on the Quality Education of Engineers (Cullen
et al, 1997). The real question, therefore, is whether, with all this bad press,
statistics and statisticians have a future in industry and business
at all.
The gap between the potential of statistics to help quality improvement
and what is actually achieved is not a specifically British or specifically
Western phenomenon. It was present also in the early days of the introduction
of statistically based quality improvement methods into Japan and for
much the same reasons (Ishikawa, 1985). There is evidence also of a similar
situation in the USA and Germany (McMunigal et al, 1990; Bendell, 1994).
The subsequent success of statistically based quality improvement methods
in helping to transform the Japanese economy is evidence not just that
this problem is solvable but that in bringing the messages and methods
of statistics to the people in business and industry they do need to be
clarified, simplified, communicated and “packaged”. The emphasis
in Ishikawa’s work of the simplification, “packaging”,
mass education, team basis and consequent mass use of statistical tools
by all or many employees (eg the “seven tools of quality control”)
reappears also in the work of other Japanese quality gurus such as Taguchi
and Shingo (Bendell, 1991). The emphasis is on making statistical techniques
understandable and usable by the customer and not on blaming the customer
for not understanding or not using them.
It can be argued that the need is not just to re-educate others: the engineers,
managers and other professionals who just do not appreciate the importance
of variation and the part that statistics has to play. Even more importantly,
the need is to re-educate the statistical community. For statistics to
have a future, now is the time for statisticians to come out of their
closets, to cross the boundaries into the real work problems, to avoid
unnecessary complexity, to start their role earlier in the project and
to end it later, to become fully integrated, to lose their “statistician”
stigma, to become the facilitators of large-scale simplistic routine application
of statistical methods by all workers (like Ishikawa’s seven tools
of quality control). There are few enough statisticians left in industry
(eg Greenfield, 1996) and the need and the opportunities are strong; maybe
never stronger. All around we see examples of the lack of use, misuse
and abuse of statistics.
The responsibility is not only with those in industry. The role of academic
statisticians, and the academic tradition of statistics, is also in much
need of attention. It is here that the greatest elitism and barriers are
created and carried on through education to future generations. And it
is here that the greatest opportunity exists for a new ethos of statistical
service; of removing the jargon, complexity and elitist barriers and of
creating clarity, simplicity and focus.
To illustrate many of the points in this section, we shall now discuss
three key areas of statistical application in quality.
2. Quality and performance
2.1 The impact of Taguchi
The principal role of statistics and statistical methods in industry
and business is to improve performance and to provide support for quality
improvement, including increasing productivity and product quality in
manufacturing and improving service quality. Throughout the 20th century
developments in the application of appropriate statistical methods have
been made by Shewhart (1931), Deming (1986), Ishikawa (1976), Fisher (1925,
1935) and Box et al (1978) among others.
The quality revolution of the 1980s and 1990s in the West provided an
opportunity for statistically based quality improvement techniques to
gain a larger foothold and Taguchi methods appeared in Europe and the
USA at this time. Consequently, owing to their timely arrival, they achieved
more prominence in the industrial and business community than they may
have otherwise achieved; one outcome of this has been that they have subsequently
attracted more critique from the statistical community than the important,
statistically well founded but less generally publicised, work of Fisher
(1935), Box et al (1978) and Wheeler (1987).
Taguchi’s major achievement, however, has been in making experimental
design and statistical techniques accessible to engineers without the
need to understand the detail of statistical theory (Taguchi, 1987). His
techniques and their application (Bendell et al, 1989) have been critically
reviewed (eg Box et al, 1988, Logothetis and Wynn, 1989, and Nelder and
Lee, 1991, among others) and the Quality Improvement Committee of the
Royal Statistical Society has initiated debate on their viability. However,
the fact remains that Taguchi has introduced many engineers to experimental
design for the first time and made them aware of the importance of systematically
designing quality into a product or process rather than inspecting it
out. Before Taguchi’s impact, many engineers and managers engaged
in fire-fighting exercises to provide a short-term solution when the need
arose, or conducted experiments involving one factor at a time (as some
still do!), which ignored all interactions or sometimes focused their
attention on just a small number of factors which could be handled within
a full factorial design.
Nevertheless, Taguchi experimentation has not always succeeded in the
West. Many of the organisations that have implemented Taguchi methodology
or other experimental design techniques have been disappointed with the
results and have consequently blamed the techniques when in many cases
it is the management of the implementation and the quality culture of
the organisation which are at fault. Bendell et al (1990) cited an automotive
component manufacturer and a chemical processor who both experienced failure
with Taguchi methods due to an over-optimistic expectation of their ability
to handle experimentation alongside other demands on manufacturing plant
and time, whereas Disney (1996) described a paint manufacturer whose experimentation
was ruined by an inability to measure the performance indicator accurately.
Taguchi experiments are more manageable and predictable and require fewer
resources than full factorial designs, so it is much easier to plan their
implementation so that results are available at the most opportune time.
It would, however, be misleading to suggest that all Taguchi’s proposed
techniques are the most efficient or powerful tools, or that they are even appropriate
in many situations. Furthermore, many of the companies which have implemented
Taguchi methods have only embraced the basics. Even highly successful
initial experiments are not always followed up, as illustrated by the
food packaging case-study discussed by Disney (1996). Many experiments
that are conducted use only the most popular orthogonal arrays such as
L4, L8, L9, and L16.
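To make the structure of such an array concrete, the following Python sketch (with purely hypothetical response values) builds the standard L8 two-level orthogonal array from a 2^3 full factorial and its interaction columns, and computes crude main effects. It illustrates the design structure only, not Taguchi’s full methodology of signal-to-noise ratios and inner and outer arrays.

```python
import itertools
import numpy as np

# Build the L8(2^7) orthogonal array from a 2^3 full factorial:
# columns correspond to A, B, A*B, C, A*C, B*C and A*B*C, coded as -1/+1.
base = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs of A, B, C
A, B, C = base[:, 0], base[:, 1], base[:, 2]
L8 = np.column_stack([A, B, A * B, C, A * C, B * C, A * B * C])

# Up to seven two-level factors can be assigned to the columns. With a
# hypothetical response y, a crude main effect for each column is the
# difference between the mean response at its high and low settings.
y = np.array([12.1, 13.4, 11.8, 14.0, 12.6, 13.1, 11.5, 13.9])  # illustrative only
effects = [y[col == 1].mean() - y[col == -1].mean() for col in L8.T]
print(np.round(effects, 3))
```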
2.2 Simple statistical tools
There appears to be a tendency for statisticians to use
overcomplicated tools and techniques. This may result from their awareness
of the sensitivity of the underlying assumptions or it may be seen as
an attempt to justify their role in an organisation! However, rather than
convincing the organisation of the merit of having a statistician, this
often has the opposite effect if no-one else can understand the statistician’s
output.
The seven tools of quality (Ishikawa, 1976) and basic exploratory data
analysis techniques (Tukey, 1977) remain the most powerful and effective
tools, together with graphical data presentation methods, yet they are
often neglected as trivial. The seven new management tools of quality
control (Mizuno, 1988) have a technical organisational basis rather than
a statistical foundation, yet they link in very well with many basic statistical
techniques and should not be ignored by statisticians. These new tools
are as follows: the relations diagram method; the KJ method (affinity
diagram); the systematic diagram method; the matrix diagram method; the
matrix data analysis method; the process decision programme chart method;
the arrow diagram method.
Many statisticians find themselves called on to “troubleshoot”
or “fire-fight” and their first task is to identify the major
problems. The Pareto chart, applied to data obtained through basic data
collection methods, is a simple and effective tool and has the advantage
of being visible and easily explained. The next task is to identify the
cause of these problems and brainstorming will lead to a cause-and-effect
diagram; drawing a flow chart of the process will often lead to the identification
of bottle-necks that may in themselves be causes of trouble. The identification
of common and special causes of variation in the process can also lead
to significant improvements, especially if the effects of common causes
can be reduced (Deming, 1982). Many errors can be prevented by using the
poka yoke approach to error prevention developed by Shingo (1986), who
after 20 years of statistical quality control declared in 1977 that he
was “finally released from the spell of statistical quality control
methods” (Shingo, 1985).
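To show how little machinery the Pareto chart itself requires, here is a minimal Python sketch using entirely hypothetical defect counts: bars sorted by frequency plus a cumulative percentage line.

```python
import matplotlib.pyplot as plt

# Hypothetical defect counts from a basic tally sheet.
defects = {"scratches": 58, "misalignment": 34, "cracks": 12, "discolouration": 7, "other": 4}
items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels, counts = zip(*items)
cum_pct = [100 * sum(counts[:i + 1]) / sum(counts) for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                # bars: defect frequency, largest first
ax1.set_ylabel("Count")
ax2 = ax1.twinx()
ax2.plot(labels, cum_pct, marker="o")  # line: cumulative percentage
ax2.set_ylim(0, 100)
ax2.set_ylabel("Cumulative %")
plt.title("Pareto chart of defect causes")
plt.show()
```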
SPC techniques can be applied to most processes in a manufacturing setting
and many service sector activities. Although statisticians may argue over
the precise meaning of charts (Alwan and Roberts (1995) and contributors
to the discussion), the fact remains that SPC is a powerful tool for identifying
stable and unstable processes. In many organisations, however, it is used
as a “historic tool” to show where problems occurred, yet
if it is applied on the shop-floor it is a simple method for signalling
the need for corrective action which can be taken by the operator, possibly
preventing the production of defective items in a manufacturing line or
the delivery of a substandard service.
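A minimal sketch of the shop-floor arithmetic, assuming an individuals chart with limits set from the average moving range and hypothetical fill-weight data (the data, limits and deliberate spike are all illustrative):

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals (X) chart limits from the average moving range.

    Uses the standard constant 2.66 = 3 / d2, with d2 = 1.128 for a
    moving range of two consecutive observations.
    """
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))            # moving ranges of successive points
    centre = x.mean()
    ucl = centre + 2.66 * mr.mean()
    lcl = centre - 2.66 * mr.mean()
    out_of_control = np.where((x > ucl) | (x < lcl))[0]
    return centre, lcl, ucl, out_of_control

# Hypothetical fill weights; the deliberate spike at 54.0 is flagged.
weights = [50.2, 49.8, 50.1, 50.4, 49.9, 50.0, 54.0, 50.1, 49.7, 50.3]
print(individuals_chart_limits(weights))
```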
To properly promote the use of these tools, statisticians should communicate
their benefits with clear examples of quality and productivity improvements
coupled with bottom line savings. With the current range of software that
is available, there has never been a better time to introduce simple statistical
methods into manufacturing and service organisations, yet there is still
little take-up of these techniques in most companies.
Just as operatives are now expected to be multiskilled, maybe working
in a cellular manufacturing system rather than on a conventional assembly
line, so Quality statisticians should be multitalented. They must be able
to assist in the design, planning, production, quality control, distribution,
sales and customer service functions, being equally adept at designing
experiments and analysing results, forecasting sales figures and interpreting
market research. They must be willing to use the seven new management
tools, Kanban (Wild, 1995), the five Ss (seiri, seiton, seiso, seiketsu
and shitsuke, translated as CAN-DO meaning cleanliness, arrangement, neatness,
discipline and order), and Kaizen (Japan Human Relations Association,
1992) in addition to more overtly statistical tools in order not only
to sustain but also to prosper.
2.3 Process analysis and control
Statisticians need to be careful not to become more involved
with the detail than with the message behind it. With this
line of reasoning what has SPC to do with the normal distribution? As
Henry Neave has frequently and eloquently pointed out, Shewhart never
intended that one should be the foundation of the other. Statistics and
statistical education have become obsessed with the minutiae, missing
the message. Although theoretically based, W Edwards Deming’s and
Walter Shewhart’s contribution in spreading control charts was about
controlling real processes not about statistical algebra (Palm et al,
1997). Strangely, in the way that SPC is so frequently taught at UK universities
and abroad, it is the algebra that is retained. We then seem surprised
that SPC is not applied or is applied clearly incorrectly. Indeed, the
current extent of bad application is extensive (Shaw et al, 1997).
It is now widely recognised that there is a very real need for process
analysis management and control, not just in the context of manufacturing
processes, but of all business processes. This process emphasis is the
centre of the modern concept of quality management and is reflected in
the formulation of the EFQM Excellence Model, business process re-engineering
methodology and benchmarking activity (as described, for example, in Bendell
et al, (1993)). But the idea is not new – it was inherent in the
early approaches of Deming and the other “missionaries of quality”
to Japan in the early 1950s and later in the early concepts of total quality
management by the US Department of Defense. Two quotes from W Edwards
Deming illustrate the simplicity, clarity and importance of this message
(Deming, 1993):
“Draw a flow chart for whatever you are doing.
Until you do, you do not fully understand what you are doing. You just
have a job.”
“The first step in any organisation is to
draw a flow diagram to show how each component depends on others. Then
everyone may understand what his job is. If people do not see the process
they cannot improve it.”
Clearly, with the current emphasis on improving business processes,
the implication is that there is very real potential for the widespread
use of process analysis, process effectiveness measurement and control.
Equally clearly, some statisticians would see this as “not proper
statistics”. That is unfortunate: it is consistent with Shewhart’s
and Deming’s original purpose and takes us beyond the level of statistical
algebra.
3.
Standards and awards
At the beginning of this paper, we asked the question whether
statistics has a future in supporting quality improvement and argued that
statisticians’ attitudes were potentially the biggest danger to
the existence of that future. The debate about BS EN ISO 9000 is a classic
illustration of that problem.
In January 1994 Adrian Stickley and Alan Winterbottom
read a paper to a meeting jointly organised by the Business and Industrial
Section of the Royal Statistical Society and City University on “The
nature of quality assurance and statistical methods in BS 5750”
(Stickley and Winterbottom, 1994). Rather than concentrating on the opportunities
that the international quality systems standard, BS EN ISO 9000 (formerly
known in the UK as BS 5750 and referred to hereafter as ISO 9000) offers
for statistics and statisticians to be employed in industry and commerce,
the paper attacks the standard! Actually, attacking ISO 9000 is very easy,
and indeed very commonplace in the quality improvement literature, but
does not contribute to gaining entry points for statistical application.
The standard is here, and here to stay, so why debate whether it should
be? Would it not be more productive to examine what it contains of a statistical
nature and to use that as an entry point for statistical application,
while still arguing for further developments?
In fact, as pointed out in the discussion of Stickley
and Winterbottom (1994) by us and others, ISO 9000 does potentially
contain substantive statistical requirements! Clause 4.20 now reads as
follows (British Standards Institution, 1994):
4.20 Statistical Techniques
4.20.1 Identification of need
The supplier shall identify the need for statistical techniques required
for establishing, controlling and verifying process capability and product
characteristics.
4.20.2 Procedures
The supplier shall establish and maintain documented procedures to implement
and control the application of the statistical techniques identified in
4.20.1.
This requirement for the establishment and maintenance of process capability
provided a wonderful opportunity for statisticians to extend the application
of statistics in industry, making use of the enormous growth of the number
of companies and other organisations which are certified to ISO 9000.
It must be said that following the 1994 revision to ISO 9000 the accredited
certification bodies have been slow to realise the full implications of
this clause and companies can and do evade the statistical requirements.
The new version of ISO 9000 – ISO 9000:2000 – places even
more emphasis on measurement based improvement but probably sensibly does
not mention statistics.
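The process capability wording of clause 4.20 is usually made operational through the capability indices Cp and Cpk. A minimal sketch, assuming an in-control and roughly normal process and using simulated measurements purely for illustration:

```python
import numpy as np

def capability_indices(x, lsl, usl):
    """Cp and Cpk for a measured characteristic against specification limits.

    Assumes the process is in statistical control and roughly normal;
    sigma is estimated here simply by the sample standard deviation.
    """
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical shaft diameters (mm) against a 9.95-10.05 mm specification.
diameters = np.random.default_rng(1).normal(10.00, 0.012, size=50)
print(capability_indices(diameters, 9.95, 10.05))
```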
As well as ISO 9000 itself, there are an increasing number of sector-specific
standards which are based on ISO 9000 but go further in terms of statistical
requirements. Two examples are QS 9000 systems for automotive component
manufacturers, promoted by Chrysler Corporation, Ford Motor Company and
General Motors Corporation (1995), and the technology and process approval
procedures for the Cenelec Electronic Component Committee 90,000 system
for electronic component manufacturing.
More interesting, perhaps, are the implications of the EFQM Excellence
Model for the use of statistics. This is a nine-criteria, 32 sub-criteria,
basis for organisational self-assessment that is increasingly being used
in business, industry and the public sector. The Model requires process
analysis and measurement, trend analysis, benchmarking and customer and
employee perception surveys. In the USA the Malcolm Baldrige National
Quality Award Model has a longer history and has already had a major effect
on the introduction of good statistical practice.
4. Reliability
At present, although not readily accepted by all engineers, much
of the application of statistics in engineering reliability is through
the use of exploratory methods to show the data structures which appear
in test and field failure data. The work of Walls and Bendell (1995) in
exploratory data analysis, Bayesian methods (Bunday, 1991) and counting
processes (Fleming and Harrington, 1991; Thompson, 1988; Crowder et al,
1991; Ansell and Phillips, 1989), which include proportional intensities
(Lawless, 1987), additive hazards (Pijnenberg, 1991), proportional hazards
(Cox, 1972) and generalised linear models (McCullagh and Nelder, 1989),
provides approaches which have aided the engineer or manager in understanding
the possible causes of failure in systems. Examples of these analyses
have been carried out by Drury et al (1987), Kumar and Klefsjo, Lawless
(1987) and Wightman and Bendell (1985, 1995).
However, as systems become more reliable because of improving technology,
there will be an increasing paucity of data to be analysed; hence the
approach to statistics must change to more diagnostic analysis within
the systems development process. Feeding statistical results back
into systems design is like shutting the door after the horse has bolted,
because the implication is that, if a statistical analysis of failure
data has been carried out, the systems were unreliable anyway!
There must be a movement away from statistical reliability requirements
within system specifications that are difficult for engineers and managers
to interpret and cost. For instance, instead of asking for a mean time
between failures figure of 1,000 hours, a statement should be made about
the failure-free operation period (Knowles, 1996). Thus instead of laboriously
calculating some failure rates from a standard such as MIL-HDBK-217, scientific
method should be used to determine and reduce the causes of in-service
failure in the design stage. This will supplement the total quality management
philosophy of failure prevention that currently utilises techniques such
as Pugh design selection criteria, quality function deployment (QFD),
failure modes, effects and criticality analysis (FMECA), design review
and designed experimentation (Clausing, 1994; Rommel et al, 1996).
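A short worked example shows why the mean time between failures figure is so easily misread. Under the usual constant-failure-rate (exponential) assumption, the probability of running for t hours without failure is exp(-t/MTBF), so an MTBF of 1,000 hours gives only about a 37% chance of 1,000 failure-free hours:

```python
import math

# Under a constant-failure-rate (exponential) model, an MTBF of 1,000 hours
# does not mean 1,000 hours of failure-free operation: the probability of
# surviving t hours is exp(-t / MTBF).
mtbf = 1000.0
for t in (100, 500, 1000):
    print(t, round(math.exp(-t / mtbf), 3))
# -> roughly 0.905, 0.607 and only 0.368 at the full MTBF, which is why a
#    failure-free operating period statement is easier to interpret and cost.
```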
The cost implications of reliability (or lack of it) are not fully understood.
The identification and removal of failures at the design stage requires
a good customer-supplier interface, reliable internal communications and
a quick turnaround of information. There is a need to develop a statistically
based management approach to reduce variability (Baer and Dey, 1989) and
an initiative to train staff to think in terms of variability reduction, directed not only
at business and industry but also at the statistical community at large
involved in teaching undergraduate engineers and business students. This
approach requires a detailed knowledge of total quality management and
elements of statistics such as causes of bias, effects of strong interaction,
stratification, correlation and the scientific method to highlight the
difference between confounding, common response and causality.
Some problems from not finding failures sufficiently early arise for the
following reasons: the product configuration has been decided without
input from the reliability engineer or statistician; there is a lack of understanding
of the use of redundant versus highly reliable systems; hazards have not
been identified sufficiently early for preventive action (there is a lack
of emphasis on using reliability techniques such as FMECA rather than
after-the-fact testing such as Weibull analysis); reliability has been
insufficiently costed into the contract (how much does it cost to run
a sequential probability ratio test (SPRT) given that the time to completion
is not fixed?); and reliability demonstrations have been run before all design
problems have been removed. The papers by Bain and Engelhardt (1982) for
reliability growth SPRT and Harter and Moore (1976) and Vujanovic (1994)
for Weibull distribution SPRT have not been used in industry to alleviate
these demonstration problems in the author’s experience. Add to
this list that the manager has not allocated sufficient funds to reliability
activities, the reliability activities have started too late, activities have been
repeated and important information has not been disseminated, and you
can see why the reliability engineering community is not as well respected
as other engineering disciplines. The main problem is that statistics
on these problems are not available and so possible solutions are not
addressed.
Training in risk management techniques is of great importance for a prospective
manager, as an understanding of the failure concerns of an experienced
reliability engineer or statistician must be taken into consideration
(see the commentary on the Challenger disaster in Feynman (1989)). This
training in statistics and risk will inevitably be driven by
the cost and legal implications of product recall, reduction in product
development time, liability claims and Government and company directives
such as the Management of Health and Safety at Work (MHSW) Regulations
(Health and Safety Commission, 1992), QS 9000 and BS EN ISO 9001, with which
companies must comply.
The standard way of letting equipment suppliers know that reliability
is important to a customer is to include some statement of reliability
in the specification or contract. This statement can be wide ranging and
may include various tasks to be carried out by the supplier such as reliability
prediction, FMECA, fault tree analysis and/or reliability demonstration.
Many small suppliers do not have the knowledge or expertise to carry out
these tasks, let alone to cost them into a contract. In the main, lip-service
is still paid to reliability as can be seen in many company brochures
or marketing pamphlets with no effort to quantify statements, such as
“We offer a highly reliable service”.
Large companies are now specifying reliability statements in contracts
because of bad experience of unreliable subsystems in the past. However,
unreliable subsystems usually come to the notice of a customer when the
system is well into use. The problem is that, unless there is a commitment
on the part of the supplier’s management to use the reliability
tasks to their advantage, they will only do the bare minimum to satisfy
the customer requirements. In the same way that the Deming and Shewhart
philosophies have been watered down by companies to reduce costs in the
short term, so also have reliability activities.
The approach of automobile manufacturers and other large manufacturing
companies to make all suppliers satisfy QS 9000 or BS EN ISO 9001 is the
first step to improving supplier management commitment and hence reliability
because all the reliability techniques such as failure reporting and corrective
action systems and reliability development testing (military standard
MIL-HDBK-338; US Army Communications Research and Development Command
(1984)) require a well-documented quality system as a prerequisite.
QS 9000 is a good start for suppliers to learn and use
techniques such as QFD and FMECA. However, a supplier’s knowledge
of a technique and its proper use of that technique are very different things.
The use of FMECA, for example, in the automotive sector does not suggest
that things will improve in the future. The main problems are that it
is applied too late to have much effect; that the risk priority numbers
are manipulated to reduce preventive action costs rather than to implement
preventive actions at the design stage; that the method is not managed well;
that the technique is too time consuming; and that it requires a detailed
knowledge not only of how a product works but of how it fails as well,
something that only experienced engineers will know about the prospective product.
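For readers unfamiliar with the numbers being manipulated, a risk priority number is conventionally just the product of three 1-10 ratings for severity, occurrence and detection. The sketch below uses entirely hypothetical failure modes and ratings:

```python
# A risk priority number (RPN) in FMECA is conventionally the product of
# severity, occurrence and detection ratings, each scored from 1 to 10.
# Hypothetical failure modes for an automotive connector:
failure_modes = [
    {"mode": "terminal corrosion", "severity": 7, "occurrence": 4, "detection": 6},
    {"mode": "housing crack",      "severity": 9, "occurrence": 2, "detection": 3},
    {"mode": "pin misalignment",   "severity": 5, "occurrence": 6, "detection": 2},
]
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank by RPN to prioritise preventive action at the design stage.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], fm["rpn"])
```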
One solution that various Japanese automobile manufacturers use is to
have a resident engineer on the supplier’s premises to deal with
problems as they arise. Two other solutions are to tie in the FMECA with
the requirements of the MHSW Regulations 1992, since every company now
must legally carry out a risk assessment (Health and Safety Commission,
1992), and to use the technique as a prerequisite for design experimentation
to reduce variability, incorporating life-cycle cost considerations into
the FMECA format. The approach of making the risk priority numbers (or
developing a risk approach) more robust against misuse has not yet been
considered in the research literature. Larger companies should incorporate
FMECA usage within their quality, environmental, and safety and health
strategy, as hazard analysis critical control point analysis, product, process and
design FMECA, and the risk identification procedures specified in Croner
Publications (1998) to meet the MHSW Regulations are all similar in format.
Some of the areas where statistics and statisticians can provide improvements
in the reliability field are given below.
All statistical reliability requirements which are put forward by major
companies should be addressed by a common standardised approach, together
with their cost implications. This will reduce the use in specifications of
misleading terms such as failure rate and availability, and even of the
definition of failure.
Reliability prediction using standard databases such as MIL-HDBK-217 should
be discouraged as the analysis should only be used as a basis for comparing
supplier specifications. However, the use of component reliability models
and derating criteria for components listed in MIL-HDBK-217 should be
encouraged as they are extremely useful in determining and reducing the
causes of failure. Databases of in-service failure data are useful for
determining the actual life characteristic of components and more work
is required in this area (Wightman and Bendell, 1994; Landers and Kolarik,
1987).
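As an illustration of determining the life characteristic from such a database, here is a minimal sketch fitting a two-parameter Weibull distribution to hypothetical field failure times with scipy; the fitted shape parameter is the usual first diagnostic of infant mortality, random failure or wear-out:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical times to failure (hours) for one component from field records.
failures = np.array([410, 760, 905, 1220, 1450, 1700, 2100, 2600, 3050, 3900])

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = weibull_min.fit(failures, floc=0)
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} hours")
# beta < 1 suggests infant mortality, beta near 1 random failures and
# beta > 1 wear-out.
```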
Failure reporting and corrective action systems are usually incomplete
and rarely run well. This is a possible growth area for teaching
statisticians, as well-run, well-documented failure reporting and corrective
action systems, combined with the preventive action approach of Shewhart, will save
companies the cost of failure in service use and will put statistics into
context for engineering graduates.
All testing is expensive and in many cases irrelevant (Nelson, 1995) as
it does not reflect in-service use. Bayesian techniques incorporating
test results from different tests, less emphasis on meeting requirements
and more on system improvement (exploratory statistical tools) and new
modelling approaches to the use of environmental stress screening and
burn-in are all possible research areas for system improvement.
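One simple form of such a Bayesian combination is the conjugate gamma-exponential update of a constant failure rate, sketched below with a hypothetical prior and hypothetical test results; a real test programme would demand more careful modelling:

```python
# Conjugate gamma-exponential updating: a Gamma(a, b) prior on the failure
# rate lambda, updated by r failures observed in a total accumulated test
# time T, gives a Gamma(a + r, b + T) posterior.
a, b = 1.0, 2000.0                  # hypothetical prior: about 1 failure per 2,000 h
tests = [(2, 5000.0), (1, 3000.0)]  # (failures, total test hours) from two tests
for r, total_time in tests:
    a, b = a + r, b + total_time

posterior_mean_rate = a / b
print(f"posterior mean failure rate = {posterior_mean_rate:.5f} per hour")
print(f"implied MTBF estimate       = {1 / posterior_mean_rate:.0f} hours")
```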
The approach of demonstrating the reliability of a product is carried
out too late for some types of failure that occur. For instance, if the
failure is due to some design problem on a power supply it may require
a complete redesign to eliminate the cause of failure. The management
of reliability at every stage of development should be audited to determine
where and why trade-offs were made (eg between performance and cost), which tools
were used and how effective they were at improving reliability. Reliability
must be designed in by considering it at the requirements stage (Akao,
1990) by the use of QFD and then making the product more robust to its
intended environment (Clausing, 1994).
Tools such as FMECA, failure modes and effects analysis and fault tree
analysis will come to the fore before equipment is even built, and more
statistical research input is required into the relationship between the
risk and the cost of hazardous events within these methods. The
functional analysis system technique and reverse fault trees (Clausing,
1994; Fox, 1993) are techniques that lead to designed experimentation
for a possible optimum design solution. Not enough work has been done
on integrating these tools with designed experiments.
So far as designed experiments are concerned, engineers still use ad hoc
methods to determine optimum solutions in finite element analysis, electronic
circuit simulation, automatic dynamic analysis of mechanical systems and
parametric feature-based solid modelling (Bigelow, 1995). The application
of designed experiments with simulation in computer-aided design or computer-aided
engineering needs to be addressed with the aid of statistical computer
software to aid designers in finding the most robust product solution
(Grove, 1997). The work of Quinlan (1987) and Logothetis and Wynn (1989),
pages 334-345, provides approaches for optimising a design using designed
experiments applied to finite element analysis and electronic circuit
simulation respectively. At a recent validation of the Masters’
course in engineering at The Nottingham Trent University, a module on
designed experiments and simulation was specifically required by a major
automotive supplier. There needs to be more work in this area.
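A minimal sketch of the idea: drive a simulation, here replaced by a purely hypothetical placeholder function, from a small factorial design, replicate each run over a noise setting and pick the design point with the smallest mean response and spread. A real application would call the finite element or circuit simulator at that point and use a proper experimental design package:

```python
import itertools
import statistics

def simulate_deflection(thickness, rib_count, noise):
    """Placeholder for a finite element or circuit simulation call;
    purely illustrative, not a real solver."""
    return 10.0 / (thickness * rib_count) + noise

# Full factorial over two design factors, replicated over a noise factor,
# looking for the setting with the smallest mean response and spread.
results = []
for thickness, ribs in itertools.product([2.0, 3.0, 4.0], [2, 4]):
    runs = [simulate_deflection(thickness, ribs, n) for n in (-0.1, 0.0, 0.1)]
    results.append((thickness, ribs, statistics.mean(runs), statistics.stdev(runs)))

for row in sorted(results, key=lambda r: (r[2], r[3])):
    print(row)
```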
The future of reliability engineering lies in the hands of those companies
who can integrate their design activities with their failure prevention
activities and statisticians need to become involved in these activities.
To quote O’Connor (1991):
“Recognise that high quality and reliability are
achieved by good management, in the widest sense. Good management of engineering
includes paying attention to excellence in design and production, adopting
a totally integrated product directed team approach, and a commitment
to training at a level that is far in excess of that currently practised
by western companies. This training must include a thorough grounding
in appropriate industrial statistical methods and applications.”
He adds:
“eliminate all methods that distract from the
pursuit of excellence. Statisticians working in the Q&R field should
review their work against this criterion.”
5. The future
Much of this paper has been about the mismatch between the wants
and needs of industry and the statistician’s view. The former want
simplicity and parsimony, whereas the latter, at least sometimes, wants
academic respectability and interest, and hence complexity.
There has been much recent debate in the profession about exactly which
quality concepts and tools an engineer needs. A recent investigation by
the Engineering Quality Forum on behalf of the Engineering Institutions
(Cullen et al, 1997) reported that an undergraduate degree course should
provide all engineers with a basic grounding in a broad range of quality
issues, but this teaching should be integrated with the engineering disciplines
rather than be taught as specialist stand-alone modules.
The survey also found that currently practising engineers would benefit
from a broad quality education primarily focused on addressing business
and organisational issues and directed at improving the management and
implementation capability. Overall the findings of this survey strongly
suggest that existing education does not deliver such quality tools and
techniques in a manner which facilitates their application.
In the business community the need for classical statistics has been replaced
by a need for the simple application of tools and problem solving techniques.
This is in line with Ishikawa’s ranking of statistical methods (Ishikawa,
1985):
a) Elementary – every worker knows how to use
the seven tools of quality control (Ishikawa, 1976).
b) Intermediate – managers additionally have knowledge of the seven
new tools (Mizuno, 1988), theory of sampling surveys, statistical hypothesis
testing, methods of sensory testing and basic design of experiments.
c) Advanced – the elite few have knowledge of advanced experimental
design techniques, multivariate analysis and operational research methods.
The need for computational skills has been made redundant
by advances in computing; indeed the worker at any level no longer needs
to know how to perform a hypothesis test, but rather how to interpret its
outcome.
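For example, with standard software the worker’s task reduces to reading a p-value. A minimal sketch with hypothetical fill weights from two machines:

```python
from scipy import stats

# Hypothetical fill weights from two machines; the software does the
# arithmetic, the worker's job is to read the p-value.
machine_a = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2]
machine_b = [50.6, 50.4, 50.8, 50.5, 50.7, 50.3]
t_stat, p_value = stats.ttest_ind(machine_a, machine_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say below 0.05) signals a real difference between the
# machines worth investigating; a large one means the data do not show one.
```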
The result of this is that, although business still
has a need for statistics, it no longer in general has a need for statisticians.
In the USA, General Electric, Motorola and Milliken have created the role
of the “quantitative engineer” and have been very successful
in obtaining highly competent facilitators with statistical skills at
the engineer’s level. Such facilitators have implemented variation
reduction programmes as proposed by Deming and Ishikawa. Indeed the “six
sigma” programme originated by Motorola has often been undertaken
without statisticians being involved in the training, adding weight to
the argument that, if you want a successful quality programme, keep the
statisticians away!
The role for specialist statisticians is at the “top
of the pyramid” for higher education, research and training. They
have proved that they have no part to play in mass education or use of
statistics. In discussing the US situation Hahn and Hoerl (1998) state
that the “moves towards pro-activity in statistical quality methods
have left the statisticians who have been singularly non-proactive behind”.
The same arguments raised in the USA apply to Europe
and the UK in particular. One of the positive developments in the UK has
been the involvement of the Royal Statistical Society with other bodies
investigating statistical methods in quality management and engineering,
eg the Engineering Quality Forum. However, those activities have been
insignificant when compared with the momentum over the past decade of
the entire quality movement. There is a need to research the developing
themes in quality to offer statistical support in the client’s own
language and without complexity.
How can we shape the future dimensions and role of quality
in engineering and management? Who should the players be? The older generation
of statisticians (and it is apparently an ageing profession) will not
find this erosion of academic and methodological rigour acceptable; we
need to attract a new generation of young statistical activists, but where
will we find them or how shall we create them?
The Royal Statistical Society has a dearth of such competent
practitioners. We need to challenge a few practices about the roles of
both statistics and statisticians. There is a need to create a new kind
of education programme within our universities leading to the emergence
of the competent qualified quantitative facilitator who truly understands
the nature of variation. Perhaps now is finally the time to drop statistics
(or at least the statistician) from our quality vocabulary.
Juran (1994) has forecast that the 21st century will
be referred to by future historians as the “Century of Quality”.
Quality improvement will be dependent on the application of statistical
techniques irrespective of who facilitates this implementation.
This paper is based on the paper by Bendell
et al in The Statistician (1999), 48, pp. 299-326.
Tony Bendell is the Managing Director of Services
Ltd. As one of the three Professors of Quality Management in the
country, he is a leading national and international expert on Service
Quality and its measurement, particularly in the public sector. Professor
Bendell has worked with many clients in this area including the UK
Department of Trade and Industry, various police forces, Local Authorities,
and Departments of the Indian and Dubai Governments. Tony is also,
funded by Rolls Royce plc, a Professor of Quality and Reliability
Management at the University of Leicester Management Centre.