THE FUTURE OF INDUSTRIAL STATISTICS

The field of industrial statistics traces its origins to the early part of the previous century. The last 20 years have seen significant advances both in applications of the tools of industrial statistics and in the development of new methodology. The use of statistics in industry, including applications in new product design and development, in optimization and control of manufacturing processes, and in service industries, continues to expand at a rapid rate. Consequently, the field of industrial statistics has emerged as an important branch of modern statistical science. These growing applications of statistics in industry have created opportunities and needs for methodological research, and many new challenges have arisen. This paper identifies some areas in which statisticians may contribute to the methodology of industrial statistics and discusses some of the educational and implementation challenges that we confront today.


INTRODUCTION
Much of the progress in statistical science in the last century came about because statisticians addressed real problems. Many of these problems were from industry, the chemical and physical sciences, and engineering. For example, Walter Shewhart developed control charts from his interaction with Western Electric personnel grappling with the problems of manufacturing telephone equipment, George Box developed response surface methodology from his work with chemists and chemical engineers, and John Tukey developed exploratory data analysis from his work with telecommunications engineers. The last 20 years have seen the field that we call industrial statistics mature into a strong and active specialty of statistical science. Several academic programs are focused at least in part on industrial statistics, and graduates of these programs are finding employment opportunities in a variety of industrial and business settings. Research in industrial statistics is perhaps at an all-time high, if one takes as evidence the number of journal articles published in the area and the fact that several new journals focusing primarily on industrial statistics have appeared.
As applications of statistics in industry have expanded, many new research avenues have opened. Some of these problems present opportunities for statisticians to contribute to the solutions of real industrial problems while simultaneously advancing the frontiers of statistical science. This paper summarizes some research opportunities and challenges for modern industrial statisticians and offers some observations on the present state of the field.

A BRIEF HISTORICAL PERSPECTIVE ON INDUSTRIAL STATISTICS
The origins of modern industrial statistics are in the early 1900s. Table 1 [adapted from Montgomery (2001a, b)] is a summary of some of the milestone events.  There were many important developments in industrial statistics during the 1950s and 1960s.
As noted previously, the introduction of response surface methodology [Box and Wilson (1951)] was one of the most influential of these developments.

SOME RESEARCH OPPORTUNITIES IN INDUSTRIAL STATISTICS
Statistical quality control has been called one of the great technological innovations of the 20th century. It can be widely applied and it is extremely effective, at least in part because the underlying theory is simple and the basic methods are straightforward. However, as pointed out in the recent panel discussion on statistical process monitoring and control [Montgomery and Woodall (1997)], there are few areas in statistics where the gap between methodological development and application is so wide. In many industrial settings, the only technique employed for process monitoring is the original Shewhart control chart. This is disturbing, because it means that many industrial organizations have not benefited from the very useful technical advances in these methods that have occurred over the last 50 years. Typically, control charts are applied to complex or high-value-added processes, as these represent the greatest opportunity for improvement in many industries. When a control chart signals, the analyst must determine which parameters of the monitored process have changed and characterize the assignable cause. Pinpointing the time at which the assignable cause occurred is usually an essential part of these activities. This is the problem of determining a changepoint, for which there is a substantial literature [see Basseville and Nikiforov (1993)]; however, little of this work has found its way into process monitoring applications. Bayesian methods, time series modeling and analysis techniques, expert systems, and stochastic calculus are techniques that may be useful in this area. For more background and a number of useful references, see Lai (1995), Stoumbos (1999), and Yashchin (1997).
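To make the changepoint idea concrete, a common rule of thumb with a CUSUM chart is to estimate the changepoint as the last period at which the cumulative sum was zero before the signal. The sketch below is illustrative only: the target, reference value k, decision limit h, and simulated data are all assumptions, not values from this paper.

```python
# Sketch: estimating a changepoint after a one-sided CUSUM signal.
import numpy as np

def cusum_changepoint(x, target, k, h):
    """Upper one-sided CUSUM. On a signal, estimate the changepoint as
    the last period at which the CUSUM statistic was zero."""
    c = 0.0
    last_zero = 0
    for t, xt in enumerate(x, start=1):
        c = max(0.0, c + (xt - target) - k)
        if c == 0.0:
            last_zero = t
        if c > h:
            return t, last_zero   # (signal time, estimated changepoint)
    return None, None

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 50),    # in-control period
                    rng.normal(1.5, 1.0, 30)])   # mean shift at t = 51
signal, change = cusum_changepoint(x, target=0.0, k=0.5, h=5.0)
```

The signal arrives several periods after the shift; the last-zero estimate points back toward the actual change time, which is the information the diagnostic search needs.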
Control charts are usually applied in a monitoring mode, and adjustments are made to the process only after an assignable cause is isolated. Over time, this sequence of monitoring, diagnosis, and correction activities will reduce process variability. The control chart arose in the discrete parts industry, and its statistical framework is hypothesis testing. An alternative approach that has enjoyed great success in the chemical and process industries is engineering process control (EPC). In a typical EPC system, the process output is observed and, based on the current deviation of the output from a desired target, an adjustment is made to a manipulatable input variable that is designed to bring the output closer to target in subsequent periods. Popular EPC schemes include integral control and proportional-integral control. In EPC, assignable causes are not directly identified and removed, but are instead adjusted out. There has been some work integrating these two approaches to reducing process variability [see Box and Luceño (1997)].

Designing products and processes so that they are robust to variables that are difficult or impossible to control continues to be an important application area in industry for DOX.
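The adjustment logic of pure integral control mentioned above can be sketched in a few lines. The disturbance model, process gain, and adjustment weight below are illustrative assumptions; the point is only that the controller compensates a wandering disturbance without ever identifying its cause.

```python
# Sketch: pure integral control of a drifting process (illustrative values).
import numpy as np

def integral_control(disturbance, target=0.0, lam=0.2, g=1.0):
    """Each period, change the manipulatable input by a fraction lam/g of
    the current deviation from target (discrete integral action)."""
    adjustment = 0.0           # cumulative effect of all past adjustments
    outputs = []
    for d in disturbance:
        y = d + adjustment     # observed output this period
        outputs.append(y)
        adjustment -= (lam / g) * (y - target)   # integral adjustment
    return np.array(outputs)

rng = np.random.default_rng(0)
drift = np.cumsum(rng.normal(0.05, 0.1, 200))  # nonstationary disturbance
y_unadj = drift                   # output if no adjustments were made
y_adj = integral_control(drift)   # output under integral control
```

Under control, the output hovers near target with a small steady-state offset, while the unadjusted process wanders steadily away, which is why EPC dominates in industries where such disturbances cannot be stabilized at the source.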
Response surface methods offer a modern, efficient, and highly effective approach to these problems. Furthermore, simple, highly effective implementations of these procedures are now available in widely used experimental design software. There is an education mission here to ensure that these newer and highly effective approaches to robust design and process robustness studies are more fully integrated into engineering and statistical practice.
However, there are also significant research issues, including the use of more effective modeling and analysis techniques based on the generalized linear model, the integration of product/process robustness methods into computer experiments, and the development of more efficient experimental design strategies for product/process robustness studies. Myers (1999) gives an excellent overview of the current status of and future directions for research in response surface methodology.
Many of our research opportunities will be driven by the complex and highly diverse set of conditions in which modern manufacturing processes operate. The industrial environment of today is often data-rich and highly automated. For example, in a modern semiconductor fabrication facility, each wafer may pass through several hundred processing steps, and at each step numerous measurements are typically made, sometimes on individual die features. Several thousand wafers may pass through this facility weekly. This leads to a large database consisting of millions of records on hundreds of process variables and product characteristics. In these environments there is a significant need to be able to both detect and diagnose patterns, changes, and problems in process performance. Furthermore, because of the rate of production and the economic consequences, this all needs to be done in real time, not days or weeks after the fact.

SOME IMPLICATIONS FOR STATISTICAL EDUCATION
Because the use of statistical methods has expanded in the last 20 years, many universities now offer courses in statistical topics and methodology directed towards industrial applications. I would like to offer a few comments about the implications that industrial needs and trends have for the content of these courses.
Many statistical quality control courses are out of date. Some of the common failings of these courses include teaching that the Shewhart control chart (including individual and moving range charts) is the answer to everything, failure to thoroughly explain rational subgrouping, inadequate discussion of the impact of non-normality and autocorrelation in process data on control charts, failure to discuss various process models and the performance of control charts, and lack of effective integration of statistical process monitoring techniques with feedback adjustment and engineering process control. The education we offer both in universities and in industrial short courses and workshops needs to be enhanced and in some cases upgraded.
For example, simple multivariate methods need to be introduced in statistical quality control courses. As noted previously, the real world in which many applications of statistical process monitoring and control take place is extremely data-rich. Monitoring a complex multivariate process with a series of univariate control charts can result in situations where an assignable cause will not be detected on any of the individual control charts. Because many "natural" variables are used to describe a process, yet the "motion" in the process is actually in a subspace of these original variables, techniques such as principal components analysis and partial least squares often are a logical basis to consider for process monitoring.
There is an analogy from designed experiments that applies to these situations: using many univariate control charts in a multivariate process is equivalent to using a one-factor-at-a-time experiment. We certainly wouldn't recommend one-factor-at-a-time experiments, and in many instances, we shouldn't routinely recommend univariate control charts. Many multivariate techniques have promise, including multivariate generalizations of standard control charts, CART, and latent structure methods, yet they're probably not used enough in practice. It is even more regrettable that most SPC courses do not encourage use of multivariate methods or even illustrate the potential value of these techniques. See MacGregor (1997) and Montgomery (1998) for more discussion.
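A small numerical sketch of the point made above: with two strongly correlated variables, an observation can sit comfortably inside both univariate 3-sigma limits yet lie far outside the joint in-control region, which a Hotelling T² statistic detects immediately. The correlation structure, sample sizes, and control limit below are illustrative assumptions, not data from this paper.

```python
# Sketch: a shift invisible to univariate charts but caught by Hotelling T^2.
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])                  # strongly correlated in control
X_ref = rng.multivariate_normal(np.zeros(2), cov, size=500)  # reference data

xbar = X_ref.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))

# A new point that is unremarkable on each axis but violates the correlation:
x_new = np.array([2.0, -2.0])

# Separate univariate 3-sigma checks on each variable
univariate_signal = np.any(np.abs(x_new - xbar) > 3 * X_ref.std(axis=0, ddof=1))

# Hotelling T^2 against an approximate chi-square control limit
d = x_new - xbar
t2 = float(d @ S_inv @ d)
t2_limit = 11.83    # ~ upper 0.27% point of chi-square with 2 df
multivariate_signal = t2 > t2_limit
```

Neither coordinate is beyond 3 sigma, so two individual charts stay silent, while T² is an order of magnitude above its limit; this is exactly the "motion in a subspace" argument for principal components and related latent-structure monitoring.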
Another important consideration in control chart usage is the type of variability exhibited by the process. Figure 1 presents data from three processes. Figures 1a and 1b illustrate stationary behavior. Shewhart defined an in-control process as one for which the behavior was predictable. Clearly both of these processes satisfy this definition. The data in Figure 1a are uncorrelated, and the in-control state is described nicely by the Shewhart model, in which observations vary independently around a fixed mean. The data in Figure 1b are autocorrelated; conventional control charts often do not work well in such situations, yet this situation is rarely discussed in statistical quality control courses. Figure 1c illustrates nonstationary process behavior. This type of process behavior often occurs in the chemical and process industries. A simple model that often works well in these cases is the first-order integrated moving average model. In many industrial settings we deal with nonstationary behavior by using engineering process control (such as feedback control). This approach is required when there are factors that affect the process that cannot be stabilized, such as environmental variables or raw material properties. However, few statistical quality control courses mention feedback adjustment and engineering process control, or the logical interface between these techniques.

In many high-technology organizations the statistician is not viewed as a full team member in product and process design and development work. Furthermore, statisticians are not typically included in patent awards and design/development team recognition, and statisticians don't lead yield enhancement activities. One cause of this is that many statisticians lack the background in hard science and engineering to make content contributions to industrial development projects; consequently, they may be regarded as little more than "data technicians".
This should provide additional impetus for statistics programs to encourage students to take more advanced courses in physics, chemistry, and the engineering sciences (such as thermal and electrical science, fluid mechanics, materials science, and so forth) as electives, possibly replacing some courses in mathematics, and to consider forming joint degree programs with these departments. Statistics programs should also actively recruit students from engineering and the sciences, not just from mathematics. Statistics faculty might also hold joint appointments in engineering, science, or management departments.
Often the industrial statistician is viewed as a "manufacturing" person. While there are certainly many important applications of statistics in the manufacturing setting, we must broaden this perspective into other key aspects of the business, such as information systems, supply chain management, research and development, and the "business" side of the business: accounting, finance, product marketing, and distribution, for example. Again, this points to the need for broader educational experiences for statistics students.
Certification and registration activities based on ISO 9000 (and in North America the QS 9000 standard developed by the Automotive Industry Action Group) have increased dramatically in recent years. The primary focus of these standards seems to be an attempt to document the existence of a quality assurance process without requiring any real evidence to ensure that quality improvement has actually occurred and is ongoing. In other words, the focus of these standards is only on the assurance aspects of quality, and not on the broader aspects of a quality system, which must include quality improvement and quality planning.
Furthermore, the requirements concerning statistical methods are extremely weak, and suggest or require the use of poor methods, such as the process potential ratios Pp and Ppk.
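For reference, these ratios compare the specification spread to the overall sample standard deviation, which is one reason they are criticized: they say nothing about whether the process is in statistical control or what its short-term capability is. A minimal sketch, with illustrative specification limits and simulated data rather than values from any standard:

```python
# Sketch: computing the ratios Pp and Ppk from the overall sample
# standard deviation (all limits and data are illustrative).
import numpy as np

def pp_ppk(x, lsl, usl):
    s = x.std(ddof=1)                          # overall (long-term) std. dev.
    mu = x.mean()
    pp = (usl - lsl) / (6 * s)                 # two-sided spread vs. specs
    ppk = min(usl - mu, mu - lsl) / (3 * s)    # accounts for off-center mean
    return pp, ppk

rng = np.random.default_rng(3)
x = rng.normal(10.0, 0.5, 200)                 # simulated measurements
pp, ppk = pp_ppk(x, lsl=8.0, usl=12.0)
```

Because the overall standard deviation absorbs shifts, drifts, and assignable causes indiscriminately, a process can post respectable Pp and Ppk values while being badly out of control, which is the substance of the objection above.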
It is quite possible to be ISO/QS certified and to operate tools and processes that produce high rates of scrap and nonconforming product, and to ship this material to customers, just so long as the paperwork system documents the activity and the corrective action. There usually isn't any follow-up to ensure that the corrective action was really effective. The auditors, registrars, and consultants typically have "modest" statistical backgrounds, and a typical certification audit focuses almost exclusively on paperwork and bookkeeping. Although the assignable causes underlying recent well-publicized quality incidents have not been fully discovered, there is a clear and strong signal that a quality certification system that focuses only on the assurance side of quality systems is inadequate and ineffectual, and means little in terms of the actual delivered quality of products.
Industrial statisticians need to take a more active role in ensuring that quality system certification activities have a meaningful statistical component, that they have a strong focus on quality planning and quality improvement activities, and that real quality improvement is continuously occurring. Furthermore, auditors, consultants, and registrars should have to demonstrate actual statistical competency. Too many of these individuals are knowledgeable primarily about auditing and quality assurance activities, and are unable to provide any real assessment of the effectiveness of the entire quality system or the quality and reliability of the products actually being manufactured.
The real tragedy of ISO/QS is that organizations have only so many resources that they can spend on quality system activities. Because a company is forced via ISO/QS to devote huge amounts of these resources to quality assurance, few if any resources are typically left to devote to quality planning and quality improvement. These are the components of the quality system where real changes in the products and services provided by the organization can occur. Reliance on ISO/QS certification alone to deliver reliable, safe, defect-free, high-quality products to customers simply will not work.

While there are many aspects of the six-sigma initiative that I find appealing, other aspects of the program cause some concern. I strongly believe that widespread education in industrial statistics will pay long-run dividends. However, much of the training provided in the typical six-sigma "black-belt" courses is too technique-oriented. Furthermore, because of a supply-and-demand issue, individuals with limited background and experience often deliver the training. This tends to produce people who "don't know what they don't know".
Such training may produce some short-term results. Even the most basic statistical skills will often allow an individual to obtain quick results, because in most organizations prior application of statistical tools has been very limited and, as a result, there is a lot of "low-hanging fruit" on the tree. However, it is a long-term mistake to rely entirely on training-based programs such as six-sigma to provide the main source of statistical expertise in the organization. There are many industrial problems that require considerable professional statistical expertise and a solid base of engineering or scientific knowledge to solve. The typical six-sigma "black-belt" is not necessarily adequately equipped to solve these problems.
I am a strong advocate of professional certification and accreditation. In the engineering field in North America, academic programs are accredited by the Accreditation Board for Engineering and Technology, an independent authority not involved in delivering the education, and the professional registration examination is designed and administered by the National Society of Professional Engineers, another independent authority not involved in delivering education or accrediting the academic programs. I think an independently designed and administered certification/accreditation program for six-sigma black-belts would help ensure that graduates have mastered an appropriate body of methodology.

Recall that Deming was vigorously opposed to relying on slogans and programs. Remember the fate of some of the other failed initiatives, such as "value engineering", "zero defects", "TQM", and "quality is free"? These programs failed, often because they became little more than company-wide training programs that were assumed to be successful once a certain percentage of all employees were "trained". There was no emphasis on obtaining results and no accountability, either to or from senior management. Six-sigma probably has a better overall chance of success than these programs, because at least it is based on some of the right fundamentals, it requires a project-based implementation, and, at least at the moment, it seems to have management involvement and commitment in organizations where it has proven successful.
However, the industrial statistics community needs to capitalize on the awareness and statistical thinking that Six-Sigma has created, and we need to seek out a broader leadership role in our organizations.

CONCLUSION
During the last two decades, industrial statistics has become an increasingly important field.
The 21st century holds tremendous opportunity for statisticians, and especially industrial statisticians. However, the impact that we have on industry depends to a large extent on how well we can integrate statistical methodology with engineering and computer methodology, and on how aggressive we are in modernizing our educational programs to take advantage of the opportunities. This includes modernizing education in statistical quality control, design of experiments, and reliability engineering, as well as educating statisticians more broadly in the sciences and engineering, and encouraging more interaction between statistics faculty and science, engineering, and business faculty. Industrial statisticians must also find ways to gain more input on the statistical components of certification and registration activities and to effectively support current industrial initiatives such as six-sigma. The research opportunities are also diverse and challenging, spanning many areas of statistical specialization. However, many of them will require interdisciplinary knowledge and perspective to pursue successfully.