Do You Use SPC Correctly? | Ask The Expert

August 18, 2023

When something goes wrong, we naturally react. If a product fails or a process performs inadequately, we attempt to discover what’s wrong so we can fix it.

That sounds reasonable enough. But by reacting to the problem, we’re allowing that process or product failure to control our behavior. Isn’t it more effective to understand and control the process rather than allowing the process to control us?

That’s where Statistical Process Control (SPC) steps in. SPC is a set of techniques that provides a deeper understanding of how a process behaves. To implement SPC, data must be collected over time.

Taking Charge with SPC

When monitoring data, you expect variation, even when nothing out of the ordinary occurs. Common cause sources of variation represent the inherent and natural level of variation affecting a process.


But if data constantly fluctuates, how do you know whether something unusual has happened? SPC answers that question.

First, to understand whether the data varies in an expected or unexpected way, you must understand the expected degree of system variation. Once you know the expected level of variability, you can identify whether your observations exceed that amount.

With SPC, the idea is to observe a stable process long enough to understand its level of inherent variation. Using that information, you can compute limits of expected variation, otherwise known as “control limits.” To be valid, these control limits must be computed from data originating from a stable process.

When a statistic exceeds a control limit or unexpected patterns are observed, you have evidence that a special cause source of variation has entered the system. These special cause sources of variation, otherwise known as “assignable causes,” result in unexpected changes in the process. This doesn’t imply that the parts under production have exceeded specification limits, only that something unusual has entered the system.

Without SPC, these issues aren’t visible until product measurements exceed specification limits, if they are detected at all. Unfortunately, by then it’s too late: bad products have already been produced. The company has also invested time and resources producing an inferior product, further eroding profitability, and discovering the root cause is far more difficult at that point.

SPC not only provides the opportunity to identify unusual behavior before unacceptable products are produced, it allows you to determine when something unexpected occurred. Sadly, most companies don’t take advantage of SPC’s benefits.

To understand just how misleading data can be when viewed improperly, let’s consider a sample of measurements. Using a histogram, you can see how the measurements stack up relative to the specification limits. So, by looking at the graph, what can you conclude about the process? Note that USL and LSL represent the upper and lower specification limits, respectively.

While it’s tempting to conclude that things are OK, and that the process is approximately centered between the specification limits, the answer to the question is “nothing.” Because we have not viewed the data over time, we can’t draw conclusions regarding the process.

The following graphic shows exactly the same data as the histogram above.

This graphic depicts the same data collected over time.

Viewing the data over time provides a more accurate description than the histogram. The histogram misleads one into believing the data come from a single distribution. In fact, the process mean has been increasing over time, so no single distribution describes the data; the distribution keeps shifting upward.
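As a quick illustration of this effect, here is a minimal sketch (assuming Python with NumPy and Matplotlib, and simulated data whose mean drifts slowly upward, not the data from the charts above): the histogram looks like one tidy distribution, while the same values plotted in time order reveal the drift.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.arange(200)
data = 10 + 0.01 * t + rng.normal(0, 0.3, size=t.size)  # mean drifts slowly upward

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.hist(data, bins=20)                           # stacked up, the drift is invisible
ax1.set_title("Histogram (time ignored)")
ax2.plot(t, data, marker=".", linestyle="none")   # in time order, the drift is obvious
ax2.set_title("Same data in time order")
plt.tight_layout()
plt.show()
```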

Unfortunately, many manufacturers don’t require evidence of process stability from their suppliers (or sometimes of themselves). Without that information, we not only discover the problem too late, but it’s more difficult to determine the cause.

Being In Control

Taking measurements over time helps manufacturers better understand whether their processes are stable or what is termed “in control.”

But what exactly does that mean? Let’s say Company ABC produces spring clips. Knowing that the radius of a spring clip is an important dimension, the manufacturer measured the radius of spring clips produced over a few minutes, getting results such as: 3.24, 3.28, 3.25, 3.31, 3.28, 3.22, 3.25, 3.26, 3.28, 3.24 and 3.23. Eventually, the data stacked up to form the following distribution.

Now if the radius data was collected on another day, you shouldn’t expect to see the exact same radius values. But the distribution of the data should remain nearly the same.
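As a small illustration, the radius values listed above can be compared with a hypothetical second day’s sample, generated here purely for demonstration: the individual readings differ, but the center and spread are nearly the same.

```python
import numpy as np

day1 = np.array([3.24, 3.28, 3.25, 3.31, 3.28, 3.22, 3.25, 3.26, 3.28, 3.24, 3.23])

# Hypothetical second-day sample drawn from the same stable process: the
# individual readings differ, but the center and spread are nearly unchanged.
rng = np.random.default_rng(7)
day2 = rng.normal(day1.mean(), day1.std(ddof=1), size=day1.size).round(2)

print(round(day1.mean(), 3), round(day1.std(ddof=1), 3))
print(round(day2.mean(), 3), round(day2.std(ddof=1), 3))
```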

When you spot the same distribution repeatedly over time, the process is said to be “in control.”

The radius data on the left represents the first distribution observed. When monitoring the process again at future points in time, the same distribution pattern emerges. This process is stable or “in control.”

Out of Control

It’s risky to assume that these radius dimensions will always follow the same distribution. After all, numerous factors such as material properties, machine settings, and environmental conditions will affect dimensions. But many manufacturers believe that being “in control” isn’t important as long as the product meets its specification limits.

That belief has detrimental repercussions for American businesses striving to achieve quality and efficiency. To illustrate, let’s look at what happens when a product characteristic meets specifications, but isn’t in control.

This represents a product that meets both Upper Specification Limit (USL) and Lower Specification Limit (LSL) but is not stable.

Once customers receive units that follow a particular distribution, they expect to see the same distribution again. Customers like consistency. If they suddenly receive a different distribution, they usually perceive a lack of quality; at the very least, they do not expect the change.

The problem becomes magnified if these distributions represent an important dimension for pieces that will be mated together. Units from one distribution will fit together differently than units from a different distribution. So you can’t expect products to perform consistently. The varying performance means less predictable failure times and less predictable customer responses. Compare the following graphic to the previous one.

Which one do you think customers would prefer? Knowing they prefer consistency, the second graphic is the clear choice even though both depict products that fall within specification limits.

We have seen American automotive companies spend far more time and money dealing with warranty issues than several of their Japanese counterparts. The inability to effectively reduce variation and a lack of responsibility for ensuring dimensional stability have much to do with it.

When the process distribution changes, no one, including the manufacturer and customer, knows what to expect. That’s not to say that every change is bad. Some unexpected changes might represent an improvement, but unless we appreciate that an improvement was made, we won’t sustain that progress.

Of course, many people who work in a manufacturing environment still don’t believe that processes change. They’re wrong. Processes can change due to a variety of factors including changes in supplied parts, temperature, humidity, worn tools, or changes in personnel.

When X Doesn’t Mark the Spot

Xbar and R charts are routinely used to monitor change. The Xbar chart helps detect changes in the process average, while the R chart is designed to detect changes in process variability. When properly used, these charts are effective indicators of process behavior as well as a tool for predicting quality improvement or decline.

Unfortunately, most American manufacturers don’t use these charts correctly.

To understand why, let’s look at the mechanics behind the Xbar chart. It’s commonly written as an X with a bar over it, a symbol that denotes an average or mean.

Typically, a production operator will take a few measurements over a period of time. The operator averages those measurements and places the result on an Xbar chart. Then the range (the maximum data point minus the minimum data point) is computed and placed on the R chart.

Control limits are computed. These describe the expected amount of variation among the averages (on the Xbar chart) and ranges (on the R chart) as long as the process remains in control. By design, control limits should capture about 99.7% of the plotted points when the process is stable.

These are sample Xbar and R charts. The average of averages and the average Range are solid bold lines, while control limits (UCL, LCL) are indicated by dashed lines.

The previous graph shows that averages are expected to randomly fluctuate between 77.7 and 82.3 about 99.7% of the time. Ranges should randomly fluctuate between 0 and 8.5 when the process is stable.
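For readers who want to see the arithmetic, here is a minimal sketch of how such limits are typically computed, assuming subgroups of five and the published Shewhart constants for that subgroup size (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup values are made up, and in practice many more subgroups from a stable period would be used.

```python
import numpy as np

# Hypothetical subgroups of size 5, one row per sampling period.
subgroups = np.array([
    [80.1, 79.6, 80.4, 79.9, 80.2],
    [79.8, 80.3, 80.0, 79.7, 80.5],
    [80.2, 79.9, 80.1, 80.4, 79.8],
    [79.7, 80.0, 80.3, 80.1, 79.9],
])

xbar = subgroups.mean(axis=1)                        # one average per subgroup (Xbar chart)
r = subgroups.max(axis=1) - subgroups.min(axis=1)    # one range per subgroup (R chart)
xbarbar, rbar = xbar.mean(), r.mean()

# Published Shewhart constants for a subgroup size of 5.
A2, D3, D4 = 0.577, 0.0, 2.114

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # limits for the averages
ucl_r, lcl_r = D4 * rbar, D3 * rbar                       # limits for the ranges
print(ucl_x, lcl_x, ucl_r, lcl_r)
```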

Now examine the following Xbar chart.

Both the control and specification limits are shown. So if the process remains stable, would you expect that the characteristic being plotted will be within the specification limits most of the time? Many would say yes, but this is an erroneous conclusion, and the consequences may be severe.

Remember, you’re not looking at individual measurements being plotted on the chart. You’re looking at averages. Control limits suggest that if the process is stable the averages will remain within those limits 99.7% of the time. But don’t forget that specification limits apply to individual measurements, not averages.

This graphic uses an “X” to illustrate the individual measurements for a few of the averages. The six individual measurements that create the average are widely scattered. Averages always possess less variation than individual measurements.
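A quick simulation makes the same point: subgroup averages vary far less than the individual measurements they summarize, roughly by a factor of the square root of the subgroup size. The numbers below are simulated, not taken from the charts above.

```python
import numpy as np

rng = np.random.default_rng(0)
individuals = rng.normal(loc=80, scale=2.0, size=(1000, 5))  # 1000 subgroups of 5 measurements
averages = individuals.mean(axis=1)

print(individuals.std(ddof=1))   # close to 2.0: spread of the individual measurements
print(averages.std(ddof=1))      # close to 2.0 / sqrt(5), about 0.89: spread of the averages
```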

Nothing in Common

Specification limits never belong on control charts. While some quality professionals believe that control charts indicate the ability to meet specifications (process capability), that’s completely untrue. Control charts were invented to serve one purpose: to identify process changes as quickly as possible after the change occurs. They do nothing more and nothing less.

So why look at averages if they are so misleading that they can’t indicate whether parts conform to specification limits? Well, there are two compelling reasons to do so.

First, and most importantly, averages are more sensitive than individual measurements to process shifts, so they detect changes more quickly, which is the purpose of implementing SPC. This assumes the appropriate sample size has been determined. Second, averages from a stable process tend to follow a Normal distribution, so it’s easy to estimate control limits for averages. Contrary to popular belief, individual measurements typically do not follow a Normal distribution, even when a process is stable.
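To see the first point in action, here is a small simulation (all values hypothetical): a process whose mean has shifted upward by one standard deviation is checked against 3-sigma limits for individual measurements and for subgroup averages. For small shifts like this, the chart of averages typically signals first.

```python
import numpy as np

def first_signal(flags):
    """1-based index of the first subgroup that triggers a signal, or None."""
    idx = np.flatnonzero(flags)
    return int(idx[0]) + 1 if idx.size else None

rng = np.random.default_rng(1)
mu, sigma, n = 50.0, 2.0, 5
shifted = rng.normal(mu + sigma, sigma, size=(200, n))   # mean shifted up by one sigma

# 3-sigma limits, assuming mu and sigma were estimated while the process was stable.
ucl_individuals = mu + 3 * sigma               # applies to single measurements
ucl_averages = mu + 3 * sigma / np.sqrt(n)     # applies to subgroup averages

print(first_signal(shifted.mean(axis=1) > ucl_averages))        # averages chart signal
print(first_signal((shifted > ucl_individuals).any(axis=1)))    # individuals chart signal
```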

But specification limits do not belong on control charts. Processes that are “in control” do not necessarily produce parts within specification. Moreover, production of “acceptable” parts doesn’t imply that processes are stable.

More Mistakes

There are numerous fundamental errors typically made in applying SPC, and this article has touched on only a few. All the mistakes result in misjudging reality.

Additional common mistakes include improper sampling methods. The way physical samples are taken is critical, and the type of control chart used depends on the sampling scheme. There are instances where rational sampling is required and instances where we can depart from it, as long as appropriate SPC methods accompany the sampling plan.

Inappropriate sample sizes are nearly always used. While a sample size of 5 may detect some process changes, it will not detect others very quickly. The most appropriate sample size depends on the application and the amount of change deemed critical to detect.

Individual measurements should be used for control charting only in certain situations, and when they are used, several issues, such as the chart’s ability to detect important changes, must be evaluated. Often, charts such as CUSUM and EWMA are effective on individual measurements because they don’t depend heavily on the distribution of the individuals and they can be designed with varying degrees of sensitivity to detect important changes.
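As a sketch of what an EWMA chart involves, the following hypothetical helper implements the standard EWMA recursion and its time-varying control limits; the smoothing constant and limit width shown (lam = 0.2, L = 2.86) are common textbook choices, not a recommendation for any particular process.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=2.86, mu0=None, sigma=None):
    """EWMA statistic and control limits for a series of individual measurements."""
    x = np.asarray(x, dtype=float)
    mu0 = x.mean() if mu0 is None else mu0              # baseline / target mean
    sigma = x.std(ddof=1) if sigma is None else sigma
    z = np.empty_like(x)
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev               # exponentially weighted moving average
        z[i] = prev
    k = np.arange(1, x.size + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * k)))
    return z, mu0 + half_width, mu0 - half_width
```

Calling `ewma_chart(measurements)` on a series of individual readings returns the EWMA values along with their upper and lower limits, ready to be plotted against the data.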

In many applications, variation should be decomposed so that we understand the variation within a sample (Range Within charts) and the variation between samples (Range Between charts). Essentially, there are at least two significantly different sources of variation in the system. Many common production methods introduce multiple sources of variation, and traditional Xbar and R charts are misleading in these cases.
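One common way to separate the two components, sketched below with simulated data, is to estimate the within-sample standard deviation from the average range (using the published constant d2 = 2.326 for samples of five) and then back out the additional between-sample variation from the spread of the sample averages. The data and constants here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
subgroups = rng.normal(100.0, 1.0, size=(30, 5))       # within-sample variation
subgroups += rng.normal(0.0, 0.8, size=(30, 1))        # extra sample-to-sample variation

n = subgroups.shape[1]
d2 = 2.326                                             # published constant for n = 5
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()

sigma_within = rbar / d2                               # variation inside each sample
var_between = max(subgroups.mean(axis=1).var(ddof=1) - sigma_within**2 / n, 0.0)
sigma_between = var_between ** 0.5                     # additional variation between samples
print(sigma_within, sigma_between)
```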

Many applications require standardized charts which account for differences in sample sizes or different weighting schemes.

The use of capability indices such as Cp, Cpk, Pp, and Ppk is almost always misleading. Analysts often try to assess process capability without ever establishing process stability. Many analysts also assume the data follow a Normal distribution when they usually do not, so gross errors in the estimates are made unless the non-Normal data is analyzed properly. Simple and superior methods for assessing capability are available, yet they are rarely used. For example, focusing on the actual data distribution, its standard deviation, its median, and the proportion of parts exceeding specification limits would serve us much better.
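For concreteness, here is a minimal sketch that computes the classical indices alongside the direct summaries suggested above; it assumes the data come from a demonstrably stable process, which is exactly the step that is so often skipped, and the Cp and Cpk values further assume roughly Normal data.

```python
import numpy as np

def capability_summary(data, lsl, usl):
    """Classical indices plus the direct summaries the text recommends."""
    data = np.asarray(data, dtype=float)
    mean, sd = data.mean(), data.std(ddof=1)
    return {
        "Cp":  (usl - lsl) / (6 * sd),                    # assumes a stable, Normal process
        "Cpk": min(usl - mean, mean - lsl) / (3 * sd),
        "mean": mean,
        "median": float(np.median(data)),
        "std": sd,
        "fraction_out_of_spec": float(np.mean((data < lsl) | (data > usl))),
    }
```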

These mistakes are serious in the sense that they result in erroneous conclusions, manufacturing problems, premature product failures, dissatisfied customers, high warranty costs, and even product liability suits.

The Upshot

Proper application of SPC aids in our understanding of system variation and indicates when that variation increases or decreases. This knowledge puts you – not your system – in control.

Allise Wachs, PhD
Integral Concepts, Inc.

Integral Concepts provides consulting services and training in the application of quantitative methods to understand, predict, and optimize product designs, manufacturing operations, and product reliability. www.integral-concepts.com

About DataNet Quality Systems

DataNet Quality Systems empowers manufacturers to improve products, processes, and profitability through real-time statistical software solutions. The company’s vision is to deliver trusted and capable technology solutions that allow manufacturers to create the highest quality product for the lowest possible cost. DataNet’s flagship product, WinSPC, provides statistical decision-making at the point of production and delivers real-time, actionable information to where it is needed most. With over 2500 customers worldwide and distributors across the globe, DataNet is dedicated to delivering a high level of customer service and support, shop-floor expertise, and training in the areas of Continuous Improvement, Six Sigma, and Lean Manufacturing.