Statistical Process Control (SPC) is a methodology of quality control based on statistical
methods. SPC measures and controls the quality of manufacturing processes by using control charts, calculating statistical indicators, and taking improvement steps where necessary.
In other words, SPC:
- Measures (measurable characteristics).
- Calculates (indicators like mean, spread, process capability, etc.).
- Assesses and Improves (to leave the process as it is, or intervene if needed).
The principles of SPC were developed by Dr. Walter Andrew Shewhart, who published his book "Economic Control
of Quality of Manufactured Product" in 1931. This was the point at which quality control stepped to the next level, using statistical methods and
analysis. Later on, the methodologies of SPC were further improved by Dr. William Edwards Deming.
As the title of Dr. Shewhart's book suggests, the intention of statistical control was to create an
economically viable procedure that keeps the reliability of testing at a high level while decreasing cost.
Used in almost all industrial sectors, SPC has become a foundation stone of industrial quality
control. Conducting SPC in a manufacturing environment has several major benefits, for example:
- Detects process changes and fluctuations.
- Indirectly improves efficiency by reducing process variation.
- Reduces scrap, waste and cost.
- Gives inputs for intervention (what has the highest impact on our process, and where we should improve).
- Serves as an indicator of process health (whether we are on the right track).
The general intention of SPC is to continuously measure the capability and conformance of our manufacturing
processes (influenced by random effects), and to screen out uncontrolled external effects that have a negative impact on the process. To go deeper into SPC, we first have to review some basic terms:
Data set: the measured values of a given characteristic of random samples.
Control chart: used for recording, visualizing and evaluating the measured values of a given characteristic (e.g. voltage, current, distance, etc.), compared to the control limits.
Stability of process: in statistical process control, stability means low variation. A stable process is precise with low variance, and the measured characteristics are
in an acceptable range.
Variance / Variation (σ²): is calculated by taking the differences between each value of the data set and the mean, squaring the differences and dividing the
sum of the squares by the number of values in the data set.
Standard deviation (σ, sigma): is equal to the square root of the variance.
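As a small illustration (not part of the original article), these two definitions can be computed directly; the measurement values below are made up:

```python
import math

def variance(data):
    """Population variance: mean of squared deviations from the mean."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data) / len(data)

def std_dev(data):
    """Standard deviation: square root of the variance."""
    return math.sqrt(variance(data))

measurements = [4.9, 5.1, 5.0, 4.8, 5.2]  # illustrative sample values
print(variance(measurements))   # population variance of the sample
print(std_dev(measurements))    # its square root
```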
Common cause variation: we call a cause a "common cause" when it brings natural, inherent and predictable variation into the process (noise), but only with a small
influence. The variation caused by common causes is predictable (based on past experience), and is called "controlled variation", or "chance cause" according to Dr. Shewhart. Common cause factors (X's) are the inputs
with low impact.
Special cause variation: uncontrolled variation is caused by one or more non-random "special causes", which are more than consistent noise and influence the process with
extraordinary weight. Special cause variation is unpredictable and chaotic. Dr. Shewhart named it "assignable variation", while Dr. Deming called it "special cause". Special cause factors (X's) are the few
inputs that have a high impact on the process results (Y's).
Common and special cause variation (Source: qMindset.com; deming.org)
Capability of process: the process is capable, if all requirements (based on specification) are met.
Upper Specification Limit (USL): the highest permissible value of a given characteristic.
Lower Specification Limit (LSL): the lowest permissible value of a given characteristic.
Tolerance range (T): the distance between USL and LSL (T = USL – LSL).
Upper control limit (UCL): the limit, which triggers action if exceeded upwards.
Lower control limit (LCL): the limit, which triggers action if exceeded downwards.
Spread: the quantity (amount) of variability in a given population.
Mean / Average: the arithmetic mean of the data set.
Median: the middle value of the data set.
Normal distribution: the probability density of normal distribution is as follows:
The normal distribution (Source: qMindset.com; Gaussian distribution by Johann Carl Friedrich Gauss)
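The density formula itself appeared only as a figure in the original; for reference, the Gaussian probability density function with mean μ and standard deviation σ is:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```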
The most important points are to keep the variance as low as possible, and to stay in the middle of the tolerance
range. On top of that, we have to note that stability or capability by itself is not enough: the process needs both to produce proper
characteristics consistently. Features of a stable and capable process:
- Mean is at the nominal specification, with very low variance.
- No typical trend is visible (no systematic variation in the means).
- Sample variation and the variation of the total population are not significantly different.
Let’s take some examples (see chart):
Drunken bowman: he shoots at the target randomly; the variation of his shots is high, and on top of that, the
location of the shots is incorrect, as he barely hits the board. He is neither capable nor stable.
Lucky drunken bowman: although the variation of his shots is still high, somehow he hits the board every time.
He is capable, but not stable.
Skilled bowman with wrong glasses: his skill is superb and his hits are very near to each other, but because of
the distorting lenses in his glasses, the shots systematically go to the upper right side of the board. He is stable, but not capable.
Robin Hood: with his perfect skill, he is able to systematically find the middle of the target. His shots
have very low variance, and the mean is exactly in the middle. He is both stable and capable.
The visualization of stability and capability (Source: qMindset.com)
One side of SPC is its development before starting a new manufacturing process (or changing an existing
one), while the other side is its execution.
Development - the definition of an SPC procedure contains the following steps and elements:
- Determination of product characteristics to be controlled (based on: customer requirements, FMEA, risk analysis, function,
safety relevance, reliability of the manufacturing technology).
- Definition of a capable measurement or testing method and technology (it must be able to measure and visualize the characteristic, and be practical to handle).
- Testing / verification of measurement system.
- Selection of control charts and calculations to be used.
- Definition of control intervals (time-frames, number of samples to be measured, frequency of sampling).
- Stipulation of control limits, and intervention points.
Execution - a pre-defined statistical process control consists of the following steps (statistical software
performs some of these points automatically):
- Measurement of random or pre-selected samples.
- Registering of values on the control chart.
- Evaluation of values compared to the specification (upper specification limit, lower specification limit).
- Calculation of spread, means, variance and standard deviation.
- Analysis of trends (continuously increasing / decreasing values, periodic peaks, etc.).
- Calculation of process capability (Cp, Cpk) or process performance (Pp, Ppk).
- Comparison of sub-groups with statistical tests.
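The calculation steps above can be sketched in a few lines of Python. The sample values and specification limits are illustrative, and the control limits follow the common Shewhart mean ± 3σ convention, which is an assumption, since the article does not fix a rule:

```python
import math

samples = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.04, 9.99]
USL, LSL = 10.15, 9.85  # specification limits (illustrative)

n = len(samples)
mean = sum(samples) / n
# Sample standard deviation (divisor n - 1)
sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))

# Shewhart-style control limits (assumed 3-sigma convention)
UCL = mean + 3 * sigma
LCL = mean - 3 * sigma

# Process capability indices
Cp = (USL - LSL) / (6 * sigma)
Cpk = min(USL - mean, mean - LSL) / (3 * sigma)

out_of_control = [x for x in samples if x > UCL or x < LCL]
print(f"mean={mean:.4f} sigma={sigma:.4f} Cp={Cp:.2f} Cpk={Cpk:.2f}")
print("out-of-control points:", out_of_control)
```

A real implementation would normally work on sub-group means rather than individual values, but the structure of the calculation is the same.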
Measuring, registering and comparing the given characteristic of random samples is the simplest type
of SPC, while more complex methods exist, such as frequent calculation and comparison of means, spreads, variances and process capability, and performing statistical tests (e.g. F-test, Z-test, Chi-square, etc.).
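As an illustration of such a test, here is a one-sample Z-test (assuming the process standard deviation is known), built only on the standard library; the numbers are made up:

```python
import math

def z_test(sample_mean, target, sigma, n):
    """Two-sided one-sample Z-test: is the sample mean
    significantly different from the target value?"""
    z = (sample_mean - target) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = z_test(sample_mean=10.05, target=10.00, sigma=0.10, n=25)
print(f"z={z:.2f}, p={p:.4f}")  # z=2.50, p≈0.0124: a significant shift at the 5% level
```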
Before starting and sustaining a proper SPC, the measurement method must be defined. There is no SPC (and no capability study) without a capable measurement. SPC is very closely connected with Process Capability (Cp) and Measurement System Analysis (MSA).
Use systematic process capability studies (such as SPC) to continuously verify your process. It can save you
from tons of scrap, rework and customer claims.
During the process development phase in the APQP
framework, the control / testing concept of the process must be
determined. In various cases, such as safety or governmental relevance, process control must be performed with 100% testing instead of random sampling and SPC. SPC can never reach the
confidence level of 100% testing.
Implement and use statistical software that automatically calculates process indicators and alerts you
when negative trends or discrepancies are detected. It not only helps quality control, but also replaces manual calculations,
which take time and money.
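A minimal sketch of one such automated alert is the common "run" rule: a fixed number of consecutive points on the same side of the center line suggests a systematic shift. The threshold of 8 points is a frequently used convention, not something the article specifies:

```python
def run_alert(values, center, run_length=8):
    """Flag a systematic shift: True if `run_length` consecutive
    points fall on the same side of the center line."""
    streak = 0
    last_side = 0
    for v in values:
        side = 1 if v > center else -1 if v < center else 0
        streak = streak + 1 if side == last_side and side != 0 else 1
        last_side = side
        if side != 0 and streak >= run_length:
            return True
    return False

drifting = [10.01, 10.02, 10.01, 10.03, 10.02, 10.04, 10.03, 10.02]
print(run_alert(drifting, center=10.00))  # True: 8 points above center
```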
If you cannot measure an important characteristic directly, you have to find another measurable
characteristic that is closely related to the original one.
When the SPC concept of a given process is being developed, choosing a proper sampling interval
(the time between two samples) is important. The shorter the interval, the higher the chance to detect anomalies (and to
reduce waste, scrap and rework).