Introduction
It was the engineer W. A. Shewhart who developed the principle of control charts in the 1920s. At the time, the management of the Hawthorne plant of the Western Electric Company in Chicago (an American manufacturer of telephone equipment employing 46,000 people) wanted to achieve uniformity in its products in order to retain its customers^{1}. It set up the world's first quality assurance department. The goal was noble, and the idea was to take action as soon as a product fell outside the norm.
However, they quickly realized that, far from improving, things were only getting worse. They called in Dr. Shewhart, who worked at the newly created Bell Telephone Laboratories.
To minimize risk by reducing the occurrence of errors, Shewhart created control charts. His guiding principle is that the natural, random variability of a process must be distinguished from its accidental variability. The point is not to verify that there is no variability in the process, but to measure and control it, since variability can never be completely eliminated.
Dr. Shewhart summed up the principle of control charts as follows:
"Bringing a process into a state of 'statistical control', where there are only natural variations, and keeping it under control, is necessary to predict results and manage a process effectively."
The aim is to distinguish "normal" variations from "exceptional" variations in the process, and to answer two questions:
Has my process changed?
Is my process stable and predictable?
The principle
- A central line, which represents the average and identifies the "location" of the process, to detect shifts and drifts.
- An upper and a lower limit, placed equidistant at 3σ from the central line, which measure the level of variability of the process.
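These two elements can be computed directly from data. Below is a minimal sketch in Python (standard library only), using the sample standard deviation as a simplified estimate of σ; a production I-mR chart would normally estimate σ from the average moving range instead:

```python
from statistics import mean, stdev

def control_limits(values):
    """Center line and 3-sigma control limits for an individuals chart.

    Simplified sketch: sigma is estimated with the sample standard
    deviation; a production I-mR chart would normally estimate it from
    the average moving range (mR-bar / 1.128) instead.
    """
    center = mean(values)
    sigma = stdev(values)
    return center - 3 * sigma, center, center + 3 * sigma

# Example: a stable process centered on 10.0
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
lcl, cl, ucl = control_limits(data)
print(cl)          # center line = mean of the data
print(lcl, ucl)    # symmetric 3-sigma limits around the center
```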
Why are the limits at 3 σ?
First, statistically, a span of 6σ (the mean ± 3σ) contains 99.73% of the expected values. In other words, there is only a 0.27% chance that the next value falls outside it. This is small enough to say that if a point lies outside these limits, there must be an identifiable cause.
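This 0.27% figure can be checked against the standard normal distribution; a quick sketch in Python:

```python
from statistics import NormalDist

nd = NormalDist()          # standard normal law
p_out = 2 * nd.cdf(-3)     # probability of falling outside mean +/- 3 sigma
print(round(100 * p_out, 2))  # -> 0.27 (%)
```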
Nevertheless, 3σ remains an "arbitrary" value; the confidence we can place in it ultimately rests on experience. This tool has been used for decades, and experience proves, day after day, chart after chart, that this choice is correct^{2}.
"Control limits should not be associated with the voice of the customer, nor with the search for problems or special causes. That is why he used the 3σ limits. More than 50 years of experience show us that he was right." W. E. Deming
Why do we talk about a 1.5σ shift?
When DPMO values are translated into a sigma level, the conversion formula includes a 1.5 term. Its origin lies in the accuracy of control charts: in practice, a process that meets 6 sigma in the short term may not hold it in the long term, and would then run at 4.5 sigma.
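The conversion from DPMO to a sigma level, 1.5 shift included, can be sketched as follows (`dpmo_to_sigma` is an illustrative helper name, not a standard API):

```python
from statistics import NormalDist

def dpmo_to_sigma(dpmo, shift=1.5):
    """Short-term sigma level for a given DPMO, adding back the
    conventional 1.5-sigma long-term shift (illustrative helper)."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

# The classic Six Sigma figure: 3.4 DPMO corresponds to a 6-sigma level
print(round(dpmo_to_sigma(3.4), 2))  # -> 6.0
```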
The 2 types of variations
Natural Variation due to common causes
Every process necessarily has some natural variability (called "noise"), however tiny, which can never be removed. The whole issue is to reduce and control it. It results from the cumulative effect of many small uncontrollable causes, called common or random causes.
Some examples of common causes:
- Organizational problems: lack of training, lack of supervision, communication problems, procedures not up to date…
- Machine or product not suited to the situation: poor design, poor condition, vibration, process not complying with the requirements, wrong adjustments…
- Poor working conditions: lack of lighting, vibrations, humidity, congested work area, ergonomic problems, supply problems…
It should also be noted that, according to W. E. Deming, about 94% of problems are in reality "natural" variations of the processes^{3}. In other words, according to Deming, the majority of problems are due to management problems, not to quality problems.
Accidental Variation due to special causes
Accidental variation (the "signal") is exceptional variation. These exceptional variations are due to special (assignable) causes and can be traced back to their source.
This source can be very diverse: human error, raw-material quality problem, breakdown… The whole issue is to identify these causes in order to eliminate them.
1. Choose the control parameters
The parameters must be relevant enough to ensure that the process is effectively steered. There are several selection criteria:
- Simplicity: the parameter must be easy to measure. Taking the sample should not take too long, and the staff must be able to do it.
- Importance to the customer: prefer a parameter that has a direct impact on quality as perceived by the customer.
- Centrality: in some cases, the parameters are correlated: if the first one does not conform, the others will not either. It is best to choose the central parameter that requires the least sampling while carrying the maximum information.
- History: there is little point in monitoring a parameter that has never caused a problem. It is more relevant to put under control a parameter known to be problematic.
- Data type: statistically, it is always better to survey quantitative data (a weight, a diameter…) rather than qualitative data (good / not good…). As far as possible, prefer quantitative variables, which allow the use of more efficient charts.
2. Set up the sampling mode
As in any data-collection procedure intended for statistics, the basic principles of sampling plans must be adhered to.
2.1 The method of sampling
In the specific case of control charts, the sampling plan relies on a random probabilistic method. The random draw is carried out in one of 2 ways:
- Either a sample is taken at regular time intervals. For example, we decide to take 5 samples every hour, regardless of the quantity produced over the last hour. This approach is used to truly follow the evolution of the process.
- Or a sample is taken based on the quantity produced. For example, we decide to take 5 samples every 1,000 parts or every production batch. This approach is used if one of the purposes of the monitoring is to accept or reject the production batch.
2.2 Calculation of sample size
Historically, the sample size is 5, for a simple reason: at the time, the charts were filled in by hand, and with 5 values the average is very convenient to compute, since it is enough to multiply the sum by 2 and divide by 10.
Keep in mind that:
- The number of samples collected in each subgroup is usually 4 or 5, for 2 reasons^{6}: first, for the reading to be representative, the sample must be taken over a short period of time, failing which the process may undergo changes that would distort the results; second, and less statistically, the larger the samples, the more the sampling costs.
- The sample size should be the same at each sampling. Failing that, the largest sample must not be more than 2 times the average size, and the smallest must not be less than half of it. For attribute charts, the sample size should not vary by more than 25% from the reference size.
In any case, it is practice that will indicate the correct sample size. For example, when facing very low variability (less than 1 standard deviation), control charts lack sensitivity to such a small dispersion, and the sample size must be large enough to detect it.
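The link between sample size and sensitivity can be illustrated with the standard power formula for an Xbar chart with 3σ limits (a sketch, assuming a known and stable σ):

```python
from statistics import NormalDist

phi = NormalDist().cdf

def detection_probability(shift_sigmas, n):
    """Probability that an Xbar chart with 3-sigma limits flags a mean
    shift of `shift_sigmas` (in sigma units) on the next subgroup of
    size n (standard power formula, assuming a known stable sigma)."""
    d = shift_sigmas * n ** 0.5
    return phi(-3 + d) + phi(-3 - d)

# A 0.5-sigma shift is almost invisible with n = 5 ...
print(round(detection_probability(0.5, 5), 3))
# ... but is caught more than half the time with n = 50
print(round(detection_probability(0.5, 50), 3))
```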
2.3 Sampling Frequency
To determine the frequency, specialists have defined charts. In practice, however, empiricism is preferred: the frequency depends on the expected and current quality levels, the production rate, the frequency of adjustments, the faults… and, in general, on the means one is willing to commit.
However, a first approach is to use the Cavé rule. Mainly linked to the actual frequency of adjustments, this rule provides a first basis for reflection, through the following formulas^{4}:

For high-rate production*: $F = \frac{1}{T}\sqrt{\frac{N \times R}{n}}$

For low-rate or continuous production: $F = p \times \frac{R}{T}$
- F: number of samples to take per unit of time
- T: time period used for the calculation
- N: number of parts produced over the period T
- n: size of each sample
- p: average number of samples taken between 2 successive adjustments
- R: average number of adjustments over the period T
* Note that to determine whether production is high- or low-rate, the condition is as follows:
- if N > p^{2} × n × R: high rate
- if N < p^{2} × n × R: low rate
Other rules:
- A rule based on experience: "the frequency of corrective actions on a process must be at least 4 times lower than the sampling frequency"^{5}.
- For very low-rate production, if the sampling frequency estimated with the Cavé rule leads to controlling more than 50% of the production, then a 100% control should be carried out.
- We will often choose to start with a fairly high frequency, then reduce it as improvements take hold.
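The Cavé rule and its high/low-rate condition fit in a small helper. The formulas below are reconstructed from the worked examples that follow, and `cave_frequency` is an illustrative name:

```python
import math

def cave_frequency(N, T, n, p, R):
    """Sampling frequency per unit of time following the Cave rule
    (formulas reconstructed from the worked examples; illustrative).

    N: parts produced over the period T (None if undetermined),
    n: sample size, p: average number of samples between 2 adjustments,
    R: average number of adjustments over T.
    """
    if N is not None and N > p ** 2 * n * R:   # high-rate production
        return math.sqrt(N * R / n) / T
    return p * R / T                           # low-rate or continuous

# Gas-spring example: N = 10 000/week, T = 7 days, n = 5, p = 4, R = 14/week
print(round(cave_frequency(10_000, 7, 5, 4, 14), 1))  # samples per day
```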
Some examples
| | Gas spring | Batch of pharmaceuticals | Plane | Chemical product |
|---|---|---|---|---|
| Average rate N | 10 000 per week | 190 per month | 30 per month | Undetermined |
| Time period T | 7 days | 20 days | 20 days | 30 days |
| Average number of adjustments R | 14 per week | 6 per month | 4 per month | 8 per month |
| p (chosen arbitrarily) | 4 | 4 | 4 | 5 |
| Sample size n | 5 | 3 | 1 | 6 |
| Rate type | $p^2 \times n \times R = 1120 < N$: high rate | $p^2 \times n \times R = 288 > N$: low rate | $p^2 \times n \times R = 64 > N$: low rate | Continuous production |
| Calculated sampling frequency F | $\frac{1}{7}\sqrt{\frac{10000 \times 14}{5}} = 23.9$ per day | $4 \times \frac{6}{20} = 1.2$ per day | $4 \times \frac{4}{20} = 0.8$ per day | $5 \times \frac{8}{30} = 1.33$ per day |
| Frequency chosen | 1 sample per hour | 1 sample per day | – | 2 samples per day |
| % of production controlled | 8.4% | 47.4% | The calculated frequency would control more than 50% of the production, so control 100% | N/A |
2.4 Validate the collection
It is necessary to ensure that the data collected are reliable; otherwise the results will be degraded. This is verified with a Gage R&R or a Kappa test.
2.5 Validating data distribution
Control charts are built on the assumption that the data are normally distributed. To interpret the results correctly, it is necessary to ensure that this condition is met:
- by performing a normal-distribution goodness-of-fit test;
- below 4 samples per subgroup, validating the normality hypothesis is important^{7};
- beyond 4 samples per subgroup, the normality hypothesis can be considered "valid"^{8}, since the subgroup means approach normality by the central limit theorem.
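As a quick screening sketch (not a substitute for a proper goodness-of-fit test such as Shapiro–Wilk or Anderson–Darling), a Jarque–Bera-style skewness/kurtosis check can be written with the standard library alone:

```python
from statistics import mean

def jarque_bera(x):
    """Jarque-Bera statistic: a quick skewness/kurtosis normality check.
    Values well above ~6 suggest clear non-normality. This is only a
    screening sketch, not a proper goodness-of-fit test."""
    n = len(x)
    m = mean(x)
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3          # excess kurtosis
    return n / 6 * (skew ** 2 + kurt ** 2 / 4)

# Roughly symmetric measurements -> small statistic
sample = [9.8, 9.9, 10.0, 10.0, 10.1, 10.2, 10.0, 9.9, 10.1, 10.0]
print(round(jarque_bera(sample), 2))
```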
3. Choose the Control Charts
Control charts for measurements – for medium to large series and quantitative parameters:

- Measurement CUSUM and EWMA charts: sample size 1 or more (variable), sensitivity < 1σ, slow adjustment.
- I-mR chart: sample size 1 (constant), sensitivity around 1σ, fast adjustment.
- Xbar-R chart: sample size 2 to 25, sensitivity > 1.5σ, fast adjustment.
- Xbar-S chart: sample size 6 or more, sensitivity > 1.5σ, fast adjustment.

Control charts for attributes – samples of several hundred preferably, but a few dozen if one cannot do better:

- NP chart (constant sample size) and P chart (variable sample size): count the number of rejects in each sample (binary good / not good), sensitivity around 1σ.
- C chart (constant sample size) and U chart (variable sample size): count the number of defects, the same product possibly having several defects, sensitivity around 1σ.
- Attribute EWMA and attribute CUSUM charts: sensitivity < 1σ, slow adjustment.
* Sensitivity is the performance level of a control chart: the lower the sensitivity threshold, the more efficient the chart, since it can detect weaker drifts and therefore raise an alert as soon as a problem arises.
Note that other types of charts exist: Shainin control charts, the χ^{2} chart, the Roberts–Shiryaev chart, the FMA chart, and the T^{2} (Hotelling) chart.
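The selection logic above can be mirrored by a small decision function. This is only a sketch: the n ≤ 8 cutoff between Xbar-R and Xbar-S is a common convention, an assumption rather than a rule taken from the table.

```python
def choose_chart(data_type, sample_size=1, constant_size=True,
                 count_defects=False):
    """Rough chart-selection sketch mirroring the table above.

    The n <= 8 cutoff between Xbar-R and Xbar-S is a common convention,
    an assumption rather than a rule taken from the table.
    """
    if data_type == "measurement":
        if sample_size == 1:
            return "I-mR"
        return "Xbar-R" if sample_size <= 8 else "Xbar-S"
    if count_defects:                       # defects per unit
        return "C" if constant_size else "U"
    return "NP" if constant_size else "P"   # rejects good / not good

print(choose_chart("measurement", sample_size=5))        # -> Xbar-R
print(choose_chart("attribute", constant_size=False))    # -> P
```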
4. Build the Control Charts
They are built in two phases, according to the following criteria:
- Phase I: statistically, we do not yet have enough data for the calculations to be reliable. We are in Phase I when we have fewer than 15 data points or subgroups for measurement charts. For attribute charts, standard X06-032-1 recommends treating the chart as Phase I up to at least 300 data points.
- Phase II: we have statistically enough data to ensure that the distribution approaches a normal law.
The consequence concerns interpretation:
- No actions should be concluded from Phase I charts.
- Only Phase II control charts are reliable.
5. Interpreting a control chart
5.1 – Criteria for "not acting"
The ideal "shape" of a control chart meets the following criteria:
- About 2/3 of the points lie in zones A and B.
- Few points lie in zones E and F.
- The points are evenly distributed above and below the central line.
- No points lie outside the control limits.
A normal profile primarily means that the process is under control: it is stable and not disturbed by external causes. It tends to repeat itself day after day, and is therefore predictable. Its underlying characteristics can be determined by calculation: the center, the shape, and the dispersion of the distribution.
5.2 – Criteria for “acting”
An "abnormal" profile indicates that the process is out of control: it is not predictable.
Although the criteria for identifying an abnormal profile are identical for all control charts, the interpretation differs according to the type of chart. We describe the criteria below, but only indicate a general "rule" for the action to be taken.
Below are the 8 criteria defined by Western Electric^{9}. The order of importance, however, is the one defined by Nelson^{10}, who states the following principles:
- Rules 1 to 4 are applicable at all times. The risk of seeing an event when there is none is on the order of 1%.
- Rules 5 and 6 are to be applied if it is economically worthwhile to get an alarm as early as possible, but the probability of seeing an event when there is none rises to 2%.
- Rules 7 and 8 are useful when setting up control charts. Test 7 reacts when the variations come from 2 populations, whereas test 8 reacts when the variations come from the same population.
| | Description* | Probability of occurrence** | Type of action |
|---|---|---|---|
| Criterion 1 | One point is outside the control limits. | 2.7 per thousand | Immediate adjustment is necessary. |
| Criterion 2 | 9 points in a row on the same side of the center line. | 3.91 per thousand | An adjustment is necessary to re-center the process. |
| Criterion 3 | 6 consecutive points steadily increasing or decreasing. | 2.78 per thousand | The process is drifting, most likely because some element is deteriorating. It must be found and fixed. |
| Criterion 4 | 14 points in a row alternating up and down. | 4.57 per thousand | The situation is erratic; there is a clear lack of control. |
| Criterion 5 | Of the last 3 points, 2 are in zone E or F. | 3.06 per thousand | A new sample is needed to see whether the process returns between −2σ and +2σ. If not, an adjustment will be necessary. |
| Criterion 6 | Of the last 5 points, 4 are in zone C or D or beyond. | 5.53 per thousand | There is a lot of dispersion in the process; an impending failure is probably the cause. |
| Criterion 7 | 15 consecutive points within zones A and B. | 3.26 per thousand | It would be interesting to run a t-test comparing these 15 points with the previous 15, to see whether there are 2 distinct groups of data showing that improvement actions have paid off. |
| Criterion 8 | 8 consecutive points on both sides of the center line, but none in zone A or B. | 0.1 per thousand | There is a lot of dispersion. It becomes necessary to act and find the cause. |
In any case, when one of these situations appears, a new sample must be taken immediately. The situation is considered back to normal only if the result falls in zone A or B.
If not, the causes must be investigated immediately.
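The first criteria are straightforward to automate. Below is a sketch checking criteria 1 to 3 on a series of subgroup means (production SPC software would test all eight):

```python
def nelson_alerts(points, center, sigma):
    """Check the first three Nelson criteria on a series of subgroup
    means (a sketch; real SPC software tests all eight rules)."""
    alerts = []
    for i, x in enumerate(points):
        # Criterion 1: one point beyond the 3-sigma limits
        if abs(x - center) > 3 * sigma:
            alerts.append((i, "criterion 1"))
        # Criterion 2: 9 points in a row on the same side of the center
        if i >= 8:
            w = points[i - 8:i + 1]
            if all(v > center for v in w) or all(v < center for v in w):
                alerts.append((i, "criterion 2"))
        # Criterion 3: 6 consecutive points steadily rising or falling
        if i >= 5:
            w = points[i - 5:i + 1]
            d = [b - a for a, b in zip(w, w[1:])]
            if all(step > 0 for step in d) or all(step < 0 for step in d):
                alerts.append((i, "criterion 3"))
    return alerts

# A steady upward drift triggers criterion 3 from the sixth rising point
series = [0.0, 0.2, 0.5, 0.9, 1.4, 2.0, 2.7]
print(nelson_alerts(series, center=0.0, sigma=1.0))
```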
* Other rules
According to standard NF X06-031
- A point is outside the limits.
- 9 points on the same side of the center line.
- 6 consecutive points ascending or descending.
- Of the last 3 points, 2 are located in zone E or F.
According to the AIAG
- 1 point out of bounds.
- 7 consecutive points below or above the center line.
- 7 consecutive points increasing or decreasing.
According to Juran
- 1 point out of bounds.
- Of the last 3 points, 2 are in zone E or F.
- Of the last 5 points, 4 are in zone C or D or beyond.
- 6 consecutive points ascending or descending.
- 9 points on the same side of the center line.
- 8 consecutive points on both sides of the center line, but none in zone A or B.
According to Westgard
- A point is outside the limits.
- 2 points in a row in zone E or F.
- 4 points in a row in zone C or D or beyond.
- 10 points in a row on the same side of the center line.
- Of 2 points in a row, 1 is in zone E and the other in zone F or beyond.
- 7 points in a row ascending or descending.
** Calculating the probabilities of occurrence of Nelson's rules
Among the conditions of use of control charts, the data must follow a normal law. This condition is necessary because it determines whether the rules for "acting" are valid. Under the normal law, the probability that a point falls in a given zone is shown in the following diagram:
From this we deduce the probability of occurrence of each rule:
- Rule 1: 2 × P(beyond one limit) = 2 × 0.135% = 2.7 per thousand
- Rule 2: 2 × P(one side of the center line)^{9} = 2 × 0.5^{9} = 3.91 per thousand
- Rule 3: only 2 of the 6! possible orderings are strictly increasing or decreasing: 2/6! = 2.78 per thousand
- Rule 4: there are 398 721 962 alternating (zig-zag) orderings of 14 points: 398721962/14! = 4.57 per thousand
- Rule 5: 2 × [C(2; 3) × p^{2} × (1 − p) + p^{3}] with p = P(beyond 2σ on one side) = 0.02275, i.e. 2 × (3 × 0.02275^{2} × 0.97725 + 0.02275^{3}) = 3.06 per thousand
- Rule 6: 2 × [C(4; 5) × p^{4} × (1 − p) + p^{5}] with p = P(beyond 1σ on one side) = 0.158655, i.e. 2 × (5 × 0.158655^{4} × 0.841345 + 0.158655^{5}) = 5.53 per thousand
- Rule 7: P(zone A or B)^{15} = 0.68268^{15} = 3.26 per thousand
- Rule 8: P(outside zones A and B)^{8} = 0.31732^{8} = 0.1 per thousand
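These probabilities are easy to verify numerically; for example, rules 1, 2, 3, and 7 with the standard library:

```python
from math import factorial
from statistics import NormalDist

phi = NormalDist().cdf

p1 = 2 * phi(-3)                 # rule 1: one point beyond +/- 3 sigma
p2 = 2 * 0.5 ** 9                # rule 2: 9 points on the same side
p3 = 2 / factorial(6)            # rule 3: 6 strictly monotonic points
p7 = (phi(1) - phi(-1)) ** 15    # rule 7: 15 points within +/- 1 sigma

for p in (p1, p2, p3, p7):
    print(round(1000 * p, 2))    # per-thousand values
```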
6. Steering with a control chart
The whole point of control charts is to give managers the right signals for action. They make it possible to tell "true" problems from "false" ones, and to see whether our actions have really paid off…
There are two ways to misinterpret control charts, detailed below.
Over-react
In this case, a variation is attributed to a special cause when it actually belongs to the common causes. The process produces outside the specifications defined by the voice of the customer but within those defined by the voice of the process: in other words, the process is simply not capable. The traditional response is to make adjustments or take actions to get back inside the customer specifications.
We speak of over-reaction because the real problem is a non-capable process, or customer specifications that are too tight, which simple re-tuning or "short-term" actions cannot solve.
Under-react
A variation is attributed to a common cause when it actually comes from a special cause: exactly the opposite of the previous case. The sampling indicates that we are within the tolerances defined by the voice of the customer, yet outside the natural limits of the process. Because the process is capable, no one notices that it is actually drifting.
Generally, no action is taken, and one "waits" for a major failure or defect before acting.
In short…
To manage processes properly, one must be able to separate the voice of the customer from the voice of the process. Customer tolerances only allow sorting the good from the bad: they judge what has already been done, and therefore look backward.
The natural tolerances of the process, on the other hand, allow day-to-day steering and the earliest possible reaction, before a defect is produced. That is a proactive attitude.
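The two voices can be compared numerically: the voice of the process is the natural ±3σ spread, the voice of the customer is the specification interval, and their ratio is the classic Cp capability index. A sketch (`process_voices` is an illustrative name):

```python
def process_voices(process_mean, sigma, lsl, usl):
    """Compare the voice of the process (natural +/- 3-sigma limits)
    with the voice of the customer (specification limits LSL/USL).
    Returns the natural limits and the classic Cp index (sketch)."""
    lnl, unl = process_mean - 3 * sigma, process_mean + 3 * sigma
    cp = (usl - lsl) / (6 * sigma)
    return lnl, unl, cp

# Customer tolerance wider than the natural spread -> capable (Cp > 1)
lnl, unl, cp = process_voices(10.0, 0.1, 9.5, 10.5)
print(lnl, unl)        # natural limits, roughly 9.7 and 10.3
print(round(cp, 2))    # -> 1.67
```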
7. Standardize and improve
Data collection has become routine, the process is optimized and generates fewer and fewer problems. Monitoring and the number of samples can then be reduced.
Source
1 – R. W. Berger (2002) – The Certified Quality Engineer Handbook
2 – Western Electric (2012) – Manual of statistical quality control
3 – W. E. Deming (2000) – Out of the Crisis
4 – F. Boulanger, G. Chéroute, V. Jamil (2006) – Statistical control of processes
5 – M. Pillai (2005) – Applying statistical control of processes
6 – L. Alwan (2000) – Statistical process analysis
7 – C. D. Montgomery (1996) – The ASTM manual on presentation of data and control chart analysis
8 – E. G. Schilling, P. R. Nelson (1976) – The effect of non-normality on the control limits of X̄ charts
9 – Western Electric (1956) – Statistical Quality Control Handbook
10 – L. S. Nelson (1984) – Technical Aids
D. Griffiths, M. Bunder, C. Gulati, T. Onzawa (2010) – The probability of an out-of-control signal from Nelson's supplementary zig-zag test
T. Mariani, P. Mariani (2012) – Juggling Paradigms, example of a Six Sigma project sequence from A to Z
W. Shewhart (1931) – Economic Control of Quality of Manufactured Product
D. Howard (2004) – The Basics of Statistical Process Control & Process Behaviour Charting
S. Mantu (2004) – Operational guide to quality
F. A. Meyer (2014) – Apply the ToC Lean Six Sigma in Services
H. Toda (1958) – Band-score control charts
P. R. Bertrand (2007) – An introduction to quality control
Standard NF X06-031-0 – Control charts, part 0: general principles