Six Sigma offers many performance indicators to judge a company's ability to produce quality.
Introduction
Six Sigma is a methodology based on the number of defects our processes generate, and it uses statistical tools to calculate our performance level.
Defects Per Unit: DPU
The Defects Per Unit (DPU) represents the mean number of defects per unit produced. Because we count defects rather than defective units, several details matter:
- The same unit can have several defects.
- Defects that led to a scrap or a rework are both counted.
It is calculated in the following way:
DPU = (total number of defects + reworks) / total number of units
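As a minimal sketch, the DPU formula can be written in Python; the counts below (40 defects, 25 reworks, 500 units) are assumed figures chosen only to illustrate the calculation:

```python
# DPU: mean number of defects per unit produced.
# The counts below are illustrative assumptions, not figures from the text.
def dpu(total_defects: int, reworks: int, total_units: int) -> float:
    """DPU = (total number of defects + reworks) / total number of units."""
    return (total_defects + reworks) / total_units

print(dpu(total_defects=40, reworks=25, total_units=500))  # 0.13
```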
Defects Per Opportunity: DPO
The DPO is the logical continuation of the DPU, in the sense that it measures the number of defects relative to the total number of opportunities. It is calculated in the following way:
DPO = DPU/Number of opportunities per unit
To turn this ratio into the probability of having a part with a defect, we use the Poisson distribution, i.e.:
P(part with at least one defect) = 1 - e^(-DPU)
What is an opportunity?
An opportunity represents one of the possible defects identified by the quality department. For example, for a paint-finishing workstation, the quality department identified 4 possible defects:
- A wrong color
- A color applied unevenly
- Surface asperities
- Presence of dust
Thus, for a DPO calculation at this workstation, our number of opportunities per unit is 4.
The DPMO
The DPMO is the flagship indicator of Six Sigma, since the sigma level is deduced from its calculation. Literally, it translates into the number of defects per million opportunities. It is calculated in the following way:
DPMO = DPO * 1 000 000
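A sketch combining the two formulas: the 4 opportunities per unit come from the painting example above, while the 12 defects over 600 units are assumed figures.

```python
# DPO: defects divided by the total number of opportunities.
def dpo(defects: int, units: int, opportunities_per_unit: int) -> float:
    return defects / (units * opportunities_per_unit)

# DPMO: the DPO scaled to one million opportunities.
def dpmo(dpo_value: float) -> float:
    return dpo_value * 1_000_000

d = dpo(defects=12, units=600, opportunities_per_unit=4)
print(d)              # 0.005
print(round(dpmo(d)))  # 5000
```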
The PPM
Translated as Parts Per Million, it represents a failure rate in the same way as the DPMO. The difference is that we do not count against a number of defect opportunities: we count only the parts having one or more defects (scrapped or not) per million units.
PPM = (number of defective parts / total number of units) * 1 000 000
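The PPM formula sketched in Python, with assumed counts (3 defective parts out of 10,000 units):

```python
# PPM: defective parts (one or more defects each) per million units.
# Multiplying before dividing keeps the intermediate values exact integers.
def ppm(defective_parts: int, total_units: int) -> float:
    return defective_parts * 1_000_000 / total_units

print(ppm(defective_parts=3, total_units=10_000))  # 300.0
```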
Difference between DPMO and PPM
DPMO: we count defect opportunities: a product can have one or more defect opportunities, whether each defect resulted in a scrap, a rework, or just dissatisfaction.
PPM: a product is counted at most once as defective, whether its defects resulted in a scrap, a rework, or just dissatisfaction.
The cumulative overall yield
The cumulative overall yield measures the likelihood that a sequence of steps delivers a quality product right the first time. It is an indicator that highlights overall performance by taking all quality components (scraps and reworks) into account^{1}.
The Hidden Factory
This measure highlights the "hidden factory": when measuring productivity performance, we count the scraps, but more rarely the reworks. Reworks add no value and are often invisible because the product is ultimately judged good. Yet they generate variability and cost^{2}.
Example:
We are a sofa manufacturing company. We had to create a dedicated rework station because the finish of our products matters.
If all goes well, we can produce 50 sofas in a day of work. Our process consists of a frame manufacturing station, a cushion manufacturing station, an assembly/finishing station, and a rework station.
On a standard day, here are the numbers we have:
Computing performance with the traditional logic, we get 96% (indeed, we produced 48 good sofas against the target of 50). Very good performance, so we do not worry. Using the cumulative overall yield, we get only 43.21%. This is disturbing: less than half of our production passes through the different stages without a problem.
First Pass Yield
The quality performance of a single process step is calculated with the First Pass Yield (FPY). This measure assesses the initial effectiveness of a step of a multi-step production process: it gives the percentage of units that the step gets right the first time. The formula is:
FPY = 1 - % scrap/rework
With:
- FPY: First Pass Yield
- % scrap/rework = (number of scraps + number of reworks) / total number of products
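A sketch of the FPY computation, using the figures of the first process from the cost table later in this article (100 scraps and 107 reworks out of 1,000 units):

```python
# FPY = 1 - (scraps + reworks) / total units
def first_pass_yield(scraps: int, reworks: int, total_units: int) -> float:
    return 1 - (scraps + reworks) / total_units

fpy = first_pass_yield(scraps=100, reworks=107, total_units=1000)
print(round(fpy, 4))  # 0.793
```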
Example:
We have a three-step manufacturing process. 1,000 products enter the first step. After counting the defects, we get the following table:
Cumulative overall yield
The cumulative overall yield (Rolled Throughput Yield, RTY) is the product of the successive FPYs. It is calculated in the following way:
RTY_{N} = FPY_{1} * FPY_{2} * FPY_{3} * … * FPY_{N}
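The RTY formula as code; the three FPY values below are hypothetical:

```python
from math import prod

# RTY: multiply the FPY of every consecutive step.
def rolled_throughput_yield(fpys):
    return prod(fpys)

# Hypothetical three-step process with FPYs of 90%, 95% and 88%.
print(round(rolled_throughput_yield([0.90, 0.95, 0.88]), 4))  # 0.7524
```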
Note that if we want to generalize our model and calculate an estimated cumulative yield, we use the Poisson distribution. The estimated yield for one step is therefore:
R_{est} = e^{(-DPU)}
And the estimated RTY = R_{est1} * R_{est2} * R_{est3} * …
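A sketch of this Poisson-based estimate; the per-step DPU values are assumptions:

```python
from math import exp, prod

# Poisson estimate of one step's yield: probability of zero defects.
def step_yield(dpu: float) -> float:
    return exp(-dpu)

# Estimated RTY: product of the estimated step yields,
# which is also exp(-(sum of the DPUs)).
def estimated_rty(dpus):
    return prod(step_yield(d) for d in dpus)

print(round(estimated_rty([0.1, 0.2, 0.05]), 4))  # 0.7047
```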
Sample calculation
We are a pharmaceutical manufacturing company with an "internal" value chain of 3 consecutive processes:
We wish to manufacture 1,000 quality syrups. We know that:
- Each of our steps generates 10% of scrap and reworks.
- We can generate 3 types of defects at the filling step (non-conforming tightening torque, a cap generating plastic shavings, or a non-conforming quantity of liquid), 3 at the labelling step (missing label, bent label, bad overprinting) and 7 types of defects at the packaging step (damaged label, damaged case, unfolded leaflet…).
Counting the exact number of defects and reworks (additional quality controls, products put back on the line for various reasons, mistaken ejections…), we obtain the following table and results:
Given the number of scraps we generate along the process, to obtain 729 good parts we need 1,000 units as input.
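The 729-from-1,000 figure follows directly from losing 10% at each of the three steps:

```python
units = 1000
for _ in range(3):   # three consecutive processes
    units *= 0.9     # each step keeps 90% of its input
print(round(units))  # 729
```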
We realize that we generate many reworks. The main cause is that, in reality, we have little confidence in our processes: a whole series of additional checks is performed by the operator, who takes products off the line and carries out his own checks (tightening torque…) on top of all the automated checks performed by the equipment. We also put many products back on the line because, for example, a vial was not labelled due to an equipment error…
Of course, if we detail the costs, it does not amount to the same thing. Indeed, this non-quality costs us raw material, control equipment, labor, time…
If we do the calculation and the comparison, here is the cost table:
Process |
Cost of a Recovery in € |
Cost of a scrap in € |
Scrap number |
Number of Recovery recoveries |
Cost of traditional non-quality |
Cost of non-quality in 6 Sigma |
1 |
0,3 |
0,5 |
100 |
107 |
50 |
82 |
2 |
0,3 |
0,8 |
90 |
108 |
72 |
104 |
3 |
0,3 |
1,2 |
81 |
91 |
97 |
125 |
Total |
219 |
311 |
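A sketch recomputing the two cost columns from the table's figures (totals rounded to the nearest euro):

```python
# (rework_cost_eur, scrap_cost_eur, n_scrap, n_rework) for each process
processes = [
    (0.3, 0.5, 100, 107),
    (0.3, 0.8,  90, 108),
    (0.3, 1.2,  81,  91),
]

# Traditional view: only scraps are counted as non-quality.
traditional = sum(scrap * n_s for _, scrap, n_s, _ in processes)

# Six Sigma view: reworks are counted as well.
six_sigma = sum(scrap * n_s + rework * n_r
                for rework, scrap, n_s, n_r in processes)

print(round(traditional))  # 219
print(round(six_sigma))    # 311
```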
In total, we get a non-quality cost of €311, where traditionally we would be tempted to count only €219: a difference of about 30%.
Sources
1 – L. Webber, M. Wallace (2007) – Quality Control for Dummies
2 – F. W. Breyfogle (2008) – Improvement Project Execution
M. Pillai (2013) – Six Sigma: How to Apply It