The Wilcoxon test, created in 1945 by the American chemist and statistician Frank Wilcoxon, is also known as the Wilcoxon signed-rank test. It is not to be confused with the Wilcoxon–Mann–Whitney test: although the mechanism is similar, this test is dedicated to the analysis of paired data, whereas the Wilcoxon–Mann–Whitney test is used for independent samples.
This test is the non-parametric "twin" of Student's t-test for paired data.
When the conditions of the parametric test are met, it is nearly as powerful as Student's t-test; when they are not, it is much more reliable. Generally speaking, it is preferable to Student's t-test.
We consider a sample consisting of n pairs of observations. The test compares the differences within each pair of measurements.
It is suitable for any type of variable (quantitative, ordinal, binary…) as long as it is possible to determine, for each pair of observations, whether one value is greater than the other.
Step 1: Assumptions
The Wilcoxon test relies on comparing the distributions of the paired data through their ranks. In the case of a bilateral test, we pose:
H0: The 2 distributions are identical
H1: The 2 distributions are different
The test can also be unilateral. In this case, the hypotheses are:
H0: The 2 distributions are identical
H1: The values of sample 1 are greater than (or smaller than, depending on the chosen direction) those of sample 2
Step 2: Set up the test variable
In principle, we work on the differences within each data pair. We therefore form the variables Di, the signed differences between the 2 values of each pair (the absolute value |Di| will be used for ranking, while the sign is kept for the test):
D1 = x1 - y1 ; D2 = x2 - y2 …
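As a minimal sketch, with hypothetical "before"/"after" values, the differences can be computed as:

```python
# Hypothetical paired measurements: x = "before", y = "after"
x = [12.0, 15.0, 9.0, 11.0, 14.0]
y = [10.0, 15.5, 7.0, 11.0, 12.0]

# Signed differences D_i = x_i - y_i: the sign is kept for the test,
# while the ranking in Step 3 is done on |D_i|.
d = [xi - yi for xi, yi in zip(x, y)]
print(d)  # [2.0, -0.5, 2.0, 0.0, 2.0]
```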
It is common to name the X values "before" and the Y values "after". Indeed, pragmatically speaking, we measure the same element on 2 occasions (the principle of pairing), so X denotes the values of the first measurement and Y the values of the second.
Step 3: Calculate the rank sums T+ and T-
3.1 Remove zero differences
When calculating the Di, it is possible that some differences are null. The usual solution is to delete the observations in question: the effective number of pairs n is the number of observations for which Di is different from 0.
The ranks are calculated only on these individuals.
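A sketch of this filtering step, on hypothetical differences:

```python
# Step 3.1: drop the null differences and recompute the effective
# number of pairs n before ranking.
d = [2.0, -0.5, 2.0, 0.0, 2.0]  # hypothetical differences
d_nonzero = [di for di in d if di != 0]
n = len(d_nonzero)  # effective sample size: 4 here, not 5
```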
3.2 Identify the rank of each value
The rank of each value is determined relative to the whole set of sample values. Note that the "raw" rank is assigned according to the absolute value |Di|.
The difficulty arises when there are ties. In that case the mid-rank method is used: tied values are given the average of the ranks they would occupy.
For example:
- If 2 equal values occupy the 8th and 9th places, they are both given the rank 8.5.
- If 3 equal values occupy the 10th, 11th and 12th places, they are all given the rank 11.
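The mid-rank method can be sketched as a plain function (illustrative implementation, not taken from a library):

```python
def mid_ranks(values):
    """Return the rank of each value, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values equal to values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = ((i + 1) + (j + 1)) / 2  # average of positions i+1 .. j+1
        for k in order[i:j + 1]:
            ranks[k] = avg
        i = j + 1
    return ranks

# Two equal values in 8th and 9th place both receive rank 8.5:
print(mid_ranks([1, 2, 3, 4, 5, 6, 7, 9, 9]))  # last two entries are 8.5
```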
3.3 Calculate the rank sums T+ and T-
For our test, we calculate T+ and T-: respectively the sum of the ranks of the pairs with a positive difference and the sum of the ranks of the pairs with a negative difference.
Note that T- can be deduced from T+ via the formula: T- = n(n+1)/2 - T+.
The higher the value of T+, the more likely it is that the "before" values are larger than the "after" values.
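A minimal sketch of this step on hypothetical differences (tie-free for simplicity, so plain ranks suffice):

```python
# Hypothetical non-zero differences (Step 3.1 already applied)
d = [2.0, -0.5, 1.0, -3.0]

# Rank the absolute values |D_i| from 1 to n
abs_sorted = sorted(range(len(d)), key=lambda i: abs(d[i]))
ranks = [0] * len(d)
for rank, i in enumerate(abs_sorted, start=1):
    ranks[i] = rank

# Sum the ranks of the positive and of the negative differences
t_plus = sum(r for r, di in zip(ranks, d) if di > 0)
t_minus = sum(r for r, di in zip(ranks, d) if di < 0)

# T- can always be deduced from T+: T+ + T- = n(n+1)/2
n = len(d)
assert t_plus + t_minus == n * (n + 1) // 2
```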
Step 4: Determine the practical value
Case 1: n <= 15
The exact tables of Wilcoxon are used to deduce the practical value.
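Where no table is at hand, the exact null distribution of T+ can also be enumerated directly: under H0 each sign pattern on the ranks 1..n is equally likely. A sketch (assuming no ties, feasible for n ≤ 15):

```python
from itertools import product

def exact_p_upper(n, t_plus):
    """P(T+ >= t_plus) under H0, by enumerating all 2^n sign patterns."""
    count = 0
    for signs in product((0, 1), repeat=n):
        # T+ for this pattern: sum of the ranks carrying a positive sign
        t = sum(r for r, s in zip(range(1, n + 1), signs) if s)
        if t >= t_plus:
            count += 1
    return count / 2**n

# With n = 3, only the all-positive pattern reaches T+ = 6:
print(exact_p_upper(3, 6))  # 0.125
```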
Case 2: n > 15
The distribution of T+ is approximated by a normal law. Under H0, its mean and variance are given by the standard formulas:
E(T+) = n(n+1)/4 ; V(T+) = n(n+1)(2n+1)/24
In case of ties, the variance must be adjusted. The corrected formula subtracts a term for each group of tied values of size t:
V(T+) = n(n+1)(2n+1)/24 - Σ (t³ - t)/48
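These textbook formulas can be sketched as a small helper (the `tie_sizes` argument lists the size of each group of tied |Di| values):

```python
def wilcoxon_normal_params(n, tie_sizes=()):
    """Mean and variance of T+ under H0, with the usual tie correction."""
    mean = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    # Each group of t tied values reduces the variance by (t^3 - t)/48
    var -= sum((t**3 - t) / 48 for t in tie_sizes)
    return mean, var

print(wilcoxon_normal_params(20))        # (105.0, 717.5)
print(wilcoxon_normal_params(20, (2,)))  # one pair of ties: (105.0, 717.375)
```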
Step 5: Critical value
Case 1: n <= 15
The practical value is compared with the critical value read in the exact Wilcoxon table for the chosen risk α, according to the direction of the test: the quantile at α/2 (and symmetrically 1 - α/2) for a bilateral test, at α (or 1 - α) for a unilateral test.
Case 2 : n > 15
In this case, given the convergence of the distribution to a normal distribution, the critical value is calculated via the normal law. It depends on the direction of the test and is obtained through the function NORM.S.INV:
NORM.S.INV(1 - α/2) for a bilateral test, NORM.S.INV(1 - α) for a unilateral test
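The Python standard library offers a counterpart to Excel's NORM.S.INV, which can be used as a quick sketch:

```python
from statistics import NormalDist

# Standard normal quantiles for the critical value, alpha chosen at 5%
alpha = 0.05
z_bilateral = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided: about 1.96
z_unilateral = NormalDist().inv_cdf(1 - alpha)     # one-sided: about 1.645
print(z_bilateral, z_unilateral)
```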
Step 6: Calculate the p-value
The p-value is used to evaluate the risk level of the test. Since the ranking method "normalizes" the data, the p-value of a unilateral test is obtained via the formula:
p-value = 1 - NORMSDIST(ABS(practical value))
For a bilateral test, this quantity is doubled.
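The same computation can be sketched without Excel, using the standard library (NORMSDIST is the standard normal CDF, which `math.erf` gives directly):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF, the counterpart of Excel's NORMSDIST."""
    return (1 + erf(z / sqrt(2))) / 2

def p_value(practical_value):
    # One-sided p-value, mirroring p = 1 - NORMSDIST(ABS(practical value));
    # double it for a bilateral test.
    return 1 - norm_cdf(abs(practical_value))

print(p_value(1.96))  # about 0.025
```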
Step 7: Interpretation
| Test direction | Result | Statistical conclusion | Practical conclusion |
|---|---|---|---|
| Bilateral | Practical value < critical value at α/2 or practical value > critical value at 1 - α/2 | We reject H0 | The 2 distributions are different |
| Unilateral right | Practical value > critical value | We reject H0 | "Before" data have larger values than "after" data |
| Unilateral left | Practical value < critical value | We reject H0 | "Before" data have smaller values than "after" data |
| Result | Statistical conclusion | Practical conclusion |
|---|---|---|
| p-value > α | We retain H0 | Our data series are identical or close, with a risk of being wrong equal to the p-value |
| p-value < α | We reject H0 | Our data series are statistically different, with a risk of being wrong equal to the p-value |
It should be noted that, in general, for samples of fewer than 15 pairs the p-value will often be greater than α. This indicates that the number of data pairs is not sufficient to reach statistical significance.