The Friedman test is a generalization of the Wilcoxon test for more than 2 samples.

Introduction

The Friedman test is a non-parametric test that generalizes the Wilcoxon test to more than two samples. It tests K groups of paired data. For example, one may want to know whether the marks given to students by several teachers are consistent, in order to validate the quality of the grading scheme. The pairing here is the students, who are the same for every teacher.

The test works in much the same way as the Kruskal-Wallis test. It is an alternative to repeated-measures ANOVA for dependent samples that are not normally distributed.

The principle

First of all, we define:

  • The n "blocks": they represent the pairing variable of the test.
  • The K "samples": they represent the elements that we want to test.

Example 1: We want to test the consistency of the teachers' ratings. The number of blocks will be the number of students (common to all the teachers), and the number of samples will be the number of teachers (they are the ones we want to test).

Example 2: We want to test tyre life on different circuits. The number of blocks will be the number of circuits, and the number of samples will be the number of tyres we wish to test.

The idea of the test is to highlight the differences that may exist between the samples. Friedman's statistic is based on the ratio of the variance between the samples within a block to the variance between the blocks. The point is to measure this gap: the larger it is, the more the samples differ.

What distinguishes it from the other non-parametric tests is that we work on the ranks within each block rather than on the data as a whole.

Step 1: Hypotheses

Since the number of samples is greater than 2, only a two-sided test can be performed. The pair of hypotheses is always:

H0: The samples are not different.

H1: The samples are different.

Step 2: Calculate the sum of the ranks

The calculation table consists of K rows (the number of samples) and n columns (the number of blocks). It looks like this:

             Block n1    Block n2    Block n3    Sum of ranks
Sample K1
Sample K2
Sample K3
Each value is ranked according to its position within its block. The difficulty arises when there are ties. In that case, the mid-rank method is used: tied values are given the average of the ranks they occupy.

For example:

  • If two equal values occupy the 8th and 9th places, they are both given the rank 8.5.
  • If three equal values occupy the 10th, 11th and 12th places, they are all given the rank 11.

Finally, for each sample, we sum the ranks it obtained across all the blocks.
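
As an illustration, this ranking can be reproduced in Python with SciPy, whose rankdata function assigns mid-ranks to ties by default; the scores below are purely hypothetical:

    import numpy as np
    from scipy.stats import rankdata

    # Hypothetical scores: one row per block (student), one column per sample (teacher)
    scores = np.array([
        [12, 14, 14],
        [ 9, 11, 10],
        [15, 15, 13],
        [ 8, 10,  9],
    ])

    # Rank the values within each block; tied values receive the average of their ranks
    ranks = np.apply_along_axis(rankdata, 1, scores)
    print(ranks[0])          # [1.  2.5 2.5] : the two 14s share ranks 2 and 3

    # Sum of the ranks obtained by each sample across the blocks
    rank_sums = ranks.sum(axis=0)
    print(rank_sums)         # [ 5.5 11.   7.5]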

Step 3: Calculating the practical value of Friedman

Friedman's practical value is similar to that of Kruskal-Wallis. Noted Fr, it is worth:

Fr = 12 / (n × K × (K + 1)) × (SR1² + SR2² + … + SRK²) − 3 × n × (K + 1)

with:

  • n: number of blocks
  • K: number of samples
  • SRk: sum of the ranks of sample k

In the event of ties, a correction factor is taken into account, calculated as follows (the sum runs over every group of tied values):

Correction factor = 1 − Σ (Tg³ − Tg) / (n × (K³ − K))

Corrected practical value = Fr / correction factor

with:

  • n: number of blocks
  • K: number of samples
  • Tg: the number of tied observations sharing a given value within a block. If, for example, two observations are equal to 6, then Tg is 2.
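
Continuing with the same hypothetical scores, a minimal sketch of the practical value and of its tie correction, following the formulas above:

    import numpy as np
    from scipy.stats import rankdata

    scores = np.array([[12, 14, 14],          # same hypothetical scores as above:
                       [ 9, 11, 10],          # rows = blocks, columns = samples
                       [15, 15, 13],
                       [ 8, 10,  9]])
    n, K = scores.shape                       # n blocks, K samples
    SR = np.apply_along_axis(rankdata, 1, scores).sum(axis=0)

    # Practical value of Friedman
    Fr = 12.0 / (n * K * (K + 1)) * np.sum(SR ** 2) - 3 * n * (K + 1)

    # Correction factor for ties: Tg = size of each group of tied values in a block
    ties = sum(np.sum(c ** 3 - c)
               for c in (np.unique(row, return_counts=True)[1] for row in scores))
    correction = 1 - ties / (n * (K ** 3 - K))

    Fr_corrected = Fr / correction
    print(round(Fr, 3), round(correction, 3), round(Fr_corrected, 3))   # 3.875 0.875 4.429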

Step 5: Calculating the critical value

Case 1: number of samples K ≤ 6

In this case, we use Friedman's exact tables. For a given risk (1% or 5%), we read from the table the value associated with column K and row n.

Case 2: number of samples K > 6

The practical value Fr follows a χ² distribution with K − 1 degrees of freedom. With Excel, we use CHIINV(α; K − 1).
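
Outside Excel, the same critical value can be obtained, for instance, from SciPy's χ² distribution, where chi2.isf plays the role of CHIINV:

    from scipy.stats import chi2

    alpha = 0.05
    K = 3                                     # number of samples
    critical_value = chi2.isf(alpha, K - 1)   # equivalent to Excel's CHIINV(alpha; K-1)
    print(round(critical_value, 3))           # 5.991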

Step 6: Calculating the p-value

The p-value is also calculated from the χ² distribution. With Excel, it is calculated in the following way:

CHIDIST(practical value; K − 1)
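
With the hypothetical scores used earlier, the p-value is the upper tail of the same χ² distribution, where chi2.sf plays the role of CHIDIST; scipy.stats.friedmanchisquare can serve as a cross-check on the whole calculation:

    from scipy.stats import chi2, friedmanchisquare

    K = 3
    Fr_corrected = 4.429                      # corrected practical value from step 3
    p_value = chi2.sf(Fr_corrected, K - 1)    # equivalent to Excel's CHIDIST(Fr; K-1)
    print(round(p_value, 3))                  # 0.109

    # Cross-check on the raw hypothetical scores (one argument per sample/column)
    stat, p = friedmanchisquare([12, 9, 15, 8], [14, 11, 15, 10], [14, 10, 13, 9])
    print(round(stat, 3), round(p, 3))        # 4.429 0.109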

Step 7: Interpretation

Result                                   Statistical conclusion    Practical conclusion
Practical value Fr < critical value      We retain H0              There is no difference between the samples.
Practical value Fr > critical value      We reject H0              There is a difference between the samples.

Result           Statistical conclusion    Practical conclusion
p-value > α      We retain H0              There is no difference between the samples, with a risk of being wrong of p-value %.
p-value ≤ α      We reject H0              There is a difference between the samples, with a risk of being wrong of p-value %.
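
As a sketch, these decision rules translate directly into code; the corrected practical value 4.429 comes from the hypothetical scores used in the previous snippets:

    from scipy.stats import chi2

    alpha = 0.05
    K = 3
    Fr_corrected = 4.429                      # corrected practical value from step 3
    critical_value = chi2.isf(alpha, K - 1)
    p_value = chi2.sf(Fr_corrected, K - 1)

    if p_value <= alpha:                      # equivalently: Fr_corrected > critical_value
        print("We reject H0: there is a difference between the samples.")
    else:
        print("We retain H0: there is no difference between the samples.")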

Source

D. Chessel, A. B. Dufour (2003) – Practice of elementary tests

G. W. Corder, D. I. Foreman (2009) – Nonparametric statistics for non-statisticians

P. Capéraà, B. Van Cutsem (1988) – Methods and models in non-parametric statistics
