The Block ANOVA, also called repeated-measures or randomized-block ANOVA, is a special case of the ANOVA that can be applied to study:
- More than 2 matched samples, whereas the paired tests can only handle 2.
- Samples on which the same measurement has been repeated several times.
As with the one-way ANOVA, we compare the means of the different groups of samples. The quantities of the problem are identified as:
- n: the number of blocks, i.e. the number of matched groups of data or the number of repetitions. The blocks represent a known source of variability that we want to isolate.
- K: the number of measures (treatments). It is representative of the conditions we actually want to test.
- Case of matched data: we wish to study the efficiency of an additive in reducing the fuel consumption of cars. Tests are carried out without and then with the additive on 5 different car brands. We thus have K = 2 conditions (without/with additive) measured on n = 5 blocks (the car brands).
- Case of repeated measurements: we want to study the endurance of car tyres. K = 5 different tyre brands are tested, and the test on each tyre is repeated n = 10 times, each run consisting of driving a car until the wear indicator is reached.
Since this is a particular case of the ANOVA, the variance is decomposed so as to isolate the part due to the K measures. It is in fact a two-way ANOVA, with the K measures as the first factor and the n blocks as the second. The principle remains to compare group means with the general mean. We calculate:
- Intergroup variance (K measures), SST: the gap between the mean of each of the K groups and the general mean.
- Intragroup variance (n blocks), SSB: the gap between the mean of each of the n blocks and the general mean.
- Residual variance, SSE: the difference between the total variance and the sum of the intergroup and intragroup variances (SST + SSB).
- Total variance, TSS: the gap between each individual value and the general mean.
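As a concrete illustration, this decomposition can be sketched in Python on a small made-up data set (hypothetical fuel-consumption values for K = 2 conditions measured on n = 5 blocks):

```python
# Made-up illustrative data: K = 2 conditions (rows) x n = 5 blocks (columns).
data = [
    [7.0, 8.0, 6.5, 7.5, 9.0],  # condition 1 (e.g. without additive)
    [6.5, 7.4, 6.0, 7.1, 8.3],  # condition 2 (e.g. with additive)
]
K, n = len(data), len(data[0])

general_mean = sum(x for row in data for x in row) / (n * K)
group_means = [sum(row) / n for row in data]                       # one mean per K group
block_means = [sum(row[j] for row in data) / K for j in range(n)]  # one mean per block

SST = n * sum((m - general_mean) ** 2 for m in group_means)   # intergroup (K measures)
SSB = K * sum((m - general_mean) ** 2 for m in block_means)   # intragroup (n blocks)
TSS = sum((x - general_mean) ** 2 for row in data for x in row)
SSE = TSS - SST - SSB                                         # residual
```

Whatever the data, the decomposition satisfies TSS = SST + SSB + SSE, which is what allows the residual term to be deduced by subtraction.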
Block ANOVA table
| Source of variance | Sum of squares | DOF | Mean squares | F | p-value |
|---|---|---|---|---|---|
| Intergroup (K measures) | SST | K – 1 | MST | MST / MSE | p-value |
| Intragroup (n blocks) | SSB | n – 1 | MSB | | |
| Error | SSE | (n – 1) × (K – 1) | MSE | | |
| Total | TSS | n × K – 1 | | | |
Step 1: Hypotheses
The purpose of the test is strictly the same as for the one-way ANOVA. The hypotheses are:
H0: μ1 = μ2 = … = μK
H1: at least 2 means are different
Step 2: Calculate the treatment sum of squares – SST
We calculate the gap between the mean of each of the K groups and the general mean: the larger this value, the more the means of the K groups differ from one another.
SST = n × Σk (μk group – μ general)²
- n: number of measurement blocks
- μk group: mean of each of the K groups of measures
- μ general: mean of all measures
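A minimal sketch of this step, using made-up group means (K = 2 groups, n = 5 blocks):

```python
# SST: n times the sum of squared gaps between each group mean and the general mean.
# The means below are made-up example values, not reference data.
n = 5
group_means = [7.6, 7.06]   # mean of each of the K = 2 groups
general_mean = 7.33         # mean of all n * K measures
SST = n * sum((m - general_mean) ** 2 for m in group_means)
```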
Step 3: Calculate the sum of the squares of the blocks – SSB
We calculate the variance explained by the blocks: the larger SSB is, the lower the variance contributed by the K measures, and therefore the less our samples will differ.
SSB = K × Σn (μn block – μ general)²
- K: number of measures per block
- μn block: mean of each of the n blocks
- μ general: mean of all measures
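The same sketch for the block term, with made-up block means:

```python
# SSB: K times the sum of squared gaps between each block mean and the general mean.
# The means below are made-up example values.
K = 2
block_means = [6.75, 7.7, 6.25, 7.3, 8.65]   # mean of each of the n = 5 blocks
general_mean = 7.33
SSB = K * sum((m - general_mean) ** 2 for m in block_means)
```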
Step 4: Calculate the total sum of squares – TSS
The total sum of squares represents the gap between each of our individual measures and the general mean:
TSS = Σ (xi – μ general)²
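A sketch of this step on made-up data:

```python
# TSS: squared gap of every individual measure to the general mean.
# Made-up data: K = 2 conditions (rows) x n = 5 blocks (columns).
data = [
    [7.0, 8.0, 6.5, 7.5, 9.0],
    [6.5, 7.4, 6.0, 7.1, 8.3],
]
general_mean = sum(x for row in data for x in row) / 10
TSS = sum((x - general_mean) ** 2 for row in data for x in row)
```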
Step 5: Deduce the residual sum of squares – SSE
The residuals are obtained by subtracting the group and block sums of squares from the total variance. In other words, this is the variability that is explained neither by the n blocks nor by the K measures.
SSE = TSS – SST – SSB
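A one-line sketch with made-up example values for the three other sums of squares:

```python
# SSE by subtraction; TSS, SST and SSB are made-up example values.
TSS, SST, SSB = 7.521, 0.729, 6.766
SSE = TSS - SST - SSB
```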
Step 6: Calculate the number of degrees of freedom
The degrees of freedom reflect the amount of information that can be drawn from our test. In our case, we get the following elements:
- Intergroup dof: dofSST = K – 1
- Intragroup (blocks) dof: dofSSB = n – 1
- Residual dof: dofSSE = (n – 1) × (K – 1)
- Total dof: dofTSS = n × K – 1
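In code, for an example with K = 2 measures and n = 5 blocks:

```python
# Degrees of freedom for an example with K = 2 measures and n = 5 blocks.
K, n = 2, 5
dof_SST = K - 1              # intergroup
dof_SSB = n - 1              # intragroup (blocks)
dof_SSE = (n - 1) * (K - 1)  # residual
dof_TSS = n * K - 1          # total
```

Note that dofSST + dofSSB + dofSSE = dofTSS, mirroring the decomposition of the sums of squares.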
Step 7: Calculate the mean squares
The mean squares represent the “weight” given to the different sums of squares; each one is obtained by dividing by its degrees of freedom:
- MST = SST / dofSST
- MSB = SSB / dofSSB
- MSE = SSE / dofSSE
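A sketch with made-up example values:

```python
# Mean squares: each sum of squares divided by its degrees of freedom.
# Sums of squares are made-up example values (K = 2 measures, n = 5 blocks).
SST, SSB, SSE = 0.729, 6.766, 0.026
dof_SST, dof_SSB, dof_SSE = 1, 4, 4
MST = SST / dof_SST
MSB = SSB / dof_SSB
MSE = SSE / dof_SSE
```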
Step 8: Practical value
The test statistic is the ratio of the variance explained by our K measures to the residual variance. It is calculated as follows:
F = MST / MSE
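A one-line sketch with made-up example mean squares; the denominator is the residual mean square (MSB would instead serve to test the block effect):

```python
# Practical value of the test for the K measures.
MST, MSE = 0.729, 0.0065   # made-up example mean squares
F = MST / MSE
```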
Step 9: Calculating the critical value
The test statistic follows a Fisher distribution with (K – 1, (n – 1) × (K – 1)) degrees of freedom. Choose the desired risk level, typically 5%, then determine the critical value in Excel with the formula:
F.INV(1 – α; dofSST; dofSSE)
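The same computation can be done outside Excel; here is a sketch using SciPy's F distribution (assuming `scipy` is available), with example degrees of freedom:

```python
from scipy.stats import f  # counterpart of Excel's F.INV

alpha = 0.05
dof_SST, dof_SSE = 1, 4    # example: K = 2 measures, n = 5 blocks
critical_value = f.ppf(1 - alpha, dof_SST, dof_SSE)
```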
Step 10: Calculate the P-Value
To validate the significance of the test, we calculate the p-value, which also follows a Fisher distribution. Since the test is right-tailed, Excel's right-tail function is used:
p-value = F.DIST.RT(practical value; dofSST; dofSSE)
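The SciPy equivalent (assuming `scipy` is available), with a made-up practical value:

```python
from scipy.stats import f  # f.sf gives the right-tail probability directly

F_stat = 112.15            # made-up example practical value
dof_SST, dof_SSE = 1, 4
p_value = f.sf(F_stat, dof_SST, dof_SSE)
```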
Step 11: Interpretation
| Test direction | Result | Statistical conclusion | Practical conclusion |
|---|---|---|---|
| Unilateral | Practical value ≥ Critical value | We reject H0 | The sample means differ at the given risk level α. |
| | Practical value < Critical value | We retain H0 | The sample means do not differ at the given risk level α. |

| Result | Statistical conclusion | Practical conclusion |
|---|---|---|
| p-value > α | We retain H0 | Our data series are identical or close, with a risk of p-value % of being wrong. |
| p-value < α | We reject H0 | Our data series are statistically different, with a risk of p-value % of being wrong. |
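The decision rules above reduce to a simple comparison; a sketch with made-up example values:

```python
# Comparing the practical value to the critical value gives the same verdict
# as comparing the p-value to alpha.
F_stat, critical_value = 112.15, 7.71   # made-up example values
reject_H0 = F_stat >= critical_value
```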