The Student's t-test is used to identify differences in mean or proportion parameters.

Introduction

The Student's t-test is probably the most popular statistical test. It was first published in the journal Biometrika by William Sealy Gosset in 1908. While employed at the Guinness brewery in Dublin, he was in charge of studying the quality and costs of various varieties of barley and hops1. This study led him to conclude that the greater the yield, the lower the quality of the resulting barley.

After studying statistics, he met another great mathematician of the time, Karl Pearson, in 1905 and worked with him on his questions about the probable error of the mean. In 1907 he was appointed head of the Guinness experimental brewery and used the table he had defined (the Student table) to determine the best quality of barley. He compared the actual results with the estimated results and realized that the larger the proportion estimated from a small sample, the more the normal distribution deviates from reality.

In the context of this study, Student then put forward that if data follow a normal distribution, there are two possible errors when estimating the mean of a population from a sample, which led him to create his distribution:

  • A sampling error: in other words, our sample is not truly random.
  • The sample is not large enough to determine the distribution accurately.

Guinness authorized him to publish his work under the pseudonym "Student".

Initially used only by a few specialists and the staff of the Guinness laboratories, the method spread only a few years later, when Ronald Aylmer Fisher, the famous statistician, took up the work and popularized it.

The Student distribution is flatter than the normal distribution. Its density is:

f(t) = Γ((ν + 1)/2) / (√(νπ) × Γ(ν/2)) × (1 + t²/ν)^(−(ν + 1)/2)

  • ν: the number of degrees of freedom
  • Γ: the gamma function

The principle

The Student's t-test is based on the ratio between the difference of the two values we want to compare and the variability of those values. What we need to understand is that, in the end, there are two reasons why this ratio will be large, leading us to conclude that the two samples are different (see the sketch after this list):

  • Either the difference between the two samples is very large,
  • Or the variability of the two samples is very low, reinforcing the idea that the samples stand apart.
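
As an illustration, here is a minimal Python sketch (with invented numbers) showing that the ratio grows either when the gap between the means grows or when the variability shrinks:

import math

def t_ratio(mean1, mean2, pooled_sd, n1, n2):
    # Ratio between the gap of the two means and their variability (standard error)
    standard_error = pooled_sd * math.sqrt(1 / n1 + 1 / n2)
    return (mean1 - mean2) / standard_error

# Same gap, smaller variability: larger ratio
print(t_ratio(10.5, 10.0, pooled_sd=2.0, n1=30, n2=30))  # ~0.97
print(t_ratio(10.5, 10.0, pooled_sd=0.5, n1=30, n2=30))  # ~3.87

# Same variability, larger gap: larger ratio
print(t_ratio(12.0, 10.0, pooled_sd=2.0, n1=30, n2=30))  # ~3.87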

Step 1: Assumptions

The Student's t-test can be used for two purposes, detailed below.

1.1: Comparing a sample with a target

In a comparison with a target, the t-test can be used to compare either a mean (one-sample t-test) or a proportion. The comparison of a mean applies when we have quantitative data, whereas the comparison of a proportion is used when our data are qualitative with two categories.

Example

We have a supplier batch in which we detected 13 defects out of 1000 parts, the contract specifying 1%. Can we conclude that the difference is significant or not, and therefore accept the batch or not? A worked sketch follows the hypotheses below.

The hypotheses for a bilateral test are (a left or right unilateral test is equally possible):

  • For a comparison of means: H0: μ1 = μ0 and H1: μ1 ≠ μ0
  • For a comparison of proportions: H0: p1 = p0 and H1: p1 ≠ p0
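
As a preview of the calculation detailed in Steps 2 to 4, here is a minimal Python sketch of the batch example above (13 defects out of 1000 parts against a 1% target). It uses the usual normal approximation for the test statistic; the 5% risk level is only an illustrative assumption:

import math
from scipy import stats

p_observed = 13 / 1000   # observed defect proportion
p_target = 0.01          # contractual target proportion
n = 1000                 # sample size

# Practical value: standardized gap between the observed and target proportions
t_practical = (p_observed - p_target) / math.sqrt(p_target * (1 - p_target) / n)

# Bilateral test at a 5% risk level, normal approximation
alpha = 0.05
critical = stats.norm.ppf(1 - alpha / 2)
p_value = 2 * stats.norm.sf(abs(t_practical))

print(t_practical, critical, p_value)  # ~0.95, ~1.96, ~0.34
# |t_practical| < critical (p_value > alpha): the difference is not significant,
# so the batch would be accepted.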

1.2: Comparing two samples with each other

In the same way, two samples can be compared with each other under the same conditions, i.e. either on the mean parameter (two-sample t-test) or on the proportion parameter.

Example

We want to improve the performance of a product. A trial is carried out with the old and the new product and we want to compare the results. Can we conclude that there is a real improvement, or is it simply variability? A worked sketch follows the hypotheses below.

The hypotheses for a bilateral test are (a left or right unilateral test is equally possible):

  • For a comparison of means: H0: μ1 = μ2 and H1: μ1 ≠ μ2
  • For a comparison of proportions: H0: p1 = p2 and H1: p1 ≠ p2
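
For this second case, here is a minimal Python sketch using scipy's two-sample t-test; the measurement values for the old and new product are invented for illustration:

from scipy import stats

old_product = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.1, 11.7]  # hypothetical results
new_product = [12.6, 12.4, 12.8, 12.5, 12.3, 12.7, 12.6, 12.4]  # hypothetical results

# Classic two-sample Student test, assuming equal variances
t_stat, p_value = stats.ttest_ind(old_product, new_product, equal_var=True)

print(t_stat, p_value)
# p_value < 0.05 would point to a real improvement rather than simple variability.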

Step 2: Calculate the practical value

Compare a mean with a target

t = (μ − μ0) / (σ / √n)

  • μ: observed mean of the sample
  • μ0: theoretical mean used as the reference
  • σ: standard deviation of the sample
  • n: number of individuals in the sample
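
A minimal Python sketch of this formula; the numbers in the example call are hypothetical:

import math

def t_mean_vs_target(sample_mean, target_mean, sample_sd, n):
    # Practical value for comparing a sample mean with a target
    return (sample_mean - target_mean) / (sample_sd / math.sqrt(n))

# Hypothetical example: mean of 10.2 against a target of 10, sd of 0.5, 25 parts
print(t_mean_vs_target(10.2, 10.0, 0.5, 25))  # 2.0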

Compare a proportion with a target

t = (p − p0) / √(p0 (1 − p0) / n)

  • p0: target proportion
  • p: observed proportion of the sample
  • n: number of individuals in the sample
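
The same formula as a small Python function, applied to the batch example from Step 1 (13 defects out of 1000 parts against a 1% target):

import math

def t_proportion_vs_target(p_observed, p_target, n):
    # Practical value for comparing an observed proportion with a target
    return (p_observed - p_target) / math.sqrt(p_target * (1 - p_target) / n)

print(t_proportion_vs_target(13 / 1000, 0.01, 1000))  # ~0.95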

Compare the means of two samples

t = (μ1 − μ2) / (σ √(1/n1 + 1/n2))

  • μ1 and μ2: observed means of samples 1 and 2
  • n1 and n2: number of individuals in samples 1 and 2
  • σ: pooled standard deviation of the samples = √(((n1 − 1) σ1² + (n2 − 1) σ2²) / (n1 + n2 − 2))
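
A minimal Python sketch of this two-sample formula with the pooled standard deviation; the input values are hypothetical:

import math

def t_two_means(mean1, mean2, sd1, sd2, n1, n2):
    # Practical value for comparing the means of two samples
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / (pooled_sd * math.sqrt(1 / n1 + 1 / n2))

# Hypothetical example: old product against new product
print(t_two_means(12.0, 12.5, 0.3, 0.3, 20, 20))  # ~ -5.27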

Compare the proportions of two samples

t = (p1 − p2) / √(p (1 − p) (1/n1 + 1/n2))

  • p1 and p2: proportions of samples 1 and 2
  • n1 and n2: sizes of samples 1 and 2
  • p: pooled proportion = (n1 p1 + n2 p2) / (n1 + n2)
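
And the same for two proportions, using the pooled proportion; the input values are hypothetical:

import math

def t_two_proportions(p1, p2, n1, n2):
    # Practical value for comparing the proportions of two samples
    p_pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
    return (p1 - p2) / math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))

# Hypothetical example: 3% of defects against 1.5% on two batches of 1000 parts
print(t_two_proportions(0.03, 0.015, 1000, 1000))  # ~2.26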

Step 3: Calculating the critical value

The Student distribution is very close to the normal distribution. They tend to coincide for large samples (more than 30 individuals), but for small samples (fewer than 30), the Student distribution is more accurate.

In practice, for samples of more than 30 individuals, either the Student distribution or the normal distribution can be used. In Excel, the function T.INV gives the critical value for the Student distribution, and NORM.S.INV for the normal distribution.

The level at which the critical value is read depends on the direction of the test:

  • Bilateral: 1-α/2
  • Left unilateral: α
  • Right unilateral: 1-α

The number of degrees of freedom is:

  • For a comparison of a mean or a proportion with a target: dof = n − 1
  • For a comparison of two means or two proportions with each other: dof = n1 + n2 − 2
  • For a comparison of matched (paired) data: dof = n − 1
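
The same critical values can be obtained in Python with scipy (the equivalents of Excel's T.INV and NORM.S.INV); the 5% risk level and the sample size below are illustrative assumptions:

from scipy import stats

alpha = 0.05
n = 20  # hypothetical sample size

# Bilateral comparison with a target: level 1 - alpha/2, dof = n - 1
critical_student = stats.t.ppf(1 - alpha / 2, df=n - 1)  # Student distribution
critical_normal = stats.norm.ppf(1 - alpha / 2)          # normal distribution

print(critical_student, critical_normal)  # ~2.09 vs ~1.96: Student is wider for small samples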

Step 4: Calculate the P-Value

For the p-value, the Student distribution is used, via the Excel function LOI.STUDENT (TDIST in English versions). We find:

  • For a bilateral test: TDIST(|practical value|; n − 1; 2)
  • For a left or right unilateral test: TDIST(|practical value|; n − 1; 1)
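
In Python, the same p-values can be obtained from the Student distribution with scipy; the practical value and degrees of freedom below are illustrative:

from scipy import stats

t_practical = 2.3  # hypothetical practical value
dof = 19           # hypothetical degrees of freedom (n - 1)

p_bilateral = 2 * stats.t.sf(abs(t_practical), df=dof)  # bilateral test
p_unilateral = stats.t.sf(abs(t_practical), df=dof)     # left or right unilateral test

print(p_bilateral, p_unilateral)  # ~0.033 and ~0.016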

Step 5: Interpretation

Conclusion based on the critical value:

  • Bilateral test: if the practical value is greater than +critical value or less than −critical value, we reject H0 and conclude that the two samples are different.
  • Right unilateral test: if the practical value is greater than the critical value at 1 − α, we reject H0 and conclude that sample 1 is statistically larger than sample 2 at the given risk level α.
  • Left unilateral test: if the practical value is less than the critical value at α, we reject H0 and conclude that sample 1 is statistically smaller than sample 2 at the given risk level α.

Conclusion based on the p-value:

  • If p-value > α: we retain H0; our data series are identical or similar, with a risk of being wrong of p-value %.
  • If p-value < α: we reject H0; our data series are statistically different, with a risk of being wrong of p-value %.
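
The p-value rule above can be summarised in a few lines of Python; the risk level and the p-values passed in the example calls are illustrative:

def conclude(p_value, alpha=0.05):
    # Decision rule based on the p-value, as in the interpretation above
    if p_value < alpha:
        return "We reject H0: the data series are statistically different."
    return "We retain H0: the data series are identical or similar."

print(conclude(0.03))  # reject H0
print(conclude(0.34))  # retain H0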

Source

1 – W. Gosset (1908) – The probable error of the mean

Y. Dodge (2007) – Statistics, Encyclopedic dictionary

E. S. Pearson, L. McMullen (1970) – William Sealy Gosset, 1876-1937. Studies in the history of statistics and probability

J. Fisher-Box (1987) – Guinness, Gosset, Fisher and small samples

G. Mayo (2008) – Understanding and conducting tests with R

M. R. G. O’Gorman, A. D. Donnenberg (2008) – Handbook of Human Immunology

Standard NF X 06-054

Standard NF X06-069

Standard NF X06-070
