2.Matched pairs inference procedures
Sometimes we want to compare treatments or conditions at the individual level. These situations produce two samples that are not independent – they are related to each other. The subjects of one sample are identical to, or matched (paired) with, the subjects of the other sample.
Example: Pre-test and post-test studies look at data collected on the same subjects before and after some “treatment” is performed.
Example: Twin studies often try to sort out the influence of genetic factors by comparing a variable between sets of twins.
Example: Using people matched for age, sex, and education in social studies helps to cancel out the effects of these potentially relevant variables.
Except in pre/post studies, subjects should be randomized – assigned to the two conditions at random within each pair.
3.For data from a matched pair design, we use the observed differences Xdifference = (X1 − X2) to test the difference in the two population means.
The hypotheses can then be expressed as H0: µdifference = 0; Ha: µdifference > 0 (or < 0, or ≠ 0).
Conceptually, this is not different from our earlier tests for the mean of one population. There is just one mean, µdifference, to test.
4.Sweetening colas (revisited)
The sweetness loss due to storage was evaluated by 10 professional tasters (comparing the sweetness before and after storage).
[Table: change in sweetness for each of the 10 tasters]
We wanted to test if storage results in a loss of sweetness, thus:
H0: μchange = 0 versus Ha: μchange > 0.
Although we did not mention it explicitly before, this is a pre-/post-test design and the variable is the difference Sweetness(before) − Sweetness(after), so that a positive value is a loss of sweetness.
A matched pairs test of significance is therefore just like a one-sample test.
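Since a matched-pairs test is just a one-sample test on the differences, the two are interchangeable in software. A minimal Python sketch (assuming scipy is available; the sweetness scores below are made up for illustration, not the actual course data):

```python
import numpy as np
from scipy import stats

# Made-up sweetness scores for 10 tasters (illustration only,
# not the actual course data).
before = np.array([6.0, 6.5, 7.0, 5.5, 6.2, 6.8, 7.1, 6.4, 6.6, 6.9])
after = np.array([4.8, 6.3, 6.1, 5.4, 5.0, 6.2, 6.5, 6.0, 5.9, 6.1])
loss = before - after  # positive values mean sweetness was lost

# One-sample t-test on the differences: H0: mu = 0, Ha: mu > 0
t1, p1 = stats.ttest_1samp(loss, 0.0, alternative='greater')

# The built-in matched-pairs test gives exactly the same answer
t2, p2 = stats.ttest_rel(before, after, alternative='greater')

print(t1, p1)  # identical to (t2, p2)
```

The equality of the two results is exactly the point of the slide: pairing reduces a two-sample problem to a one-sample problem on the differences.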
5.Does lack of caffeine increase depression?
Individuals diagnosed as caffeine-dependent were deprived of caffeine-rich foods and assigned pills for 10 days at a time. In some periods the pills contained caffeine, and in others they contained a placebo. A depression score was determined for each subject under the caffeine pills and under the placebo pills.
There are 2 data points for each subject, but we only look at the difference.
We calculate that x̄diff = 7.36, sdiff = 6.92, and df = n − 1 = 10 (n = 11 subjects).
We test H0: μdifference = 0, Ha: μdifference > 0,
using α = 0.05. Why is a one-sided test ok?
From the t-distribution: P-value = 0.0027, which is quite small, in fact smaller than α.
Depression is greater with the placebo than with the caffeine pills, on average.
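The slide's numbers can be checked directly from the summary statistics (a Python sketch assuming scipy is available; the mean difference 7.36, sdiff = 6.92 and df = 10 are taken from above):

```python
import math
from scipy import stats

# Summary statistics from the slide: 11 subjects, mean difference 7.36,
# standard deviation of the differences 6.92, df = 10.
n, mean_diff, s_diff = 11, 7.36, 6.92

se = s_diff / math.sqrt(n)        # standard error of the mean difference
t = (mean_diff - 0) / se          # one-sample t statistic
p = stats.t.sf(t, df=n - 1)       # one-sided P-value for Ha: mu_diff > 0

print(t, p)  # t is about 3.53; P is close to the .0027 on the slide
```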
6.The weight of calves
It is clear that the weight of a calf increases over time. However, it may be a surprise to learn that the increase is not immediate. This can be seen by analyzing the calf weight data that we have been studying over the past few weeks.
Look at the calf data in StatCrunch to see how much weight each calf gains or loses in Week 1. We must take the difference (weight at week 0) − (weight at week 1), and the analysis needs to be done on these differences (they can be stored in your StatCrunch spreadsheet).
Now we conduct an analysis identical to the one-sample methods, but on the differences; we obtain the table below (these numbers were obtained in StatCrunch). Observe that to obtain the t-transform we need the mean difference and its standard error (a measure of the spread of the mean difference): t-transform = (mean difference − 0)/s.e.
We see that from Week 1 to Week 2 there is a drop in weight; in fact the t-transform in Week 2 is much larger in magnitude than in Week 1, so the p-value in Week 2 is a lot smaller than the p-value in Week 1. In Week 3 we do not reject the null hypothesis, which suggests that the weight is back to birth weight. From Week 4 onwards we see a gain in weight.
8.CI for the mean weight difference
Below we construct 95% CIs (using the t-distribution with 47 df, 2.5% in each tail) for the mean difference in each week. The mean difference is likely to be in this interval.
Since zero does not lie in the intervals for Week 1, Week 2 and Week 4, we reject the null in the two-sided test at the 5% level, and the two-sided p-value is less than 5%.
Using the information above we can construct 95% intervals for the weight of a randomly selected healthy calf. These will be much wider than the intervals above and will not decrease with sample size. Such intervals can help us determine whether a calf is healthy or not; how to construct them is beyond this course.
9.The purpose of most studies is to compare the effects of different treatments or conditions. In fact, the most effective studies are the ones that make direct comparisons from data within the studies.
Often the subjects are observed separately under the different conditions, resulting in samples that are independent. That is, the subjects of each sample are obtained and observed separately from, and without any regard to, the subjects of the other samples.
Example: Students are randomly assigned to one of two classes. Those in the first class are given iPads that they bring to class to be interactively involved. The other class has a more traditional format.
Example: An economist obtains labor data from subjects in France and from subjects in the U.S. The samples are independently obtained.
As in the matched pairs design, subjects should be randomized – assigned to the samples at random, when the study is observational.
Independent samples inference
10.Independent sample scenarios
Sample 1 is randomly obtained from Population 1 and, by an independent (separate/unrelated) means, Sample 2 is randomly obtained from Population 2.
Subjects in the samples are obtained and observed separately, and without any relationship to subjects in the other sample(s).
Sample 1 is randomly obtained and its subjects are given Treatment 1, and independently Sample 2 is randomly obtained from the same Population and its subjects are given Treatment 2.
Independence is not the same as “different”.
11.Difference of two sample means
We are interested in two populations/conditions with respective parameters (μ1, σ1) and (μ2, σ2).
We obtain two independent simple random samples with respective statistics (x̄1, s1) and (x̄2, s2).
We use x̄1 and x̄2 to estimate the unknown μ1 and μ2.
We use s1 and s2 to estimate the unknown σ1 and σ2.
Since we wish to compare the means, we use x̄1 − x̄2 to estimate the difference in means μ1 − μ2.
After the original coffee sales study, the marketing firm obtained independent random samples: 34 “West Coast” shops and 29 “East Coast” shops. The firm observed the average number of customers (per day) over 2-week periods. The two sample means were x̄WC = 295 and x̄EC = 319.
Thus, they estimate that East Coast shops have an average of 319 − 295 = 24 customers more per day than West Coast shops do.
12.Distribution of the difference of means
In order to do statistical inference, we must know a few things about the sampling distribution of our statistic.
The sampling distribution of x̄1 − x̄2 has standard deviation √(σ1²/n1 + σ2²/n2).
(Mathematically, the variance of the difference is the sum of the variances of the two sample means.)
This is estimated by the standard error SE = √(s1²/n1 + s2²/n2).
For sufficiently large samples, the distribution is approximately normal.
Then the two-sample t statistic is t = ((x̄1 − x̄2) − (μ1 − μ2)) / SE.
This statistic has an approximate t-distribution on which we will base our inferences. But the degrees of freedom is complicated …
13.Statisticians have a formula for estimating the proper degrees of freedom (called the unpooled df). Most statistical software will compute it, and you don’t need to learn the formula.
The unpooled df is always at least the smaller of (n1 − 1, n2 − 1), so this smaller value can be used instead; it is called the conservative degrees of freedom. It is useful for doing HW problems, but for practical problems you should use statistical software, which uses the more accurate unpooled df.
Two sample degrees of freedom
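For those curious, the unpooled (Welch–Satterthwaite) df formula can be sketched in a few lines of Python; the sample values below are made up purely for illustration:

```python
# Welch-Satterthwaite ("unpooled") degrees of freedom, as computed by
# statistical software. The sample values below are made up.
def welch_df(s1, n1, s2, n2):
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

s1, n1 = 4.0, 15
s2, n2 = 6.0, 20

df_welch = welch_df(s1, n1, s2, n2)
df_conservative = min(n1 - 1, n2 - 1)

# The unpooled df always lies between the conservative df
# and n1 + n2 - 2.
print(df_conservative, df_welch, n1 + n2 - 2)
```

Here the conservative df is 14 while the unpooled df is about 32.6, so the conservative choice can be much smaller (and hence gives wider intervals and larger P-values).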
14.The strange standard error
The standard error for the two-sample test looks quite strange, but it is quite logical. Recall that in the one-sample test the standard error decreased as the sample size increased: the sample standard deviation stayed about the same while n grew, so the standard error decreased.
In the two-sample case there are two sample sizes, and both must increase for the standard error to decrease. Consider the following examples:
If the size of one sample stays the same while the other increases, the standard error does not decrease much. This is because the estimator of one of the means will not improve – consider the case where there is only one person in a group.
If the standard deviations of both populations are about the same, and overall the number of subjects in a study is fixed, then using equal number of subjects in each group leads to the smallest standard error.
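A quick numerical check of this point, with a hypothetical common SD of 10 and a fixed total of 40 subjects to split between the two groups:

```python
import math

# Hypothetical: both populations have SD 10 and we have 40 subjects
# in total to split between the two groups.
sigma, N = 10.0, 40

def se(n1, n2, s=sigma):
    # standard error of (xbar1 - xbar2) when both SDs equal s
    return math.sqrt(s**2 / n1 + s**2 / n2)

se_equal = se(20, 20)    # equal split
se_unequal = se(30, 10)  # lopsided split
se_extreme = se(39, 1)   # one lone subject in a group

print(se_equal, se_unequal, se_extreme)  # increasing order
```

The equal 20/20 split gives the smallest standard error; the 39/1 split is dominated by the lone subject's group and is barely better than having one observation.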
15.The null hypothesis is that both population means μ1 and μ2 are equal, thus their difference is equal to zero.
H0: μ1 = μ2, equivalently H0: μ1 − μ2 = 0.
Either a one-sided or a two-sided alternative hypothesis can be tested.
Using the value (μ1 − μ2) = 0 given in H0, the test statistic becomes t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2).
To find the P-value, we look up the appropriate probability of the t-distribution using either the unpooled df or, more conservatively, df = smaller of (n1 − 1, n2 − 1).
Two-sample t significance test
16.Does smoking damage the lungs of children exposed to parental smoking?
Forced vital capacity (FVC) is the volume (in milliliters) of air that an individual can exhale in 6 seconds.
We want to know whether parental smoking decreases children’s lung capacity as measured by the FVC test.
Is the mean FVC lower in the population of children exposed to parental smoking than it is in children not exposed?
FVC was obtained for a sample of children not exposed to parental smoking and a group of children exposed to parental smoking.
17.The observed “effect” is x̄smoke − x̄no = −12.7, a substantial reduction in FVC. But is it “significant”?
To answer this, we calculate the t-statistic, obtaining t = −3.922.
Even if we use df = smaller of (nsmoke − 1, nno − 1) = 29, we find that |t| > 3.659 gives a P-value < 0.0005 (for a one-sided test). So our t = −3.922 is very significant, and we reject H0.
H0: μsmoke = μno ↔ H0: (μsmoke − μno) = 0
Ha: μsmoke < μno ↔ Ha: (μsmoke − μno) < 0 (one-sided)
Lung capacity is significantly impaired in children of smoking parents.
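A quick check of the slide's arithmetic in Python (assuming scipy):

```python
from scipy import stats

t, df = -3.922, 29  # from the slide (conservative df)

p_one_sided = stats.t.cdf(t, df)  # area to the LEFT, for Ha: mu_smoke < mu_no
t_crit = stats.t.ppf(0.0005, df)  # cut-off with exactly 0.0005 in the tail

print(p_one_sided, t_crit)  # P is below 0.0005; t_crit is about -3.659
```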
18.The influence of Betaine on weight
We want to investigate the effect that Betaine may have on the weight of calves. In order to determine its influence, a comparison needs to be made with a control group (calves not given Betaine). To statistically test whether Betaine has an influence, we draw two random samples, which form the two groups. One group of calves is given Betaine and their weights are recorded over 8 weeks; the other group is given only milk and their weights are recorded.
Our data set contains only 11 calves in each group. The sample sizes are both very small, so if there is a difference between the Betaine group and the control group, it must be large for us to be able to detect it (reject the null), because the standard error for small samples will be quite large.
If you want to replicate our results, recall that TRT = B corresponds to the group given Betaine and TRT = C the calves given only milk.
19. We will test that the mean difference in weights between those given Betaine and those not given Betaine is zero, against the alternative that the mean difference is non-zero. We summarise the results below.
We observe that the t-transform is small and the p-value is large, thus we cannot reject the null at the 10% level. This could be because there is no difference, or because there is a difference but too much variability in the data for us to see a significant difference with such small sample sizes.
20.Remember: Significance means the evidence of the data is sufficient to reject the null hypothesis (at our stated level α). Only data, and the statistics we calculate from the data, can be statistically “significant”.
We can say that the sample means are “significantly different” or that the observed effect is “significant”. But the conclusion about the population means is simply “they are different.”
The observed effect of −12.7 on FVC of smoking parents is significant so we conclude that the true effect μsmoke − μno is less than zero.
Having made this conclusion, or even if we have not, we can always estimate the difference using a confidence interval.
21.Recall that we have two independent samples and we use the difference between the sample averages (x̄1 − x̄2) to estimate (μ1 − μ2).
This estimate has standard error SE = √(s1²/n1 + s2²/n2).
The margin of error for a confidence interval of μ1 − μ2 is m = t* × SE.
We find t* in the line of Table D for the unpooled df (or for the smaller of (n1−1, n2−1)) and in the column for confidence level C.
The confidence interval is then computed as (x̄1 − x̄2) ± m.
The interpretation of “confidence” is the same as before: it is the proportion of possible samples for which the method leads to a true statement about the parameters.
Two-sample t confidence interval
22.The observed “effect” is x̄smoke − x̄no = −12.7.
Using df = smaller of (nsmoke−1, nno−1) = 29 we find t* = 2.756.
The margin of error is m = t* × SE = 2.756 × 3.24 ≈ 8.92.
And the 99% confidence interval is −12.7 ± 8.92, i.e. (−21.62, −3.78).
Obtain a 99% confidence interval for the smoking damage done to lungs of children exposed to parental smoking, as measured by forced vital capacity (FVC).
We conclude that the FVC of lung capacity is diminished on average by a value between 3.78 and 21.62 in children of smoking parents, with 99% confidence.
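The interval can be rebuilt in Python from quantities already on the slides. Note the standard error below is back-derived from the reported t statistic (SE = 12.7/3.922); that is our reconstruction, not a number given directly on the slide:

```python
from scipy import stats

effect = -12.7            # observed effect from the slides
se = 12.7 / 3.922         # back-derived from t = -3.922 (our reconstruction)
t_star = stats.t.ppf(0.995, df=29)  # 99% confidence: 0.5% in each tail

moe = t_star * se
lo, hi = effect - moe, effect + moe
print(t_star, lo, hi)  # t* is about 2.756; interval about (-21.6, -3.8)
```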
23.Can directed reading activities in the classroom help improve reading ability? A class of 21 third-graders participates in these activities for 8 weeks while a control classroom of 23 third-graders follows the same curriculum without the activities. After 8 weeks, all children take a reading test (scores below).
95% confidence interval for (μ1 − µ2), with df = 20 conservatively t* = 2.086.
With 95% confidence, (µ1 − µ2) falls within 9.96 ± 8.99 or 0.97 to 18.95.
24.95% confidence interval for the reading ability study using the more precise degrees of freedom.
If you round the df, round down, in this case to 37. So t* = 2.025 (interpolating the table).
Note that this method gives a smaller margin of error, so it is to our advantage to use the more precise degrees of freedom.
From StatCrunch: [Stat-T Statistics-Two Sample, uncheck “pool variances”]
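A sketch of the comparison in Python (assuming scipy). The standard error is back-derived from the conservative interval 9.96 ± 8.99 with t* = 2.086, which is our reconstruction rather than a value stated on the slide:

```python
from scipy import stats

diff = 9.96
se = 8.99 / 2.086  # back-derived from the conservative interval (our reconstruction)

t_cons = stats.t.ppf(0.975, df=20)      # conservative df
t_unpooled = stats.t.ppf(0.975, df=37)  # unpooled df, rounded down

moe_cons = t_cons * se
moe_unpooled = t_unpooled * se
print(moe_cons, moe_unpooled)  # the unpooled margin of error is smaller
```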
25.Summary for testing μ1 = μ2 with independent samples
The hypotheses are identified before collecting/observing data.
To test the null hypothesis H0: μ1 = μ2, use t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2).
The P-value is obtained from the t-distribution (or t-table) with the unpooled degrees of freedom (computed).
For a one-sided test with Ha: μ1 < μ2, P-value = area to left of t.
For a one-sided test with Ha: μ1 > μ2, P-value = area to right of t.
For a two-sided test with Ha: μ1 ≠ μ2, P-value = 2 × (smaller of the above).
If P-value < α then H0 is rejected and Ha is accepted. Otherwise, H0 is not rejected even if the evidence is on the same side as the alternative Ha.
Report the P-value as well as your conclusion.
You must decide what α you will use before the study or else it is meaningless.
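The whole recipe on this slide can be collected into one small function (our sketch, not the course's code). The summary statistics in the example are hypothetical, and the result is checked against scipy's built-in unpooled test:

```python
from scipy import stats

def two_sample_t(m1, s1, n1, m2, s2, n2, alternative='two-sided'):
    """Unpooled (Welch) two-sample t-test from summary statistics."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = (v1 + v2) ** 0.5
    t = (m1 - m2) / se
    # unpooled (Welch-Satterthwaite) degrees of freedom
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    if alternative == 'less':
        p = stats.t.cdf(t, df)
    elif alternative == 'greater':
        p = stats.t.sf(t, df)
    else:
        p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Hypothetical summary statistics for two groups:
t, df, p = two_sample_t(52, 8, 30, 48, 10, 25)
print(t, df, p)
```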
26.Summary for estimating μ1 − μ2 with independent samples
The single value estimate is x̄1 − x̄2.
This has standard error SE = √(s1²/n1 + s2²/n2).
The margin of error for an interval with confidence level C is m = t* × SE,
where t* is from the t-distribution using the unpooled degrees of freedom.
The confidence interval is then (x̄1 − x̄2) ± m.
You must decide what C you will use before the study or else it is meaningless.
For both hypothesis tests and confidence intervals, the key is to use the correct standard error and degrees of freedom for the problem (what is being estimated and how the data are obtained).
27.Coffee Shop Customers: West Coast vs. East Coast
Side-by-side boxplots help us compare the two samples visually.
The West Coast values are generally lower and have slightly more spread than the East Coast values.
The marketing firm obtained two independent random samples: 34 “West Coast” coffee shops and 29 “East Coast” coffee shops. For each shop, the firm observed the average number of customers (per day) over 2 week periods.
Here μWC is the mean, for all West Coast coffee shops, of the variable XWC = “daily average number of customers”. Likewise, μEC is the corresponding mean for all East Coast coffee shops.
28.Coffee Shop Customers (cont.)
Is there a difference in the daily average number of customers between West Coast shops and East Coast shops?
Test the hypotheses H0: μWC = μEC vs. Ha: μWC ≠ μEC.
We will use significance level α = 0.01.
From StatCrunch, the P-value = 0.0028 < 0.01, so H0 is rejected.
29.Coffee Shop Customers (cont.)
Find the 98% confidence interval for μWC − μEC.
The confidence interval can be used to conduct a two-sided test with significance level α = 1 − C.
Since the confidence interval does not contain 0, we can reject the null hypothesis that μWC = μEC.
Using this method to conduct a test, however, does not provide a P-value. Knowing the P-value is important so that you know the strength of your evidence, and not just whether it rejects H0.
It is possible to modify this method in order to conduct a one-sided test instead. (Use C = 1 − 2α and reject H0 only if the data agree with Ha.)
30.Pooled two-sample procedures
There are two versions of the two-sample t-test: one assuming equal variances (“pooled 2-sample test”) and one not assuming equal variances (“unpooled 2-sample test”) for the two populations. They have slightly different formulas and degrees of freedom.
[Figure: two normally distributed populations with unequal variances]
The pooled (equal variance) two-sample t-test is mathematically exact when the equal-variance assumption holds.
However, that assumption is hard to check, and thus the unpooled (unequal variance) t-test is safer.
In fact, the two tests give very similar results when the sample variances are not very different.
31.When both populations have the same standard deviation σ, the pooled estimator of σ² is sp² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2).
sp replaces s1 and s2 in the standard error computation, which becomes SE = sp√(1/n1 + 1/n2). The sampling distribution of the t-statistic is the t distribution with (n1 + n2 − 2) degrees of freedom.
A level C confidence interval for µ1 − µ2 is (x̄1 − x̄2) ± t* sp√(1/n1 + 1/n2)
(with area C between −t* and t* under the t(n1 + n2 − 2) density).
To test the hypothesis H0: µ1 = µ2 against a one-sided or a two-sided alternative, compute the pooled two-sample t statistic t = (x̄1 − x̄2) / (sp√(1/n1 + 1/n2)) and refer it to the t(n1 + n2 − 2) distribution.
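The pooled procedure, sketched in Python with hypothetical summary statistics and checked against scipy's pooled (equal_var=True) test:

```python
from scipy import stats

def pooled_test(m1, s1, n1, m2, s2, n2):
    """Pooled two-sample t-test from summary statistics."""
    # pooled estimator of the common variance sigma^2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    t = (m1 - m2) / se
    df = n1 + n2 - 2                # exact df for the pooled test
    p = 2 * stats.t.sf(abs(t), df)  # two-sided P-value
    return t, df, p

# Hypothetical summary statistics:
t, df, p = pooled_test(52, 8, 30, 48, 10, 25)
print(t, df, p)
```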
32.Which type of test? One sample, paired samples or two independent samples?
Comparing vitamin content of bread immediately after baking vs. 3 days later (the same loaves are used on day one and 3 days later).
Comparing vitamin content of bread immediately after baking vs. 3 days later (tests made on independent loaves).
Average fuel efficiency for 2005 vehicles is 21 miles per gallon. Is average fuel efficiency higher in the new generation “green vehicles”?
Is blood pressure altered by use of an oral contraceptive? Comparing a group of women not using an oral contraceptive with a group taking it.
Review insurance records for dollar amount paid after fire damage in houses equipped with a fire extinguisher vs. houses without one. Was there a difference in the average dollar amount paid?
33.Cautions about the two sample t-test or interval
Using the correct standard error and degrees of freedom is critical.
As in the one sample t-test, the method assumes simple random samples.
Likewise, it also assumes the populations have normal distributions.
Skewness and outliers can make the methods inaccurate (that is, having confidence/significance levels other than what they are supposed to have).
The larger the sample sizes, the less this is a problem.
It also is less of a problem if the populations have similar skewness and the two samples are close to the same size.
“Significant effect” merely means we have sufficient evidence to say the two true means are different. It does not explain why they are different or how meaningful/important the difference is.
A confidence interval is needed to determine how big the effect is.
34.To see how skewness affects statistical inference, we can do some simulations.
We use data from the “exponential” distribution, which is highly skewed.
In StatCrunch: [Data-Simulate-Exponential, enter mean value = 1]
Hazards with skewness
35.Hazards with skewness, cont.
We now simulate 1000 samples of size n = 25 and compute the t-statistic for each sample.
In StatCrunch: [Data-Simulate-Exponential, enter 25 rows, 1000 columns, mean value = 1, and statistic sqrt(25)*(mean(Exponential)-1)/std(Exponential)]
With df = 24, we find t* = 2.064 for C = 95% and t* = 2.492 for C = 98%.
But the corresponding percentiles of the actual sampling distribution are wildly different.
Only 93.5% of CIs computed with C = 95% will contain the true mean.
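The same simulation can be run in Python instead of StatCrunch (numpy/scipy assumed; the seed is arbitrary, so the exact coverage will differ slightly from the slide's 93.5%):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)  # arbitrary seed
n, reps, true_mean = 25, 1000, 1.0
t_star = stats.t.ppf(0.975, df=n - 1)  # nominal 95% interval

covered = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)  # skewed population, mean 1
    moe = t_star * x.std(ddof=1) / np.sqrt(n)
    if abs(x.mean() - true_mean) <= moe:
        covered += 1

coverage = covered / reps
print(coverage)  # typically a bit below the nominal 0.95
```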