Recall that the power of a test is the probability of correctly rejecting a false null hypothesis. Power is the complement of the probability of making a Type II error (power = 1 - Beta). Recall also that we choose the probability of making a Type I error when we set Alpha, and that if we decrease the probability of making a Type I error we increase the probability of making a Type II error. The relationships are defined in the table below:

                   Null is true                    Null is false
Reject null        Type I error (Alpha)            Correct decision (power = 1 - Beta)
Retain null        Correct decision (1 - Alpha)    Type II error (Beta)
Power and Alpha
Thus, the probability of correctly retaining a true null has the same relationship to Type I errors as the probability of correctly rejecting a false null has to Type II errors. Yet, as I mentioned, if we decrease the odds of making one type of error we increase the odds of making the other type. What is the relationship between Type I and Type II errors? The following demonstration illustrates this trade-off. In it, a one-tailed one-sample t-test with 20 degrees of freedom is conducted at Alpha = .05.
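The trade-off in the demonstration can be sketched numerically. The sketch below uses a normal approximation to the t distribution (so the numbers differ slightly from an exact t-test with 20 degrees of freedom), and the effect size of 0.5 standard deviations with a sample of 21 is an illustrative assumption, not a value from the demonstration itself.

```python
import math

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def upper_crit(alpha):
    # Upper-tail critical value z with P(Z > z) = alpha, found by bisection.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def beta(alpha, effect=0.5, n=21):
    # Probability of a Type II error for a one-tailed test of a mean.
    # Under the alternative, the test statistic is centered at effect * sqrt(n),
    # so Beta is the chance it still falls below the critical value.
    return phi(upper_crit(alpha) - effect * math.sqrt(n))

for a in (0.10, 0.05, 0.01):
    print(f"alpha={a:.2f}  beta={beta(a):.3f}  power={1 - beta(a):.3f}")
```

Lowering Alpha from .10 to .01 pushes the critical value to the right, and Beta grows accordingly, which is exactly the pattern the demonstration shows.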
As you can see, the probability of making a Type II error (and thus power) varies as a function of Alpha. The lower our Alpha, the less likely we are to make a Type I error, but the more likely we are to make a Type II error. What other factors affect the power of a test?
Power and the True Difference Between Population Means
Any time we test whether a sample differs from a population, or whether two samples come from two separate populations, we assume that each population we are comparing has its own mean and standard deviation (even if we do not know them). The distance between the two population means will affect the power of our test.
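The effect of the distance between the means can be sketched with a short calculation. This is a normal approximation rather than an exact t-test, and the assumed values (population standard deviation of 1, sample size of 21, one-tailed Alpha = .05) are illustrative choices, not values from the text.

```python
import math

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(mean_diff, sigma=1.0, n=21):
    # Power of a one-tailed one-sample test of a mean (normal approximation).
    # 1.6449 is the upper-tail critical value for Alpha = .05.
    se = sigma / math.sqrt(n)
    return 1.0 - phi(1.6449 - mean_diff / se)

# Power for population means 0.2, 0.5, and 0.8 standard deviations apart.
for d in (0.2, 0.5, 0.8):
    print(f"difference={d:.1f} sd  power={power(d):.3f}")
```

The farther apart the two population means are, the higher the power: the same critical value cuts off less and less of the alternative distribution.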
Power as a Function of Sample Size and Variance
You should notice in the last demonstration that what really made the difference in the size of Beta was how much overlap there was between the two distributions. When the means were close together, the two distributions overlapped a great deal compared to when the means were farther apart. Thus, anything that increases the extent to which the two distributions share common values will increase Beta (the likelihood of making a Type II error). In the following demonstration, an increase in the variance (the spread of the distribution) produces a corresponding increase in the overlap of the two distributions and in Beta.
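The same overlap idea can be sketched by holding the mean difference fixed and letting the spread grow. As before this is a normal approximation, and the assumed mean difference of 0.5 and sample size of 21 are illustrative.

```python
import math

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta(sigma, mean_diff=0.5, n=21):
    # Type II error rate for a one-tailed test at Alpha = .05
    # (1.6449 is the normal critical value). A larger sigma means the
    # two distributions overlap more, so more of the alternative
    # distribution falls below the critical value.
    se = sigma / math.sqrt(n)
    return phi(1.6449 - mean_diff / se)

# Beta grows as the population standard deviation grows.
for s in (1.0, 2.0, 3.0):
    print(f"sigma={s:.1f}  beta={beta(s):.3f}")
```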
Sample size has an indirect effect on power because it affects the measure of variance we use to calculate the t-test statistic. Since we are calculating the power of a test that involves the comparison of sample means, we are more interested in the standard error (the standard deviation of the sampling distribution of the mean) than in the standard deviation or variance by itself. Sample size is of interest because it modifies our estimate of the standard error: when N is large we will have a lower standard error than when N is small. In turn, when N is large we will have a smaller Beta region than when N is small.
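The chain of effects described above, larger N, smaller standard error, smaller Beta, can be sketched directly. Again this uses a normal approximation, and the assumed mean difference of 0.5 and population standard deviation of 1 are illustrative choices.

```python
import math

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta(n, mean_diff=0.5, sigma=1.0):
    # As N grows the standard error sigma / sqrt(n) shrinks, so the
    # alternative distribution pulls away from the Alpha = .05 critical
    # value (1.6449) and the Beta region shrinks.
    se = sigma / math.sqrt(n)
    return phi(1.6449 - mean_diff / se)

# Doubling N shrinks the standard error and with it Beta.
for n in (10, 20, 40, 80):
    se = 1.0 / math.sqrt(n)
    print(f"N={n:3d}  SE={se:.3f}  beta={beta(n):.3f}  power={1 - beta(n):.3f}")
```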