Saturday, September 23, 2006

Lesson 9: ANOVA - ANalysis Of VAriance between groups

ANOVA - What is It?

An ANOVA (Analysis of Variance), sometimes called an F test, is closely related to the t test. The major difference is that, where the t test measures the difference between the means of two groups, an ANOVA tests the difference between the means of two or more groups.

A one-way ANOVA, or single-factor ANOVA, tests differences between groups that are classified on only one independent variable. You can also use multiple independent variables and test for interactions using factorial ANOVA (see below). The advantage of using ANOVA rather than multiple t tests is that it reduces the probability of a type-I error. Making multiple comparisons increases the likelihood of finding something by chance, that is, of making a type-I error. Let's use socioeconomic status (SES) as an example. Suppose I have 8 levels of SES and I want to see whether any of the eight groups differ from each other on their average happiness. In order to compare all of the means to each other, you would have to run 28 t tests. If your alpha is set at .05 for each test, the chance of at least one false positive across all 28 tests climbs to roughly .76 (1 − .95^28), so you are very likely to make a type-I error. You would probably find some significant differences among those 28 tests, but they are likely due to chance. An ANOVA controls the overall error by testing all 8 means against each other at once, so your alpha remains at .05.
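
As a rough sketch of both points, the snippet below computes the family-wise error rate for 28 pairwise t tests and then runs a single one-way ANOVA using scipy.stats.f_oneway. The eight groups of happiness scores are made-up example data, not from the lesson.

```python
# Sketch: family-wise error rate for 28 pairwise t tests, then one F test.
# The eight SES groups and their happiness scores are hypothetical example data.
from itertools import combinations

from scipy import stats

alpha = 0.05
n_groups = 8
n_pairs = len(list(combinations(range(n_groups), 2)))  # 28 pairwise comparisons
familywise_error = 1 - (1 - alpha) ** n_pairs           # ~0.76, not 0.05

print(f"{n_pairs} pairwise tests -> family-wise error rate ~ {familywise_error:.2f}")

# One-way (single-factor) ANOVA: one F test across all eight groups at once,
# so the overall alpha stays at .05.
groups = [
    [6, 7, 5, 6, 8], [5, 6, 6, 7, 5], [7, 8, 6, 7, 9], [4, 5, 6, 5, 4],
    [6, 6, 7, 8, 7], [5, 4, 5, 6, 5], [7, 7, 8, 6, 7], [4, 4, 5, 3, 5],
]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```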

One potential drawback to an ANOVA is that you lose specificity: all a significant F tells you is that there is a difference somewhere among the groups, not which groups differ from each other. To test for this, you use a post-hoc comparison to find out where the differences are, that is, which pairs of groups are significantly different and which are not.

Some commonly used post-hoc comparisons are Scheffé's test and Tukey's HSD.
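
As a minimal sketch of a post-hoc test, the example below runs Tukey's HSD with statsmodels' pairwise_tukeyhsd on made-up happiness scores for three hypothetical SES groups; it reports which pairs of means differ while holding the family-wise error at .05.

```python
# Sketch: Tukey's HSD post-hoc comparison after a significant F.
# The scores and group labels are hypothetical example data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([6, 7, 5, 6, 8,    # "low" SES group
                   5, 6, 6, 7, 5,    # "middle" SES group
                   8, 9, 7, 8, 9])   # "high" SES group
labels = ["low"] * 5 + ["middle"] * 5 + ["high"] * 5

# Tests every pair of group means while keeping the overall alpha at .05.
result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result.summary())
```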
