Some Considerations

Screening for violation of assumptions can be conducted in several different ways. When and how to screen depend on the level of measurement of the variables, whether the design produces grouped or ungrouped data, whether cases provide a single response or multiple responses, and whether the variables themselves or the residuals of analysis are screened.

Level of Measurement: Continuous, Ordinal, and Discrete Variables

One consideration in preparatory data analysis is whether the variables are continuous, ordinal, or discrete. Continuous variables are also referred to as interval or ratio; discrete variables are also called categorical or nominal; discrete variables with only two levels are often called dichotomous. Continuous variables assess the amount of something along a continuum of possible values where the size of the observed value depends on the sensitivity of the measuring device. As the measuring device becomes more sensitive, so does the precision with which the variable is assessed. Examples of continuous variables are time to complete a task, amount of fabric used in various manufacturing processes, or numerical score on an essay exam. Most of the assumptions of analysis apply to continuous variables.

Rank-order/ordinal data are obtained when the researcher assesses the relative positions of cases in a distribution of cases (e.g., most talented, least efficient), when the researcher has others rank order several items (e.g., most important to me), or when the researcher has assessed numerical scores for cases but does not trust them. In the last instance, the researcher believes that the case with the highest score has the most (or least) of something but is not comfortable analyzing the numerical scores themselves, so the data are treated as ordinal. Numbers reveal which case is in what position, but there is no assurance that the distance between the first and second cases is the same as, for instance, the distance between the second and third cases, or any other adjacent pair. Only a few statistical methods are available for analysis of ordinal variables, and they tend to have few or no assumptions (Siegel & Castellan, 1988).
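One such method is Spearman's rank-order correlation, which uses only the order of the cases, not the distances between scores. A minimal sketch (the rankings of two hypothetical judges are invented for illustration):

```python
# Hypothetical example: two judges rank-order the same six items.
# Spearman's rho uses only the ranks, so it makes no assumption about
# the distances between adjacent scores -- only their order matters.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two sets of untied ranks."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

judge_a = [1, 2, 3, 4, 5, 6]
judge_b = [2, 1, 4, 3, 6, 5]
print(f"rho = {spearman_rho(judge_a, judge_b):.3f}")
```

Because the statistic depends only on ranks, it is unaffected by any monotonic transformation of the underlying scores.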

Discrete variables are classified into categories. There are usually only a few categories, chosen so that every case can be classified into only one of them. For instance, employees are classified as properly trained or not; eggs are divided into medium, large, and extra large; respondents answer either "yes" or "no"; manufactured parts either pass or do not pass quality control; or dessert choice is sorbet, tiramisu, chocolate mousse, or apple tart. In many analyses, discrete variables are the grouping variables (treatment group vs. control) for a main analysis such as analysis of variance (ANOVA) or logistic regression. Assumptions for discrete variables relate to the frequency of cases in the various categories. Problems arise when there are too few cases in some of the categories, as discussed later.

Grouped and Ungrouped Research Designs

Assumptions are assessed differently depending on whether the data are to be grouped or ungrouped during analysis. The most common goal in grouped analyses is to compare the central tendency in two or more groups; the most common goal in ungrouped analyses is to study relationships among variables. Grouped data are appropriately analyzed using univariate or multivariate analysis of variance (ANOVA and MANOVA, including profile analysis of repeated measures), logistic regression, or discriminant analysis. Ungrouped data are analyzed through bivariate or multiple regression, canonical correlation, cluster analysis, or factor analysis. Some techniques apply to either grouped or ungrouped data. For example, time-series analysis and survival analysis can be used to track behavior over time for a single group of cases or to compare behavior over time for different groups. Chi-square and multiway frequency analysis can be used to compare contingencies in responses among categorical variables for a single group or to look for differences in responses among different groups. Similarly, structural equations can be used to model responses of a single group or compare models among groups.

Tests of assumptions are performed differently depending on whether data are to be grouped or ungrouped during analysis. Basically, ungrouped data are examined as a single set, while grouped data are examined separately within each group or have entirely different criteria for assessing fit to some assumptions, as discussed later.

Single Versus Multiple Responses

Participants provide a single response in the classical between-subjects ANOVA or chi-square designs. In other designs participants may provide several responses, and those responses may be measured either on the same or on different scales. Multiple responses on different scales produce multivariate statistical designs, analyzed using such methods as MANOVA, canonical correlation, discriminant analysis, factor analysis, and structural equation modeling.

Multiple responses on the same scale (e.g., pretest, posttest, and follow-up scores on a measure of depression) are generally considered to produce univariate statistical designs (e.g., within-subjects ANOVA), although they are sometimes treated multivariately. Having multiple responses complicates data screening because there are also relationships among those responses to consider.

Examining the Variables or the Residuals of Analysis

Another issue is whether the examination of assumptions is performed on the raw variables prior to analysis or whether the main analysis is performed and its residuals examined. Both procedures are likely to uncover the same problems. For example, a peculiar score (an outlier) can be identified initially as a deviant score in its own distribution or as a score with a large residual that is not fit well by the solution.

Temptation is a major difference between these two alternatives. When residuals are examined after the main analysis is performed, the results of the main analysis are also available for inspection. If the results are the desired ones, it is tempting to see no problems with the residuals. If the results are not the desired ones, it is tempting to begin to play with the variables to see what happens to the results. On the other hand, when the assumptions are assessed and decisions are made about how to handle violations prior to the main analysis, there is less opportunity for temptation to influence the results that are accepted and reported.

Even if raw variables are screened before analysis, it is usually worthwhile to examine residuals of the main analysis for insights into the degree to which the final model has captured the nuances of the data. In what ways does the model fail to fit or "explain" the data? Are there types of cases to which the model does not generalize? Is further research necessary to find out why the model fails to fit these cases? Did the preparatory tests of assumptions fail to uncover violations that are only evident in direct examination of residuals (Wilkinson et al., 1999)?
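As a sketch of this residual check (the data are fabricated; a real screening would examine the residuals of the main analysis itself), the following fits a least-squares line and prints each case's residual, flagging the case the line fails to fit:

```python
# Sketch with fabricated data: fit a least-squares line, then inspect
# the residuals for cases the model does not fit.  The case at x = 6
# is deviant; its large residual would flag it for follow-up.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.0, 9.8, 30.0]   # last case is an outlier

slope, intercept = fit_line(xs, ys)
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
for x, r in zip(xs, residuals):
    print(f"x = {x}: residual = {r:6.2f}")
```

Note how the single deviant case inflates residuals everywhere: the outlier pulls the fitted line toward itself, which is one reason screening before analysis and examining residuals after it are complementary rather than redundant.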


These assumptions apply to a single variable for which a confidence interval is desired or, more commonly, to a single continuous dependent variable (DV) measured for each participant in the two or more groups that constitute the independent variable (IV). We illustrate both statistical and graphical methods of assessing the various assumptions.

Normality of Individual Variables (or the Residuals)

Several statistical and graphical methods are available to assess the normality of raw scores in ungrouped data or the normality of residuals of analysis. The next section contains guidelines for normality in grouped data.

Recall that normal distributions are symmetrical about the mean with a well-defined shape and height. Mean, median, and mode are the same, and the percentages of cases between the mean and various standard deviation units from the mean are known. For this reason, you can rescale a normally distributed continuous variable to a z score (with mean 0 and standard deviation 1) and look up the probability that corresponds to a particular range of raw scores in a table with a title such as "standard normal deviates" or "areas under the normal curve." The legitimacy of using the z-score transformation and its associated probabilities depends on the normality of the distribution of the continuous variable.
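A minimal sketch of that table lookup, using the error function from Python's standard library (the mean and standard deviation are invented for illustration):

```python
# Sketch: rescale a raw score to z and look up the area under the
# normal curve, as a table of "standard normal deviates" would.
# Uses the identity Phi(z) = (1 + erf(z / sqrt(2))) / 2.
from math import erf, sqrt

def normal_cdf(z):
    """Area under the standard normal curve to the left of z."""
    return (1 + erf(z / sqrt(2))) / 2

mean, sd = 100, 15          # hypothetical population values
raw = 130
z = (raw - mean) / sd       # z = 2.0
print(f"z = {z:.2f}, proportion of cases below = {normal_cdf(z):.4f}")
```

If the variable's distribution is not normal, the proportion printed here no longer corresponds to the actual proportion of cases below the raw score, which is the point of the paragraph above.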

Although it is tempting to conclude that most inferential statistics are robust to violations of normality, that conclusion is not warranted. Bradley (1982) reported that statistical inference becomes less robust as distributions depart from normality—and rapidly so under many conditions. And even with a purely descriptive study, normality of variables (as well as pair-wise linearity and homoscedasticity, discussed in the section titled "Multivariate Assumptions") enhances the analysis, particularly when individual variables are nonnormal to varying degrees and in varying directions.

Skewness and kurtosis are statistics for assessing the symmetry (skewness) and peakedness (kurtosis) of a distribution. A distribution with positive skewness has a few cases with large values that lengthen the right tail; a distribution with negative skewness has a few cases with small values that lengthen the left tail. A distribution with positive kurtosis is too peaked (leptokurtic); a distribution with negative kurtosis is too flat (platykurtic—think "flatty"). A normal distribution is called mesokurtic. Nonnormal distributions have different percentages of cases between various standard deviation units than does the normal distribution, so z-score transformations and inferential tests applied to variables with nonnormal distributions are often misleading. Figure 5.1 shows a normal curve and several that depart from normality.
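A minimal sketch of moment-based skewness and excess kurtosis (the sample is invented; packaged routines often apply bias corrections, so their values can differ slightly from these):

```python
# Sketch: moment-based skewness (g1) and excess kurtosis (g2) for a
# small fabricated sample.  Positive g1 reflects a lengthened right
# tail; g2 is scaled so a normal distribution yields 0.

def shape_statistics(data):
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2 - 3        # excess kurtosis
    return skewness, kurtosis

sample = [1, 2, 2, 3, 3, 3, 4, 9]      # one large value: right tail
g1, g2 = shape_statistics(sample)
print(f"skewness = {g1:.2f}, kurtosis = {g2:.2f}")
```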

In a normal distribution, skewness and kurtosis are zero. The standard error of skewness is

$$s_s = \sqrt{\frac{6}{N}}$$

The standard error of kurtosis is

$$s_k = \sqrt{\frac{24}{N}}$$

where N is the sample size.


[Figure panels: normal distribution; positive skewness; negative skewness; positive kurtosis; negative kurtosis]

Figure 5.1 Normal distribution, distributions with skewness, and distributions with kurtosis. Reprinted with permission of Tabachnick and Fidell (2001b), Using multivariate statistics (Boston: Allyn and Bacon).

For the fictitious data of DESCRPT.* (downloaded from […]), where N = 50 for all variables, the standard errors of skewness and kurtosis are

$$s_s = \sqrt{\frac{6}{50}} = 0.346 \qquad s_k = \sqrt{\frac{24}{50}} = 0.693$$
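These computations can be sketched as follows (the obtained skewness of 0.80 is invented for illustration; values of z beyond about ±2.58 are significant at p < .01, two-tailed):

```python
# Sketch: standard errors of skewness and kurtosis, and the z test
# that divides an obtained value by its standard error.
from math import sqrt

def se_skewness(n):
    return sqrt(6 / n)

def se_kurtosis(n):
    return sqrt(24 / n)

n = 50
print(f"SE skewness = {se_skewness(n):.3f}")   # 0.346
print(f"SE kurtosis = {se_kurtosis(n):.3f}")   # 0.693

# z test for a hypothetical obtained skewness of 0.80:
z = 0.80 / se_skewness(n)
print(f"z = {z:.2f}")
```

Because both standard errors shrink as N grows, trivial departures from normality become "significant" in large samples, which is why visual inspection of the distribution is usually recommended alongside these tests.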
