Some Basic Experimental Design Concepts

Experimental design is concerned with the skillful interrogation of nature. Unfortunately, nature is reluctant to reveal her secrets. Joan Fisher Box (1978) observed in her biography of her father, Ronald A. Fisher, "Far from behaving consistently, however, Nature appears vacillating, coy, and ambiguous in her answers" (p. 140). Her most effective tool for confusing researchers is variability—in particular, variability among participants or experimental units. But two can play the variability game. By comparing the variability among participants treated differently to the variability among participants treated alike, researchers can make informed choices between competing hypotheses in science and technology.
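The comparison described above—variability among participants treated differently versus variability among participants treated alike—is the core of the one-way analysis of variance. A minimal sketch, using invented scores for three hypothetical treatment groups:

```python
# One-way ANOVA by hand: compare variability among groups (participants
# treated differently) to variability within groups (participants treated
# alike). The scores below are invented for illustration.
groups = [
    [5.0, 6.0, 7.0],   # treatment A
    [8.0, 9.0, 10.0],  # treatment B
    [5.0, 7.0, 6.0],   # treatment C
]

n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-groups (treatment) sum of squares and degrees of freedom
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within-groups (error) sum of squares and degrees of freedom
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
df_within = n_total - len(groups)

ms_between = ss_between / df_between  # treatment variance estimate
ms_within = ss_within / df_within     # error variance estimate
F = ms_between / ms_within            # large F favors a real treatment effect
print(F)
```

A large ratio suggests the treatments produced differences beyond what chance variability among like-treated participants would explain.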

We must never underestimate nature—she is a formidable foe. Carefully designed and executed experiments are required to learn her secrets. An experimental design is a plan for assigning participants to experimental conditions and the statistical analysis associated with the plan (Kirk, 1995, p. 1). The design of an experiment involves a number of interrelated activities:

1. Formulation of statistical hypotheses that are germane to the scientific hypothesis. A statistical hypothesis is a statement about (a) one or more parameters of a population or (b) the functional form of a population. Statistical hypotheses are rarely identical to scientific hypotheses—they are testable formulations of scientific hypotheses.

2. Determination of the experimental conditions (independent variable) to be manipulated, the measurement (dependent variable) to be recorded, and the extraneous conditions (nuisance variables) that must be controlled.

3. Specification of the number of participants required and the population from which they will be sampled.

4. Specification of the procedure for assigning the participants to the experimental conditions.

5. Determination of the statistical analysis that will be performed.

In short, an experimental design identifies the independent, dependent, and nuisance variables and indicates the way in which the randomization and statistical aspects of an experiment are to be carried out.
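Of the five activities, the fourth—assigning participants to conditions—is the most directly mechanizable. A minimal sketch of complete random assignment for a completely randomized design; the participant IDs, condition names, and group sizes are invented:

```python
import random

# Invented example: 12 participants randomly assigned to 3 conditions,
# 4 per condition (a completely randomized design).
participants = list(range(1, 13))
conditions = ["control", "treatment_1", "treatment_2"]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
rng.shuffle(participants)

group_size = len(participants) // len(conditions)
assignment = {
    cond: participants[i * group_size:(i + 1) * group_size]
    for i, cond in enumerate(conditions)
}
for cond, members in assignment.items():
    print(cond, sorted(members))
```

Randomization of this kind is what permits the variability among like-treated participants to serve as a benchmark for chance.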

Analysis of Variance

Analysis of variance (ANOVA) is a useful tool for understanding the variability in designed experiments. The seminal ideas for both ANOVA and experimental design can be traced to Ronald A. Fisher, a statistician who worked at the Rothamsted Experimental Station. According to Box (1978, p. 100), Fisher developed the basic ideas of ANOVA between 1919 and 1925. The first hint of what was to come appeared in a 1918 paper in which Fisher partitioned the total variance of a human attribute into portions attributed to heredity, environment, and other factors. The analysis of variance table for a two-treatment factorial design appeared in a 1923 paper published with M. A. Mackenzie (Fisher & Mackenzie, 1923). Fisher referred to the table as a convenient way of arranging the arithmetic. In 1924 Fisher (1925) introduced the Latin square design in connection with a forest nursery experiment. The publication in 1925 of his classic textbook Statistical Methods for Research Workers and a short paper the following year (Fisher, 1926) presented all the essential ideas of analysis of variance. The textbook (Fisher, 1925, pp. 244–249) included a table of the critical values of the ANOVA test statistic in terms of a function called z, where z = ½(ln σ̂²_Treatment − ln σ̂²_Error). The statistics σ̂²_Treatment and σ̂²_Error denote, respectively, treatment and error variance. A more convenient form of Fisher's z table that did not require looking up log values was developed by George Snedecor (1934). His critical values are expressed in terms of the function F = σ̂²_Treatment / σ̂²_Error, which he named in honor of Fisher.
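The relationship between Fisher's z and the variance ratio that Snedecor tabulated can be checked numerically. A short sketch, using invented variance estimates:

```python
import math

# Invented variance estimates for illustration
var_treatment = 9.0   # treatment variance estimate
var_error = 1.0       # error variance estimate

# Fisher tabulated z = ½(ln var_treatment − ln var_error);
# Snedecor tabulated the ratio F = var_treatment / var_error directly,
# so F = exp(2z) and no log lookups are needed to use his table.
z = 0.5 * (math.log(var_treatment) - math.log(var_error))
F = var_treatment / var_error
print(z, F, math.exp(2 * z))
```

Because F is a simple ratio of the two variance estimates, Snedecor's table removed the logarithm step that made Fisher's z table inconvenient.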
