## Implications

1. When secondary goals are added to an experiment there is a risk of over-specification: asking too many questions of too few replicates. When an experiment is over-specified, the assumptions of the statistical models can be compromised. For example, if we ask whether expression increases over time, a simple linear model may be sufficient, because the question itself assumes an approximately linear response. If we instead ask whether there is any change in expression, there is no longer an assumption of linearity. In this case an ANOVA is appropriate, because it can detect transient changes in expression. Importantly, an ANOVA requires more time points and does not tell us whether expression increased over time. Ideally, there should be enough time and concentration sampling points to account for all real responses. In practice, the number of samples that can be processed in an experiment is limited, and the experimental design is of necessity a compromise. The experimenter should keep these compromises in mind when interpreting the results.
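The contrast between the two questions can be sketched numerically. The following is an illustrative example with made-up numbers (the time points, replicate counts, and effect sizes are assumptions, not data from any real experiment): a linear model answers "does expression increase?", while a one-way ANOVA answers only "does expression change?".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical time course (illustrative values, not real data):
# expression at five time points, four biological replicates each,
# rising roughly linearly with time.
times = np.array([0, 2, 4, 8, 24])
expr = np.vstack([rng.normal(2 + 0.1 * t, 0.3, size=4) for t in times])

# "Does expression increase over time?" -- linear regression,
# which assumes an approximately linear response.
fit = stats.linregress(np.repeat(times, 4), expr.ravel())

# "Does expression change at all?" -- one-way ANOVA, which assumes
# no particular shape and can flag transient changes, but cannot
# say whether expression went up or down.
f_stat, p_anova = stats.f_oneway(*expr)

print(f"linear trend p = {fit.pvalue:.2g}, ANOVA p = {p_anova:.2g}")
```

Note that the ANOVA p value carries no direction: a transient spike and a steady increase can both yield a small p, which is exactly why the question being asked must be fixed before the design is chosen.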

2. "If you torture the data long enough it will confess." This adage, attributed to John Freund, points to an important yet frequently neglected part of experimental design: defining the end of data analysis. Defining the end of the analysis is distinguished from goal setting by the recognition that a given study may not satisfy the experimenter's expectations. The natural temptation is to reanalyze the data, often with different methods, until the sought-after answer emerges. This post hoc approach can be very misleading. Statistical tests run serially do not preserve the false-positive rate of a single test run in isolation: if test after test is run until one is 'significant', the value of that significance is suspect. If a sufficient number of t tests are performed on random data, the probability of obtaining at least one t test at 95% confidence approaches 100%. More specifically, if 100 tests are made, the chance of getting at least one false positive with a p value of 0.05 or lower is 1 - 0.95^100 = 99.4% (Cobb 1997). Serial analysis without correction is therefore inherently suspect; this effect is well recognized and is addressed by multiple-comparison corrections such as the Bonferroni adjustment. With microarray data the problem is greatly magnified by the large number of transcripts and the comparatively small number of samples. Always remember that measuring 10 transcripts in five rats, or 20 000 transcripts in the same five rats, still gives only the statistical power of five biological replicates. If your analysis of five rats does not definitively answer your key question, test more animals or use the information you gained to redesign the experiment. There is far more value in replicating the biology than in manipulating the data.
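The 99.4% figure is easy to verify by simulation. The sketch below (the trial counts, group sizes, and random seed are arbitrary choices for illustration) runs 100 t tests on pure noise, so every 'significant' result is a false positive, and then shows how a Bonferroni correction restores the family-wise error rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_tests, n_trials = 100, 2000

# Analytic result quoted in the text: with 100 independent tests on
# pure noise, P(at least one p <= 0.05) = 1 - 0.95**100, about 0.994.
analytic = 1 - 0.95 ** n_tests

uncorrected = corrected = 0
for _ in range(n_trials):
    # 100 t tests comparing two groups of pure random noise,
    # five "rats" per group -- every rejection is a false positive.
    a = rng.normal(size=(n_tests, 5))
    b = rng.normal(size=(n_tests, 5))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    uncorrected += (p <= 0.05).any()
    # Bonferroni: divide the significance threshold by the number
    # of tests, keeping the family-wise error rate near 0.05.
    corrected += (p <= 0.05 / n_tests).any()

print(f"analytic {analytic:.3f}, "
      f"uncorrected {uncorrected / n_trials:.3f}, "
      f"Bonferroni {corrected / n_trials:.3f}")
```

The uncorrected rate lands near the analytic 0.994, while the Bonferroni-corrected rate falls back to roughly 0.05, at the cost of reduced power for each individual test.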

3. Experimental design should be guided by the analysis you plan to apply. For example, a statistical model that assumes a continuous response, such as linear regression, fails when the response is not linear. But if there are too few time points, there is no indication that the model has failed, and real effects, such as rapid fluctuations in gene expression, may be missed.
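A minimal sketch of this failure mode, using an invented transient response (the curve, spike time, and sampling grid are all assumptions for illustration):

```python
import numpy as np

# Illustrative transient response (not real data): expression spikes
# around t = 3 h and is back at baseline well before t = 8 h.
def expression(t):
    return 1.0 + 4.0 * np.exp(-((t - 3.0) ** 2))

dense = expression(np.linspace(0, 24, 200))            # the true response
coarse = expression(np.array([0.0, 8.0, 16.0, 24.0]))  # sparse sampling

# Sampled only every 8 hours, the spike is invisible: the coarse
# measurements sit flat at baseline, so a linear model "fits" them
# perfectly while the real biology is missed entirely.
print(f"true peak {dense.max():.2f}, "
      f"peak seen by the sparse design {coarse.max():.2f}")
```

Nothing in the sparse data warns that the model is wrong; only a denser sampling grid, or prior knowledge of the response kinetics, can reveal the transient.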

4. The experimental design should match the technology employed. In two-colour microarray experiments all measurements are relative. To avoid forced pairing of samples and controls, it is common practice to hybridize a pooled control to every array so that all measurements are made relative to the same reference. In single-colour experiments, such as GeneChip® arrays, controls should not be pooled: pooling the controls significantly reduces the power of the statistical analysis. The optimal experimental design is balanced, with equal numbers of control and treated samples. The decision to treat samples and controls as paired or unpaired can be deferred to the analysis stage.

5. A hypothesis is usually tested with a statistical test, and any statistical test requires sufficient replication to achieve adequate statistical power and confidence. By replication we mean biological replicates, such as individual rats. A simple approach to estimating the number of replicates required in an experiment is discussed in detail later.

6. Unexpected variables may become apparent. For example, in typical toxicological experiments the time of day at which the animals are sacrificed is not considered important. However, it is well known that many genes are regulated by circadian rhythms and that the abundance of some transcripts varies widely over the course of a day (Ueda et al. 2002), so in microarray experiments time of day is a variable that needs to be controlled. With in vitro experiments, changes in culture conditions can be expected to induce expression of a large number of genes. In Figure 3.11, the change in expression on day 4 was traced to a change of culture medium on the previous day. True biological replication and careful technique serve to minimize these effects. When control is not possible, the only viable substitutes are randomization and sufficient replication.

Fig. 3.11 An unexpected event during a time series. In this example, gene expression from four replicates of a chemically treated cell line was perturbed when the medium was replaced on the third day. The expression pattern, although still linear enough to be observed, has a greatly reduced quality of linear fit. This raises the question of how many other almost-linear responses were missed because of the environmental event.

7. The cost of a microarray experiment has many components: labour and financial costs, and ethical costs when human donors or experimental animals are involved. When increasing the number of replicates, these costs must be traded off against the increased power and sensitivity of the test. Clearly, these tradeoffs are laboratory-dependent and also depend on the experimental system used. However, it is universally true that it is better to answer a single question well than to ask many questions poorly.

8. The cost of data analysis and data interpretation is usually underestimated. To reduce the number of arrays used, it is tempting to develop complex experimental designs in which many variables are measured simultaneously. However, increasing the design complexity increases the difficulty of sample treatment and data analysis. The complexity of biological interpretation also increases. These complex answers may require subsequent clarifying experimentation, which increases the overall cost of achieving the experimental goal. A better approach is to break a large experiment into more manageable smaller experiments. A pilot study is useful in this regard, because the experimenter can try to interpret the simplified experiment and potential issues can be addressed before committing to a large experiment.
