Group 2: 59 106 174 207 219 237 313 365 458 497 515 529 557 615 625 645 973 1065 3215

(These data were generously supplied by E. Dana and reflect the time participants could keep a portion of an apparatus in contact with a specified target.) Comparing means with Welch's heteroscedastic test, the significance level is .475. With Yuen's test, the significance level is .053.
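The contrast between the two tests can be sketched in Python with a minimal implementation of Yuen's heteroscedastic test for 20% trimmed means (this is an illustrative sketch, not the software the text refers to). The Group 2 values are from the text; the Group 1 data are not shown in this excerpt, so `group1` below is a hypothetical placeholder, and the resulting p-values will not match the .475 and .053 reported above.

```python
import numpy as np
from scipy import stats

def win_var(x, prop=0.2):
    """Winsorized variance: the g smallest values are pulled up to the
    (g+1)th order statistic, the g largest pulled down symmetrically."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(prop * len(x)))
    x[:g] = x[g]
    x[len(x) - g:] = x[len(x) - g - 1]
    return x.var(ddof=1)

def trim_mean(x, prop=0.2):
    """Trimmed mean: drop the g smallest and g largest observations."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(prop * len(x)))
    return x[g:len(x) - g].mean()

def yuen(x, y, prop=0.2):
    """Yuen's heteroscedastic test for two independent trimmed means.
    Returns the test statistic, estimated degrees of freedom, p-value."""
    hx = len(x) - 2 * int(np.floor(prop * len(x)))  # effective sample sizes
    hy = len(y) - 2 * int(np.floor(prop * len(y)))
    dx = (len(x) - 1) * win_var(x, prop) / (hx * (hx - 1))
    dy = (len(y) - 1) * win_var(y, prop) / (hy * (hy - 1))
    t = (trim_mean(x, prop) - trim_mean(y, prop)) / np.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, df, 2 * stats.t.sf(abs(t), df)

group2 = [59, 106, 174, 207, 219, 237, 313, 365, 458, 497,
          515, 529, 557, 615, 625, 645, 973, 1065, 3215]
# Hypothetical placeholder; the actual Group 1 data are not shown here.
group1 = [82, 120, 180, 210, 260, 300, 310, 360, 420, 440,
          470, 480, 510, 540, 560, 580, 700, 900, 1200]

t, df, p = yuen(group1, group2)
print(t, df, p)
```

Trimming discounts the extreme value 3215, which inflates the ordinary sample variance and can mask a difference when means are compared with Welch's test.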

Judging Sample Sizes When Using Robust Estimators

Stein-type methods provide a way of judging the adequacy of a sample size based on the data available. If a nonsignificant result is obtained, again there is the issue of whether this reflects low power given the available sample size. Under normality, and when working with means, this issue can be addressed with Stein-type methods, but how might such techniques be extended to other measures of location? Thanks to modern technology, devising reasonable methods for estimating power based on estimated standard errors is a fairly trivial matter, and in fact there are many methods one might use with robust measures of location. For example, theoretical results suggest how to extend Stein-type methods to trimmed means, but finding a method that performs reasonably well with small or even moderately large sample sizes is quite another matter. One practical difficulty is that the resulting estimators tend to be biased and can be relatively inaccurate. For example, suppose that, based on n observations from each group being compared, the standard error for each group is estimated, yielding an estimate of how much power there is given the observations available. For convenience, let γ̂ be some estimate of γ, the true amount of power. Of course there will be some discrepancy between γ̂ and γ, and typically it seems that this discrepancy can be quite high. The problem is that estimated standard errors are themselves inaccurate. That is, if the true standard errors were known, accurate methods for estimating power could be devised, but because they are estimated, γ̂ can be rather unsatisfactory. Moreover, methods for deriving an appropriate estimate of γ usually are biased. Even when a reasonably unbiased estimator has been found, what is needed is some method for assessing the accuracy of γ̂. That is, how might a confidence interval for γ be computed based on the data available? Again, solutions are available, but the challenge is finding methods for which the precision of γ̂ can be assessed in an adequate manner with small to moderate sample sizes.

A method that performs relatively well when working with 20% trimmed means is described by Wilcox and Keselman (in press). It is limited, however, to the one- and two-sample case. A comparable method when comparing more than two groups remains to be developed. The method, along with easy-to-use software, is described in Wilcox (in press) as well.

The method just mentioned could be extended to MOM and M-estimators, but nothing is known about its small-sample properties. This area is in need of further research.

Rank-Based Methods and Outliers

Yet another approach to low power due to outliers is to switch to some rank-based method, but as already noted, modern heteroscedastic methods are recommended over more traditional homoscedastic techniques. Ranks are assigned by putting the observations in ascending order, assigning a rank of 1 to the smallest value, a rank of 2 to the next smallest, and so on. So regardless of how extreme an outlier might be, its rank depends only on its relative position among the ordered values. Consider, for example, the values 198, 199, 245, 250, 301, and 320. The value 198 has a rank of one. If this smallest value were 2 instead, it would be an outlier, but its rank would still be one, so when a rank-based method is used to compare groups, power is not affected. A summary of modern rank-based methods, developed after 1980, can be found in Wilcox (in press).
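The numeric example in this paragraph can be checked directly; the sketch below uses plain NumPy rather than any particular rank-based test:

```python
import numpy as np

def ranks(x):
    """Assign rank 1 to the smallest value, 2 to the next, and so on."""
    order = np.argsort(x)
    r = np.empty(len(x), dtype=int)
    r[order] = np.arange(1, len(x) + 1)
    return r

a = [198, 199, 245, 250, 301, 320]
b = [2, 199, 245, 250, 301, 320]   # smallest value replaced by an outlier

print(ranks(a))  # [1 2 3 4 5 6]
print(ranks(b))  # [1 2 3 4 5 6] -- the outlier's rank is unchanged
```

Because only the ordering enters the computation, making the smallest value arbitrarily extreme leaves every rank, and hence any rank-based test statistic, exactly the same.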
