Another coefficient to measure rater agreement corrected for chance inflation is Cohen's kappa coefficient (κ; Cohen, 1960). Kappa is a flexible index that is applicable to dichotomous or polytomous variables involving two or more observers and is computed by

κ = (P_o − P_e) / (1 − P_e),

with P_o = Σ_{i=1}^{I} p_{ii} as the observed proportion of identical ratings and P_e = Σ_{i=1}^{I} p_{i+} p_{+i} as the expected proportion of agreement under arbitrary ratings.
Here p_{ij} = n_{ij}/N denotes the proportion of observations within each cell, and I denotes the number of categories. The proportion of observed agreement is computed by summing the proportions of the diagonal cells, in which both raters agree. The proportion of expected chance agreement is computed by summing the products of the row and column marginals for each diagonal cell. In contrast to the χ² indices, κ depends only on the agreement cells and is not affected by high nonagreement rates. For the data presented in Table 17.1a, κ is computed in this way from the observed cell proportions and their marginals.
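The computation described above can be sketched in a few lines of Python. The function below implements the definition of κ for a square contingency table; the 2×2 counts used in the usage example are hypothetical illustration values, not the counts of Table 17.1a.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table (list of rows of counts)."""
    n = sum(sum(row) for row in table)        # total number of rated objects N
    i_cats = len(table)                       # number of categories I
    # Observed agreement P_o: sum of the diagonal proportions p_ii
    p_o = sum(table[i][i] for i in range(i_cats)) / n
    # Expected chance agreement P_e: sum of row marginal * column marginal
    # for each diagonal cell
    p_e = sum(
        (sum(table[i]) / n) * (sum(table[j][i] for j in range(i_cats)) / n)
        for i in range(i_cats)
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table: rows = rater A's categories, columns = rater B's
table = [[20, 5],
         [10, 65]]
kappa = cohens_kappa(table)   # P_o = 0.85, P_e = 0.60, kappa = 0.625
```

Because P_e is built only from the marginals, κ corrects P_o for the agreement two raters would reach by assigning categories at random with the same base rates.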