Interobserver Agreement

Cohen (1968) proposed weighted kappa as an alternative that allows researchers to penalize disagreements differentially according to the magnitude of the disagreement. Cohen's weighted kappa is typically used for categorical data with an ordinal structure, for example a rating system that categorizes the presence of a particular attribute as high, medium, or low. In this case, a subject rated high by one coder and low by another should produce a lower interrater reliability (IRR) estimate than a subject rated high by one coder and medium by the other. Norman and Streiner (2008) show that using weighted kappa with quadratic weights for ordinal scales is identical to a two-way mixed, single-measures ICC, and the two may be substituted interchangeably. This interchangeability is a particular advantage when three or more coders are used in a study, as ICCs can accommodate three or more coders, whereas weighted kappa can accommodate only two (Norman & Streiner, 2008).

The ICC assessment (McGraw & Wong, 1996) was conducted using an average-measures ICC to evaluate the degree to which coders provided consistency in their ratings of empathy across subjects. The resulting ICC of 0.96 was in the excellent range (Cicchetti, 1994), indicating that the coders showed a high degree of agreement and suggesting that empathy was rated similarly across coders. The high ICC suggests that a minimal amount of measurement error was introduced by the independent coders and that statistical power for subsequent analyses is therefore not substantially reduced. Empathy ratings were accordingly deemed suitable for use in the hypothesis tests of this study.
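To make the weighting scheme concrete, here is a minimal Python sketch of quadratic-weighted kappa for two coders. The function name, the 0/1/2 coding of low/medium/high, and the rating data are invented for illustration and are not taken from the study above.

```python
import numpy as np

def weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's weighted kappa with quadratic (squared-distance) penalties."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed joint proportions over the n_categories x n_categories table.
    observed = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= len(a)
    # Expected proportions under chance agreement (product of the marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic penalties: the cost of a disagreement grows with the squared
    # distance between categories, so high-vs-low costs more than high-vs-medium.
    idx = np.arange(n_categories)
    penalty = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (penalty * observed).sum() / (penalty * expected).sum()

# Two hypothetical coders rating ten subjects as low/medium/high (0/1/2).
coder_1 = [2, 2, 1, 0, 2, 1, 0, 0, 2, 1]
coder_2 = [2, 1, 1, 0, 2, 1, 0, 1, 2, 1]
print(weighted_kappa(coder_1, coder_2, n_categories=3))  # about 0.83 here
```

Note that setting every off-diagonal penalty to the same value recovers plain, unweighted kappa, so the weighted form is a strict generalization.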

Subsequent extensions of the approach included versions that could handle "partial credit" and ordinal scales. [7] These extensions converge with the intraclass correlation (ICC) family, which provides a notionally consistent way of estimating reliability at each level of measurement: from nominal (kappa) to ordinal (ordinal kappa or ICC) to interval (ICC, or ordinal kappa treating the interval scale as ordinal) and ratio (ICC). There are also variants that consider agreement by raters across a set of items (for example, do two raters agree on the depression ratings for all items of the same semi-structured interview for one case?) as well as across raters x cases (for example, how well do two or more raters agree on whether 30 cases have a diagnosis of depression, yes/no, a nominal variable?).
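As a rough sketch of the interval-level case, the two-way, consistency, average-measures ICC referenced above (McGraw & Wong, 1996) can be computed directly from the standard ANOVA mean squares. The function name and the ratings matrix below are hypothetical, chosen only to show the shape of the calculation.

```python
import numpy as np

def icc_consistency_average(ratings):
    """ICC(C,k): two-way, consistency, average-measures (McGraw & Wong, 1996)."""
    x = np.asarray(ratings, dtype=float)   # rows = subjects, columns = coders
    n, k = x.shape
    grand = x.mean()
    # Two-way ANOVA decomposition: subjects (rows), coders (columns), residual.
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    # Consistency, average-measures form: ICC(C,k) = (MS_rows - MS_error) / MS_rows.
    return (ms_rows - ms_error) / ms_rows

# Hypothetical empathy ratings: six subjects, each rated by three coders.
ratings = [[4, 4, 5],
           [2, 2, 2],
           [5, 4, 5],
           [1, 2, 1],
           [3, 3, 4],
           [5, 5, 5]]
print(icc_consistency_average(ratings))
```

Because the ratings matrix can have any number of columns, this form accommodates three or more coders directly, which is exactly the advantage over weighted kappa noted earlier.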
