Of the following research statistics, which one is designed to correct for chance agreement between two raters when measuring inter-rater reliability?
a. Cohen’s kappa
b. Cronbach’s alpha
c. Kuder-Richardson formula 20
d. Spearman-Brown formula
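
The question hinges on correcting observed agreement for the agreement two raters would reach by chance alone, which is exactly what Cohen's kappa does via kappa = (p_o - p_e) / (1 - p_e). Below is a minimal sketch of that computation; the rating data are hypothetical and only illustrate the idea.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a)
    # Kappa rescales agreement beyond chance onto a 0-1 range (negative if worse than chance).
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classify ten items as "yes"/"no".
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohen_kappa(a, b), 2))  # observed 0.80, chance 0.52, kappa ≈ 0.58
```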