Kappa statistics

Introduction:

Kappa statistics are used to establish the inter-observer and intra-observer validity of data. Kappa values range from 1 down to 0 and can also fall below zero: when Kappa equals 1 there is complete agreement, when it equals 0 there is no agreement beyond chance, and when it is less than 0 we say there is negative agreement. In our case, four individuals observe images on a laptop and report the number of circles in each image; there are a total of 22 images.

Cohen's Kappa is the most common measure of agreement; however, it applies only when there are exactly two observers rating each item (for example, agree or disagree, yes or no). Because our case involves more than two raters, we will use the Fleiss Kappa, which can be applied with any number of raters.
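As a brief illustration of the two-rater case, a minimal sketch of Cohen's Kappa on hypothetical yes/no ratings (assumes scikit-learn is available):

# Hypothetical example: two raters give yes/no judgements on ten items.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "no"]

# Chance-corrected agreement between the two raters.
print(cohen_kappa_score(rater_1, rater_2))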

Method:

We first arrange our data in a table where the rows represent the outcomes and the columns represent the categories, here the raters. If we assume the following counts, we obtain our Fleiss Kappa as follows:

        a    b    c    d
1       1    2    3    4
2       4    3    2    1
3       3    4    3    0
4       2    2    3    3
5       3    0    0    7
6       7    0    1    2
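For concreteness, the counts in the table above can be arranged as a small matrix in code (a minimal sketch assuming NumPy is available; the variable names are ours):

import numpy as np

# The counts from the table above: one row per outcome, one column per rater (a, b, c, d).
counts = np.array([
    [1, 2, 3, 4],
    [4, 3, 2, 1],
    [3, 4, 3, 0],
    [2, 2, 3, 3],
    [3, 0, 0, 7],
    [7, 0, 1, 2],
])

# Total number of ratings in each row (the n used later in the Pr formula).
print(counts.sum(axis=1))   # [10 10 10 10 10 10]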

We first get the totals for each column; the following table summarizes the results:

        a         b         c         d
1       1         2         3         4
2       4         3         2         1
3       3         4         3         0
4       2         2         3         3
5       3         0         0         7
6       7         0         1         2
Total   20        11        12        17        60
Pc      0.333333  0.183333  0.2       0.283333

We then add the totals for a, b, c and d, which in this case gives 60, and calculate Pc by dividing each column total by 60; the results are summarized in the Pc row.
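A minimal sketch of this step in code (same matrix as above, assuming NumPy is available):

import numpy as np

counts = np.array([
    [1, 2, 3, 4],
    [4, 3, 2, 1],
    [3, 4, 3, 0],
    [2, 2, 3, 3],
    [3, 0, 0, 7],
    [7, 0, 1, 2],
])

column_totals = counts.sum(axis=0)   # totals for a, b, c and d: 20, 11, 12, 17
grand_total = column_totals.sum()    # 60
pc = column_totals / grand_total     # the Pc row of the table
print(np.round(pc, 6))               # 0.333333, 0.183333, 0.2, 0.283333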

We then calculate the Pr for each row by using the following formula:

Pr = (1/(n(n − 1))) × (a² + b² + c² + d² − n)

where n is the total number of ratings in the row. The following table summarizes the results, and a short code sketch of the row calculation is given after the table:

        a         b         c         d         Pr
1       1         2         3         4         0.2
2       4         3         2         1         0.222222
3       3         4         3         0         0.3
4       2         2         3         3         0.228571
5       3         0         0         7         0.8
6       7         0         1         2         0.88
Total   20        11        12        17        2.630794
Pc      0.333333  0.183333  0.2       0.283333

(Overall total of the counts: 60.)
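A minimal sketch of this row-level calculation in code (the helper name row_agreement is ours), applied to the second row of the table as an example:

def row_agreement(row):
    # Pr for one row: (sum of squared counts - n) / (n * (n - 1)),
    # where n is the total number of ratings in the row.
    n = sum(row)
    return (sum(c * c for c in row) - n) / (n * (n - 1))

# Second row of the table (counts 4, 3, 2, 1).
print(round(row_agreement([4, 3, 2, 1]), 6))   # 0.222222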

To calculate K we use the following formula:

K = (P’ – P”)/(1-P”)

We derive P’ as follows:

P’ = (1/(n × n(n − 1))) × (∑Pr) × n × (n − 1)

Therefore our P’ will be calculated as follows:

P’ = (1/(10 × 10 × (10 − 1))) × 2.630794 × 10 × (10 − 1)

P’ = 0.263079

P” will be calculated as follows:

P” = Pc(a)² + Pc(b)² + Pc(c)² + Pc(d)²

P” = 0.333333² + 0.183333² + 0.2² + 0.283333²

Therefore our P” value is 0.265.

K = (P’ – P”)/(1-P”)

K = (0.263079 – 0.265)/(1-0.265)

K = -0.00261

This means that we have a slight negative agreement in this case, with a value very close to zero.
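Putting these last steps together, a short sketch in code that plugs in the P’ and P” values derived above and evaluates K:

# P' and P" as derived above, then K = (P' - P") / (1 - P").
p_prime = 2.630794 / 10                    # P' = 0.263079
pc = [0.333333, 0.183333, 0.2, 0.283333]
p_double_prime = sum(p ** 2 for p in pc)   # P" is approximately 0.265
kappa = (p_prime - p_double_prime) / (1 - p_double_prime)
print(round(kappa, 5))                     # -0.00261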

Results:

Expected and automated values

K = -1.10845

Expected and observed for each person

Individual 1

K= -0.298228295

Individual 2

K= -0.246395784

Individual 3

K = -0.242537241

Individual 4

K = -0.182966603

Automated and observed for each person:

Individual 1

K= -0.278431438

Individual 2

K= -0.224704388

Individual 3

K = -0.221652978

Individual 4

K = -0.16055671

Observed with observed for the same person (within an observer)

Individual 1

K= -0.58582

Individual 2

K= -0.51511

Individual 3

K = -0.50365

Individual 4

K = -0.44248

Observed and observed by different persons (between observers)

K = 0.728416

Discussion:

From the above values it is evident that all of the Kappa values, apart from the observed versus observed comparison between different people, are negative; negative values show that there is a negative agreement, that is, less agreement than would be expected by chance. The last value, 0.728416, indicates a positive and moderately strong agreement between observers.

REFERENCE:

Fleiss, J. L. (1991). Statistical Methods for Rates and Proportions. New York: John Wiley & Sons.
