Is there an "Almost Perfect" agreement in a classification?
Description
In this podcast I discuss the extensive use of the "Strength of Agreement" table for ranges of Kappa values, provided by: Landis, J.R. and Koch, G.G., 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1), pp. 159-174. According to Google Scholar, this paper has more than 53,000 citations (as of October 2019). In my opinion this table has sometimes been used for purposes different from those of the original paper, whose results, according to the authors, "have been illustrated with an example involving only two observers", and where "these divisions are clearly arbitrary". The original paper is available at https://www.jstor.org/stable/pdf/2529310.pdf

Follow my podcast: http://anchor.fm/tkorting
Subscribe to my YouTube channel: http://youtube.com/tkorting

The intro and the final sounds were recorded at my home, using an old clock that belonged to my grandmother. Thanks for listening!
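For listeners who want to see what the discussion refers to, here is a minimal sketch (not from the episode) that computes Cohen's Kappa for two observers from a confusion matrix and maps it onto the Landis and Koch (1977) strength-of-agreement labels; the cut-off values are the divisions the authors themselves describe as arbitrary, and the example matrix is invented for illustration only.

```python
def cohens_kappa(confusion):
    """Cohen's Kappa: (p_o - p_e) / (1 - p_e) for a square confusion matrix."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement: proportion of samples on the diagonal.
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: product of row and column marginals.
    row_marg = [sum(row) for row in confusion]
    col_marg = [sum(col) for col in zip(*confusion)]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg)) / (total * total)
    return (p_o - p_e) / (1 - p_e)

def landis_koch_label(kappa):
    """Strength-of-agreement divisions from Landis & Koch (1977)."""
    if kappa < 0.00:
        return "Poor"
    if kappa <= 0.20:
        return "Slight"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Substantial"
    return "Almost Perfect"

# Hypothetical example: two observers classifying 100 samples into 3 classes.
matrix = [
    [25, 3, 2],
    [4, 30, 3],
    [1, 2, 30],
]
k = cohens_kappa(matrix)
print(f"kappa = {k:.3f} -> {landis_koch_label(k)}")  # kappa = 0.774 -> Substantial
```

Note that the label jumps from "Substantial" to "Almost Perfect" at an arbitrary threshold (0.80), which is exactly the kind of mechanical use of the table that the episode questions.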