
The effect of Training on Interrater Reliability in Dream Content Analysis
Michael Schredl, Natalie Burchert, Yvonne Gabatin
Sleep and Hypnosis: A Journal of Clinical Neuroscience and Psychopathology 2004;6(3):139-144
Content analysis is an important and frequently applied tool in dream research. Hall and Van de Castle (1) stressed the importance of interrater reliability in the application of content analytic scales, i.e., how good is the agreement between two judges scoring the same dream material independently? The present study investigated the effect of rater training on the interrater reliability of scales developed by Schredl (2). Three samples of 100 dream reports each were analyzed by two inexperienced judges, who received training sessions after coding 100 and 200 dreams. The results indicate that rater training has a positive effect on interrater reliability and on the mean differences between judges for some scales (nominal and interval scales), but not for ordinal scales (e.g., dream emotions). It remains unclear how much training is necessary for different scales, and whether the scales themselves might need to be improved if extensive training does not yield the desired improvement in interrater reliability. Thus, more studies investigating rater training for different systems of dream content analysis are needed.
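The abstract does not specify which agreement statistics were used, but the two quantities it names — interrater reliability for nominal scales and mean differences between judges for interval scales — can be illustrated with a minimal sketch. The judge names and score lists below are invented for illustration; they do not come from the study.

```python
# Hypothetical illustration: two judges independently code the same dreams.
# Exact agreement suits nominal scales; the signed mean difference between
# judges suits interval scales (both measures are named in the abstract).

def exact_agreement(rater_a, rater_b):
    """Proportion of dream reports on which both judges assign the same code."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def mean_difference(rater_a, rater_b):
    """Average signed difference between judges on an interval scale."""
    assert len(rater_a) == len(rater_b)
    return sum(a - b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Invented codes for ten dream reports (e.g., number of characters per dream).
judge1 = [1, 0, 2, 1, 1, 0, 2, 2, 1, 0]
judge2 = [1, 0, 2, 0, 1, 0, 2, 1, 1, 0]
print(exact_agreement(judge1, judge2))  # 0.8
print(mean_difference(judge1, judge2))  # 0.2
```

A rater-training study of this kind would compute such statistics before and after each training session; an agreement rate rising toward 1.0 and a mean difference shrinking toward 0 would indicate the training is working.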
Keywords:
dream content analysis, interrater reliability, rater training

