The fundamental assumption underlying the use of 360-degree assessments is that ratings from different sources provide unique and meaningful information about the target manager's performance. Extant research appears to support this assumption by demonstrating low correlations between rating sources. This article reexamines support for this assumption, suggesting that past research has been distorted by a statistical artifact: restriction of variance in job performance. This artifact reduces the amount of between-target variance in ratings and attenuates traditional correlation-based estimates of rating similarity. Results obtained from a Monte Carlo simulation and two field studies support this restriction-of-variance hypothesis. Noncorrelation-based methods of assessing interrater agreement indicated that agreement between sources was about as high as agreement within sources. Thus, different sources did not appear to be furnishing substantially unique information. The authors conclude by questioning common practices in 360-degree assessments and offering suggestions for future research and application.
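The attenuation mechanism the abstract describes can be illustrated with a minimal Monte Carlo sketch (this is an illustration of the general statistical point, not the authors' actual simulation; the sample sizes, variance values, and the assumption of independent normal rating errors are all hypothetical). When between-target variance in true performance shrinks, the correlation between two rating sources falls even though each source tracks performance equally well:

```python
import random

random.seed(42)

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def between_source_r(perf_sd, n=5000, noise_sd=1.0):
    # True performance for each target manager.
    perf = [random.gauss(0, perf_sd) for _ in range(n)]
    # Two sources rate the same targets with independent error.
    src_a = [p + random.gauss(0, noise_sd) for p in perf]
    src_b = [p + random.gauss(0, noise_sd) for p in perf]
    return pearson(src_a, src_b)

# Unrestricted between-target variance vs. a restricted range
# (e.g., managers preselected on performance).
r_wide = between_source_r(perf_sd=1.0)
r_narrow = between_source_r(perf_sd=0.3)
print(f"unrestricted r = {r_wide:.2f}, restricted r = {r_narrow:.2f}")
```

With identical rating error in both conditions, the restricted-variance correlation is far lower, which is how a low between-source correlation can arise without the sources conveying unique information.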
All Science Journal Classification (ASJC) codes
- Decision Sciences (all)
- Strategy and Management
- Management of Technology and Innovation