This paper examines the strength of association between the outcomes of the National Research Foundation's (NRF) peer-review-based rating mechanisms and a range of objective measures of researcher performance. The analysis covers 1,932 scholars who have received an NRF rating or an NRF research chair. We find that, on average, scholars with higher NRF ratings perform better on research output and impact metrics. However, we also find anomalies in the probabilities of attaining different NRF ratings when these are assessed against bibliometric performance measures, and a disproportionately large incidence of scholars with high peer-review-based ratings but low levels of recorded research output and impact. Moreover, we find strong cross-disciplinary differences in the effect that objective levels of performance have on the probability of achieving different NRF ratings. Finally, we report evidence that NRF peer review is less likely to reward multi-authored research output than single-authored output. Claims that NRF peer review is free of bias are thus difficult to sustain.