Several years ago, the American Psychological Association began requiring that effect size estimates be reported to provide a better indication of the strength of association between factors and dependent measures in empirical studies (American Psychological Association, 2010, Publication Manual of the American Psychological Association, Washington, DC: Author). Accordingly, developmental journals now require or strongly recommend that effect size estimates be included in published work. This trend has potentially important benefits for infancy research, given the inherent difficulty of establishing conceptually strong findings from the highly variable performance typical of small samples. This study examined recent infant research from select journals for the accuracy and interpretative value of effect size estimates. Demographics, sample size, design, and statistical data were coded from 158 articles published between 2007 and 2012, presenting 878 effect size estimates from experimental findings with infants using behavioral methods. Descriptive and distribution statistics were calculated for three variables: (1) statistical tests, (2) effect size parameters, and (3) effect size interpretations. Although partial eta squared (ηp²) and eta squared (η²) were the most common (49% and 42%, respectively), "η confusion" was apparent, and interpretation of effect size estimates was virtually nonexistent. Thus, effect size estimates are having little impact on infant development research, despite long-standing criticism of sole reliance on null hypothesis significance testing (e.g., American Psychologist, 49, 997, 1994). Suggestions for increasing the accuracy of effect size estimate selection and for interpretative effect size cutoffs are offered to improve empirical clarity.
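The "η confusion" noted in the abstract arises because the two statistics share a symbol but use different denominators: η² scales the effect against total variability, while ηp² excludes variance attributable to other model terms. A minimal sketch of the distinction, using invented sums of squares purely for illustration:

```python
# Hypothetical sums of squares from a multifactor ANOVA (all values invented).
ss_effect = 20.0   # sum of squares for the effect of interest
ss_error = 60.0    # error (residual) sum of squares
ss_other = 120.0   # sum of squares for all other model terms

ss_total = ss_effect + ss_other + ss_error

# Eta squared: effect relative to TOTAL variability in the design.
eta_sq = ss_effect / ss_total                         # 20 / 200 = 0.10

# Partial eta squared: effect relative to effect-plus-error only,
# so other factors' variance is excluded from the denominator.
partial_eta_sq = ss_effect / (ss_effect + ss_error)   # 20 / 80 = 0.25

print(eta_sq, partial_eta_sq)  # → 0.1 0.25
```

In a one-way design the two coincide, but in multifactor designs ηp² is never smaller than η², so reporting one under the other's label overstates or understates the effect.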
All Science Journal Classification (ASJC) codes
- Pediatrics, Perinatology, and Child Health
- Developmental and Educational Psychology