Linearity, sometimes jointly with constant variance, is routinely assumed in the context of sufficient dimension reduction. It is well understood that when these conditions do not hold, blindly applying methods that rely on them can yield inconsistent estimates of the central subspace and the central mean subspace. Surprisingly, we discover that even when these conditions do hold, exploiting them incurs an efficiency loss. This paradoxical phenomenon is illustrated through sliced inverse regression and principal Hessian directions; the efficiency loss also applies to other dimension reduction procedures. We explain this empirical discovery through a theoretical investigation.
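To make the setting concrete, the following is a minimal, illustrative sketch of classical sliced inverse regression (SIR), one of the two methods discussed above. It is not the estimator proposed in the paper; the function name, slice count, and the single-index data-generating model are assumptions chosen for illustration. With Gaussian predictors, as used here, the linearity condition holds by construction.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Estimate central-subspace directions via classical SIR."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean(X)) Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(Sigma)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ inv_sqrt
    # Slice observations by the order of y and average Z within slices
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of the kernel matrix, mapped back to the X scale
    evals, evecs = np.linalg.eigh(M)
    B = inv_sqrt @ evecs[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)

# Hypothetical single-index example: y depends on X only through X @ b
rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.standard_normal((n, p))  # Gaussian X satisfies linearity
b = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = (X @ b) ** 3 + 0.1 * rng.standard_normal(n)
b_hat = sir_directions(X, y)
```

Here the estimated direction `b_hat` should be close (up to sign) to the true direction `b`, since the central subspace of this model is the span of `b`.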