If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and DT = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the DT task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the DT task. Conversely, learning the DT task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along each GEM, to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to: they did not use readily available alternative strategies that could have achieved the same performance.
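The GEM decomposition described above can be sketched numerically. The following is a minimal illustration, not the authors' analysis code: for the constant-speed task, every [T, D] pair satisfying D/T = c is goal-equivalent, so a trial can be split into a component along the GEM (the line D = cT) and a signed error perpendicular to it. The function name, the value of c, and the example trial values are all hypothetical.

```python
import numpy as np

# Assumed target speed for the D/T = c goal function (arbitrary units).
c = 2.0

def gem_decompose(T, D, c):
    """Split a [T, D] trial into components tangent and perpendicular
    to the D/T = c GEM, i.e. the line D = c*T in the (T, D) plane."""
    norm = np.hypot(1.0, c)
    tangent = np.array([1.0, c]) / norm   # unit vector along the GEM
    normal = np.array([-c, 1.0]) / norm   # unit vector perpendicular to the GEM
    p = np.array([T, D])
    along = p @ tangent   # goal-equivalent component (does not affect task error)
    perp = p @ normal     # goal-relevant component (the task error)
    return along, perp

# A trial exactly on the GEM (D/T = 2 = c) has zero perpendicular deviation:
along_on, perp_on = gem_decompose(1.5, 3.0, c)
# A trial off the GEM (D/T = 3 != c) has a nonzero perpendicular deviation:
along_off, perp_off = gem_decompose(1.0, 3.0, c)
```

Under this decomposition, the abstract's finding corresponds to trial-to-trial corrections acting more strongly on `perp` than on `along`.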