Trial-to-trial dynamics and learning in a generalized, redundant reaching task

Research output: Contribution to journal › Article

21 Citations (Scopus)

Abstract

If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and DT = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the DT task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the DT task. Conversely, learning the DT task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along each GEM to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to. They did not use readily available alternative strategies that could have achieved the same performance.
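
To make the goal-function construction concrete, the sketch below (illustrative only, not the authors' analysis code) shows how a single trial's [T, D] outcome could be scored against either goal function and decomposed into deviations perpendicular to and along the corresponding GEM. The constant c, the example trial values, and all function names are assumptions chosen for illustration.

import numpy as np

# Goal functions from the abstract: D/T = c (constant speed) and D*T = c.
# Each implicit relation f(T, D) = 0 defines a goal-equivalent manifold (GEM)
# in the [T, D] plane; any point on the GEM satisfies the task exactly.

def goal_error(T, D, task, c=1.0):
    """Signed task error for one trial (zero anywhere on the GEM)."""
    if task == "D/T":
        return D / T - c              # constant-speed GEM: the line D = c*T
    if task == "DT":
        return D * T - c              # hyperbolic GEM: D = c/T
    raise ValueError(task)

def decompose_deviation(T, D, task, c=1.0):
    """Split a trial's deviation from the GEM into a component perpendicular
    to the GEM (linearized distance to the manifold) and a component along
    the GEM, using the gradient of the goal function as the local normal."""
    if task == "D/T":
        grad = np.array([-D / T**2, 1.0 / T])   # gradient of D/T - c
    else:
        grad = np.array([D, T])                 # gradient of D*T - c
    normal = grad / np.linalg.norm(grad)         # unit normal to the GEM
    tangent = np.array([-normal[1], normal[0]])  # unit tangent along the GEM

    # Perpendicular deviation, to first order: f(T, D) / |grad f|
    perp = goal_error(T, D, task, c) / np.linalg.norm(grad)
    # Tangential deviation relative to a (hypothetical) reference point on
    # the GEM sharing the trial's movement time T.
    D_on_gem = c * T if task == "D/T" else c / T
    tang = float(np.array([0.0, D - D_on_gem]) @ tangent)
    return perp, tang

# Hypothetical example trial: T = 1.2, D = 1.5 on the DT task with c = 1.0
perp, tang = decompose_deviation(T=1.2, D=1.5, task="DT", c=1.0)
print(f"perpendicular: {perp:.3f}  along-GEM: {tang:.3f}")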

Original language: English (US)
Pages (from-to): 225-237
Number of pages: 13
Journal: Journal of Neurophysiology
Volume: 109
Issue number: 1
DOIs: 10.1152/jn.00951.2011
State: Published - Jan 1 2013

Fingerprint

Learning

All Science Journal Classification (ASJC) codes

  • Neuroscience(all)
  • Physiology

Cite this

@article{a64bec49690743bf8f0405c3813d3f2b,
title = "Trial-to-trial dynamics and learning in a generalized, redundant reaching task",
abstract = "If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and DT = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the DT task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the DT task. Conversely, learning the DT task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along each GEM to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to. They did not use readily available alternative strategies that could have achieved the same performance.",
author = "Dingwell, {Jonathan B.} and Smallwood, {Rachel F.} and Cusumano, {Joseph P.}",
year = "2013",
month = "1",
day = "1",
doi = "10.1152/jn.00951.2011",
language = "English (US)",
volume = "109",
pages = "225--237",
journal = "Journal of Neurophysiology",
issn = "0022-3077",
publisher = "American Physiological Society",
number = "1",

}

Trial-to-trial dynamics and learning in a generalized, redundant reaching task. / Dingwell, Jonathan B.; Smallwood, Rachel F.; Cusumano, Joseph P.

In: Journal of Neurophysiology, Vol. 109, No. 1, 01.01.2013, p. 225-237.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Trial-to-trial dynamics and learning in a generalized, redundant reaching task

AU - Dingwell, Jonathan B.

AU - Smallwood, Rachel F.

AU - Cusumano, Joseph P.

PY - 2013/1/1

Y1 - 2013/1/1

N2 - If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and DT = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the DT task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the DT task. Conversely, learning the DT task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along each GEM to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to. They did not use readily available alternative strategies that could have achieved the same performance.

AB - If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and DT = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the DT task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the DT task. Conversely, learning the DT task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along each GEM to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to. They did not use readily available alternative strategies that could have achieved the same performance.

UR - http://www.scopus.com/inward/record.url?scp=84871862471&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84871862471&partnerID=8YFLogxK

U2 - 10.1152/jn.00951.2011

DO - 10.1152/jn.00951.2011

M3 - Article

C2 - 23054607

AN - SCOPUS:84871862471

VL - 109

SP - 225

EP - 237

JO - Journal of Neurophysiology

JF - Journal of Neurophysiology

SN - 0022-3077

IS - 1

ER -