4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences

Jesse Scott, Robert Collins, Christopher Funk, Yanxi Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We develop a computational tool that aligns motion capture (mocap) data to videos of 24-form simplified Taiji (TaiChi) Quan, a scripted motion sequence about 5 minutes long. With only prior knowledge that the subjects in video and mocap perform a similar pose sequence, we establish inter-subject temporal synchronization and spatial alignment of mocap and video based on body joint correspondences. Through time alignment and matching the viewpoint and orientation of the video camera, the 3D body joints from mocap data of subject A can be correctly projected onto the video performance of subject B. Initial quantitative evaluation of this alignment method shows promise in offering the first validated algorithmic treatment for cross-subject comparison of Taiji Quan performances. This work opens the door to subject-specific quantified comparison of long motion sequences beyond Taiji.
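The abstract describes two coupled steps: temporal synchronization between the mocap and video sequences, and spatial alignment obtained by recovering the video camera's viewpoint so that 3D mocap joints can be projected onto the video frames. The paper's own implementation is not reproduced here; the sketch below is only a minimal illustration of those two ingredients, using a generic dynamic-time-warping alignment over joint features and a standard pinhole projection. The function names, camera parameters (R, t, K), and toy data are assumptions made for illustration, not the authors' code.

import numpy as np

def dtw_align(seq_a, seq_b):
    """Dynamic time warping between two joint-feature sequences.

    seq_a: (Ta, D), seq_b: (Tb, D). Returns a list of (i, j) index pairs
    giving a monotonic temporal correspondence. Generic DTW, used here only
    to illustrate the idea of temporal synchronization; the paper's actual
    alignment procedure may differ.
    """
    Ta, Tb = len(seq_a), len(seq_b)
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path from the end of both sequences.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def project_joints(joints_3d, R, t, K):
    """Pinhole projection of 3D mocap joints into the video image.

    joints_3d: (J, 3) joint positions in mocap world coordinates.
    R (3x3), t (3,): estimated camera rotation and translation.
    K (3x3): camera intrinsics. Returns (J, 2) pixel coordinates.
    """
    cam = joints_3d @ R.T + t          # world -> camera coordinates
    px = cam @ K.T                     # apply intrinsics
    return px[:, :2] / px[:, 2:3]      # perspective divide

# Toy usage: align two noisy copies of a short joint trajectory, then
# project one frame's joints with an assumed camera.
rng = np.random.default_rng(0)
traj = rng.normal(size=(50, 36))                  # 12 joints x 3 coords, flattened
path = dtw_align(traj, traj + 0.01 * rng.normal(size=traj.shape))
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
pix = project_joints(rng.normal(size=(12, 3)) + [0, 0, 5], np.eye(3), np.zeros(3), K)

In practice, the camera pose and intrinsics would have to be estimated from 2D joint detections in the video before the projection step; the toy values above merely exercise the two functions.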

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 795-804
Number of pages: 10
ISBN (Electronic): 9781538610343
DOIs: 10.1109/ICCVW.2017.99
State: Published - Jan 19 2018
Event: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 - Venice, Italy
Duration: Oct 22 2017 - Oct 29 2017

Publication series

Name: Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Volume: 2018-January

Other

Other: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Country: Italy
City: Venice
Period: 10/22/17 - 10/29/17

Fingerprint

  • Data acquisition
  • Video cameras
  • Synchronization

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

Scott, J., Collins, R., Funk, C., & Liu, Y. (2018). 4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences. In Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 (pp. 795-804). (Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017; Vol. 2018-January). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCVW.2017.99