Visual storytelling

Ting Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, Margaret Mitchell

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

59 Citations (Scopus)

Abstract

We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to both descriptive (caption) and story language. We establish several strong baselines for the storytelling task, and motivate an automatic metric to benchmark progress. Modelling concrete description as well as figurative and social language, as provided in this dataset and the storytelling task, has the potential to move artificial intelligence from basic understandings of typical visual scenes towards more and more human-like understanding of grounded event structure and subjective expression.
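As a rough illustration only (not the dataset's actual schema or field names, which this record does not specify), a sequential vision-to-language record pairs an ordered photo sequence with both a description-in-isolation per photo and a story sentence grounded in the sequence context. A minimal Python sketch:

```python
from dataclasses import dataclass, field

@dataclass
class PhotoInStory:
    """One photo in a sequence, with its isolated caption and its story sentence."""
    photo_id: str
    caption: str          # descriptive (caption) language, photo in isolation
    story_sentence: str   # story language, grounded in the whole sequence

@dataclass
class StorySequence:
    """An ordered album of photos aligned to a multi-sentence story."""
    sequence_id: str
    photos: list[PhotoInStory] = field(default_factory=list)

    def story(self) -> str:
        """Concatenate per-photo story sentences into the full story."""
        return " ".join(p.story_sentence for p in self.photos)

# Hypothetical example record (invented content, for structure only)
seq = StorySequence(
    sequence_id="0001",
    photos=[
        PhotoInStory("a.jpg", "A man stands on a beach.",
                     "We finally made it to the coast."),
        PhotoInStory("b.jpg", "Waves crash on the rocks.",
                     "The waves were bigger than we expected."),
    ],
)
print(seq.story())
```

The point of the two parallel text fields is the paper's central contrast: caption language describes a single image in isolation, while story language depends on the surrounding sequence.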

Original language: English (US)
Title of host publication: 2016 Conference of the North American Chapter of the Association for Computational Linguistics
Subtitle of host publication: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 1233-1239
Number of pages: 7
ISBN (Electronic): 9781941643914
State: Published - Jan 1 2016
Event: 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - San Diego, United States
Duration: Jun 12 2016 - Jun 17 2016

Publication series

Name: 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference

Conference

Conference: 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016
Country: United States
City: San Diego
Period: 6/12/16 - 6/17/16

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Huang, T. H., Ferraro, F., Mostafazadeh, N., Misra, I., Agrawal, A., Devlin, J., ... Mitchell, M. (2016). Visual storytelling. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference (pp. 1233-1239). (2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference). Association for Computational Linguistics (ACL).