A robust technique for semantic annotation of group activities based on recognition of extracted features in video streams

Vinayak Elangovan, Amir Shirkhodaie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Recognition and understanding of group activities can significantly improve situational awareness in surveillance systems. To maximize the reliability and effectiveness of Persistent Surveillance Systems, annotations of sequential images gathered from video streams (i.e., imagery and acoustic features) must be fused to generate semantic messages describing group activities (GA). To facilitate efficient fusion of features extracted from heterogeneous physical sensors, a common data structure is needed to ease integration of processed data into a new comprehension. In this paper, we describe a framework for the extraction and management of pertinent features/attributes vital for reliable annotation of group activities. A robust technique is proposed for fusion of events and entity attributes generated from video streams. A modified Transducer Markup Language (TML) is introduced for semantic annotation of event and entity attributes. By aggregating multi-attribute TML messages, we demonstrate that salient group activities can be reliably annotated in space and time. This paper discusses our experimental results and our analysis of a set of simulated group activities performed under different contexts, and demonstrates the efficiency and effectiveness of the proposed modified TML data structure, which facilitates seamless fusion of information extracted from video streams.
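The abstract describes encoding sensor observations as TML messages and aggregating multi-attribute messages to annotate group activities. The paper's modified TML schema is not reproduced in this record, so the sketch below uses a purely hypothetical element layout (`tmlMessage`, `entity`, `attribute` names are illustrative assumptions) and a simple time-window grouping as a stand-in for the aggregation step:

```python
import xml.etree.ElementTree as ET

def make_tml_message(sensor_id, timestamp, entity, attributes):
    """Build one TML-style message for a detected entity/event.
    Element and attribute names here are illustrative only; they do
    not reproduce the paper's modified TML schema."""
    msg = ET.Element("tmlMessage", sensorID=sensor_id, time=f"{timestamp:.1f}")
    ent = ET.SubElement(msg, "entity", name=entity)
    for key, value in attributes.items():
        ET.SubElement(ent, "attribute", name=key).text = str(value)
    return msg

def aggregate(messages, window=5.0):
    """Group messages whose timestamps fall in the same time window --
    a minimal stand-in for multi-attribute TML aggregation."""
    groups = {}
    for m in messages:
        bucket = int(float(m.get("time")) // window)
        groups.setdefault(bucket, []).append(m)
    return groups

# Fuse imagery and acoustic observations of the same scene.
msgs = [
    make_tml_message("cam-1", 1.2, "person-A", {"pose": "standing", "location": "gate"}),
    make_tml_message("cam-1", 2.8, "person-B", {"pose": "walking", "location": "gate"}),
    make_tml_message("mic-1", 3.1, "person-B", {"sound": "speech"}),
]
grouped = aggregate(msgs)
```

All three messages land in the same 5-second window, so a downstream annotator could treat them as evidence of one group activity (two people meeting and talking at the gate).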

Original language: English (US)
Title of host publication: Signal Processing, Sensor Fusion, and Target Recognition XXII
DOIs: 10.1117/12.2018626
State: Published - Aug 12, 2013
Event: Signal Processing, Sensor Fusion, and Target Recognition XXII - Baltimore, MD, United States
Duration: Apr 29, 2013 – May 2, 2013

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 8745
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X
ISBN (Print): 9780819495365

Other

Other: Signal Processing, Sensor Fusion, and Target Recognition XXII
Country: United States
City: Baltimore, MD
Period: 4/29/13 – 5/2/13

Fingerprint

semantic annotation
annotation
markup languages
transducers
fusion
messages
surveillance
situational awareness
semantics
attributes
data structures
aggregation

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering

Cite this

Elangovan, V., & Shirkhodaie, A. (2013). A robust technique for semantic annotation of group activities based on recognition of extracted features in video streams. In Signal Processing, Sensor Fusion, and Target Recognition XXII [87450M] (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 8745). https://doi.org/10.1117/12.2018626