Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

Jeff Rimland, Mark Edward Ballora

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
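The two-stage pipeline the abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the authors' system: a CEP-style rule lifts raw readings into a higher-level event, and a simple parameter mapping turns that event into sound attributes (pitch, volume) that a tone or speech synthesizer could then render. The rule, thresholds, and frequency range are all hypothetical choices for the example.

```python
def detect_events(stream, threshold=0.8, run=3):
    """CEP-style rule: emit a 'sustained_high' event whenever `run`
    consecutive readings exceed `threshold`."""
    events, streak = [], 0
    for i, value in enumerate(stream):
        streak = streak + 1 if value > threshold else 0
        if streak == run:
            events.append({"type": "sustained_high", "index": i, "value": value})
            streak = 0  # start counting a fresh run
    return events

def map_event_to_sound(event, f_min=220.0, f_max=880.0):
    """Parameter mapping: linearly map the event's value (assumed in [0, 1])
    to a frequency in [f_min, f_max] Hz and a loudness gain in [0.5, 1.0]."""
    t = min(max(event["value"], 0.0), 1.0)
    return {"freq_hz": f_min + t * (f_max - f_min), "gain": 0.5 + 0.5 * t}

# Three consecutive readings above 0.8 trigger one composite event,
# which is then mapped to an audible cue.
readings = [0.2, 0.9, 0.95, 0.85, 0.3]
cues = [map_event_to_sound(e) for e in detect_events(readings)]
```

The point of the intermediate event layer is the one the abstract makes: the synthesizer is driven by extracted, multi-level events rather than by every raw sample, which keeps the resulting sonification sparse enough to remain listenable.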

Original language: English (US)
Title of host publication: Next-Generation Analyst II
Publisher: SPIE
Volume: 9122
ISBN (Print): 9781628410594
DOIs: https://doi.org/10.1117/12.2050344
State: Published - 2014
Event: Next-Generation Analyst II - Baltimore, MD, United States
Duration: May 6, 2014 → May 6, 2014

Other

Other: Next-Generation Analyst II
Country: United States
City: Baltimore, MD
Period: 5/6/14 → 5/6/14

Fingerprint

Sonification
Complex Event Processing
Speech synthesis
Acoustics
Acoustic waves
Data compression
Data transmission
Data communication systems
Visual impairment
Smoke
Visualization
Display devices

All Science Journal Classification (ASJC) codes

  • Applied Mathematics
  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics

Cite this

Rimland, Jeff; Ballora, Mark Edward. / Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data. Next-Generation Analyst II. Vol. 9122. SPIE, 2014. 912203.
@inproceedings{7c4b648233824ba89875e4c2d5511f24,
title = "Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data",
abstract = "The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of {"}big data{"} and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an {"}instrument{"} that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric {"}big data{"} compression and transmission.",
author = "Jeff Rimland and Ballora, {Mark Edward}",
year = "2014",
doi = "10.1117/12.2050344",
language = "English (US)",
isbn = "9781628410594",
volume = "9122",
booktitle = "Next-Generation Analyst II",
publisher = "SPIE",
address = "United States",
}

Rimland, J & Ballora, ME 2014, Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data. in Next-Generation Analyst II. vol. 9122, 912203, SPIE, Next-Generation Analyst II, Baltimore, MD, United States, 5/6/14. https://doi.org/10.1117/12.2050344

Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data. / Rimland, Jeff; Ballora, Mark Edward.

Next-Generation Analyst II. Vol. 9122 SPIE, 2014. 912203.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

AU - Rimland, Jeff

AU - Ballora, Mark Edward

PY - 2014

Y1 - 2014

AB - The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.

UR - http://www.scopus.com/inward/record.url?scp=84906329956&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84906329956&partnerID=8YFLogxK

U2 - 10.1117/12.2050344

DO - 10.1117/12.2050344

M3 - Conference contribution

SN - 9781628410594

VL - 9122

BT - Next-Generation Analyst II

PB - SPIE

ER -