Achieving full-view coverage in camera sensor networks

Yi Wang, Guohong Cao

Research output: Contribution to journal › Article

25 Citations (Scopus)

Abstract

Camera sensors are different from traditional scalar sensors, as cameras at different positions can form very different views of an object. However, the traditional coverage model does not consider this intrinsic property of camera sensors. To address this issue, a novel model called full-view coverage is proposed. It uses the angle between the object's facing direction and the camera's viewing direction to measure the quality of coverage. An object is full-view covered if, no matter which direction it faces, there is always a camera that covers it and whose viewing direction is sufficiently close to the object's facing direction. An efficient method is proposed for full-view coverage detection in any given camera sensor network, and a sufficient condition on the sensor density needed for full-view coverage in a random uniform deployment is derived. In addition, the article shows a necessary and sufficient condition on the sensor density for full-view coverage in a triangular lattice-based deployment. Based on the full-view coverage model, the article further studies the barrier coverage problem. Existing weak and strong barrier coverage models are extended by considering direction issues in camera sensor networks. With these new models, weak/strong barrier coverage verification problems are introduced, and new detection methods are proposed and evaluated.
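
As a rough illustration of the full-view coverage condition described in the abstract, the sketch below checks whether a single point is full-view covered by a set of directional cameras. This is not the authors' algorithm; the camera model (position, orientation, sensing range, field of view) and the parameter names, including the effective angle theta, are assumptions made for the example. It relies on the observation that the definition is equivalent to requiring that the directions from the point to the cameras covering it leave no angular gap larger than 2*theta around the circle.

```python
import math

# Illustrative sketch only (not the authors' code); the camera model and all
# parameter names (sensing_range, fov, theta) are assumptions for this example.

def covers(camera, p, sensing_range, fov):
    """True if a camera, given as ((x, y), orientation), sees point p:
    p must be within sensing range and inside the camera's field-of-view sector."""
    (cx, cy), orientation = camera
    dx, dy = p[0] - cx, p[1] - cy
    if math.hypot(dx, dy) > sensing_range:
        return False
    # Angular offset between the camera's orientation and the direction to p.
    offset = abs((math.atan2(dy, dx) - orientation + math.pi) % (2 * math.pi) - math.pi)
    return offset <= fov / 2


def full_view_covered(p, cameras, sensing_range, fov, theta):
    """A point p is full-view covered if, for every possible facing direction,
    some camera that covers p lies within angle theta of that direction.
    Equivalently, the directions from p to its covering cameras must leave no
    angular gap larger than 2 * theta around the full circle."""
    directions = sorted(
        math.atan2(cam[0][1] - p[1], cam[0][0] - p[0])
        for cam in cameras
        if covers(cam, p, sensing_range, fov)
    )
    if not directions:
        return False
    gaps = [b - a for a, b in zip(directions, directions[1:])]
    gaps.append(2 * math.pi - (directions[-1] - directions[0]))  # wrap-around gap
    return max(gaps) <= 2 * theta
```

For instance, if four cameras that all see the point surround it at roughly 90-degree spacing, the largest angular gap is about π/2, so the point is full-view covered whenever the effective angle theta is at least π/4.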

Original language: English (US)
Article number: 3
Journal: ACM Transactions on Sensor Networks
Volume: 10
Issue number: 1
DOIs: 10.1145/2529974
State: Published - Nov 1 2013

Fingerprint

Sensor networks
Cameras
Sensors

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications

Cite this

@article{5417aa19bd7c444d8b5df7c64204dc7a,
title = "Achieving full-view coverage in camera sensor networks",
author = "Yi Wang and Guohong Cao",
year = "2013",
month = "11",
day = "1",
doi = "10.1145/2529974",
language = "English (US)",
volume = "10",
journal = "ACM Transactions on Sensor Networks",
issn = "1550-4859",
publisher = "Association for Computing Machinery (ACM)",
number = "1",
}

Achieving full-view coverage in camera sensor networks. / Wang, Yi; Cao, Guohong.

In: ACM Transactions on Sensor Networks, Vol. 10, No. 1, 3, 01.11.2013.

TY - JOUR

T1 - Achieving full-view coverage in camera sensor networks

AU - Wang, Yi

AU - Cao, Guohong

PY - 2013/11/1

Y1 - 2013/11/1

UR - http://www.scopus.com/inward/record.url?scp=84890424014&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84890424014&partnerID=8YFLogxK

U2 - 10.1145/2529974

DO - 10.1145/2529974

M3 - Article

AN - SCOPUS:84890424014

VL - 10

JO - ACM Transactions on Sensor Networks

JF - ACM Transactions on Sensor Networks

SN - 1550-4859

IS - 1

M1 - 3

ER -