TY - GEN
T1 - DeepFusion: A Deep Learning Framework for the Fusion of Heterogeneous Sensory Data
T2 - 20th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc 2019
AU - Xue, Hongfei
AU - Jiang, Wenjun
AU - Miao, Chenglin
AU - Yuan, Ye
AU - Ma, Fenglong
AU - Ma, Xin
AU - Wang, Yijiang
AU - Yao, Shuochao
AU - Xu, Wenyao
AU - Zhang, Aidong
AU - Su, Lu
N1 - Funding Information:
This work was supported in part by the US National Science Foundation under Grants IIS-1218393, IIS-1514204, and CNS-1652503. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.
Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/7/2
Y1 - 2019/7/2
N2 - In recent years, significant research efforts have been devoted to building intelligent and user-friendly IoT systems that enable a new generation of applications capable of performing complex sensing and recognition tasks. In many such applications, multiple different sensors monitor the same object. Each of these sensors can be regarded as an information source that provides a unique “view” of the observed object. Intuitively, combining the complementary information carried by multiple sensors should improve sensing performance. Towards this end, we propose DeepFusion, a unified multi-sensor deep learning framework that learns informative representations of heterogeneous sensory data. DeepFusion combines the information from different sensors weighted by the quality of their data and incorporates cross-sensor correlations, and can thus benefit a wide spectrum of IoT applications. To evaluate the proposed DeepFusion model, we set up two real-world human activity recognition testbeds using commercial wearable and wireless sensing devices. Experimental results show that DeepFusion outperforms state-of-the-art human activity recognition methods.
AB - In recent years, significant research efforts have been devoted to building intelligent and user-friendly IoT systems that enable a new generation of applications capable of performing complex sensing and recognition tasks. In many such applications, multiple different sensors monitor the same object. Each of these sensors can be regarded as an information source that provides a unique “view” of the observed object. Intuitively, combining the complementary information carried by multiple sensors should improve sensing performance. Towards this end, we propose DeepFusion, a unified multi-sensor deep learning framework that learns informative representations of heterogeneous sensory data. DeepFusion combines the information from different sensors weighted by the quality of their data and incorporates cross-sensor correlations, and can thus benefit a wide spectrum of IoT applications. To evaluate the proposed DeepFusion model, we set up two real-world human activity recognition testbeds using commercial wearable and wireless sensing devices. Experimental results show that DeepFusion outperforms state-of-the-art human activity recognition methods.
UR - http://www.scopus.com/inward/record.url?scp=85069797383&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85069797383&partnerID=8YFLogxK
U2 - 10.1145/3323679.3326513
DO - 10.1145/3323679.3326513
M3 - Conference contribution
AN - SCOPUS:85069797383
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
SP - 151
EP - 160
BT - MobiHoc 2019 - Proceedings of the 20th ACM International Symposium on Mobile Ad Hoc Networking and Computing
PB - Association for Computing Machinery
Y2 - 2 July 2019 through 5 July 2019
ER -
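
The abstract describes combining per-sensor feature representations, weighted by the quality of each sensor's data. For illustration only, below is a minimal sketch of such quality-weighted fusion; the function and variable names (fuse, sensor_features, quality_scores) and the softmax weighting are assumptions for this sketch, not the DeepFusion implementation from the paper.

    import numpy as np

    def fuse(sensor_features, quality_scores):
        """Combine per-sensor feature vectors, weighted by estimated data quality.

        sensor_features: array of shape (n_sensors, feature_dim)
        quality_scores:  array of shape (n_sensors,); higher means more reliable
        """
        # Softmax over quality scores gives normalized fusion weights
        # (numerically stabilized by subtracting the max score).
        w = np.exp(quality_scores - np.max(quality_scores))
        w = w / w.sum()
        # Quality-weighted sum of the sensor representations.
        return w @ sensor_features

    # Example: three sensors with 4-dimensional features; the sensor with the
    # lowest quality score (here, a hypothetical noisy one) contributes least.
    features = np.array([[1.0, 0.0, 0.2, 0.5],
                         [0.9, 0.1, 0.3, 0.4],
                         [5.0, -3.0, 2.0, -1.0]])
    scores = np.array([2.0, 1.8, -1.0])
    print(fuse(features, scores))

In the paper's actual framework the quality weights are learned jointly with the representations; the fixed scores above merely stand in for that mechanism.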