Finding the 3D locations of feature points from 2D vision data in structured environments has been accomplished successfully for years and applied effectively to industrial robots. Miniature flying robots operating in unknown environments, however, face stringent weight, space, and security constraints. For such vehicles, this work attempts to reduce the number of vision sensors to a single camera. Feature points are first detected in the image using the Harris corner detector; their measurements are then statistically corresponded across successive images using knowledge of the vehicle's pose from an onboard inertial measurement unit. The first approach attempted is ego-motion perpendicular to the camera axis, for which acceptable 3D feature point locations have been achieved. Next, forward translations along the camera axis have also been attempted with acceptable results, except for a small region around the focus of expansion, which is an improvement over previous relevant work. The resulting 3D map of feature point locations can be used for trajectory planning with collision avoidance through 3D space. Reducing the vision sensors to a single camera while using minimal ego-motion space for 3D feature point localization is a significant contribution of this work.
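The core geometric step the abstract describes, recovering a 3D point from its 2D image measurements in two views whose poses are known (here from the IMU), can be sketched with standard linear (DLT) triangulation. The sketch below is illustrative, not the paper's implementation: the intrinsic matrix, baseline, and test point are assumed values, the second camera is translated perpendicular to the camera axis to mirror the paper's first ego-motion case, and the Harris detection and statistical correspondence steps are taken as already done (in practice a detector such as OpenCV's `cv2.cornerHarris` could supply the 2D measurements).

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (known from IMU pose + intrinsics).
    x1, x2 : corresponding 2D pixel measurements of the same feature point.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point to pixel coordinates with projection matrix P."""
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Assumed intrinsics (focal length 500 px, principal point 320x240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

# Camera 1 at the origin; camera 2 translated 0.5 m along x,
# i.e. ego-motion perpendicular to the camera (z) axis.
C2 = np.array([0.5, 0.0, 0.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), -C2.reshape(3, 1)])

# Synthetic feature point 5 m in front of the cameras.
X_true = np.array([1.0, 0.5, 5.0])
x1 = project(P1, X_true)
x2 = project(P2, X_true)

X_rec = triangulate_dlt(P1, P2, x1, x2)
print("recovered:", X_rec)  # matches X_true up to numerical precision
```

With noise-free, exactly corresponded measurements the recovery is exact; in the paper's setting the measurements are noisy, which is why the correspondence is done statistically and why geometry near the focus of expansion (where the two rays are nearly parallel under forward motion) is the hard case.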
Original language: English (US)
Number of pages: 22
Journal: Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering
State: Published - Jul 2014
All Science Journal Classification (ASJC) codes
- Aerospace Engineering
- Mechanical Engineering