A survey on the cooperative SLAM problem of multi-robot systems based on vision

LÜ Qiang, LIN Huican, ZHANG Yang, MA Jianye

Science & Technology Review, 2015, 33(23): 110-115.

Author information
Department of Control Engineering, Academy of Armored Forces Engineering, Beijing 100072, China

Abstract

Visual SLAM uses images as the only source of external information to estimate the robot's pose while simultaneously building a map of the environment, and SLAM itself is a basic prerequisite for autonomous robots. The problem can be regarded as solved for building 2D maps of small-scale environments with laser or sonar sensors; in dynamic, large-scale and complex environments, however, many problems remain open, and using vision as the primary external sensor is a new area of research. The computer vision techniques employed in visual SLAM, such as feature detection, feature description, feature matching, image recognition and recovery, still leave considerable room for improvement. This paper gives a brief and accessible overview of the latest techniques in visual SLAM. Multi-robot systems have many advantages over a single robot: they can improve the accuracy of the SLAM estimate and adapt better to dynamic, complex environments. The paper therefore also reviews multi-robot SLAM methods, with emphasis on map fusion.
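
As a concrete illustration of the feature detection and matching stage mentioned above, the sketch below matches ORB features [32] between images captured by two robots and rejects outliers with RANSAC [43]; in a cooperative setting, such inter-robot correspondences are what typically seed relative-pose estimation and map merging. This is a minimal sketch, not code from any of the surveyed systems; it assumes OpenCV and NumPy are available, and the image file names are hypothetical placeholders.

# Minimal sketch of the feature-detection / matching / outlier-rejection
# pipeline referred to in the abstract, using OpenCV's ORB features.
# "robot_a.png" and "robot_b.png" are hypothetical placeholder file names.
import numpy as np
import cv2

img_a = cv2.imread("robot_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("robot_b.png", cv2.IMREAD_GRAYSCALE)

# 1. Feature detection and description (ORB: FAST keypoints + binary descriptors).
orb = cv2.ORB_create(nfeatures=1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# 2. Feature matching with a brute-force Hamming-distance matcher.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# 3. Outlier rejection: keep only matches consistent with a single homography,
#    estimated robustly with RANSAC; the surviving inliers are the kind of
#    inter-robot correspondences that map-merging methods build on.
pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
print("matches:", len(matches), "inliers:", int(inlier_mask.sum()))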

Key words

multi-robot systems / SLAM / computer vision / map merging

Cite this article

LÜ Qiang, LIN Huican, ZHANG Yang, MA Jianye. A survey on the cooperative SLAM problem of multi-robot systems based on vision[J]. Science & Technology Review, 2015, 33(23): 110-115

References

[1] Stachniss C. Robotic mapping and exploration[M]. Springer Tracts in Advanced Robotics, 2009.
[2] Durrant-Whyte H, Bailey T. Simultaneous localization and mapping: Part I[J]. Robotics & Automation Magazine, IEEE, 2006, 13(2): 99-110.
[3] Chen Weidong, Zhang Fei. Advances in simultaneous localization and map building of mobile robots[J]. Control Theory & Applications, 2005(3): 455-460. (in Chinese)
[4] Zhang Liang. Research on simultaneous localization and mapping algorithms for mobile robots[D]. Hangzhou: Zhejiang University, 2009. (in Chinese)
[5] Bailey T, Durrant-Whyte H. Simultaneous localization and mapping (SLAM): Part II[J]. Robotics & Automation Magazine, IEEE, 2006, 13(3): 108-117.
[6] Yu Jinxia, Wang Lu, Cai Zixing. Self-localization techniques for mobile robots in unknown environments[M]. Beijing: Publishing House of Electronics Industry, 2011. (in Chinese)
[7] Paz L M, Piniés P, Tardós J D, et al. Large-scale 6-DOF SLAM with stereo-in-hand[J]. Robotics, IEEE Transactions on, 2008, 24(5): 946-957.
[8] Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2007, 29(6): 1052-1067.
[9] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE, 2007: 225-234.
[10] Sáez J M, Escolano F. 6dof entropy minimization slam[C]//Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on. IEEE, 2006: 1548-1555.
[11] Piniés P, Tardós J D. Large-scale slam building conditionally independent local maps: Application to monocular vision[J]. Robotics, IEEE Transactions on, 2008, 24(5): 1094-1106.
[12] Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[M]//Computer Vision-ECCV 2014. Springer International Publishing, 2014: 834-849.
[13] Endres F, Hess J, Sturm J, et al. 3-d mapping with an rgb-d camera[J]. Robotics, IEEE Transactions on, 2014, 30(1): 177-187.
[14] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. arXiv preprint arXiv:1502.00956, 2015.
[15] Ferranti E, Trigoni N, Levene M. Brick& Mortar: an on-line multi-agent exploration algorithm[C]//Robotics and Automation, 2007 IEEE International Conference on. IEEE, 2007: 761-767.
[16] Durrant-Whyte H, Bailey T. Simultaneous localization and mapping (SLAM): Part I. The essential algorithms[J]. IEEE Robotics & Automation Magazine, 2006, 13(2): 99-110.
[17] Bailey T, Durrant-Whyte H. Simultaneous localization and mapping (SLAM): Part II[J]. IEEE Robotics & Automation Magazine, 2006, 13(3): 108-117.
[18] Castellanos J A, Neira J, Tardós J D. Multisensor fusion for simultaneous localization and map building[J]. Robotics and Automation, IEEE Transactions on, 2001, 17(6): 908-914.
[19] Majumder S, Scheding S, Durrant-Whyte H F. Sensor fusion and map building for underwater navigation[C]//Proceedings of the Australian Conference on Robotics and Automation. 2000: 25-30.
[20] Nützi G, Weiss S, Scaramuzza D, et al. Fusion of IMU and vision for absolute scale estimation in monocular SLAM[J]. Journal of Intelligent & Robotic Systems, 2011, 61(1-4): 287-299.
[21] Se S, Lowe D, Little J. Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks[J]. The International Journal of Robotics Research, 2002, 21(8): 735-758.
[22] Olson C F, Matthies L H, Schoppers M, et al. Rover navigation using stereo ego-motion[J]. Robotics and Autonomous Systems, 2003, 43(4): 215-229.
[23] Hartley R, Zisserman A. Multiple view geometry in computer vision[M]. Cambridge: Cambridge University Press, 2003.
[24] Kaess M, Dellaert F. Probabilistic structure matching for visual SLAM with a multi-camera rig[J]. Computer Vision and Image Understanding, 2010, 114(2): 286-296.
[25] Carrera G, Angeli A, Davison A J. SLAM-based automatic extrinsic calibration of a multi-camera rig[C]//Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011: 2652-2659.
[26] Davison A J, Cid Y G, Kita N. Real-time 3D SLAM with wide-angle vision[C]//Proc. IFAC/EURON Symp. Intelligent Autonomous Vehicles. 2004: 31-33.
[27] Scaramuzza D, Siegwart R. Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles[J]. Robotics, IEEE Transactions on, 2008, 24(5): 1015-1026.
[28] Huang A S, Bachrach A, Henry P, et al. Visual odometry and mapping for autonomous flight using an RGB-D camera[C]//International Symposium on Robotics Research (ISRR). 2011: 1-16.
[29] Hu G, Huang S, Zhao L, et al. A robust rgb-d slam algorithm[C]//Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012: 1714-1719.
[30] Engelhard N, Endres F, Hess J, et al. Real-time 3D visual SLAM with a hand-held RGB-D camera[C/OL]. [2015-09-31]. http://vision.informatik.tu-muenchen.de/_media/spezial/bib/engelhard11euron.pdf.
[31] Endres F, Hess J, Engelhard N, et al. An evaluation of the RGB-D SLAM system[C]//Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012: 1691-1696.
[32] Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF[C]//Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011: 2564-2571.
[33] Sturm J, Magnenat S, Engelhard N, et al. Towards a benchmark for RGB-D SLAM evaluation[C]//Proc. of the RGB-D Workshop on Advanced Reasoning with Depth Cameras at Robotics: Science and Systems Conf. (RSS), Los Angeles, USA. 2011, 2: 3.
[34] Davison A J. Real-time simultaneous localisation and mapping with a single camera[C]//Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. IEEE, 2003: 1403-1410.
[35] Lemaire T, Berger C, Jung I K, et al. Vision-based slam: Stereo and monocular approaches[J]. International Journal of Computer Vision, 2007, 74(3): 343-364.
[36] Vidal-Calleja T, Bryson M, Sukkarieh S, et al. On the observability of bearing-only SLAM[C]//Robotics and Automation, 2007 IEEE International Conference on. IEEE, 2007: 4114-4119.
[37] Gao X, Zhang T. Robust RGB-D simultaneous localization and mapping using planar point features[J]. Robotics & Autonomous Systems, 2015, 72:1-14.
[38] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[39] Bay H, Tuytelaars T, Gool L V. SURF: Speeded up robust features[J]. Computer Vision & Image Understanding, 2006, 110(3): 404-417.
[40] Lepetit V, Ozuysal M, Trzcinski T, et al. BRIEF: Computing a local binary descriptor very fast[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1281-1298.
[41] Huijuan Z, Qiong H. Fast image matching based-on improved SURF algorithm[C]//Electronics, Communications and Control (ICECC), 2011 International Conference on. IEEE, 2011: 1460-1463.
[42] Chli M, Davison A J. Active matching for visual tracking[J]. Robotics and Autonomous Systems, 2009, 57(12): 1173-1187.
[43] Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
[44] Pollefeys M, Van Gool L, Vergauwen M, et al. Visual modeling with a hand-held camera[J]. International Journal of Computer Vision, 2004, 59(3): 207-232.
[45] Engels C, Stewénius H, Nistér D. Bundle adjustment rules[J]. Photogrammetric computer vision, 2006, 2: 124-131.
[46] Strasdat H, Montiel J M M, Davison A J. Real-time monocular slam: Why filter?[C]//Robotics and Automation (ICRA), 2010 IEEE International Confer- ence on. IEEE, 2010: 2657-2664.
[47] Fuentes-Pacheco J, Ruiz-Ascencio J, Rendón-Mancha J M. Visual simultaneous localization and mapping: a survey[J]. Artificial Intelligence Review, 2015, 43(1): 55-81.
[48] Marjovi A, Nunes J G, Marques L, et al. Multi-Robot Exploration and Fire Searching[C]// IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2009:1929-1934.
[49] Zhang Guoliang, Tang Wenjun, Zeng Jing, et al. A survey of the multi-robot CSLAM problem considering communication conditions[J]. Acta Automatica Sinica, 2014, 40(10): 2073-2088. (in Chinese)
[50] Michael N, Shen S, Mohta K, et al. Collaborative mapping of an earthquake-damaged building via ground and aerial robots[J]. Journal of Field Robotics, 2012, 29(5): 832-841.
[51] Lee H C, Lee S H, Choi M H, et al. Probabilistic map merging for multi-robot RBPF-SLAM with unknown initial poses[J]. Robotica, 2012, 30(2): 205-220.
[52] Gil A, Reinoso O, Ballesta M, et al. Multi-robot visual SLAM using a Rao-Blackwellized particle filter[J]. Robotics and Autonomous Systems, 2010, 58(1): 68-80.
[53] Vidal-Calleja T A, Berger C, Sola J, et al. Large scale multiple robot visual mapping with heterogeneous landmarks in semi-structured terrain[J]. Robotics and Autonomous Systems, 2011, 59(9): 654-674.
[54] Benedettelli D, Garulli A, Giannitrapani A. Cooperative SLAM using M-Space representation of linear features[J]. Robotics and Autonomous Systems, 2012, 60(10): 1267-1278.
[55] Zhou X S, Roumeliotis S. Multi-robot SLAM with unknown initial correspondence: The robot rendezvous case[C]//Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006: 1785-1792.
[56] Forster C, Pizzoli M, Scaramuzza D. Air-ground localization and map augmentation using monocular dense reconstruction[C]//Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on. IEEE, 2013: 3971-3978.
[57] Saeedi S, Paull L, Trentini M, et al. Neural network-based multiple robot simultaneous localization and mapping[J]. IEEE Transactions on Neural Networks, 2011, 22(12): 2376-2387.
[58] Saeedi S, Paull L, Trentini M, et al. Map merging for multiple robots using Hough peak matching[J]. Robotics and Autonomous Systems, 2014, 62(10): 1408-1424.
[59] Kostavelis I, Gasteratos A. Learning spatially semantic representations for cognitive robot navigation[J]. Robotics and Autonomous Systems, 2013, 61(12): 1460-1475.
[60] Bo L, Ren X, Fox D. Learning hierarchical sparse features for RGB-D object recognition[J]. International Journal of Robotics Research, 2014, 33(4): 581-599.