Special Issues

Vision-based virtual reality and augmented reality fusion technology

  • NING Ruixin,
  • ZHU Zunjie,
  • SHAO Biyao,
  • GONG Bingjian,
  • YAN Chenggang
  • Laboratory of Intelligent Information Processing, College of Automation, Hangzhou Dianzi University, Hangzhou 310018, China

Received date: 2018-04-14

  Revised date: 2018-04-27

  Online published: 2018-05-19

Abstract

Vision-based virtual reality and augmented reality have so far developed along separate paths, but their fusion is an inevitable future trend. SLAM (simultaneous localization and mapping) is a core component of virtual reality and augmented reality applications, yet it still faces many challenges in terms of robustness. This paper proposes a vision-based virtual reality and augmented reality fusion technology and analyzes the problems of device selection, tracking, motion interference, and plane recognition that arise during 3D scene reconstruction. Finally, the open challenges in SLAM are discussed.
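The abstract lists plane recognition as one of the problems arising in 3D scene reconstruction. As a minimal illustrative sketch only (not the method proposed in the paper), the following Python snippet shows the common RANSAC approach to detecting a dominant plane in a point cloud; the synthetic point-cloud data and the parameter values are assumed purely for demonstration.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.02, seed=None):
    """Fit a dominant plane to an N x 3 point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    n_pts = points.shape[0]
    for _ in range(iters):
        # Sample three points and compute the plane they span.
        p0, p1, p2 = points[rng.choice(n_pts, 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:          # nearly collinear sample, skip it
            continue
        normal /= norm
        d = -normal @ p0
        # Count points within the distance threshold of the plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy usage: a noisy horizontal floor plus scattered outliers.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    floor = np.column_stack([rng.uniform(-1, 1, 500),
                             rng.uniform(-1, 1, 500),
                             rng.normal(0.0, 0.005, 500)])
    clutter = rng.uniform(-1, 1, (100, 3))
    normal, d, mask = ransac_plane(np.vstack([floor, clutter]))
    print("plane normal:", normal.round(3), "inliers:", int(mask.sum()))
```

In an AR setting, the recovered plane normal and offset would typically serve to anchor virtual content onto the detected surface; real systems add refinement steps (least-squares re-fitting on the inliers, merging of coplanar segments) that this sketch omits.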

Cite this article

NING Ruixin, ZHU Zunjie, SHAO Biyao, GONG Bingjian, YAN Chenggang. Vision-based virtual reality and augmented reality fusion technology[J]. Science & Technology Review, 2018, 36(9): 25-31. DOI: 10.3981/j.issn.1000-7857.2018.09.003
