
3D Reconstruction with RGB-D Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (15 July 2021) | Viewed by 9675

Special Issue Editor


Dr. Andrew R. Willis
Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina-Charlotte, Charlotte, NC 28223-0001, USA
Interests: computer vision; pattern recognition; image processing

Special Issue Information

Dear Colleagues,

RGB-D sensors provide dense real-time measurements of 3D surfaces as a four-channel signal: the RGB color channels characterize surface appearance, and a fourth depth channel provides local geometric measurements of the surface. Since its introduction a decade ago, RGB-D sensing hardware has been, and continues to be, an integral component of leading mapping and 3D reconstruction technologies. This Special Issue seeks submissions that demonstrate the current state of the art in RGB-D-based 3D reconstruction and mapping algorithms. Topics of interest include theory and applications for 3D reconstruction, such as robotic mapping (visual odometry, RGB-D SLAM), 3D scanning, reverse engineering, single- and multi-camera RGB-D capture, calibration methods, and 3D segmentation approaches.
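As a concrete illustration of the four-channel signal described above, the depth channel can be back-projected into a 3D point cloud with the standard pinhole camera model. This is a minimal NumPy sketch; the intrinsic parameters `FX, FY, CX, CY` are hypothetical placeholder values, not those of any particular sensor:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point, in pixels).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project an HxW depth map (metres) to an Nx3 point cloud.

    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy frame with a single valid pixel 2 m in front of the camera.
depth = np.zeros((480, 640))
depth[240, 320] = 2.0
pts = depth_to_points(depth)  # one row: the single valid pixel
```

Pairing each recovered point with the RGB value at the same pixel yields the colored point clouds that the mapping and reconstruction methods in this issue operate on.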

Dr. Andrew R. Willis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • RGB-D mapping
  • RGB-D 3D reconstruction
  • RGB-D point cloud mapping
  • multi-sensor RGB-D shape capture
  • RGB-D SLAM
  • RGB-D calibration
  • RGB-D data segmentation

Published Papers (3 papers)


Research

31 pages, 15340 KiB  
Article
Dynamic Point Cloud Compression Based on Projections, Surface Reconstruction and Video Compression
by Emil Dumic, Anamaria Bjelopera and Andreas Nüchter
Sensors 2022, 22(1), 197; https://doi.org/10.3390/s22010197 - 28 Dec 2021
Cited by 7 | Viewed by 3559
Abstract
In this paper, we present a new dynamic point cloud compression method based on different projection types and bit depths, combined with a surface reconstruction algorithm and video compression of the obtained geometry and texture maps. Texture maps are compressed after creating Voronoi diagrams. The video compression used is specific to geometry (FFV1) and texture (H.265/HEVC). Decompressed point clouds are reconstructed using a Poisson surface reconstruction algorithm. Comparison with the original point clouds was performed using point-to-point and point-to-plane measures. Comprehensive experiments show better performance for some projection maps: the cylindrical, Miller, and Mercator projections.
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Sensors)
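The point-to-point and point-to-plane measures used for comparison above can be sketched as follows. This is a minimal brute-force NumPy illustration of the two standard metrics, not the authors' implementation (production tools use accelerated nearest-neighbour search):

```python
import numpy as np

def point_to_point_mse(src, ref):
    """Mean squared nearest-neighbour distance from src to ref (Nx3, Mx3 arrays)."""
    d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean()

def point_to_plane_mse(src, ref, ref_normals):
    """Project each nearest-neighbour error onto the reference surface normal."""
    d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    err = ((src - ref[nn]) * ref_normals[nn]).sum(-1)  # signed distance along normal
    return (err ** 2).mean()

# Reference: four points on the z = 0 plane, all with normal (0, 0, 1).
ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
normals = np.tile([0.0, 0, 1], (4, 1))
src = ref + np.array([0.1, 0, 0])  # decoded cloud shifted tangentially to the surface
```

The example shows why the two measures differ: a tangential shift leaves the point-to-plane error at zero while the point-to-point error is nonzero, so point-to-plane better reflects perceived surface fidelity.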

24 pages, 2391 KiB  
Article
Low-Bandwidth and Compute-Bound RGB-D Planar Semantic SLAM
by Jincheng Zhang, Prashant Ganesh, Kyle Volle, Andrew Willis and Kevin Brink
Sensors 2021, 21(16), 5400; https://doi.org/10.3390/s21165400 - 10 Aug 2021
Cited by 2 | Viewed by 2463
Abstract
Visual simultaneous localization and mapping (SLAM) using RGB-D cameras has become a necessary capability for intelligent mobile robots. However, when using point-cloud map representations, as most RGB-D SLAM systems do, limitations in onboard compute resources, and especially communication bandwidth, can significantly limit the quantity of data processed and shared. This article proposes techniques that help address these challenges by mapping point clouds to parametric models in order to reduce the computation and bandwidth load on agents. This contribution is coupled with a convolutional neural network (CNN) that extracts semantic information. Semantics provide guidance in object modeling, which can reduce the geometric complexity of the environment. Pairing a parametric model with a semantic label allows agents to share their knowledge of the world with much less complexity, opening a door for multi-agent systems to perform complex tasking and human–robot cooperation. This article takes the first step towards a generalized parametric model by limiting the geometric primitives to planar surfaces and providing semantic labels when appropriate. Two novel compression algorithms for depth data and a method to independently fit planes to RGB-D data are provided, so that plane data can be used for real-time odometry estimation and mapping. Additionally, we extend maps with semantic information predicted from sparse geometries (planes) by a CNN. In experiments, the advantages of our approach in terms of computational and bandwidth resource savings are demonstrated and compared with other state-of-the-art SLAM systems.
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Sensors)
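The core operation behind a planar parametric map, fitting a plane to a patch of RGB-D points, can be sketched with the standard least-squares approach via SVD. This is a generic textbook method for illustration, not necessarily the fitting procedure used by the authors:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through an Nx3 point set.

    Returns (n, d) with unit normal n and offset d such that n·x + d ≈ 0
    for points x on the plane.
    """
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid

# Toy patch: five points on the plane z = 3.
patch = np.array([[0.0, 0, 3], [1, 0, 3], [0, 1, 3], [1, 1, 3], [0.5, 0.5, 3]])
n, d = fit_plane(patch)
```

Transmitting only `(n, d)` per surface, four floats, instead of every point in the patch is what makes the bandwidth savings described above possible.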

16 pages, 9535 KiB  
Article
HUMANNET—A Two-Tiered Deep Neural Network Architecture for Self-Occluding Humanoid Pose Reconstruction
by Audrius Kulikajevas, Rytis Maskeliunas, Robertas Damasevicius and Rafal Scherer
Sensors 2021, 21(12), 3945; https://doi.org/10.3390/s21123945 - 8 Jun 2021
Cited by 11 | Viewed by 2079
Abstract
The majority of current research focuses on the reconstruction of a single static object from a given point cloud. However, existing approaches are not applicable to real-world applications such as dynamic and morphing scene reconstruction. To solve this, we propose a novel two-tiered deep neural network architecture that is capable of reconstructing self-occluding human-like morphing shapes from a depth frame in conjunction with the camera's intrinsic parameters. Tests were performed on a custom dataset generated using a combination of the AMASS and MoVi datasets. The proposed network achieved a Jaccard index of 0.7907 for the first tier, which is used to extract the region of interest from the point cloud. The second tier of the network achieved an Earth Mover's distance of 0.0256 and a Chamfer distance of 0.276, indicating good experimental results. Further, subjective inspection of the reconstruction results shows the strong predictive capabilities of the network, with the solution being able to reconstruct limb positions from very few object details.
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Sensors)
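The Jaccard index quoted for the first tier is the standard intersection-over-union measure. A minimal NumPy sketch on boolean occupancy masks, given purely as a reference for the metric (the authors' exact evaluation protocol is not reproduced here):

```python
import numpy as np

def jaccard_index(a, b):
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0  # two empty masks count as identical

# Toy masks: predicted vs. ground-truth region of interest.
pred = np.array([True, True, False, False])
gt = np.array([True, False, True, False])
iou = jaccard_index(pred, gt)  # 1 overlapping element out of 3 in the union
```

A value of 1.0 means the predicted region of interest exactly matches the ground truth; the paper's first-tier score of 0.7907 indicates substantial but imperfect overlap.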
