Mobile Point Fusion
Gal Kamar and Daniel Ben-Hoda
Supervised by Aaron Wetzler
The need for real-time 3D reconstruction is becoming increasingly apparent. Depth sensors are already being marketed in consumer laptops and tablets, and in the near future we expect mobile devices with depth sensors to become widely available, creating a need for highly efficient real-time 3D reconstruction methods. Our project's goal is to enable such devices to perform 3D reconstruction in real time. Our solution uses the input from a moving depth sensor to estimate the camera position and build a 3D model. The implementation harnesses the GPU to achieve real-time performance while taking into account the limitations of mobile devices and putting a strong emphasis on optimizations throughout the pipeline.
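The full pipeline is described in the project report; as a rough illustration of the fusion idea only, the C++ sketch below shows how points from one depth frame might be merged into a global point map given an already-estimated camera pose. The data structures, the brute-force nearest-neighbour merge, and the 5 mm merge radius are illustrative assumptions, not the project's actual GPU implementation.

```cpp
// A minimal CPU-side sketch of point-based depth fusion, for illustration only.
// Assumptions (not taken from the project report): points live in a flat global
// list, correspondence is a simple radius test, and the camera pose is given
// externally. The actual project estimates the pose itself and runs on the GPU.
#include <vector>

struct Point3 { float x, y, z; };

struct FusedPoint {
    Point3 position;   // running weighted average of observed positions
    float  weight;     // number of observations merged so far
};

// Apply a rigid-body transform (row-major 3x4 matrix: rotation | translation).
static Point3 transform(const float T[12], const Point3& p) {
    return { T[0]*p.x + T[1]*p.y + T[2]*p.z  + T[3],
             T[4]*p.x + T[5]*p.y + T[6]*p.z  + T[7],
             T[8]*p.x + T[9]*p.y + T[10]*p.z + T[11] };
}

static float dist2(const Point3& a, const Point3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

// Fuse one frame of camera-space points into the global map, given the
// estimated camera-to-world pose. Points that land near an existing map
// point are merged by weighted averaging; the rest are inserted as new points.
void fuseFrame(std::vector<FusedPoint>& map,
               const std::vector<Point3>& framePoints,
               const float cameraToWorld[12],
               float mergeRadius = 0.005f /* 5 mm, illustrative value */) {
    const float r2 = mergeRadius * mergeRadius;
    for (const Point3& pc : framePoints) {
        Point3 pw = transform(cameraToWorld, pc);

        // Brute-force nearest-neighbour search; a real-time implementation
        // would use a spatial index or the GPU instead.
        FusedPoint* match = nullptr;
        for (FusedPoint& fp : map) {
            if (dist2(fp.position, pw) < r2) { match = &fp; break; }
        }

        if (match) {
            float w = match->weight;
            match->position.x = (match->position.x * w + pw.x) / (w + 1.0f);
            match->position.y = (match->position.y * w + pw.y) / (w + 1.0f);
            match->position.z = (match->position.z * w + pw.z) / (w + 1.0f);
            match->weight = w + 1.0f;
        } else {
            map.push_back({pw, 1.0f});
        }
    }
}
```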
Please see the project report.
Please see the final presentation.
Please see the project papers.