• Title/Summary/Keyword: 3-D reconstruction


AUTOMATIC IDENTIFICATION OF ROOF TYPES AND ROOF MODELING USING LIDAR

  • Kim, Heung-Sik;Chang, Hwi-Jeong;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.83-86
    • /
    • 2005
  • This paper presents a method for point-based 3D building reconstruction using LiDAR data and a digital map. The proposed method consists of three processes: extraction of building roof points, identification of roof types, and 3D building reconstruction. After extracting the points inside each building polygon, the ground surface, wall, and tree points among them are removed through a filtering process. The filtered points are then fitted to a plane using ODR (Orthogonal Distance Regression). If the fitting error is within a predefined threshold, the surface is classified as a flat roof; otherwise, the surface is fitted and classified as a gable or arch roof through RMSE analysis. Based on the roof types identified in this automated fashion, the 3D building reconstruction is performed. Experimental results showed that the proposed method successfully classified the three different roof types and that the fusion of LiDAR data and digital maps is a feasible approach to 3D building reconstruction.
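
A minimal sketch of the flat-roof test described in this abstract, not the authors' implementation: fit a plane to the filtered roof points by orthogonal distance regression (total least squares via SVD) and classify by the orthogonal RMSE against a threshold. The threshold value is illustrative, not taken from the paper.

```python
import numpy as np

def fit_plane_odr(points):
    """Fit a plane to Nx3 points; return (centroid, unit normal, orthogonal RMSE)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value minimises
    # the sum of squared orthogonal distances to the plane.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    distances = (points - centroid) @ normal
    return centroid, normal, np.sqrt(np.mean(distances ** 2))

def classify_roof(points, flat_threshold=0.15):
    """Label the roof 'flat' if the plane-fit RMSE is below the threshold (metres, illustrative)."""
    _, _, rmse = fit_plane_odr(points)
    return "flat" if rmse < flat_threshold else "non-flat (gable/arch candidate)"

# Example: a noisy but nearly planar patch is classified as flat.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.02 * rng.standard_normal(500)
print(classify_roof(np.column_stack([xy, z])))
```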


Implementation of Photorealistic 3D Object Reconstruction Using Voxel Coloring (Voxel Coloring을 이용한 3D 오브젝트 모델링)

  • Adipranata, Rudy;Yang, Hwang-Kyu;Yun, Tae-Soo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.527-530
    • /
    • 2003
  • In this paper, we implemented the voxel coloring method to reconstruct 3D objects from synthetic input images and compared the results of standard voxel coloring with those of the coarse-to-fine method. We also compared different voxel space sizes to examine the differences in processing time and in the resulting 3D objects. Photorealistic 3D object reconstruction is a challenging problem in computer graphics. Voxel coloring treats the reconstruction problem as a color reconstruction problem instead of a shape reconstruction problem. The method works by discretizing the scene space into voxels, which are then traversed and colored in a special order. There is also an extension of the voxel coloring method, called the coarse-to-fine method, that decreases the processing time: it works on a low-resolution instead of a high-resolution input and, after processing finishes, applies a search strategy to refine the result.
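
A simplified sketch of the voxel-coloring consistency test, assuming a hypothetical camera interface (a dict holding a 3x4 projection matrix "P") and omitting the occlusion bookkeeping that the full algorithm maintains: a voxel is kept and colored only if its projections into the input images agree in color.

```python
import numpy as np

def project(camera, point):
    """Pinhole projection of a 3D point with a 3x4 matrix stored under key 'P'."""
    x = camera["P"] @ np.append(point, 1.0)
    return int(round(x[0] / x[2])), int(round(x[1] / x[2]))

def consistent_color(voxel_center, cameras, images, threshold=20.0):
    """Return the mean color if all visible projections agree, else None."""
    samples = []
    for cam, img in zip(cameras, images):
        u, v = project(cam, voxel_center)
        if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
            samples.append(img[v, u].astype(float))
    if len(samples) < 2:
        return None
    samples = np.array(samples)
    # Color consistency: the spread of the sampled colors must be small.
    if np.linalg.norm(samples.std(axis=0)) < threshold:
        return samples.mean(axis=0)
    return None

def voxel_coloring(voxel_centers_by_layer, cameras, images):
    """Traverse voxel layers in the special near-to-far order and color consistent voxels."""
    colored = {}
    for layer in voxel_centers_by_layer:
        for center in layer:
            color = consistent_color(center, cameras, images)
            if color is not None:
                colored[tuple(center)] = color
    return colored
```

The coarse-to-fine variant mentioned in the abstract would run this loop first on a coarse voxel grid and then refine only near the surface found at the coarse level.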


Reconstruction of a 3D Model using the Midpoints of Line Segments in a Single Image (한 장의 영상으로부터 선분의 중점 정보를 이용한 3차원 모델의 재구성)

  • Park Young Sup;Ryoo Seung Taek;Cho Sung Dong;Yoon Kyung Hyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.4
    • /
    • pp.168-176
    • /
    • 2005
  • We propose a method for reconstructing an object in 3D from a single image using line segments together with their midpoint information. A pre-defined polygon is used as the primitive, and the recovery proceeds from a single image. The 3D reconstruction is performed by mapping the corresponding points of the primitive model onto the photograph. Previous model-based 3D reconstruction work relied on recovering camera parameters or on error-minimization methods through iteration. In contrast, we propose a method that reconstructs a primitive consisting of segments and their midpoints. This allows the primitive model to be reconstructed using only the focal length, among the camera parameters, during the segment reconstruction process.
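
A small illustration of why a projected midpoint constrains relative depth under perspective projection; this is not the paper's full primitive-reconstruction pipeline, and the focal length and point coordinates are invented for the example. The image of a segment's 3D midpoint is a depth-weighted combination of the endpoint projections, so its position along the projected segment reveals the endpoints' depth ratio.

```python
import numpy as np

f = 800.0                                    # assumed focal length in pixels
X1 = np.array([-1.0, 0.5, 4.0])              # segment endpoint 1 (camera frame)
X2 = np.array([2.0, -0.5, 8.0])              # segment endpoint 2
M = (X1 + X2) / 2.0                          # 3D midpoint

def proj(X):
    return f * X[:2] / X[2]

x1, x2, m = proj(X1), proj(X2), proj(M)

# m = (1 - t) * x1 + t * x2 with t = Z2 / (Z1 + Z2); recover t from the image.
t = np.dot(m - x1, x2 - x1) / np.dot(x2 - x1, x2 - x1)
depth_ratio = (1 - t) / t                    # equals Z1 / Z2
print(depth_ratio, X1[2] / X2[2])            # both print 0.5
```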

CALOS : Camera And Laser for Odometry Sensing (CALOS : 주행계 추정을 위한 카메라와 레이저 융합)

  • Bok, Yun-Su;Hwang, Young-Bae;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.2
    • /
    • pp.180-187
    • /
    • 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. A 2D laser sensor provides accurate depth information on a plane, but not the whole 3D structure. In contrast, CCD cameras provide a projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points among the scan data and their corresponding image points, and is then refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as for 3D reconstruction.
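
A generic sketch of the refinement step only, under the assumption that scipy is available: given 3D points measured by the laser in one frame and their image correspondences in the other frame, refine the rigid motion by minimising reprojection error. The paper's closed-form three-point initialisation and sensor-specific details are not shown; the intrinsics matrix K and point arrays are assumed inputs.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproj_residuals(params, pts3d, pts2d, K):
    """Residuals of projecting pts3d (Nx3) against observed pts2d (Nx2)."""
    rvec, t = params[:3], params[3:]
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = (R @ pts3d.T).T + t                # transform points into the new frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]        # perspective division
    return (proj - pts2d).ravel()

def refine_motion(pts3d, pts2d, K, init=np.zeros(6)):
    """Return the refined (rotation vector, translation) minimising reprojection error."""
    sol = least_squares(reproj_residuals, init, args=(pts3d, pts2d, K))
    return sol.x[:3], sol.x[3:]
```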


Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.183-188
    • /
    • 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured from several views by an uncalibrated perspective camera. Many matched feature points over all images are obtained by a feature tracking method, and these data are supplied to the 3D reconstruction module to obtain a projective reconstruction. The projective reconstruction is converted to a Euclidean reconstruction by enforcing several metric constraints. After triangular meshes are obtained, the realistic reconstruction of the 3D models is completed by texture mapping. The developed system is implemented in C++, and the Qt library is used to implement the user interface. The OpenGL graphics library is used for the texture mapping routine and the model visualization program. Experimental results using synthetic and real image data are included to demonstrate the effectiveness of the developed system.
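
The system above performs projective factorization followed by a metric upgrade; as a simpler illustration of the underlying idea, here is a sketch of the classical affine (Tomasi-Kanade) factorization of a measurement matrix into camera motion and 3D structure via a rank-3 SVD truncation. It is not the paper's projective algorithm.

```python
import numpy as np

def affine_factorization(W):
    """W: 2F x P matrix of tracked image points (F frames, P points).
    Returns a 2F x 3 motion matrix and a 3 x P structure matrix."""
    W_centered = W - W.mean(axis=1, keepdims=True)   # subtract per-row centroids
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    # Keep the three dominant singular values: rank-3 factorization.
    motion = U[:, :3] * np.sqrt(s[:3])
    structure = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return motion, structure
```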

Three-Dimensional Photon Counting Imaging with Enhanced Visual Quality

  • Lee, Jaehoon;Lee, Min-Chul;Cho, Myungjin
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.180-187
    • /
    • 2021
  • In this paper, we present a computational volumetric reconstruction method for three-dimensional (3D) photon counting imaging with enhanced visual quality when low-resolution elemental images are used under photon-starved conditions. In conventional photon counting imaging with low-resolution elemental images, it may be difficult to estimate the 3D scene correctly because of a lack of scene information. In addition, the reconstructed 3D images may be blurred because volumetric computational reconstruction has an averaging effect. In contrast, our method uses a pixel rearrangement technique for the elemental images as the reconstruction method and a Bayesian approach as the estimation method. Therefore, our method can enhance the visual quality and estimation accuracy of the reconstructed 3D images, because it does not have an averaging effect and it uses prior information about the 3D scene. To validate our technique, we performed optical experiments and demonstrated the reconstruction results.
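
A hedged sketch of the photon-starved imaging model commonly used in this literature, not the paper's pixel-rearrangement or Bayesian estimator: photon counts are modelled as Poisson draws whose mean is the normalised irradiance scaled by the expected photon budget, and a simple maximum-likelihood estimate renormalises the counts. All variable names and values here are illustrative.

```python
import numpy as np

def simulate_photon_counting(irradiance, expected_photons):
    """Simulate a photon-limited elemental image from a normalised irradiance map."""
    norm = irradiance / irradiance.sum()
    return np.random.poisson(expected_photons * norm)

def ml_estimate(counts, expected_photons):
    """Per-pixel maximum-likelihood irradiance estimate from the photon counts."""
    return counts / expected_photons
```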

Comparative study of data selection in data integration for 3D building reconstruction

  • Nakagawa, Masafumi;Shibasaki, Ryosuke
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1393-1395
    • /
    • 2003
  • In this research, we present a data integration method that combines ultra-high-resolution images with complementary data for 3D building reconstruction. In our method, Three Line Sensor (TLS) images are used as the ultra-high-resolution imagery, in combination with 2D digital maps, DSMs, or both. The reconstructed 3D buildings, the correctness rate, and the accuracy of the results are presented. As a result, an optimized combination scheme of data sets, sensors, and methods is proposed.


Comparison of 3D Reconstruction Image and Medical Photograph of Neck Tumors (경부 종물에서 3차원 재건 영상과 적출 조직 사진의 비교)

  • Yoo, Young-Sam
    • Korean Journal of Head & Neck Oncology
    • /
    • v.26 no.2
    • /
    • pp.198-203
    • /
    • 2010
  • Objectives : Extracting full information from axial CT images requires experience and knowledge. Sagittal and coronal images can give more information, but the 3-dimensional picture still has to be assembled mentally from them. With the aid of 3D reconstruction software, CT data can be converted into visible 3D images. We compared medical photographs of 15 surgical specimens from neck tumors with 3D reconstructed images of the same patients. Materials and Methods : Fifteen patients with surgically treated neck tumors were recruited. Medical photographs of the surgical specimens were collected for comparison. 3D reconstruction of the neck CT of the same patients using the 3D-Doctor software produced 3D images of the neck masses. The width and height of the tumors in the photographs and in the images of the same cases were measured and compared statistically. Visual similarity between the photographs and the 3D images was also rated. Results : No statistically significant difference in size was found between the medical photographs and the 3D images. The visual similarity scores between the two groups of images were high. Conclusion : The 3D reconstructed images of the neck masses resembled the real photographs of the excised masses and had similar measured sizes; they can provide reliable visual information about the mass.
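
A hypothetical sketch of the size comparison as a paired test, with placeholder numbers that are not the paper's data: paired width measurements from the photographs and from the 3D reconstructions, compared with a paired t-test.

```python
import numpy as np
from scipy import stats

photo_width = np.array([3.2, 4.1, 2.8, 5.0, 3.6])   # cm, illustrative only
model_width = np.array([3.3, 4.0, 2.9, 4.8, 3.7])   # cm, illustrative only
t_stat, p_value = stats.ttest_rel(photo_width, model_width)
print(f"paired t-test p = {p_value:.3f}")            # p > 0.05: no significant difference
```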

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.1
    • /
    • pp.438-444
    • /
    • 2018
  • Structured light vision systems are widely used in 3D surface profiling. Usually, such a system is composed of a camera and a laser that projects a line onto the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser stripe plane in the camera coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given tasks, but these algorithms require multiple images acquired under different poses for calibration. In this paper, we propose a calibration algorithm that works from just one shot. The proposed algorithm also allows 3D reconstruction in both the camera frame and the laser frame. This is achieved with a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frame gives more flexibility for applications, and the proposed algorithm also improves the accuracy of the 3D reconstruction.
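
For context, a minimal sketch of how a calibrated structured-light stripe system recovers a 3D point (generic stripe triangulation, not the proposed one-shot calibration itself): back-project the image pixel to a camera ray and intersect it with the calibrated laser plane. The intrinsics and plane parameters in the example are invented.

```python
import numpy as np

def triangulate_stripe_point(pixel, K, plane):
    """pixel: (u, v); K: 3x3 intrinsics; plane: (n, d) with n.X + d = 0 in the
    camera frame. Returns the 3D point where the pixel's ray meets the laser plane."""
    n, d = np.asarray(plane[0], float), float(plane[1])
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction
    depth = -d / (n @ ray)                  # scale so the point lies on the plane
    return depth * ray

# Example with an assumed camera and a tilted laser plane 0.2*x - z + 0.5 = 0.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
print(triangulate_stripe_point((400, 250), K, ([0.2, 0.0, -1.0], 0.5)))
```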

Comparison of 3D Reconstruction Methods to Create 3D Indoor Models with Different LODs

  • Hong, Sungchul;Choi, Hyunsang
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.674-675
    • /
    • 2015
  • A 3D indoor model has become an indispensable component of BIM (Building Information Modeling) and GIS (Geographic Information System). However, a huge amount of time and human resources is required for collecting spatial measurements and creating such a 3D indoor model. Also, varied forms of 3D indoor models exist depending on their purpose of use. Thus, in this study, three different 3D indoor models are defined: 1) omnidirectional images, 2) a 3D realistic model, and 3) a 3D indoor as-built model. A series of reconstruction methods is then introduced to construct each type of 3D indoor model: an omnidirectional image acquisition method, a hybrid surveying method, and a terrestrial LiDAR-based method. The reconstruction methods are applied to a large and complex atrium, and their 3D modeling results are compared and analyzed.
