• Title/Summary/Keyword: 3D Scene Reconstruction


3D Reconstruction and Self-calibration based on Binocular Stereo Vision (스테레오 영상을 이용한 자기보정 및 3차원 형상 구현)

  • Hou, Rongrong;Jeong, Kyung-Seok
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.9 / pp.3856-3863 / 2012
  • A 3D reconstruction technique from stereo images that requires minimal user intervention has been developed. The reconstruction problem consists of three steps, each estimating a specific geometry group. The first step estimates the epipolar geometry that exists between the stereo image pair, which includes feature matching in both images. The second estimates the affine geometry, a process of finding a special plane in projective space by means of vanishing points. The third step, which includes camera self-calibration, obtains a metric geometry from which a 3D model of the scene can be derived. The major advantage of this method is that the stereo images do not need to be calibrated for reconstruction. The results of camera calibration and reconstruction have shown the possibility of obtaining a 3D model directly from features in the images.
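The first of the three steps, estimating the epipolar geometry from feature matches, is classically solved with the normalized eight-point algorithm. The sketch below (plain NumPy, a generic textbook version rather than this paper's implementation) estimates a fundamental matrix F satisfying x2' F x1 = 0 from eight or more matched points:

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def eight_point(p1, p2):
    """Estimate F (with x2^T F x1 = 0) from >= 8 point matches."""
    x1, T1 = normalize(np.asarray(p1, float))
    x2, T2 = normalize(np.asarray(p2, float))
    # Each match contributes one row of the linear system A f = 0.
    A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                         x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                         x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)               # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                       # undo the normalization
```

In practice the matches come from a feature detector and the estimate is wrapped in RANSAC to reject outliers.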

Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1997.06a / pp.125-130 / 1997
  • In this paper we present an algorithm to generate new views of a scene, starting with images from weakly calibrated cameras. Errors in 3D scene reconstruction are usually reflected in the quality of the new scene generated, so we seek a direct method for reprojection. In this paper, we use the knowledge of dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. We borrow ideas from the object recognition literature and extend them significantly to solve the reprojection problem. Unlike epipolar line intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The theory of reprojection is combined with hardware-based rendering to achieve fast rendering. We demonstrate our results of novel view generation from stereo pairs for arbitrary locations of the virtual camera.
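The idea the paper builds on is that affine coordinates of a point with respect to a basis of matched points are invariant under affine maps, so they can be transferred to a new view. A minimal 2D illustration of that invariance (not the five-point reprojection algorithm itself):

```python
import numpy as np

def affine_coords(p, o, a, b):
    """Coordinates (alpha, beta) of p in the affine frame {o; a-o, b-o}."""
    M = np.column_stack([a - o, b - o])
    return np.linalg.solve(M, p - o)

def reproject(coords, o, a, b):
    """Reconstruct the point in a new view from its affine coordinates
    and the images of the basis points in that view."""
    M = np.column_stack([a - o, b - o])
    return o + M @ coords
```

Because the coordinates are preserved, knowing where the basis points land in the new view is enough to place every other matched point.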


Optical Encryption and Information Authentication of 3D Objects Considering Wireless Channel Characteristics

  • Lee, In-Ho;Cho, Myungjin
    • Journal of the Optical Society of Korea / v.17 no.6 / pp.494-499 / 2013
  • In this paper, we present optical encryption and information authentication of 3D objects considering wireless channel characteristics. Using optical encryption such as double random phase encryption (DRPE) and 3D integral imaging, an encrypted 3D scene can be transmitted. However, the wireless channel introduces noise and fading into the transmitted 3D encrypted data, so the information may be lost or distorted by factors such as channel noise and propagation fading. Thus, using digital modulation and maximum likelihood (ML) detection, the noise and fading effects are mitigated and the encrypted data is estimated well at the receiver. In addition, using computational volumetric reconstruction of integral imaging and advanced correlation filters, the noise effects may be remedied and the 3D information may be authenticated. To prove our method, we carry out an optical experiment for sensing 3D information and a simulation for optical encryption with DRPE and authentication with a nonlinear correlation filter. To the best of our knowledge, this is the first report on optical encryption and information authentication of 3D objects considering wireless channel characteristics.
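Classical DRPE multiplies the input by one random phase mask and its Fourier spectrum by a second; decryption undoes both with conjugate masks. A minimal NumPy sketch of that stage alone (the wireless-channel, modulation, and authentication stages of the paper are omitted):

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random phase encryption: a random phase mask in the input
    plane and another in the Fourier plane turn the image into
    noise-like complex data."""
    field = img * np.exp(2j * np.pi * phase1)
    spec = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)
    return np.fft.ifft2(spec)

def drpe_decrypt(cipher, phase1, phase2):
    """Decrypt by undoing each phase mask in reverse order."""
    spec = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    field = np.fft.ifft2(spec) * np.exp(-2j * np.pi * phase1)
    return np.abs(field)
```

Without both correct phase masks, the decrypted output remains noise-like, which is what the correlation-filter authentication step exploits.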

3D reconstruction method without projective distortion from un-calibrated images (비교정 영상으로부터 왜곡을 제거한 3 차원 재구성방법)

  • Kim, Hyung-Ryul;Kim, Ho-Cul;Oh, Jang-Suk;Ku, Ja-Min;Kim, Min-Gi
    • Proceedings of the IEEK Conference / 2005.11a / pp.391-394 / 2005
  • In this paper, we present an approach that is able to reconstruct 3-dimensional metric models from uncalibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, nor of the arrangement of one camera with respect to the other, then the projective reconstruction will have projective distortion, which is expressed by an arbitrary projective transformation. The distortion is removed by upgrading the reconstruction from projective to metric through self-calibration. Self-calibration requires no information about the camera matrices or the scene geometry: it is the process of determining internal camera parameters directly from multiple uncalibrated images, and it avoids the onerous task of calibrating cameras with special calibration objects. The root of the method is that a uniquely fixed conic (the absolute quadric) exists in 3D space and can be identified from the images. Once the absolute quadric is identified, the metric geometry can be computed. We compared the reconstruction from calibrated images with the result of the self-calibration method.


3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.6 no.2 / pp.45-50 / 2006
  • In order to generate a photo-realistic synthesized image, we should reconstruct the light environment by 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the light sources (both global and local) in the real image, which are used to illuminate the synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and light sources in the scene are identified automatically from the correspondences between images without a priori camera calibration. The light sources are classified according to whether they illuminate the whole scene, and we then reconstruct the 3D illumination environment. Experimental results showed that the proposed method with distributed ray tracing makes photo-realistic image synthesis possible. Animators and lighting experts in the film and animation industry are expected to benefit greatly from it.
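A simplified way to build an HDR radiance map from multiple exposures, assuming a linear sensor response (the response-curve recovery used in full HDR pipelines is omitted), is a saturation-weighted average of exposure-normalized pixel values:

```python
import numpy as np

def hdr_radiance(images, exposures):
    """Fuse differently exposed images (values in [0,1], linear response
    assumed) into one radiance map, down-weighting pixels near the ends
    of the dynamic range with a hat-shaped weight."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for Z, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * Z - 1.0)   # hat weight, peaks at mid-gray
        num += w * Z / t                   # exposure-normalized radiance
        den += w
    return num / np.maximum(den, 1e-8)
```

Saturated pixels (value 1.0) and black pixels (value 0.0) get zero weight, so each pixel's radiance is taken from the exposures that actually measured it.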


Geometric Regularization of Irregular Building Polygons: A Comparative Study

  • Sohn, Gun-Ho;Jwa, Yoon-Seok;Tao, Vincent;Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.545-555 / 2007
  • 3D buildings are the most prominent features comprising an urban scene. A few megacities around the globe have been virtually reconstructed as photo-realistic 3D models, which have become accessible to the public through state-of-the-art online mapping services. Many research efforts have been made to develop automatic techniques for reconstructing large-scale 3D building models from remotely sensed data. However, existing methods still produce irregular building polygons due to errors induced partly by uncalibrated sensor systems and scene complexity, and partly by sensor resolution inappropriate to the observed object scales. Thus, a geometric regularization technique is urgently required to rectify such irregular building polygons quickly captured from low-resolution sensor data. This paper aims to develop a new method for regularizing noisy building outlines extracted from airborne LiDAR data and to evaluate its performance in comparison with existing methods, including Douglas-Peucker polyline simplification, total least-squares adjustment, model hypothesis-verification, and rule-based rectification. Based on the Minimum Description Length (MDL) principle, a new objective function, Geometric Minimum Description Length (GMDL), is introduced to regularize geometric noise by favoring repeated line directions and regular angle transitions and by minimizing the number of vertices used. After generating hypothetical regularized models, a global optimum of geometric regularity is achieved by verifying the entire solution space. A comparative evaluation of the proposed geometric regularizer is conducted using both simulated and real building vectors with various levels of noise. The results show that GMDL outperforms the selected existing algorithms at most noise levels.
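Among the compared baselines, Douglas-Peucker polyline simplification is compact enough to sketch in full. Under the standard formulation (endpoints kept; interior points dropped whenever they lie within a tolerance of the chord):

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline: keep the endpoints, recursively keep the
    point farthest from the chord if it exceeds tol, otherwise drop
    all interior points."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    L = np.linalg.norm(ab)
    if L == 0:
        d = np.linalg.norm(points - a, axis=1)
    else:
        # Perpendicular distance of each point from the chord a-b.
        d = np.abs(ab[0] * (points[:, 1] - a[1])
                   - ab[1] * (points[:, 0] - a[0])) / L
    i = int(np.argmax(d))
    if d[i] <= tol:
        return np.array([a, b])
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])
```

Unlike GMDL, this criterion is purely local: it has no notion of repeated line directions or regular angles, which is exactly the weakness the paper's regularizer addresses.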

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering
    • /
    • v.9 no.3
    • /
    • pp.185-195
    • /
    • 2004
  • This paper presents a multi-depth map fusion method for 3D scene reconstruction. It fuses depth maps obtained from the stereo matching technique and from the depth camera. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy and provides a limited depth range. In order to get better depth estimates than these two conventional techniques, we propose a depth map fusion method that fuses the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps generated by stereo matching of 3-view images; in addition, a depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
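The per-pixel selection step can be illustrated with a simple confidence-based rule (a hedged sketch, not the paper's exact selection criterion): trust the stereo estimate where its matching confidence is high, and fall back to the noisier but dense depth-camera measurement elsewhere.

```python
import numpy as np

def fuse_depth(stereo_depth, stereo_conf, camera_depth, conf_thresh=0.7):
    """Per-pixel depth fusion: take the stereo value where matching
    confidence clears the threshold and the pixel is valid (not NaN),
    otherwise take the depth-camera value."""
    stereo_ok = (stereo_conf >= conf_thresh) & ~np.isnan(stereo_depth)
    return np.where(stereo_ok, stereo_depth, camera_depth)
```

In a multi-view setting the same rule generalizes to picking, per pixel, the most reliable of several candidate depth values rather than just two.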

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing / v.39 no.1 / pp.1-21 / 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single image in which each pixel carries metric distance information, i.e., the object's distance z from the camera along the optical axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation using different techniques from several papers, study areas, and models applied over the last 20 years. We surveyed depth-mapping techniques based on both traditional methods and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. This study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics, and also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised approaches. We also elaborate on the challenges of the different methods. At the conclusion of this study, we discuss new ideas for future research in depth map estimation.
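The evaluation metrics the review refers to are largely standardized across the depth-estimation literature. A sketch of three of the most common ones (absolute relative error, RMSE, and the δ < 1.25 accuracy), evaluated over valid ground-truth pixels:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular depth evaluation metrics over valid (gt > 0) pixels."""
    m = gt > 0
    p, g = pred[m], gt[m]
    abs_rel = np.mean(np.abs(p - g) / g)          # mean relative error
    rmse = np.sqrt(np.mean((p - g) ** 2))         # root mean squared error
    ratio = np.maximum(p / g, g / p)
    delta1 = np.mean(ratio < 1.25)                # fraction within 25% of gt
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}
```

Benchmarks typically also report δ < 1.25² and δ < 1.25³, which follow the same pattern with larger thresholds.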

3D Visualization of Partially Occluded Objects Using Axially Distributed Image Sensing With a Wide-Angle Lens

  • Kim, Nam-Woo;Hong, Seok-Min;Lee, Hoon Jae;Lee, Byung-Gook;Lee, Joon-Jae
    • Journal of the Optical Society of Korea / v.18 no.5 / pp.517-522 / 2014
  • In this paper we propose an axially distributed image-sensing method with a wide-angle lens to capture a wide-area scene of 3D objects. A large amount of parallax information can be collected by translating the wide-angle camera along the optical axis. The recorded wide-area elemental images are calibrated by compensating for radial distortion. With these images we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To show the feasibility of the proposed method, we performed optical experiments for visualization of a partially occluded 3D object.
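The ray back-projection reconstruction can be sketched as scale-and-average: each elemental image is rescaled about the optical axis according to its camera's axial position and the chosen reconstruction depth, then all rescaled images are averaged. The version below is a simplified pinhole-model sketch (nearest-neighbor sampling, magnification taken as the ratio of camera distance to plane distance), not the paper's calibrated wide-angle pipeline:

```python
import numpy as np

def reconstruct_plane(elemental, z_cams, z_plane):
    """Back-project each elemental image to the plane z_plane by scaling
    it about the image center, then average. Objects at z_plane come
    into focus; objects at other depths blur out."""
    H, W = elemental[0].shape
    yy, xx = np.mgrid[0:H, 0:W]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for img, zc in zip(elemental, z_cams):
        m = zc / z_plane                          # assumed magnification
        sy = np.rint(cy + (yy - cy) * m).astype(int)
        sx = np.rint(cx + (xx - cx) * m).astype(int)
        ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
        acc[ok] += img[sy[ok], sx[ok]]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)
```

Sweeping z_plane through a range of depths produces the volumetric slice images; occluders, lying at other depths, are averaged away.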

TEST OF A LOW COST VEHICLE-BORNE 360 DEGREE PANORAMA IMAGE SYSTEM

  • Kim, Moon-Gie;Sung, Jung-Gon
    • Proceedings of the KSRS Conference / 2008.10a / pp.137-140 / 2008
  • Recently, many areas such as surveillance, virtual reality, navigation, and 3D scene reconstruction require wide field-of-view images. Conventional camera systems have a limited field of view and provide only partial information about the scene, whereas omnidirectional vision systems can overcome these disadvantages. Acquiring 360-degree panorama images normally requires an expensive omnidirectional camera lens. In this study, a 360-degree panorama image system was tested using a low-cost optical reflector that captures 360-degree panoramic views in a single shot. This system can be used together with detailed positional information from GPS/INS. The results of this study show that the 360-degree panorama image system is a very effective tool for mobile monitoring.
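A single-shot reflector of this kind yields an annular (donut-shaped) image that must be unwarped into a rectangular panorama. A minimal polar-to-rectangular remapping sketch (nearest-neighbor sampling, image center and inner/outer mirror radii assumed known):

```python
import numpy as np

def unwarp_panorama(omni, r_in, r_out, width, height):
    """Map the annular mirror image to a rectangular panorama: each
    output column is an azimuth angle, each row a radius between the
    inner and outer circles of the reflection."""
    H, W = omni.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
    r = np.linspace(r_out, r_in, height)      # top row = outer radius
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    sy = np.rint(cy + rr * np.sin(tt)).astype(int)
    sx = np.rint(cx + rr * np.cos(tt)).astype(int)
    sy = np.clip(sy, 0, H - 1)
    sx = np.clip(sx, 0, W - 1)
    return omni[sy, sx]
```

A production system would use the mirror's actual reflection profile and bilinear interpolation, but the same lookup-table structure applies.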
