• Title/Summary/Keyword: Multi-view Stereo

62 search results

Fusing Algorithm for Dense Point Cloud in Multi-view Stereo (Multi-view Stereo에서 Dense Point Cloud를 위한 Fusing 알고리즘)

  • Han, Hyeon-Deok; Han, Jong-Ki
    • Journal of Broadcast Engineering, v.25 no.5, pp.798-807, 2020
  • As digital camera technologies have developed, 3D scenes can be reconstructed from pictures captured by multiple cameras. The 3D data are represented as a point cloud, which consists of the 3D coordinates of each point and its associated attributes. Various techniques have been proposed to construct such point cloud data; Structure-from-Motion (SfM) and Multi-view Stereo (MVS) are examples of image-based technologies in this field. In conventional pipelines, the point cloud generated by SfM and MVS may be sparse because some depth estimates are incorrect and the corresponding data are removed. In this paper, we propose an efficient algorithm that enhances the point cloud so that its density increases. Simulation results show that the proposed algorithm outperforms conventional algorithms both objectively and subjectively.
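The back-projection step behind this kind of densification can be sketched as follows; this is a minimal illustration under assumed pinhole intrinsics and world-to-camera poses, not the specific algorithm proposed in the paper.

```python
import numpy as np

def backproject_depth(depth, K, R, t):
    """Back-project one view's depth map into a world-space point set.

    depth : (H, W) array of depth values (0 marks removed/invalid pixels)
    K     : (3, 3) camera intrinsic matrix
    R, t  : world-to-camera rotation (3, 3) and translation (3,)
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    valid = depth > 0
    z = depth[valid]
    # Pixel -> normalized image coordinates -> camera-space 3D points.
    x = (u[valid] - K[0, 2]) / K[0, 0] * z
    y = (v[valid] - K[1, 2]) / K[1, 1] * z
    pts_cam = np.stack([x, y, z], axis=1)
    # Camera -> world: X_w = R^T (X_c - t); points are stored as rows.
    return (pts_cam - t) @ R

def fuse_views(depth_maps, cameras):
    """Merge the back-projected points of every view into one denser cloud."""
    clouds = [backproject_depth(d, K, R, t)
              for d, (K, R, t) in zip(depth_maps, cameras)]
    return np.concatenate(clouds, axis=0)
```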

A Survey for 3D Object Detection Algorithms from Images

  • Lee, Han-Lim; Kim, Ye-ji; Kim, Byung-Gyu
    • Journal of Multimedia Information System, v.9 no.3, pp.183-190, 2022
  • Image-based 3D object detection is one of the important and difficult problems in autonomous driving and robotics; it aims to find and represent the location, dimensions, and orientation of objects of interest. It generates three-dimensional (3D) bounding boxes from 2D camera images alone, so there is no need for devices such as LiDAR or Radar that provide accurate depth information. Image-based methods can be divided into three main categories: monocular, stereo, and multi-view 3D object detection. In this paper, we investigate recent state-of-the-art models in these three categories. For multi-view 3D object detection, which emerged together with the release of the new benchmark datasets NuScenes and Waymo, we discuss the differences from existing monocular and stereo methods. We also analyze their performance and discuss their advantages and disadvantages. Finally, we summarize the remaining challenges and a future direction for this field.
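For reference, the output representation these detectors share (location, dimensions, and orientation of a 3D box) can be captured in a small container like the one below; the field names and the yaw-about-z convention are illustrative assumptions, not tied to any surveyed model.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box3D:
    """Illustrative 3D detection output: center, dimensions, and yaw about the up axis."""
    x: float
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float  # heading angle in radians

    def corners(self):
        """Return the 8 box corners as an (8, 3) array in the same frame as the center."""
        dx, dy, dz = self.length / 2, self.width / 2, self.height / 2
        local = np.array([[sx * dx, sy * dy, sz * dz]
                          for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about z
        return local @ R.T + np.array([self.x, self.y, self.z])
```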

VHOE-based Multi-view Stereoscopic 3D Display System (VHOE광학판을 이용한 다시점 스테레오 입체영상 디스플레이 시스템)

  • Cho, Byung-Chul; Koo, Jung-Sik; Kim, Seung-Cheol; Kim, Eun-Soo
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference, 2002.04b, pp.42-44, 2002
  • An experimental model of an 8-view stereoscopic display system using a photopolymer-based VHOE is proposed. First, the VHOE is implemented by angle-multiplexed recording of the diffraction gratings for the 8 views, using an optimized exposure-time scheduling scheme in the photopolymer (HRF-150-100). Then, the VHOE-based 8-view stereoscopic display system is implemented by sequentially synchronizing the incident angles of the reference beam of the VHOE with the 8-view stereo images on the LCD panel. From experimental results using 8-view images generated by a toed-in stereo camera system, it is found that the 8-view stereo images can be diffracted into eight different directions time-sequentially and that there is some disparity between the stereo images.


A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun; Chang, Eun-Young; Hur, Nam-Ho; Lee, Soo-In
    • Proceedings of the IEEK Conference, 2005.11a, pp.549-552, 2005
  • This paper presents a 3D video content generation technique and system that use multi-view images and a depth map. The proposed system takes 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique. These two maps are fused to obtain a more reliable depth map. The obtained depth map is used not only to insert a virtual object into the scene based on depth keying, but also to synthesize virtual-viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
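The virtual-viewpoint synthesis step can be illustrated with a small depth-image-based rendering sketch for a rectified, horizontally aligned camera setup; the focal length, baseline, and z-buffer occlusion handling here are simplifying assumptions rather than the system described in the paper.

```python
import numpy as np

def synthesize_view(color, depth, f, baseline, alpha):
    """Forward-warp a color image to a virtual viewpoint along the camera baseline.

    color    : (H, W, 3) reference image
    depth    : (H, W) depth in the same units as `baseline`
    f        : focal length in pixels
    baseline : distance between the reference and the farthest virtual camera
    alpha    : position of the virtual camera on the baseline, in [0, 1]
    """
    H, W, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    disparity = alpha * f * baseline / np.maximum(depth, 1e-6)
    v, u = np.mgrid[0:H, 0:W]
    u_dst = np.round(u - disparity).astype(int)
    inside = (u_dst >= 0) & (u_dst < W) & (depth > 0)
    for y, xs, xd, z in zip(v[inside], u[inside], u_dst[inside], depth[inside]):
        if z < zbuf[y, xd]:            # keep the nearest surface per target pixel
            zbuf[y, xd] = z
            out[y, xd] = color[y, xs]
    return out
```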


Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing, v.6 no.3, pp.175-182, 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capturing. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to resolve these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
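A hedged sketch of the two intermediate products described above, assuming a rectified color pair with focal length f (in pixels) and baseline B: the warped ToF depth yields initial disparities via d = f·B/Z, and a simple gradient threshold marks depth discontinuities. The gradient operator and threshold are placeholders, not the paper's exact choices.

```python
import numpy as np

def initial_disparity(warped_depth, f, baseline):
    """Convert ToF depth (already warped to the color view) into initial disparities."""
    d = np.zeros_like(warped_depth, dtype=float)
    valid = warped_depth > 0
    d[valid] = f * baseline / warped_depth[valid]   # d = f*B / Z for a rectified pair
    return d

def depth_discontinuity_map(warped_depth, thresh):
    """Mark pixels whose depth changes sharply; used to guide the stereo matcher."""
    gy, gx = np.gradient(warped_depth.astype(float))
    return np.hypot(gx, gy) > thresh
```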

Research on Robustness of 2D DWT-Based Watermarking in Intermediate Viewpoint by 3D Warping

  • Park, Scott; Choi, Hyun-Jun; Yang, Won-Jae; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Information and Communication Convergence Engineering, v.12 no.3, pp.173-180, 2014
  • This paper investigates the robustness of watermarking techniques for stereo or multi-view images generated from texture and depth images. A three-dimensional (3D) warping technique is applied to the texture and depth images to generate stereo or multi-view images for a 3D display. Building on this 3D warping technique, we developed watermarking techniques and evaluated their robustness, i.e., whether watermarks can still be extracted from texture images even when the viewpoint is moved. A depth image is used to generate the stereo image with the largest viewpoint difference to the left and right. The overlapping region of the stereo image that does not disappear after warping is then obtained, and the DWT is applied to this region to embed a watermark in the LL sub-band. The proposed watermarking techniques were found to yield bit error rates of about 3%-16% when applied to stereo images generated from texture and depth images. Furthermore, the copyright could be identified when the extracted watermark was visually inspected.
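The LL sub-band embedding idea can be sketched with PyWavelets as below; the additive ±1 scheme, the embedding strength, and the non-blind extractor are assumptions for illustration, not the paper's exact watermarking parameters.

```python
import numpy as np
import pywt

def embed_ll_watermark(region, bits, strength=2.0, wavelet='haar'):
    """Additively embed a 0/1 bit sequence (mapped to -1/+1) into the LL sub-band.

    region : 2D float array (the overlapping region that survives 3D warping)
    bits   : 1D array of {0, 1} watermark bits
    """
    LL, (LH, HL, HH) = pywt.dwt2(region.astype(float), wavelet)
    LLw = LL.copy().ravel()
    signs = 2.0 * np.asarray(bits, dtype=float) - 1.0
    n = min(signs.size, LLw.size)
    LLw[:n] += strength * signs[:n]
    return pywt.idwt2((LLw.reshape(LL.shape), (LH, HL, HH)), wavelet)

def extract_ll_watermark(marked, original, n_bits, wavelet='haar'):
    """Non-blind extraction: compare LL coefficients of marked and original regions."""
    LLm, _ = pywt.dwt2(marked.astype(float), wavelet)
    LLo, _ = pywt.dwt2(original.astype(float), wavelet)
    diff = (LLm - LLo).ravel()[:n_bits]
    return (diff > 0).astype(int)
```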

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문; 안충현; 이수인; 김강연; 이관행
    • Journal of Broadcast Engineering, v.9 no.3, pp.185-195, 2004
  • This paper presents a multi-depth-map fusion method for 3D scene reconstruction. It fuses depth maps obtained from a stereo matching technique and from a depth camera. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusions and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy and provides only a limited depth range. In order to obtain better depth estimates than either of these conventional techniques, we propose a depth-map fusion method that fuses the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps generated from stereo matching of 3-view images, and an additional depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
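The final per-pixel selection step can be illustrated minimally as follows, assuming the stereo and depth-camera maps are already aligned to the center view and come with per-pixel confidences; how those confidences are computed is left open here.

```python
import numpy as np

def select_depth(depth_maps, confidences):
    """Pick, per pixel, the depth value whose source has the highest confidence.

    depth_maps  : list of (H, W) depth maps aligned to the center view
    confidences : list of (H, W) confidence maps (0 where a source has no estimate)
    """
    depths = np.stack(depth_maps)          # (N, H, W)
    confs = np.stack(confidences)
    best = np.argmax(confs, axis=0)        # index of the most trusted source per pixel
    fused = np.take_along_axis(depths, best[None], axis=0)[0]
    fused[np.max(confs, axis=0) == 0] = 0  # no source available -> leave empty
    return fused
```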

A New Rectification Scheme for Uncalibrated Stereo Image Pairs and Its Application to Intermediate View Reconstruction

  • Ko, Jung-Hwan; Jung, Yong-Woo; Kim, Eun-Soo
    • Journal of Information Display, v.6 no.4, pp.26-34, 2005
  • In this paper, a new rectification scheme that transforms an uncalibrated stereo image pair into a calibrated one is suggested, and its performance is analyzed by applying the scheme to the reconstruction of intermediate views for multi-view stereoscopic display. In the proposed method, feature points are extracted from the stereo image pair by detecting corners and similarities between the pixels of the stereo pair. These detected feature points are then used to extract the motion vectors between the stereo images and the epipolar line. Finally, the input stereo image pair is rectified by aligning the extracted epipolar lines of the two images in the horizontal direction. From experiments on synthesizing intermediate views using stereo pairs calibrated through the proposed rectification algorithm and uncalibrated ones, for three kinds of stereo image pairs ('Man', 'Face' and 'Car'), it is found that the PSNRs of the intermediate views reconstructed from the calibrated images are about 2.5 to 3.26 dB higher than those of the uncalibrated ones.
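An analogous uncalibrated-rectification pipeline can be sketched with OpenCV (ORB features, RANSAC fundamental-matrix estimation, and stereoRectifyUncalibrated); this stands in for, but is not identical to, the corner/similarity-based matching described in the paper.

```python
import cv2
import numpy as np

def rectify_uncalibrated(img_left, img_right):
    """Rectify an uncalibrated 8-bit grayscale stereo pair so epipolar lines become horizontal."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Robustly estimate the fundamental matrix, then the rectifying homographies.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel().astype(bool)
    h, w = img_left.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inliers], pts2[inliers], F, (w, h))
    return (cv2.warpPerspective(img_left, H1, (w, h)),
            cv2.warpPerspective(img_right, H2, (w, h)))
```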

Comparison of geometrical methods to identify CME 3-D structures

  • Lee, Harim; Moon, Yong-Jae; Na, Hyeonock; Jang, Soojeong
    • The Bulletin of The Korean Astronomical Society, v.39 no.1, pp.73-73, 2014
  • Several geometrical models (e.g., cone and flux-rope models) have been suggested to infer the 3-D parameters of CMEs using multi-view observations (STEREO/SECCHI) and single-view observations (SOHO/LASCO). To prepare for the time when only single-view observations are available, we have tested whether the cone-model parameters from single-view observations are consistent with those from multi-view ones. For this test, we select 35 CMEs that are identified by one spacecraft as CMEs whose angular widths are larger than 180 degrees, and as limb CMEs by the other. We use SOHO/LASCO and STEREO/SECCHI data during the period from December 2010 to July 2011, when the two spacecraft were separated by 90±10 degrees. In this study, we compare the 3-D parameters of these CMEs from three different methods: (1) a triangulation method using the STEREO/SECCHI and SOHO/LASCO data, (2) a Graduated Cylindrical Shell (GCS) flux-rope model using the STEREO/SECCHI data, and (3) an ice-cream cone model using the SOHO/LASCO data. The parameters used for comparison are the radial velocities, angular widths, and source locations (the angle γ between the propagation direction and the plane of the sky). We find that the radial velocities and the γ-values from the three methods are well correlated with one another (CC > 0.8). However, the angular widths from the three methods differ somewhat, and their correlation coefficients are relatively low (CC > 0.4). We also find that the correlation coefficients between the locations from the three methods and the active-region locations are larger than 0.9, implying that most of the CMEs are ejected radially.
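The role of the angle γ in comparing radial velocities comes down to standard projection geometry; as a hedged reminder (not the fitting procedure of any of the three models):

```latex
% \gamma is the angle between the CME propagation direction and the plane of the sky,
% so a single-view (plane-of-sky) speed measurement relates to the radial speed by
\[
  V_{\mathrm{sky}} = V_{\mathrm{radial}} \cos\gamma
  \quad\Longrightarrow\quad
  V_{\mathrm{radial}} = \frac{V_{\mathrm{sky}}}{\cos\gamma}.
\]
```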


IVS using disparity estimation based on bidirectional quadtree (양방향 사진트리 기반 변이 추정을 이용한 중간 시점 영상 합성)

  • 김재환; 임정은; 손광훈
    • Proceedings of the IEEK Conference, 2003.07e, pp.2295-2298, 2003
  • The correspondence problem in stereo image matching plays an important role in expanding viewpoints as multi-view video applications become more popular. Conventional disparity estimation algorithms are limited in finding exact disparities because they consider not image features but merely points of similar intensity. We therefore propose an efficient disparity estimation algorithm that considers the features of stereo image pairs. Simulation results confirm that our proposed method produces better intermediate views than existing block-matching methods.
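The general quadtree idea (assign one disparity per block and split blocks that a single disparity explains poorly) can be sketched as follows; the SAD cost and the split threshold are placeholders rather than the paper's bidirectional criterion. In use, `disp` would start as a zero integer array the size of the left image, and the function would be called once per top-level block tiling it.

```python
import numpy as np

def block_cost(left, right, y, x, size, d):
    """Mean absolute difference for one left-image block at candidate disparity d."""
    patch_l = left[y:y + size, x:x + size].astype(float)
    xs = max(x - d, 0)
    patch_r = right[y:y + size, xs:xs + size].astype(float)
    h = min(patch_l.shape[0], patch_r.shape[0])
    w = min(patch_l.shape[1], patch_r.shape[1])
    return np.abs(patch_l[:h, :w] - patch_r[:h, :w]).mean()

def quadtree_disparity(left, right, y, x, size, d_max, disp, min_size=4, tau=8.0):
    """Assign one disparity per block; recursively split blocks with a poor fit."""
    costs = [block_cost(left, right, y, x, size, d) for d in range(d_max + 1)]
    best_d = int(np.argmin(costs))
    if costs[best_d] > tau and size > min_size:   # poor fit -> split into 4 children
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree_disparity(left, right, y + dy, x + dx, half,
                                   d_max, disp, min_size, tau)
    else:
        disp[y:y + size, x:x + size] = best_d
    return disp
```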
