• Title/Abstract/Keywords: 3D Depth Camera

Search results: 301 items (processing time: 0.029 s)

복합형 카메라 시스템에서 관심영역이 향상된 고해상도 깊이맵 생성 방법 (Generation of ROI Enhanced High-resolution Depth Maps in Hybrid Camera System)

  • 김성열;호요성
    • 방송공학회논문지
    • /
    • Vol. 13, No. 5
    • /
    • pp.596-601
    • /
    • 2008
  • This paper proposes a new method for generating a depth map with an enhanced region of interest (ROI) in a hybrid camera system that combines a low-resolution depth camera with a high-resolution stereoscopic camera. The proposed method generates an ROI depth map for the left image by 3D-warping the depth information acquired with the depth camera. Then, the background regions of the left and right images acquired with the stereoscopic camera are stereo-matched to generate a background depth map for the left image. Finally, the ROI depth map and the background depth map are combined to produce the final depth map. The high-resolution depth map generated by the proposed method provides more accurate depth information in the ROI than conventional stereo matching.
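
As an illustration of the pipeline summarized above, the following Python sketch (a minimal reading of the method, not the authors' implementation; the intrinsics `K_d`, `K_l`, the depth-to-left extrinsics `R`, `t`, and the ROI mask are assumed to come from a prior calibration) forward-warps the low-resolution depth into the left view and fills the background from semi-global stereo matching:

```python
# Illustrative sketch: merge a 3D-warped ROI depth map from a low-resolution depth
# camera with a stereo-matched background depth map from the high-resolution pair.
import cv2
import numpy as np

def warp_depth_to_left_view(depth_lr, K_d, K_l, R, t, out_size):
    """Forward-warp a low-resolution depth map into the left color view.
    K_d, K_l: 3x3 intrinsics of the depth and left cameras (assumed known),
    R, t: rotation/translation from the depth camera to the left camera."""
    h, w = depth_lr.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_lr.reshape(-1)
    keep = z > 0                                   # ignore invalid depth samples
    rays = np.linalg.inv(K_d) @ np.vstack([u.reshape(-1)[keep] * z[keep],
                                           v.reshape(-1)[keep] * z[keep],
                                           z[keep]])
    pts = R @ rays + t.reshape(3, 1)               # 3D points in the left-camera frame
    proj = K_l @ pts
    px = (proj[0] / proj[2]).round().astype(int)
    py = (proj[1] / proj[2]).round().astype(int)
    out = np.zeros(out_size, np.float32)
    ok = (px >= 0) & (px < out_size[1]) & (py >= 0) & (py < out_size[0])
    out[py[ok], px[ok]] = pts[2, ok]               # nearest-neighbor depth splat
    return out

def fused_depth(left, right, roi_depth, roi_mask, baseline, focal):
    """Stereo-match the background, then paste the warped ROI depth inside the mask."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0
    bg_depth = np.where(disp > 0, baseline * focal / np.maximum(disp, 1e-3), 0)
    return np.where(roi_mask > 0, roi_depth, bg_depth)
```

The nearest-neighbor splat leaves holes in the warped ROI depth; in this sketch they simply fall back to the stereo-matched background.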

단안영상에서 움직임 벡터를 이용한 영역의 깊이추정 (A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence)

  • 손정만;박영민;윤영우
    • 융합신호처리학회논문지
    • /
    • Vol. 5, No. 2
    • /
    • pp.96-105
    • /
    • 2004
  • Reconstructing a 3D image from a 2D image requires depth information for every pixel, and the manual work usually involved in reconstructing a 3D model consumes a great deal of time and cost. The goal of this paper is to extract the relative depth of regions from a monocular image sequence acquired while the camera is moving. The approach is based on the fact that the motion of every point in the image induced by camera movement depends on its depth. Motion vectors obtained by a full-search technique are compensated for camera rotation and zoom. The motion vectors are then analyzed to estimate the average depth, and the relative depth of each region with respect to the average depth is computed. Experimental results show that the relative depths of regions agree with the relative depths perceived by humans.

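A minimal sketch of this idea in Python, using dense optical flow as a stand-in for the paper's full-search block matching and a global similarity model for the rotation/zoom compensation (all parameter values are illustrative):

```python
# Illustrative sketch: relative region depth from residual motion after compensating
# the global (rotation/zoom) camera motion with a similarity model.
import cv2
import numpy as np

def relative_region_depth(prev_gray, curr_gray, grid=16):
    # Dense flow as a stand-in for full-search block matching.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    src = np.stack([xs.ravel(), ys.ravel()], axis=1)
    dst = src + flow.reshape(-1, 2)
    # Global similarity (rotation + scale + shift) approximates camera rotation/zoom.
    M, _ = cv2.estimateAffinePartial2D(src[::97], dst[::97])   # subsample for speed
    comp = (src @ M[:, :2].T + M[:, 2]) - src                  # predicted global motion
    residual = np.linalg.norm(dst - src - comp, axis=1).reshape(h, w)
    # Larger residual motion -> closer region; depth is expressed relative to the mean.
    block = cv2.resize(residual, (w // grid, h // grid), interpolation=cv2.INTER_AREA)
    rel_depth = block.mean() / np.maximum(block, 1e-3)
    return rel_depth          # >1: farther than average, <1: closer than average
```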

Three-Dimensional Visualization Technique of Occluded Objects Using Integral Imaging with Plenoptic Camera

  • Lee, Min-Chul;Inoue, Kotaro;Tashiro, Masaharu;Cho, Myungjin
    • Journal of information and communication convergence engineering
    • /
    • Vol. 15, No. 3
    • /
    • pp.193-198
    • /
    • 2017
  • In this study, we propose a three-dimensional (3D) visualization technique for occluded objects using integral imaging with a plenoptic camera. In previous studies, depth maps estimated from the elemental images were used to remove occlusion; however, the resolution of these depth maps is low, so occlusion removal is not accurate. We therefore use a plenoptic camera to obtain a high-resolution depth map, from which an individual depth map can also be generated for each elemental image. Finally, these separate depth maps are used to regenerate a more accurate depth map of the 3D objects, allowing the occlusion layers to be removed more effectively. We perform optical experiments to validate the proposed technique, and use MSE and PSNR as performance metrics to evaluate the quality of the reconstructed image. In conclusion, the plenoptic camera enhances the visual quality of the reconstructed image after the occlusion layers are removed.
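
A minimal sketch of the reconstruction step, assuming a grid of elemental images, per-elemental-image depth maps, and a known depth of the occluding layer (the array layout and the 10% depth tolerance are assumptions, not the authors' settings):

```python
# Illustrative sketch: computational integral-imaging reconstruction at one depth
# plane, with an occlusion mask derived from per-elemental-image depth maps.
import numpy as np

def reconstruct_plane(elemental, depth_maps, z, pitch, gap, occluder_z):
    """elemental:  (K, K, h, w) array of elemental images,
    depth_maps:    (K, K, h, w) per-pixel depth for each elemental image,
    z: reconstruction depth, pitch: lenslet pitch in pixels, gap: lens-sensor gap.
    Pixels whose depth matches the occluding layer are excluded from the average."""
    K, _, h, w = elemental.shape
    shift = int(round(pitch * gap / z))          # per-lens shift at depth z
    H, W = h + shift * (K - 1), w + shift * (K - 1)
    acc = np.zeros((H, W), np.float64)
    cnt = np.zeros((H, W), np.float64)
    for i in range(K):
        for j in range(K):
            img = elemental[i, j].astype(np.float64)
            keep = np.abs(depth_maps[i, j] - occluder_z) > 0.1 * occluder_z
            ys, xs = i * shift, j * shift
            acc[ys:ys + h, xs:xs + w] += img * keep
            cnt[ys:ys + h, xs:xs + w] += keep
    return acc / np.maximum(cnt, 1)              # averaged view with occluder removed
```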

Quality Enhancement of 3D Volumetric Contents Based on 6DoF for 5G Telepresence Service

  • Byung-Seo Park;Woosuk Kim;Jin-Kyum Kim;Dong-Wook Kim;Young-Ho Seo
    • Journal of Web Engineering
    • /
    • Vol. 21, No. 3
    • /
    • pp.729-750
    • /
    • 2022
  • In general, 6DoF (six degrees of freedom) 3D volumetric content technology is becoming important in 5G telepresence services, web-based (WebGL) graphics, computer vision, robotics, and next-generation augmented reality. Because RGB and depth images can be acquired in real time through depth sensors based on various acquisition methods, such as time of flight (ToF) and LiDAR, research on object detection, tracking, and recognition has changed considerably. In this paper, we propose a method to improve the quality of 3D models for 5G telepresence by processing images acquired through depth and RGB cameras in a multi-view camera system. The quality is improved in two major ways. The first concerns the shape of the 3D model: we propose a method that removes noise outside the object by applying a mask obtained from the color image, together with a combined filtering operation that exploits the differences in depth between pixels inside the object. Second, we propose an illumination compensation method for images acquired through the multi-view camera system, aimed at photo-realistic 3D model generation. It is assumed that volumetric capture takes place indoors and that the position and intensity of the illumination are constant over time. Since the multi-view rig uses a total of eight camera pairs converging toward the center of the capture space, the intensity and angle of the light incident on each camera differ even under constant illumination. Therefore, every camera captures a color correction chart, and a color optimization function is used to obtain a color conversion matrix that defines the relationship between the eight acquired images. Using this matrix, the images from all cameras are corrected with respect to the color chart. It was confirmed that the proposed method effectively removes noise and improves the quality of the 3D model when images of a 3D volumetric object are acquired with eight cameras, and it was experimentally shown that the color difference between images is reduced.
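
The color-conversion step can be sketched as a per-camera least-squares fit against the shared chart; the affine 3x4 form and the [0, 1] color range below are assumptions, not the paper's exact optimization function:

```python
# Illustrative sketch: fit a color-conversion matrix from a color chart and apply it,
# so that multi-view images agree photometrically before volumetric fusion.
import numpy as np

def fit_ccm(measured, reference):
    """measured, reference: (N, 3) chart patch colors in [0, 1]; returns a 3x4 affine
    matrix found by least squares, i.e. reference ≈ measured @ A.T + b."""
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])   # add bias column
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)            # (4, 3) solution
    return M.T                                                    # 3x4 affine matrix

def apply_ccm(image, ccm):
    """image: float HxWx3 in [0, 1]; returns the color-corrected image."""
    flat = image.reshape(-1, 3)
    corrected = flat @ ccm[:, :3].T + ccm[:, 3]
    return np.clip(corrected, 0.0, 1.0).reshape(image.shape)
```

In the setup described above, one such matrix would be fit for each of the eight cameras against the same physical chart and then applied to every frame from that camera.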

3차원 영상을 위한 다초점 방식 영상획득장치 (Multi-Focusing Image Capture System for 3D Stereo Image)

  • 함운철;권혁재;투멘자르갈 엔크바타르
    • 로봇학회논문지
    • /
    • Vol. 6, No. 2
    • /
    • pp.118-129
    • /
    • 2011
  • In this paper, we propose a new capture-and-synthesis algorithm that uses multiply captured left and right images to produce a more comfortable sense of 3D depth, and we present a 3D image capture hardware system based on this algorithm. We also present a simple control algorithm for calibrating the capture system's zoom, using a performance-index measure as feedback to stabilize the focus control. In addition, we discuss the projection mapping theory for the captured images, based on a pinhole camera model and on the assumption that the viewer sits 50 cm in front of the 3D LCD screen. The image is divided into nine segments, and we propose a method to find the optimal alignment and focus based on alignment and sharpness measures, fusing the nine optimized segment images to obtain the best sense of 3D depth.
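
A rough sketch of the nine-segment selection step (variance of the Laplacian is used here as a generic sharpness measure; the paper's actual alignment and sharpness measures are not specified in the abstract):

```python
# Illustrative sketch: score a 3x3 grid of segments for sharpness so that the
# best-focused capture can be selected and fused per segment.
import cv2
import numpy as np

def segment_sharpness(gray, rows=3, cols=3):
    """Variance of the Laplacian per segment: higher means better focus."""
    h, w = gray.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            seg = gray[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            scores[r, c] = cv2.Laplacian(seg, cv2.CV_64F).var()
    return scores

def fuse_by_sharpness(captures):
    """captures: list of grayscale frames of one scene taken at different focus
    settings; returns a fused frame that keeps the sharpest segment of each."""
    rows = cols = 3
    stacks = np.stack([segment_sharpness(c, rows, cols) for c in captures])  # (N,3,3)
    best = stacks.argmax(axis=0)                     # winning capture per segment
    h, w = captures[0].shape
    out = np.zeros_like(captures[0])
    for r in range(rows):
        for c in range(cols):
            ys, ye = r * h // rows, (r + 1) * h // rows
            xs, xe = c * w // cols, (c + 1) * w // cols
            out[ys:ye, xs:xe] = captures[best[r, c]][ys:ye, xs:xe]
    return out
```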

단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘 (3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence)

  • 박준호;전대성;윤영우
    • 정보처리학회논문지B
    • /
    • Vol. 8B, No. 5
    • /
    • pp.549-556
    • /
    • 2001
  • Reconstructing 3D images from 2D images generally requires the depth from the camera's focal point to each pixel of the image frame, and the manual work usually involved in 3D model reconstruction consumes a great deal of time and cost. In this paper, we propose an algorithm that extracts, in real time, the relative depth information needed to produce 3D images from a monocular image sequence containing camera motion, and we simplify the algorithm for a hardware implementation. The algorithm is based on the fact that the motion of every point in the image induced by camera movement depends on its depth. After motion vectors are extracted by a global motion search based on a block-matching algorithm, camera motion compensation for rotation and zoom is performed, followed by the depth-extraction stage. In the depth-extraction stage, motion vectors are obtained by analyzing the displacement of objects in the monocular images, the average depth over all pixels of a frame is computed, and the relative depth of each block with respect to the average depth is calculated. Simulation results show that the depths of foreground and background regions agree with the relative depths perceived by the human visual system.

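A simplified sketch of the block-matching and relative-depth steps (the rotation/zoom compensation stage described above is omitted for brevity; block and search sizes are illustrative):

```python
# Illustrative sketch: full-search SAD block matching between two frames, then relative
# depth of each block as the ratio of the mean motion magnitude to the block's magnitude.
import numpy as np

def block_motion(prev, curr, block=16, search=8):
    """Returns (rows, cols, 2) motion vectors by exhaustive SAD search."""
    h, w = prev.shape
    rows, cols = h // block, w // block
    mv = np.zeros((rows, cols, 2), np.float32)
    prev = prev.astype(np.int32)
    curr = curr.astype(np.int32)
    for r in range(rows):
        for c in range(cols):
            y, x = r * block, c * block
            ref = curr[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= h and 0 <= xx and xx + block <= w:
                        sad = np.abs(ref - prev[yy:yy + block, xx:xx + block]).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dx, dy)
            mv[r, c] = best
    return mv

def relative_block_depth(mv):
    mag = np.linalg.norm(mv, axis=2)
    return mag.mean() / np.maximum(mag, 1e-3)    # >1: beyond the mean depth, <1: nearer
```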

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 7, No. 1
    • /
    • pp.15-19
    • /
    • 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, and depth keying. There are various ways to acquire depth information: a stereo camera, a time-of-flight (ToF) depth camera, 3D modeling software, a 3D scanner, or a structured-light device such as Microsoft's Kinect. In particular, a ToF depth camera measures distance using infrared light, and the ToF sensor depends on the optical sensitivity of the image sensor (CCD/CMOS). Existing image sensors therefore have to form the infrared image by bundling several pixels together, which reduces the image resolution. This paper proposes a scheme that acquires a low-resolution image through pixel bundling while gradually shifting the bundling area, so that a higher-resolution image can be recovered. With the proposed gradual pixel bundling algorithm, image information with improved low-light sensitivity (lux) and resolution can be obtained without increasing the performance of the image sensor.
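
One possible reading of the gradual pixel bundling idea, sketched in Python: 2x2 bundles are summed for sensitivity, the bundling grid is shifted by one pixel per frame, and the shifted low-resolution frames are merged back onto the full-resolution grid. The 2x2 bundle size and four-phase schedule are assumptions, not the paper's sensor logic:

```python
# Illustrative sketch of gradual pixel bundling: bin 2x2 blocks, shift the binning
# grid per frame, and merge the shifted low-resolution frames on the full grid.
import numpy as np

def bin2x2(frame, oy, ox):
    """Sum 2x2 neighborhoods starting at offset (oy, ox); crops to a whole grid."""
    f = frame[oy:, ox:]
    h, w = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
    f = f[:h, :w].astype(np.float32)
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]

def gradual_bundling(frames):
    """frames: four consecutive raw frames; returns a full-resolution estimate built
    from the four shifted binning phases."""
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    h, w = frames[0].shape
    acc = np.zeros((h, w), np.float32)
    cnt = np.zeros((h, w), np.float32)
    for frame, (oy, ox) in zip(frames, offsets):
        binned = bin2x2(frame, oy, ox) / 4.0                 # mean of each bundle
        up = np.kron(binned, np.ones((2, 2), np.float32))    # spread back over the bundle
        acc[oy:oy + up.shape[0], ox:ox + up.shape[1]] += up
        cnt[oy:oy + up.shape[0], ox:ox + up.shape[1]] += 1.0
    return acc / np.maximum(cnt, 1.0)      # pixels covered by several phases average out
```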

Surface Rendering using Stereo Images

  • Lee, Sung-Jae;Lee, Jun-Young;Lee, Myoung-Ho;Kim, Jeong-Hoon
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2001 ICCAS, 제어로봇시스템학회
    • /
    • pp.181.5-181
    • /
    • 2001
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, feature-point-based stereo matching is performed to find depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The final rendered image helps in understanding the depth information visually.

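A minimal sketch of the feature-point stereo matching and triangulation steps using standard OpenCV calls (ORB features stand in for whatever feature detector the paper used; the projection matrices are assumed to come from the prior camera modeling step):

```python
# Illustrative sketch: feature-point stereo matching followed by triangulation,
# producing the sparse depth samples that a surface-fitting step would interpolate.
import cv2
import numpy as np

def sparse_stereo_points(left, right, P_left, P_right):
    """left/right: grayscale stereo pair; P_left/P_right: 3x4 projection matrices
    from a prior camera calibration. Returns an (N, 3) array of 3D points."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T
    hom = cv2.triangulatePoints(P_left, P_right, pts1, pts2)     # 4xN homogeneous
    return (hom[:3] / hom[3]).T
```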

스테레오 영상에서의 깊이정보를 이용한 3D 가상현실 구현 (Reconstruction of 3D Virtual Reality Using Depth Information of Stereo Image)

  • 이성재;김정훈;이정환;안종식;이동준;이명호
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1999 Summer Conference Proceedings, Part G
    • /
    • pp.2950-2952
    • /
    • 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, feature-point-based stereo matching is performed to find depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) method and OpenGL. The final rendered image helps in understanding the depth information visually.

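A sketch of the surface-fitting step, using a SciPy smoothing B-spline as a stand-in for NURBS (the true NURBS fit and the OpenGL rendering are not reproduced here; the random points in the usage example are placeholders for real triangulated stereo points):

```python
# Illustrative sketch: fit a smooth surface z = f(x, y) to sparse triangulated points
# and sample it on a regular grid, ready to be rendered as a mesh (e.g. with OpenGL).
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def surface_from_points(points, grid=64):
    """points: (N, 3) triangulated stereo points; returns grid coordinates and the
    interpolated depth surface."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)      # cubic smoothing spline
    gx = np.linspace(x.min(), x.max(), grid)
    gy = np.linspace(y.min(), y.max(), grid)
    return gx, gy, spline(gx, gy)                            # (grid, grid) surface

# Hypothetical usage with random points standing in for real stereo output:
if __name__ == "__main__":
    pts = np.random.rand(400, 3)
    gx, gy, zz = surface_from_points(pts)
    print(zz.shape)        # (64, 64)
```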