• Title/Summary/Keyword: Multi-view range image (다시점 거리영상)

Search results: 42

Efficient Compression Technique of Multi-view Image with Color and Depth Information by Layered Depth Image Representation (계층적 깊이 영상 표현에 의한 컬러와 깊이 정보를 포함하는 다시점 영상에 대한 효율적인 압축기술)

  • Lim, Joong-Hee;Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.2C / pp.186-193 / 2009
  • Because multi-view video involves a huge amount of data, new compression and encoding techniques are necessary for its storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a single data structure that synthesizes the multi-view color and depth images. This paper proposes an enhanced compression method that constructs an efficient layered depth image using real-distance comparison, a solution to the overlap problem, and a YCrCb color transformation. Experimental results confirm high compression performance and good reconstructed image quality.
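
The YCrCb color transformation mentioned in the abstract is, in standard codecs, the ITU-R BT.601 conversion; a minimal sketch (a generic illustration, not the paper's exact implementation):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr using the ITU-R BT.601 full-range formula.

    Separating luma (Y) from chroma (Cb, Cr) lets a codec subsample or
    quantize the chroma channels more aggressively, which is the usual
    motivation for this transform in compression pipelines.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

For a neutral gray input the chroma channels sit at their midpoint (128), which is why chroma costs little to code in low-color regions.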

Face and Eye Detection for Interactive 3D Devices (인터랙티브 3D 방송단말을 위한 얼굴 및 눈인식 알고리즘의 검출 방법)

  • Song, Hyok;Lee, Chul-Dong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.11a / pp.280-281 / 2011
  • Although multi-view image display was difficult in the past, advances in terminal technology have led to the development of various 3D display devices, including not only stereo terminals that use two images but also multi-view terminals. Devices for multi-view broadcasting and content deliver a different degree of realism depending on the user's viewing distance, viewing angle, and individual preference, and because both the device and the user are constantly in motion, this must be handled continuously. In this paper, we develop an algorithm that tracks the user's position and, based on the result, continuously delivers depth information appropriate to the user from the terminal. We detect the user's face and eyes, and solve the misdetection problem of existing algorithms caused by eyebrows or dark regions around the eyes being mistaken for the eyes themselves. By recognizing the eyebrow position and separating the eyebrow region from the eye region, the proposed eye-tracking algorithm improved the error rate by up to 52% depending on the test stream.

Temporal Prediction Structure for Multi-view Video Coding (다시점 비디오 부호화를 위한 시간적 예측 구조)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • Journal of Korea Multimedia Society / v.15 no.9 / pp.1093-1101 / 2012
  • Multi-view video is obtained by capturing one three-dimensional scene with many cameras at different positions. Multi-view video coding exploits both inter-view correlations among pictures of neighboring views and temporal correlations among pictures of the same view; because it uses many cameras, it requires methods to reduce the computational complexity. In this paper, we propose an efficient prediction structure to improve the performance of multi-view video coding. The proposed structure exploits the average distance between the current picture and its reference pictures, dividing every GOP into several small groups to decide the maximum hierarchical B-layer index and the number of pictures in each B layer. Experimental results show that the proposed prediction structure performs well in both image quality and bitrate: compared to the hierarchical B pictures of Fraunhofer-HHI, it achieved a PSNR gain of 0.07~0.13 dB and reduced the bitrate by 6.5 Kbps.
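
The hierarchical B-picture structure referenced above (Fraunhofer-HHI style) assigns layers by repeated bisection of the GOP; a minimal sketch of that layer assignment (the paper's own average-distance grouping rule is not reproduced here):

```python
def b_layer_indices(gop_size):
    """Assign a hierarchical B layer to each picture in a GOP.

    Picture 0 is the key picture (layer 0).  The remaining pictures are
    assigned by repeated bisection: the midpoint of each interval gets the
    next layer index, mirroring the classic hierarchical B structure.
    gop_size is assumed to be a power of two.
    """
    layers = [0] * (gop_size + 1)  # pictures 0..gop_size; both ends are key

    def bisect(lo, hi, layer):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        layers[mid] = layer
        bisect(lo, mid, layer + 1)
        bisect(mid, hi, layer + 1)

    bisect(0, gop_size, 1)
    return layers[:gop_size]  # one entry per picture in this GOP
```

For a GOP of 8 this yields layers [0, 3, 2, 3, 1, 3, 2, 3]: deeper layers are predicted from temporally closer references, which is the correlation the proposed structure tunes per group.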

Prospects and Applications of Future 3D Imaging Technology (3차원 이미징 미래기술 전망 및 응용 분야)

  • Baek, Jun-Gi;Choe, Jong-Su;Go, Seong-Je;Ho, Yo-Seong;Jang, U-Seok;Lee, Chun-Sik;Lee, Ju-Han;Lee, Seung-Gu;Ha, Jae-Seok
    • Broadcasting and Media Magazine / v.15 no.2 / pp.101-110 / 2010
  • Based on core three-dimensional (3D) imaging technologies such as simultaneous acquisition and processing of range and intensity information, multi-view geometry analysis, and multi-view image modeling and compression, this paper surveys the prospects and application fields of (i) 3D camera input devices and the calibration/synthesis/rendering required to produce 3D photographic image content, (ii) the image segmentation, object modeling, and analysis required for nanoscale and high-sensitivity nuclear imaging, and (iii) 3D simulation and visualization technology.

Analysis method of signal model for synthetic aperture integral imaging (합성 촬영 집적 영상의 신호 모델 해석 방법)

  • Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.11 / pp.2563-2568 / 2010
  • SAII (synthetic aperture integral imaging) is a useful technique that records many multi-view images of 3D objects with a moving camera and reconstructs 3D depth images from the recorded views. It consists largely of two processes: a pickup process that provides elemental images of the 3D objects, and a reconstruction process that generates 3D depth images computationally. In this paper, a signal model for SAII is presented. We define the granular noise and analyze its characteristics. Our signal model reveals that the noise in the reconstructed images can be reduced, and the computational speed increased, by reducing the shifting distance of the single camera.
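
The reconstruction process described above is commonly implemented as shift-and-average of the elemental images at a chosen depth plane; a 1-D sketch under that assumption (function and parameter names are illustrative, not from the paper):

```python
def reconstruct_at_depth(elemental_images, pitch, focal, depth):
    """Shift-and-average SAII reconstruction at one depth plane (1-D sketch).

    Elemental image k (a list of pixel values) was captured with the camera
    shifted by k * pitch.  A point at the given depth appears with disparity
    round(k * pitch * focal / depth) pixels, so shifting each image back by
    that amount and averaging focuses the reconstruction on that plane;
    points off the plane average into the granular noise the paper models.
    """
    width = len(elemental_images[0])
    out = [0.0] * width
    count = [0] * width
    for k, img in enumerate(elemental_images):
        shift = round(k * pitch * focal / depth)
        for x in range(width):
            src = x + shift
            if 0 <= src < width:
                out[x] += img[src]
                count[x] += 1
    return [o / c if c else 0.0 for o, c in zip(out, count)]
```

A bright point recorded at shifted positions in successive elemental images reinforces at its true column when the reconstruction depth matches the point's actual distance.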

A GPU based Rendering Method for Multiple-view Autostereoscopic Display (무안경식 다시점 입체 디스플레이를 위한 GPU기반 렌더링 기법)

  • Ahn, Jong-Gil;Kim, Jin-Wook
    • Journal of the HCI Society of Korea / v.4 no.2 / pp.9-16 / 2009
  • 3D stereo display systems have gained increasing interest recently. A multiple-view autostereoscopic display system enables observers to watch stereo images from multiple viewpoints without wearing specific devices such as shutter glasses or an HMD. Therefore, multiple-view autostereoscopic displays are being spotlighted in fields such as virtual reality, mobile devices, and 3D TV. However, one critical disadvantage of such a system is that the observer can enjoy it only in a small designated area where the system is designed to work properly. This research provides an effective GPU-based rendering technique to present a seamless 3D stereo experience from an arbitrary observer's viewing position.
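
Rendering N viewpoints typically means offsetting the virtual camera horizontally around the observer's tracked position; a minimal sketch of that offset computation (a hypothetical helper, not the paper's GPU code):

```python
def view_camera_offsets(n_views, eye_separation):
    """Horizontal camera offsets for rendering n_views viewpoints of a
    multiple-view autostereoscopic display, centred on the tracked
    observer position.

    The returned offsets are added to the observer's head position to
    place one virtual camera per output view; adjacent views are one
    eye separation apart so neighbouring viewing zones form stereo pairs.
    """
    half = (n_views - 1) / 2.0
    return [(k - half) * eye_separation for k in range(n_views)]
```

Re-centring the offsets on the tracked head position each frame is what removes the fixed "sweet spot" limitation the abstract describes.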

Illumination Compensation Algorithm based on Segmentation with Depth Information for Multi-view Image (깊이 정보를 이용한 영역분할 기반의 다시점 영상 조명보상 기법)

  • Kang, Keunho;Ko, Min Soo;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.935-944 / 2013
  • In this paper, a new illumination compensation algorithm based on segmentation with depth information is proposed to improve the coding efficiency of multi-view images. In the proposed algorithm, a reference image is first segmented into several layers, each composed of objects with similar depth values. Objects within the same layer are then separated from one another by labeling each distinct region in the layered image. Next, the labeled reference depth image is converted to the position of the distorted view by a 3D warping algorithm. Finally, an illumination compensation algorithm is applied to each pair of matched regions in the converted reference view and the distorted view; the occlusion regions that arise from 3D warping are compensated by a global compensation method. Experimental results confirm that the proposed algorithm improves coding efficiency.
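
The 3D warping step that maps labeled reference regions into the other view reduces, for rectified parallel cameras, to a per-pixel disparity shift; a minimal sketch under that simplifying assumption (names are illustrative):

```python
def warp_to_view(x, depth, focal, baseline):
    """Map a pixel column from the reference view to a parallel target view.

    For rectified cameras with a horizontal baseline, a point at distance
    `depth` has disparity d = focal * baseline / depth, so the reference
    column x lands at x - d in the target view.  Applying this per labeled
    region yields the region correspondences on which the per-region
    illumination compensation operates.
    """
    disparity = focal * baseline / depth
    return x - disparity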

Joint Segmentation of Multi-View Images by Region Correspondence (영역 대응을 이용한 다시점 영상 집합의 통합 영역화)

  • Lee, Soo-Chahn;Kwon, Dong-Jin;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering / v.13 no.5 / pp.685-695 / 2008
  • This paper presents a method to segment the object of interest from a set of multi-view images with minimal user interaction. Specifically, after the user segments an initial image, we first estimate the transformations between the foreground and background of the segmented image and those of a neighboring image, respectively. From these transformations, we obtain regions in the neighboring image that correspond to the foreground and background of the segmented image, segment the neighboring image based on these regions, and iterate this process to segment the whole image set. Foreground transformations are estimated by feature-based registration with free-form deformation, while background transformations are estimated by a homography constrained to an affine transformation; both are based on pairs of corresponding points. Segmentation is performed by estimating pixel color distributions, defining a shape prior from the obtained foreground and background regions, and applying them within a Markov random field (MRF) energy minimization framework for image segmentation. Experimental results demonstrate the effectiveness of the proposed method.
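
MRF segmentation frameworks of this kind typically minimize an energy that combines a per-pixel data term with a pairwise smoothness term; a minimal Potts-model sketch of such an energy (illustrative only; the paper's exact color-distribution and shape-prior terms are not reproduced):

```python
def mrf_energy(labels, data_cost, edges, smooth_weight):
    """Potts-model energy of a binary foreground/background labeling.

    Total energy = per-pixel data term (e.g. negative log-likelihood of the
    pixel's color under the chosen foreground or background model) plus a
    fixed penalty for every pair of neighbouring pixels that disagree.
    Segmentation selects the labeling that minimizes this energy, usually
    via graph cuts.
    """
    e = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    e += smooth_weight * sum(1 for i, j in edges if labels[i] != labels[j])
    return e
```

The smoothness weight trades off fidelity to the color models against boundary length; the shape prior from the propagated regions would enter as an extra additive term per pixel.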

3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.53-61 / 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so we may be given various sorts of depth images. Here we deal with generating a 3D surface from several 3D point clouds acquired by a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of several 3D point clouds. Second, the estimated 3D point clouds acquired from two viewpoints are projected onto the same image plane to find correspondences, and registration is conducted by minimizing the errors. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from several viewpoints. The proposed method reduces the computational complexity by searching for corresponding points in the 2D image plane, and works effectively even when the precision of the 3D point clouds is relatively low, by exploiting correlation with the neighborhood. Furthermore, it is possible to reconstruct an indoor environment from depth and color images captured at several positions with the multi-view camera. The reconstructed model can be used for navigation in and interaction with a virtual environment, and for Mediated Reality (MR) applications.
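
Finding correspondences in the 2D image plane requires projecting the 3D points through the camera model; a minimal pinhole-projection sketch (parameter names are illustrative):

```python
def project_point(point, focal, cx, cy):
    """Project a 3-D point (X, Y, Z), given in camera coordinates, onto the
    image plane with a simple pinhole model.

    focal is the focal length in pixels and (cx, cy) is the principal
    point.  Searching for correspondences among these projected 2-D
    pixels, instead of among points in 3-D space, is what reduces the
    computational complexity of registering two point clouds.
    """
    X, Y, Z = point
    u = focal * X / Z + cx
    v = focal * Y / Z + cy
    return u, v
```

Two point clouds projected onto the same image plane can then be compared pixel-by-pixel, after which registration minimizes the residual error between matched points.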

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology; the measured depth data is then regularized and provided as a depth image. This depth image is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must first be corrected because of the technical limitations of the TOF depth camera. The corrected depth image is then fused with the color images by various methods to obtain high-resolution depth of the scene. In this paper, we introduce the principle of, and various techniques for, sensor fusion of multi-view cameras with depth cameras for high-quality depth generation.
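
One common fusion step when denoising or upsampling TOF depth with a guiding color image is a joint bilateral filter; a 1-D sketch of that idea (one illustrative technique among the various fusion methods the paper surveys, with hypothetical parameter names):

```python
import math

def joint_bilateral_depth(depth, color, radius=2, sigma_s=1.0, sigma_c=10.0):
    """Refine a noisy depth row with a joint bilateral filter guided by the
    aligned color row (1-D sketch of a common TOF/color fusion step).

    Each depth sample is replaced by a weighted average of its neighbours,
    with weights that fall off with spatial distance (sigma_s) and with
    color difference (sigma_c), so depth edges that coincide with color
    edges survive the smoothing.
    """
    out = []
    n = len(depth)
    for i in range(n):
        acc = wsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w_s = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            w_c = math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_c ** 2))
            acc += w_s * w_c * depth[j]
            wsum += w_s * w_c
        out.append(acc / wsum)
    return out
```

Across a sharp color edge the chroma weight collapses, so depth values do not bleed between the two sides even though each side is smoothed internally.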