• Title/Summary/Keyword: 깊이 영상 보정 (depth image correction)


A Comparison of guided image filtering algorithms based disparity enhancement for view interpolation (영상 보간을 위한 유도 영상 필터링 기반의 변이 보정 기법의 성능 비교)

  • Shin, Hong-Chang; Lee, Gwang-Soon; Hur, Namho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.435-438 / 2015
  • This paper compares guided image filtering techniques, one class of methods for refining depth, in terms of the resulting view interpolation. For the experiments, an initial depth image was refined with two kinds of guided image filtering. View interpolation was then performed using the initial depth image and the disparity images refined by each filtering technique, and the results were compared. When the disparity image is refined by guided image filtering that uses only the texture information of a single view, the improvement in disparity is visible to the naked eye; from the standpoint of view interpolation, however, there is little difference, and in some cases the quality even degrades.

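A minimal sketch of the kind of guided-filter disparity refinement compared in this entry, using the standard box-filter formulation (He et al.); it assumes a grayscale guide view and a float disparity map of the same size, and the radius and epsilon values are illustrative only:

```python
import cv2
import numpy as np

def guided_filter(guide, disp, radius=8, eps=1e-3):
    """Refine a disparity map using a grayscale guide image (box-filter guided filter)."""
    I = guide.astype(np.float32) / 255.0
    p = disp.astype(np.float32)
    ksize = (2 * radius + 1, 2 * radius + 1)
    box = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=ksize)  # normalized mean filter

    mean_I, mean_p = box(I), box(p)
    corr_Ip, corr_II = box(I * p), box(I * I)
    var_I = corr_II - mean_I * mean_I        # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p       # local covariance guide/disparity

    a = cov_Ip / (var_I + eps)               # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)               # filtered disparity q = mean(a) * I + mean(b)

# usage (hypothetical file names)
# disp = cv2.imread("disparity.png", cv2.IMREAD_UNCHANGED)
# guide = cv2.imread("view.png", cv2.IMREAD_GRAYSCALE)
# refined = guided_filter(guide, disp)
```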

Study of perception of the visual depth caused by the color correction (입체영상 제작에서 색 보정 결과가 입체감 인지에 미치는 영향 연구)

  • Han, Myung-Hee; Kim, Chee-Yong
    • Journal of Digital Contents Society / v.11 no.2 / pp.177-184 / 2010
  • These days, as digital production techniques have developed, 3D imaging is used in high-end computing and TV, and research on 3D production techniques is actively in progress. Moreover, since James Cameron's movie 'Avatar', released in 2009, was a box-office hit, the issue of 3D imagery has come to the fore again. At this point, I decided to study the effect of color correction on perceived visual depth during the post-production stage. The purpose of this study is to provide information for producing effective images through data on how visual depth changes when color correction is applied in post-production. Basically, I supposed that color and contrast would affect the depth of a 3D image. Throughout the experiment, I could observe changes in visual depth, space perception, and sense of depth. Applying this result, I produced a 15-minute 3D advertisement film and found that color correction during post-production was very effective for 3D depth. The left and right images captured with a beam-splitter rig and a parallel rig were used for this study. I also applied strong contrast through color correction in post-production after correcting convergence and visual depth during editing. As a result, I could produce images with a strong sense of space and depth.

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon; Song, Hyok; Choi, Byeong-Ho; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation methods using depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
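
A minimal sketch of the joint bilateral filtering step that aligns depth edges with color edges, assuming an 8-bit grayscale guide image, a float warped depth map with holes already filled, and illustrative window size and sigmas; the loop form favors clarity over speed:

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=5, sigma_s=3.0, sigma_r=10.0):
    """Smooth a warped depth map; weights come from spatial distance and from
    intensity differences in the color (guide) image, so depth boundaries are
    pulled toward color boundaries."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed spatial kernel

    d = np.pad(depth.astype(np.float32), radius, mode='edge')
    g = np.pad(guide.astype(np.float32), radius, mode='edge')

    for y in range(h):
        for x in range(w):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(gwin - g[y + radius, x + radius])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```

If the opencv-contrib package is available, its ximgproc module provides a joint bilateral filter that can replace this explicit loop.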

Implementation of 3D Reconstruction using Multiple Kinect Cameras (다수의 Kinect 카메라를 이용한 3차원 객체 복원 구현)

  • Shin, Dong Won; Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.22-27 / 2014
  • Three-dimensional reconstruction allows us to represent real objects in a virtual space and observe them from arbitrary viewpoints. This technique can be used in various application areas such as education, culture, and art. In this paper, we propose an implementation method for high-quality three-dimensional object reconstruction using multiple Kinect cameras released by Microsoft. First, we acquire color and depth images from three Kinect cameras placed in front of the object in a converging arrangement. Because the original depth images include areas with no depth values, we employ a joint bilateral filter to refine those areas. In addition to the depth problem, there is a color mismatch among the color images of the multi-view system. To solve it, we exploit a color correction method based on the three-dimensional geometry. Experimental results show that the three-dimensional object reconstructed with the proposed method is represented more naturally than the original reconstruction in terms of color and shape.
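
One simple way to realize the color correction step is a per-channel gain/offset fit between views; a minimal sketch under the assumption that corresponding pixel pairs between a reference view and a target view have already been obtained by projecting the shared 3D geometry (the paper's geometry-based matching itself is not shown):

```python
import numpy as np

def fit_color_correction(src_colors, ref_colors):
    """Least-squares per-channel gain/offset mapping src -> ref.
    src_colors, ref_colors: (N, 3) float arrays of corresponding RGB samples."""
    gains, offsets = np.zeros(3), np.zeros(3)
    for c in range(3):
        A = np.stack([src_colors[:, c], np.ones(len(src_colors))], axis=1)
        coef, _, _, _ = np.linalg.lstsq(A, ref_colors[:, c], rcond=None)
        gains[c], offsets[c] = coef
    return gains, offsets

def apply_color_correction(image, gains, offsets):
    """Apply the fitted mapping to a whole (H, W, 3) uint8 image."""
    corrected = image.astype(np.float32) * gains + offsets
    return np.clip(corrected, 0, 255).astype(np.uint8)
```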

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang; Lee, Jae-Ho; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.8-17 / 2011
  • Recently, fusion camera systems that consist of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, including camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and a pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression that excludes depth values with low confidence. The proposed algorithm yields high-quality results in the presence of camera calibration errors, and experimental comparison with other data fusion techniques shows its superiority.
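
A simplified sketch of confidence-weighted upsampling in this spirit, assuming the warped depth samples are already placed on the color image grid (zeros marking empty pixels) and a per-pixel confidence map in [0, 1] has been computed; a plain Gaussian-weighted average stands in for the paper's modified kernel regression, and the threshold is illustrative:

```python
import numpy as np

def confidence_weighted_upsample(sparse_depth, confidence, radius=4,
                                 sigma_s=2.0, conf_thresh=0.3):
    """Fill a dense depth map from sparse, confidence-rated depth samples.
    Samples with confidence below conf_thresh are excluded from the regression."""
    h, w = sparse_depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))

    valid = (sparse_depth > 0) & (confidence >= conf_thresh)
    d = np.pad(np.where(valid, sparse_depth, 0.0), radius)   # zero-padded samples
    c = np.pad(np.where(valid, confidence, 0.0), radius)     # zero weight where invalid

    dense = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            dwin = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            cwin = c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            wgt = kernel * cwin
            s = wgt.sum()
            dense[y, x] = (wgt * dwin).sum() / s if s > 0 else 0.0
    return dense
```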

Intermediate Depth Image Generation using Disparity Increment of Stereo Depth Images (스테레오 깊이영상의 변위증분을 이용한 중간시점 깊이영상 생성)

  • Koo, Ja-Myung; Seo, Young-Ho; Choi, Hyun-Jun; Yoo, Ji-Sang; Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.17 no.2 / pp.363-373 / 2012
  • This paper proposes a method to generate a depth image at an arbitrary intermediate viewpoint, targeting video services such as free-viewpoint video, auto-stereoscopy, and holography. It assumes that the leftmost and rightmost depth images are given and that both have been camera-calibrated and rectified. The method calculates and uses a disparity increment per depth value; here it is obtained by stereo matching between the two given depth images so that more general cases are covered. The disparity increment is used to find, for each depth in the given images, its location in the intermediate-viewpoint depth image (IVPD). Thus, two IVPDs are obtained, one from the left image and one from the right image. Noise is removed and holes are filled in each IVPD, and the two results are combined to obtain the final IVPD. The proposed method was implemented and applied to several test sequences. The quality of the generated IVPD corresponds to 33.84 dB of PSNR on average, and generating an HD IVPD takes about 1 second. We consider this image quality quite good given the low correspondence among the left, intermediate, and right images in the test sequences. If the execution speed is improved, we believe the proposed method can be very useful for generating an IVPD at an arbitrary viewpoint.
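
A minimal sketch of the left-image half of such a warp, assuming an 8-bit depth image, a lookup table disp_per_depth[d] (obtained elsewhere, e.g. by stereo matching) giving the full left-to-right disparity for each depth value, larger depth values meaning nearer surfaces, and a left-to-right horizontal camera arrangement; noise removal, hole filling, and the merge with the right-image warp are omitted:

```python
import numpy as np

def warp_left_depth_to_intermediate(left_depth, disp_per_depth, alpha):
    """Forward-warp the leftmost depth image to an intermediate viewpoint.
    left_depth: (H, W) uint8 depth image.
    disp_per_depth: length-256 array of full left-to-right disparities.
    alpha: 0.0 = left viewpoint, 1.0 = right viewpoint."""
    h, w = left_depth.shape
    inter = np.zeros_like(left_depth)
    for y in range(h):
        for x in range(w):
            d = left_depth[y, x]
            # shift by the per-depth disparity scaled to the intermediate position;
            # the sign of the shift depends on the camera arrangement
            xi = x - int(round(alpha * disp_per_depth[d]))
            if 0 <= xi < w and d > inter[y, xi]:   # simple z-buffer: keep nearer depth
                inter[y, xi] = d
    return inter
```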

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong; Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering / v.3 no.8 / pp.309-314 / 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely through the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere centers coincide in the world coordinate system.
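
Once the sphere center has been estimated in each camera's coordinate system for several ball positions, aligning two cameras reduces to the rigid transform that best maps one center trajectory onto the other. A minimal sketch of that last step (Kabsch/Procrustes alignment), assuming two (N, 3) arrays of corresponding sphere-center coordinates are already available; sphere fitting and the texture-based detection are not shown:

```python
import numpy as np

def rigid_transform(src_centers, dst_centers):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    from corresponding 3D points (sphere centers seen by two cameras)."""
    src_mean = src_centers.mean(axis=0)
    dst_mean = dst_centers.mean(axis=0)
    H = (src_centers - src_mean).T @ (dst_centers - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a possible reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```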

Pig Detection using Depth Information under Heating Lamp Environments (보온등 환경에서 깊이 정보를 이용한 돼지 탐지)

  • Choi, Younchang; Sa, Jaewon; Chung, Yongwha; Park, Daihee
    • Proceedings of the Korea Information Processing Society Conference / 2016.04a / pp.693-695 / 2016
  • Automatic camera-based monitoring has become an important issue for the efficient management of pig pens on livestock farms. In color images, however, pigs that are directly exposed to the heating-lamp lighting of the pen fail to be detected because of overexposure. To solve this problem, this paper proposes a method that detects pigs using depth images acquired from a Kinect 2 camera. That is, after correcting the depth values of the depth image, the pig regions are detected from the difference between the depth values of the floor and those of the pigs. Experimental results show that by detecting the regions of pigs overexposed to the heating-lamp lighting in the depth image and applying histogram equalization, the problem of pigs not being detected in color images was resolved.
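
A minimal sketch of the floor-difference detection idea, assuming an overhead view, a floor depth map captured (or interpolated) from an empty pen, and depth frames on a common scale; the difference threshold and minimum area are illustrative only:

```python
import cv2
import numpy as np

def detect_pigs(depth_frame, floor_depth, diff_thresh=60, min_area=2000):
    """Segment pig regions as pixels sufficiently closer to the camera than
    the pen floor, then keep only large connected components."""
    diff = floor_depth.astype(np.int32) - depth_frame.astype(np.int32)
    mask = np.where(diff > diff_thresh, 255, 0).astype(np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    pigs = np.zeros_like(mask)
    for i in range(1, n):                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            pigs[labels == i] = 255
    return pigs
```

The histogram equalization mentioned in the abstract can be applied to an 8-bit rendering of the depth frame (e.g., with cv2.equalizeHist) before visualization or further processing.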

System Implementation for Generating Virtual View Digital Holographic using Vertical Rig (수직 리그를 이용한 임의시점 디지털 홀로그래픽 생성 시스템 구현)

  • Koo, Ja-Myung; Lee, Yoon-Hyuk; Seo, Young-Ho; Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.11a / pp.46-49 / 2012
  • This paper proposes a system that generates a virtual-view digital hologram, the ultimate goal of 3D video processing, by acquiring RGB and depth images of the same viewpoint and resolution, which contain the coordinates and color information of the object. First, multi-view RGB and depth images sharing the same viewpoint are obtained using a cold mirror whose transmittance differs between the visible and infrared wavelengths. After a calibration step that removes the various lens distortions of the camera system, the RGB and depth images, which have different resolutions, are rescaled to the same resolution. Next, depth information and an RGB image for the desired virtual viewpoint are generated using a DIBR (Depth Image Based Rendering) algorithm, and only the object to be rendered as a digital hologram is extracted using the depth information. Finally, the extracted virtual-view object is converted into a digital hologram using a computer-generated hologram (CGH) algorithm.

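A minimal sketch of a point-source (Fresnel) CGH computation of the kind used in the final step, assuming the extracted object has already been converted into a small set of 3D points with amplitudes; wavelength, pixel pitch, and hologram size are illustrative, and the loop over object points is written for clarity rather than speed:

```python
import numpy as np

def point_source_cgh(points, amplitudes, holo_size=(512, 512),
                     pitch=10e-6, wavelength=532e-9):
    """Accumulate the complex field of every object point on the hologram plane.
    points: (N, 3) array of (x, y, z) in meters, with z > 0 in front of the plane."""
    k = 2 * np.pi / wavelength
    hy, hx = holo_size
    ys = (np.arange(hy) - hy / 2) * pitch
    xs = (np.arange(hx) - hx / 2) * pitch
    X, Y = np.meshgrid(xs, ys)

    field = np.zeros(holo_size, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)   # point-to-pixel distance
        field += a * np.exp(1j * k * r) / r                    # spherical wave contribution
    return field   # interfere with a reference wave or take np.angle(field) for display
```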

3D Feature Point Based Face Segmentation in Depth Camera Images (깊이 카메라 영상에서의 3D 특징점 기반 얼굴영역 추출)

  • Hong, Ju-Yeon; Park, Ji-Young; Kim, Myoung-Hee
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.454-455 / 2012
  • To generate a 3D face model close to the real face by fitting a morphable model to the user's face data captured by a depth camera, the face region must first be extracted accurately from the depth image. To this end, face-region extraction based on facial feature points is attempted. The original depth image is first corrected; the face, eye, and nose regions are then located in the color image and mapped to the depth image to compute the 3D positions of the eyes, nose, and chin. Starting from these key facial feature points, the region is grown to separate the face region from the background of the image.
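
A minimal sketch of such seeded region growing on the depth image, assuming the seed pixel coordinates (eyes, nose, chin) are given and that a neighbouring pixel is accepted while its depth stays within a fixed tolerance of the pixel it was reached from; the tolerance value is illustrative:

```python
import numpy as np
from collections import deque

def grow_face_region(depth, seeds, depth_tol=15.0):
    """Breadth-first region growing on a depth image.
    depth: (H, W) float depth map; seeds: list of (y, x) feature-point pixels."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque()
    for y, x in seeds:
        mask[y, x] = True
        queue.append((y, x))

    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # accept the neighbour if it has valid depth close to the current pixel
                if depth[ny, nx] > 0 and abs(depth[ny, nx] - depth[y, x]) < depth_tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```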