• Title/Summary/Keyword: Laplacian pyramid


Depth-Map Generation using Fusion of Foreground Depth Map and Background Depth Map (전경 깊이 지도와 배경 깊이 지도의 결합을 이용한 깊이 지도 생성)

  • Kim, Jin-Hyun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.275-278 / 2012
  • This paper proposes a method for generating a depth map from a 2D image for automatic 2D-to-3D video conversion. The proposed method produces a more accurate depth map by generating a foreground depth map and a background depth map separately and then combining them. First, to generate the foreground depth map, a focus/defocus depth map is computed using a Laplacian pyramid. A motion-parallax depth map is then generated from the motion parallax obtained through block matching. The focus/defocus depth map cannot extract depth information in homogeneous regions, and the motion-parallax depth map cannot extract depth information from images in which no motion parallax occurs; combining the two depth maps resolves the limitations of each. The background depth map is generated by applying linear perspective and line tracing. Finally, the foreground and background depth maps are fused to produce a more accurate depth map. Experimental results confirm that the proposed method generates more accurate depth maps than existing methods.

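The focus/defocus cue above relies on the Laplacian pyramid responding strongly at in-focus edges and weakly in blurred or flat regions. A minimal NumPy sketch of that idea, using a 3x3 box filter and nearest-neighbour resampling as simple stand-ins for the usual Burt-Adelson kernel (the toy image and level count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def downsample(img):
    """Blur with a 3x3 box filter (edge-padded), then keep every other pixel."""
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return blur[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour upsampling back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Return [L0, L1, ..., G_top]: band-pass levels plus the low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # band-pass detail
        cur = down
    pyr.append(cur)  # coarsest Gaussian level
    return pyr

# Focus measure: large |Laplacian| marks sharp (in-focus) structure.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0            # toy image with one sharp square
pyr = laplacian_pyramid(img, levels=2)
focus = np.abs(pyr[0])           # strong at the square's edges, ~0 inside
```

By construction each Gaussian level can be recovered as `L_i + upsample(G_{i+1})`, so the pyramid reconstructs the input exactly; the focus map is simply the magnitude of the finest band-pass level.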

A Hierarchical Stereo Matching Algorithm Using Wavelet Representation (웨이브릿 변환을 이용한 계층적 스테레오 정합)

  • Kim, Young-Seok;Lee, Jun-Jae;Ha, Yeong-Ho
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.8 / pp.74-86 / 1994
  • In this paper, a hierarchical stereo matching algorithm that obtains disparity in the wavelet-transformed domain using locally adaptive windows and weights is proposed. The pyramidal structure obtained by the wavelet transform is used to avoid the loss of information from which conventional Gaussian or Laplacian pyramids suffer. The wavelet-transformed images are decomposed into a blurred image and horizontal-edge, vertical-edge, and diagonal-edge channels. The similarity between each wavelet channel of the left and right images determines the relative importance of each primitive and lets the algorithm perform area-based and feature-based matching adaptively. The wavelet transform can extract features at dense resolution while avoiding the duplication or loss of information. Meanwhile, the variable window needed to obtain a precise and stable estimate of correspondence is decided adaptively from the disparities estimated at coarse resolution and from the LL (low-low) channel of the wavelet-transformed stereo images. A new relaxation algorithm that reduces false matches without blurring disparity edges is also proposed. Experimental results for various images show that the proposed algorithm performs well even when the test images have unfavorable conditions.

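The four channels the abstract names (a blurred image plus horizontal-, vertical-, and diagonal-edge channels) are the LL, LH, HL, and HH subbands of one wavelet decomposition level. A hedged sketch using the Haar wavelet, the simplest possible choice (the paper does not specify its wavelet here, and the step-edge test image is an illustrative assumption):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.

    Returns (LL, LH, HL, HH): the half-size approximation ("blurred image"),
    the horizontal-edge channel, the vertical-edge channel, and the
    diagonal-edge channel."""
    a = img.astype(float)
    # Along rows: average / difference of adjacent column pairs.
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # Along columns: average / difference of adjacent row pairs.
    LL = (lo[::2] + lo[1::2]) / 2.0   # low-low: blurred approximation
    LH = (lo[::2] - lo[1::2]) / 2.0   # responds to horizontal edges
    HL = (hi[::2] + hi[1::2]) / 2.0   # responds to vertical edges
    HH = (hi[::2] - hi[1::2]) / 2.0   # responds to diagonal detail
    return LL, LH, HL, HH

# A vertical step edge placed inside a column pair, so the horizontally
# high-pass channel (HL here) picks it up while LH and HH stay zero.
img = np.zeros((8, 8))
img[:, 3:] = 1.0
LL, LH, HL, HH = haar2d(img)
```

Applying `haar2d` recursively to the LL channel yields the multi-resolution structure the matching algorithm descends through, coarse to fine.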

Multipurpose Watermarking Scheme Based on Contourlet Transform (컨투어렛 변환 기반의 다중 워터마킹 기법)

  • Kim, Ji-Hoon;Lee, Suk-Hwan;Park, Seung-Seob;Kim, Ji-Hong;Oh, Sei-Woong;Seo, Yong-Su;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.12 no.7 / pp.929-940 / 2009
  • This paper presents a multipurpose watermarking scheme in the contourlet transform domain for copyright protection, authentication, and tamper detection. Since the contourlet transform captures multi-directional edges and smooth contours better than the wavelet transform, the proposed scheme embeds multiple watermarks in the contourlet domain using a 4-level Laplacian pyramid and a 2-level directional filter bank. In the first stage, the robust watermark for copyright protection, we generate a sequence of circle patterns according to the watermark bits and project these patterns onto the average magnitude coefficients of the high-frequency directional subbands; each watermark bit is then embedded into the variance distribution of the projected magnitude coefficients. In the second stage, the semi-fragile watermark for authentication and tamper detection, we embed a binary watermark image in the low-frequency subband of the higher level using an adaptive quantization modulation scheme. Evaluation experiments with Checkmark 2.1 verified that the proposed scheme is superior to conventional schemes in terms of robustness and invisibility.

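The adaptive quantization modulation step is not spelled out in the abstract, but plain quantization index modulation (QIM, also called dither modulation) conveys the core idea: each coefficient is snapped to one of two interleaved quantization lattices depending on the bit to embed. A minimal sketch under that assumption; the coefficients, bits, and step size are illustrative, not the paper's parameters:

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Embed one bit per coefficient by quantizing it onto one of two
    interleaved lattices: bit 0 -> {k*step}, bit 1 -> {k*step + step/2}."""
    c = np.asarray(coeffs, dtype=float)
    offset = np.asarray(bits, dtype=float) * step / 2.0
    return np.round((c - offset) / step) * step + offset

def qim_extract(coeffs, step=8.0):
    """Recover each bit: which of the two lattices is the coefficient closer to?"""
    c = np.asarray(coeffs, dtype=float)
    d0 = np.abs(c - np.round(c / step) * step)
    d1 = np.abs(c - (np.round((c - step / 2) / step) * step + step / 2))
    return (d1 < d0).astype(int)

coeffs = np.array([13.2, -4.7, 21.0, 0.3])   # hypothetical subband coefficients
bits = np.array([1, 0, 1, 0])                # watermark bits to embed
marked = qim_embed(coeffs, bits)
recovered = qim_extract(marked)
```

The embedding distortion is bounded by `step/2`, and any perturbation smaller than `step/4` still decodes correctly, which is what makes the watermark semi-fragile: mild processing survives, larger tampering flips bits.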

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing / v.39 no.3 / pp.283-295 / 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Owing to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols because of its reduced Rayleigh scattering, allowing a clearer view and more detailed information to be captured from hazy surfaces than with the visible band. In this study, we propose a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The goal is to generate a single integrated image that incorporates complementary information such as detailed background information from the visible band and land-cover information in haze regions from the SWIR band. To this end, we apply the Laplacian pyramid-based multi-resolution transform, a representative image decomposition approach for image fusion, and modify it with a haze-guided weight map built on the prior knowledge that SWIR bands carry more information in pixels from haze regions. The proposed method was validated using very-high-resolution WorldView-3 satellite images containing multi-spectral visible and SWIR bands. Experimental data including haze areas with limited visibility caused by wildfire smoke were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results showed that the bright features from the SWIR bands in the haze areas were successfully fused into the integrated images without loss of detailed information from the visible bands.
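The weight-map-guided pyramid fusion described above can be sketched end to end: decompose both images into Laplacian pyramids, blend each level with a weight map resampled to that level's size, and reconstruct. A minimal NumPy sketch with box-filter resampling; the two toy images and the mask standing in for the haze-guided weight map are illustrative assumptions, not the paper's WorldView-3 data or its actual weight-map computation:

```python
import numpy as np

def downsample(img):
    """3x3 box blur (edge-padded), then keep every other pixel."""
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return blur[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour upsampling back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    out = pyr[-1]
    for band in reversed(pyr[:-1]):
        out = band + upsample(out, band.shape)
    return out

def fuse(img_a, img_b, weight, levels=2):
    """Blend pyramids level by level: weight 1 keeps img_a, weight 0 keeps img_b."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    w, fused = weight.astype(float), []
    for la, lb in zip(pa, pb):
        fused.append(w * la + (1.0 - w) * lb)
        w = downsample(w)  # resample the weight map to the next coarser level
    return reconstruct(fused)

# Hypothetical stand-ins: a "visible" image, a "SWIR" image, and a binary
# mask playing the role of the haze-guided weight map.
vis = np.zeros((16, 16)); vis[4:12, 4:12] = 1.0
swir = np.full((16, 16), 0.5)
mask = np.zeros((16, 16)); mask[:, 8:] = 1.0
fused = fuse(vis, swir, mask)
```

Because the weight map is blurred as it is downsampled, the transition between the two sources is smoothed at coarse scales, which is the usual reason for fusing in the pyramid domain rather than blending pixels directly.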