• Title/Summary/Keyword: Multi-view Image (다중시점영상)

Study on panorama image processing using the SURF feature detector and descriptor (SURF 특징 검출기와 기술자를 이용한 파노라마 이미지 처리에 관한 연구)

  • Kim, Nam-woo;Hur, Chang-Wu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.10a
    • /
    • pp.699-702
    • /
    • 2015
  • Techniques for composing a single panoramic image from multiple images are widely studied in fields such as computer vision and computer graphics. A panoramic image overcomes the limitations of what a single camera can capture, such as field of view, image quality, and information content, and can be applied in various fields that require wide-angle imagery, such as virtual reality and robot vision. Panoramic images are significant in that they provide a greater sense of immersion than a single image. Various panorama-generation techniques exist, but most of them share a common approach: detecting feature points and corresponding points in each image when composing the panorama. The SURF (Speeded Up Robust Features) algorithm used in this paper exploits the grayscale information and local spatial information of an image when detecting feature points; it is robust to scale and viewpoint changes and is widely used because it is faster than the SIFT (Scale Invariant Feature Transform) algorithm. In this paper, we implement and describe a processing method that generates a panoramic image by computing the matches between two images, or between one image and several images.

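The core of the pipeline described above is reducing SURF keypoint matches to a planar homography that warps one image onto the other. Below is a minimal NumPy sketch of that homography-estimation step via the direct linear transform (DLT) over hypothetical matched points; the SURF detection itself is not shown (in OpenCV it requires the non-free opencv-contrib module):

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate a 3x3 homography H (dst ~ H @ src) from >= 4 point
    correspondences using the direct linear transform (DLT)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Hypothetical matches related by a pure translation (+10, +5):
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(10, 5), (11, 5), (10, 6), (11, 6)]
H = estimate_homography(src, dst)
```

In a full stitcher, H would be estimated inside a RANSAC loop over the SURF matches and then used to warp one image into the other's frame before blending.
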
Stereo-To-Multiview Conversion System Using FPGA and GPU Device (FPGA와 GPU를 이용한 스테레오/다시점 변환 시스템)

  • Shin, Hong-Chang;Lee, Jinwhan;Lee, Gwangsoon;Hur, Namho
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.616-626
    • /
    • 2014
  • In this paper, we introduce a real-time stereo-to-multiview conversion system using an FPGA and a GPU. The system is based on two different devices, so it consists of two major blocks. The first block is a disparity estimation block implemented on the FPGA. In this block, each disparity map of the stereoscopic video is estimated by DP (dynamic programming)-based stereo matching, and the estimated disparity maps are then refined by post-processing. The refined disparity maps are transferred to the GPU device through USB 3.0 and PCI Express interfaces, along with the stereoscopic video. These data are used to render an arbitrary number of virtual views in the next block. In the second block, disparity-based view interpolation is performed to generate virtual multi-view video. As a final step, all generated views are re-arranged into a single full-resolution image for presentation on the target autostereoscopic 3D display. All steps of the second block are performed in parallel on the GPU device.
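
The disparity-based view interpolation of the second block can be illustrated with a toy forward-warping routine. This is only a sketch of the general technique, not the paper's GPU kernel: each left-image pixel is shifted by a fraction of its disparity, with no hole filling, blending, or occlusion ordering:

```python
import numpy as np

def interpolate_view(left, disparity, alpha):
    """Forward-warp a left image to a virtual view at fraction `alpha`
    of the left-to-right baseline, using a per-pixel disparity map.
    Simplified sketch: integer shifts, no hole filling or blending."""
    h, w = left.shape[:2]
    virtual = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disparity[y, x]))
            if 0 <= nx < w:
                virtual[y, nx] = left[y, x]
                filled[y, nx] = True
    return virtual, filled

# A flat scene with constant disparity 4 shifts uniformly by 2 at alpha=0.5.
left = np.arange(64, dtype=float).reshape(8, 8)
disp = np.full((8, 8), 4.0)
virt, filled = interpolate_view(left, disp, 0.5)
```

The unfilled pixels reported by `filled` are exactly the disocclusions that a real system must synthesize, which is why the refinement and hole-filling stages matter.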

Multi-view Image Rendering on Tabletop Display (테이블탑 디스플레이에서의 다중 시점 영상 가시화)

  • Kang, Sun-Ho;Jung, Soon-Ki
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.249-251
    • /
    • 2011
  • In this paper, we propose a system that shows different images to different groups on a single tabletop display screen. Using two temporally offset images, the group wearing shutter glasses sees only one image while the group without glasses sees the composite of the two, so different text images can be overlaid on the base image for the two groups of tabletop users. The grayscale output method and the burying method proposed in this paper are two ways of presenting text on a 3D display. In the grayscale output method, the group wearing glasses sees the text in grayscale while the group without glasses sees it as white. In the burying method, the image seen by the glasses-wearing group is buried, and the image to be seen by the non-glasses group is added on top of it.
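
One plausible reading of the burying method can be sketched numerically: the display alternates frames A and B; shutter-glasses wearers synced to frame A see only the base image, while unaided viewers perceive the temporal average of the two frames, so B is chosen to steer that average toward the overlaid text. The value range and blending model here are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def bury_text(base, text_overlay):
    """'Burying' sketch: glasses synced to frame A see only `base`;
    unaided viewers temporally average frames A and B. B is chosen so
    (A + B) / 2 approaches the overlaid target. Values are floats in
    [0, 1]; clipping limits the achievable overlay contrast."""
    target = np.where(text_overlay > 0, text_overlay, base)
    frame_a = base
    frame_b = np.clip(2.0 * target - frame_a, 0.0, 1.0)
    perceived = (frame_a + frame_b) / 2.0
    return frame_a, frame_b, perceived

base = np.full((4, 4), 0.5)          # mid-gray base image
overlay = np.zeros((4, 4))
overlay[1, 1] = 0.9                  # one bright "text" pixel
a, b, seen = bury_text(base, overlay)
```

Where no overlay is present, frame B equals frame A and both groups see the same pixel; where text is present, only the unaided group perceives the brightened average.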

Motion tracking system using multiple sensors (다중 센서를 이용한 사용자 위치 추적 시스템)

  • Lim, Jeong-Hun;Hong, Ki-Sung;Bae, Yun-Jin;Yang, Wol-Sung;Choi, Hyun-Jun;Seo, Young-Ho;Yu, Ji-Sang;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.18-20
    • /
    • 2010
  • In this paper, we propose a system that tracks the position of a viewer watching a 3D monitor and provides a different viewpoint image according to the viewing position. The system consists of a CPU that controls the sensor signals, three ultrasonic sensors, one PIR (passive infrared) sensor, and a PLD (programmable logic device) that routes the signal of each sensor. The three ultrasonic sensors track the exact position of a viewer within 20 search zones. The implemented system accurately tracked the viewer's position within a search range of 5 m forward and 3 m left/right and displayed the image of the corresponding viewpoint.

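Resolving a position from three ultrasonic ranges amounts to trilateration. The following least-squares sketch uses a hypothetical (non-collinear) sensor layout; the paper's 20-zone quantization and PIR gating are omitted:

```python
import numpy as np

def trilaterate_2d(sensors, distances):
    """Least-squares 2D position from three sensor positions and
    measured ranges. Subtracting the circle equations pairwise removes
    the quadratic terms, leaving a linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    r1, r2, r3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical layout: two edge sensors plus a recessed center sensor
# (they must not be collinear), viewer standing at (1.0, 2.0).
sensors = [(-1.5, 0.0), (0.0, -0.3), (1.5, 0.0)]
viewer = np.array([1.0, 2.0])
dists = [np.linalg.norm(viewer - np.array(s)) for s in sensors]
pos = trilaterate_2d(sensors, dists)
```

A zone-based system like the paper's would then quantize `pos` into one of the 20 search zones and select the viewpoint image for that zone.
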
Land Cover Mapping and Availability Evaluation Based on Drone Images with Multi-Spectral Camera (다중분광 카메라 탑재 드론 영상 기반 토지피복도 제작 및 활용성 평가)

  • Xu, Chun Xu;Lim, Jae Hyoung;Jin, Xin Mei;Yun, Hee Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.6
    • /
    • pp.589-599
    • /
    • 2018
  • Land cover maps have been produced using satellite and aerial images. However, these two image sources are limited in spatial resolution, and it is difficult to acquire images of an area at the desired time because of clouds. In addition, mapping the land cover of a small area with satellite and aerial images is costly and time-consuming. This study used a drone equipped with a multispectral camera to acquire multi-temporal images for orthoimage generation, and the efficiency of the produced land cover maps was evaluated using time-series analysis. The results indicated that the proposed method can generate RGB and multispectral orthoimages with RMSEs (root mean square errors) of ±10 mm, ±11 mm, ±26 mm and ±28 mm, ±27 mm, ±47 mm in X, Y, H, respectively. The accuracy of the pixel-based and object-based land cover maps was analyzed, and the results showed that the accuracy and Kappa coefficient of object-based classification were higher than those of pixel-based classification: 93.75% and 92.42% in July, 92.50% and 91.20% in October, and 92.92% and 91.77% in February, respectively. Moreover, the proposed method can accurately capture quantitative area changes of objects. In summary, this study demonstrated the feasibility and efficiency of using a multispectral-camera-equipped drone to produce land cover maps.
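
The object- vs. pixel-based comparison above rests on two standard figures, overall accuracy and Cohen's kappa. A small sketch of the kappa computation from a classification confusion matrix (the matrix values here are illustrative, not the paper's):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: classified classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Illustrative two-class confusion matrix (100 reference samples).
cm = np.array([[45, 5],
               [10, 40]])
k = kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy in land cover assessments.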

Implementation of Multiview Stereoscopic 3D Display System using Volume Holographic Lenticular Sheet (VHLS 광학판 기반의 다시점 스테레오스코픽 3D 디스플레이 시스템의 구현)

  • 이상우;이맹호;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.716-725
    • /
    • 2004
  • In this paper, a new multiview stereoscopic 3D display system using a VHLS (volume holographic lenticular sheet) is suggested. The VHLS, which acts as an optical direction modulator, can be implemented by recording the diffraction gratings corresponding to each directional vector of the multiview stereoscopic images in a holographic recording material, using the angularly multiplexed recording property of the conventional volume hologram. The fabricated VHLS is attached to the panel of an LCD spatial light modulator and diffracts each of the multiview images loaded on the SLM into the corresponding spatial direction, forming a 3D stereo view zone. Accordingly, in this paper, the operational principle and characteristics of the VHLS are analyzed, an optimized 4-view VHLS is fabricated using a commercial photopolymer, and a new VHLS-based 4-view stereoscopic 3D display system is implemented. Experimental results using a 4-view image synthesized with an adaptive disparity estimation algorithm suggest that a VHLS-based multiview stereoscopic 3D display system is feasible.

Multi-sensor Image Registration Using Normalized Mutual Information and Gradient Orientation (정규 상호정보와 기울기 방향 정보를 이용한 다중센서 영상 정합 알고리즘)

  • Ju, Jae-Yong;Kim, Min-Jae;Ku, Bon-Hwa;Ko, Han-Seok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.6
    • /
    • pp.37-48
    • /
    • 2012
  • Image registration is a process to establish the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. In this paper, we propose an effective registration method for images acquired by multiple sensors, such as EO (electro-optic) and IR (infrared) sensors. Image registration is achieved by extracting features and finding the correspondence between features in each input image. In recent research, a multi-sensor image registration method that finds corresponding features by exploiting NMI (normalized mutual information) was proposed. Conventional NMI-based image registration methods assume that the statistical correlation between two images is global; however, images from EO and IR sensors often cannot satisfy this assumption. Therefore, the registration performance of the conventional method may not be sufficient for some practical applications because of the low accuracy of the corresponding feature points. The proposed method improves the accuracy of the corresponding feature points by combining gradient orientation, as spatial information, with the NMI attributes, and provides more accurate and robust registration performance. Representative experimental results prove the effectiveness of the proposed method.
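
The NMI attribute that the method builds on can be computed directly from a joint intensity histogram. A minimal NumPy sketch (the bin count and log base are arbitrary choices here; the paper's feature-point-level formulation is more involved):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity
    histogram; ranges from ~1 (independent) to 2 (identical)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    hxy = -np.sum(pxy[nz] * np.log(pxy[nz]))   # joint entropy
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
self_nmi = normalized_mutual_information(img, img)       # identical -> 2
noise_nmi = normalized_mutual_information(img, rng.random((64, 64)))
```

NMI works across EO/IR modality gaps because it measures statistical dependence rather than direct intensity similarity, which is exactly why the paper adopts it before adding the gradient-orientation term.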

Panoramic Navigation using Orthogonal Cross Cylinder Mapping and Image-Segmentation Based Environment Modeling (직각 교차 실린더 매핑과 영상 분할 기반 환경 모델링을 이용한 파노라마 네비게이션)

  • 류승택;조청운;윤경현
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.3_4
    • /
    • pp.138-148
    • /
    • 2003
  • Orthogonal cross cylinder mapping and segmentation-based modeling methods are implemented in this paper for constructing an image-based navigation system. The orthogonal cross cylinder (OCC) is the object formed by the intersection of two mutually orthogonal cylinders. OCC mapping eliminates the singularities that occur in conventional environment maps and allocates a nearly even area of the environment to each texel. A full-view image from a fixed point of view can be obtained with OCC mapping, although rendering becomes difficult once the point of view changes. For segmentation-based modeling, the OCC map is segmented according to the objects that form the environment, and the depth value is set by the characteristics of the classified objects. This method can easily be applied to an environment map and simplifies environment modeling by extracting depth values through image segmentation. With these methods, a full-view environment navigation system can be developed.
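
The direction-to-texel mapping behind the OCC can be sketched as a choice between two orthogonal cylinders, with the crossing cylinder covering the polar caps where an ordinary cylindrical map degenerates. The axis assignment and unit radii below are assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

def occ_map(direction):
    """Map a 3D view direction to a texel on an orthogonal cross
    cylinder: a vertical cylinder (axis z) covers directions near the
    horizon, and a horizontal cylinder (axis y) covers the polar caps
    that make a plain cylindrical map singular.
    Returns (cylinder_name, angle, height)."""
    x, y, z = np.asarray(direction, float) / np.linalg.norm(direction)
    if z * z <= x * x + y * y:        # hits the vertical cylinder
        s = 1.0 / np.hypot(x, y)
        return ("vertical", np.arctan2(y, x), z * s)
    s = 1.0 / np.hypot(x, z)          # polar cap: crossing cylinder
    return ("horizontal", np.arctan2(x, z), y * s)

straight_up = occ_map((0.0, 0.0, 1.0))  # singular on a plain cylinder
horizon = occ_map((1.0, 0.0, 0.0))
```

Note how the straight-up direction, which has no well-defined texel on a single vertical cylinder, lands at a regular texel of the crossing cylinder instead.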

A High-Quality Occlusion Filling Method Using Image Inpainting (영상 인페인팅을 이용한 고품질의 가려짐 영역 보간 방법)

  • Kim, Yong-Jin;Lee, Sang-Hwa;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.15 no.1
    • /
    • pp.3-13
    • /
    • 2010
  • In this paper, we propose a method for filling in the occlusions that arise when generating multi-view images from one source image and its ground-truth depth image. The method is based on image inpainting and layered interpolation. The source image is first divided into several layers using depth information. The occlusions are interpolated separately in every layered image using the image inpainting algorithm. Finally, the interpolated layered images are combined to obtain different viewpoint images. Interpolating occlusions with the depth-correlated texture information contained in each layer makes it possible to obtain more detailed and accurate results than previous methods. The effectiveness of the proposed method is shown through experimental results.
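
The per-layer hole filling can be illustrated with a crude propagation-based filler; the paper uses a proper image-inpainting algorithm on each depth layer, so this is only a stand-in showing where the interpolation happens:

```python
import numpy as np

def fill_holes_in_layer(layer, mask):
    """Fill hole pixels (mask == True) by iteratively propagating the
    values of already-known neighbors inward -- a crude stand-in for
    the inpainting applied per depth layer. Assumes every hole region
    touches at least one known pixel."""
    filled = layer.astype(float).copy()
    known = ~mask
    while not known.all():
        for y in range(filled.shape[0]):
            for x in range(filled.shape[1]):
                if known[y, x]:
                    continue
                # average the already-known 4-neighbors, if any
                vals = [filled[ny, nx]
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= ny < filled.shape[0]
                        and 0 <= nx < filled.shape[1] and known[ny, nx]]
                if vals:
                    filled[y, x] = np.mean(vals)
                    known[y, x] = True
    return filled

# A constant background layer with a 2x2 occlusion hole.
layer = np.full((5, 5), 7.0)
hole = np.zeros((5, 5), dtype=bool)
hole[2:4, 2:4] = True
layer[hole] = 0.0
out = fill_holes_in_layer(layer, hole)
```

Because each layer contains only depth-correlated texture, filling holes per layer (rather than on the composite image) keeps foreground texture from bleeding into background disocclusions, which is the point of the layered design.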

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.507-520
    • /
    • 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing the camera view with a graphics engine, little is known about how to feed the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space with camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views, together with their depth images, in real time.
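
The geometry such a pipeline realizes (back-project each pixel with the camera parameters, then project into the virtual camera) can be stated in a few lines of NumPy. This is a sketch with hypothetical intrinsics and baseline, without the OpenGL rendering or real-time aspects:

```python
import numpy as np

def reproject(depth, K, R, t):
    """Back-project every pixel of a depth map to 3D using intrinsics
    K, then project into a virtual camera [R|t]. Returns the pixel
    coordinates of each source pixel in the virtual view."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix              # normalized viewing rays
    points = rays * depth.reshape(1, -1)       # 3D points, source camera
    cam2 = R @ points + t.reshape(3, 1)        # into the virtual camera
    proj = K @ cam2
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)

# Hypothetical camera: f = 100 px, principal point (32, 32).
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
depth = np.full((64, 64), 5.0)                 # fronto-parallel plane
t = np.array([0.1, 0.0, 0.0])                  # small horizontal baseline
coords = reproject(depth, K, np.eye(3), t)
```

For a fronto-parallel plane, the virtual-view coordinates are a uniform horizontal shift of f·tx/Z pixels (here 100·0.1/5 = 2), which matches the disparity-based intuition behind multi-viewpoint synthesis.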