• Title/Summary/Keyword: warping (워핑)

Search Results: 166

Performance Enhancement of Marker Detection and Recognition using SVM and LDA (SVM과 LDA를 이용한 마커 검출 및 인식의 성능 향상)

  • Kang, Sun-Kyoung;So, In-Mi;Kim, Young-Un;Lee, Sang-Seol;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.10 no.7 / pp.923-933 / 2007
  • In this paper, we present a method for enhancing the performance of a marker detection system by using an SVM (Support Vector Machine) and LDA (Linear Discriminant Analysis). The method converts the input image to a binary image and extracts the contours of objects in it. It then approximates each contour with a list of line segments and finds quadrangles by using geometrical features extracted from the approximated segments. Each extracted quadrangle is normalized into an exact square by warping and scale transformation. Feature vectors are extracted from the square image by principal component analysis, and an SVM classifier checks whether the square image is a marker or a non-marker. For the extracted marker images, feature vectors are then computed by LDA, the distances between the feature vector of the input marker and those of the standard markers are calculated, and the marker is recognized by the minimum-distance method. Experimental results show that the proposed method improves the recognition rate with smaller feature vectors thanks to LDA and decreases false detections thanks to SVM. (A sketch of the detection pipeline follows this entry.)

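The pipeline above (binarize, trace contours, fit quadrangles, warp to a square, then PCA + SVM) is a common augmented-reality marker front end. Below is a minimal Python/OpenCV sketch of the candidate-extraction and warp-normalization stages; the area threshold, patch size, and the fitted `pca`/`svm` models are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def extract_marker_candidates(gray, size=64):
    """Binarize, trace contours, keep convex 4-corner polygons, and warp
    each candidate quadrangle to an exact square patch."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    square = np.float32([[0, 0], [size - 1, 0],
                         [size - 1, size - 1], [0, size - 1]])
    patches = []
    for c in contours:
        # Approximate the contour with a short list of line segments.
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx) \
                and cv2.contourArea(approx) > 400:   # assumed minimum area
            quad = approx.reshape(4, 2).astype(np.float32)
            # Corner ordering is assumed consistent; a real system would sort.
            H = cv2.getPerspectiveTransform(quad, square)
            patches.append(cv2.warpPerspective(gray, H, (size, size)))
    return patches

# Classification stage (models assumed fitted offline on labeled patches):
#   feats = pca.transform(patch.reshape(1, -1))   # PCA feature vector
#   if svm.predict(feats)[0] == 1: ...            # marker vs. non-marker
```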

Hole-Filling Method Using Extrapolated Spatio-temporal Background Information (추정된 시공간 배경 정보를 이용한 홀채움 방식)

  • Kim, Beomsu;Nguyen, Tien Dat;Hong, Min-Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.8 / pp.67-80 / 2017
  • This paper presents a hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized view. A new temporal background model based on a non-overlapped patch-based background codebook is introduced to extrapolate temporal background information. In addition, a depth-map-driven spatial local background estimation is described to define spatial background constraints, which represent the lower and upper bounds of a background candidate. Background holes are filled by comparing the similarities between the temporal background information and the spatial background constraints. Additionally, a depth-map-based ghost-removal filter is described to resolve the mismatch between a color image and the corresponding depth map of a virtual view after 3-D warping. Finally, inpainting is applied to fill the remaining holes, with a priority function that includes a new depth term. The experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art methods. (A sketch of the fill order follows this entry.)
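As a rough illustration of the fill order described above, here is a hedged Python sketch: background holes take colors from a temporal background estimate, and whatever the model cannot explain falls back to spatial inpainting. The running background image stands in for the paper's patch-based codebook, and `cv2.INPAINT_TELEA` stands in for its depth-aware priority inpainting.

```python
import cv2
import numpy as np

def fill_holes(warped_rgb, hole_mask, bg_model_rgb, bg_valid):
    """warped_rgb: synthesized view after 3-D warping (uint8, HxWx3)
    hole_mask: bool, True where warping left no pixel
    bg_model_rgb / bg_valid: temporal background estimate and its validity
    mask, assumed already aligned to the virtual viewpoint."""
    out = warped_rgb.copy()
    # Background holes: copy the extrapolated temporal background.
    from_bg = hole_mask & bg_valid
    out[from_bg] = bg_model_rgb[from_bg]
    # Remaining holes: generic spatial inpainting as a fallback.
    remaining = (hole_mask & ~bg_valid).astype(np.uint8) * 255
    return cv2.inpaint(out, remaining, 3, cv2.INPAINT_TELEA)
```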

Temporally-Consistent High-Resolution Depth Video Generation in Background Region (배경 영역의 시간적 일관성이 향상된 고해상도 깊이 동영상 생성 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.20 no.3 / pp.414-420 / 2015
  • The quality of depth images is important in a 3D video system for representing complete 3D content. However, the original depth image from a depth camera has low resolution and suffers from flickering, i.e., temporally vibrating depth values, which causes discomfort when viewing 3D content. To solve the low-resolution problem, we employ 3D warping and a depth-weighted joint bilateral filter. A temporal mean filter can suppress the flickering, but it introduces a residual-spectrum problem in the depth image. Thus, after classifying foreground and background regions, we use the upsampled depth image for the foreground region and the temporal mean image for the background region (see the sketch after this entry). Test results show that the proposed method generates a temporally consistent, high-resolution depth video.
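A minimal Python sketch of the foreground/background split described above: the temporal mean is applied only to background pixels, so moving objects keep the per-frame upsampled depth. The window length and the provenance of `fg_mask` are assumptions; the upsampling itself (3D warping plus depth-weighted joint bilateral filtering) is not shown.

```python
import numpy as np

def stabilize_depth(upsampled_depth, history, fg_mask, win=5):
    """upsampled_depth: current high-resolution depth frame
    history: list of recent depth frames (mutated in place)
    fg_mask: bool, True for foreground pixels, kept unfiltered."""
    history.append(upsampled_depth.astype(np.float32))
    if len(history) > win:
        history.pop(0)
    bg_mean = np.mean(history, axis=0)       # temporal mean image
    out = np.where(fg_mask, upsampled_depth, bg_mean)  # flicker-free background
    return out.astype(upsampled_depth.dtype)
```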

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is acquired with a mirror-based catadioptric camera or by stitching multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm can be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fisheye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow (see the sketch after this entry). Third, we estimate the robot's position and heading with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the algorithm by comparing the positions and angles it estimates with those measured by a Global Vision Localization System.
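The motion-vector stage maps directly onto OpenCV's sparse Lucas-Kanade tracker. The sketch below tracks features between two preprocessed panorama frames and derives a crude yaw estimate from the median horizontal flow; the paper's RANSAC vanishing-point step is omitted, and the feature-detector parameters are assumptions.

```python
import cv2
import numpy as np

def track_flow(prev_gray, cur_gray):
    """Sparse Lucas-Kanade optical flow on the warped panoramic image."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)

def ego_rotation(p0, p1, image_width):
    """Very rough heading estimate: on a 360-degree panorama a pure rotation
    shifts all columns uniformly, so median horizontal flow maps to an angle."""
    dx = np.median(p1[:, 0] - p0[:, 0])
    return dx / image_width * 360.0   # degrees of yaw between the two frames
```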

Acceleration of Feature-Based Image Morphing Using GPU (GPU를 이용한 특징 기반 영상모핑의 가속화)

  • Kim, Eun-Ji;Yoon, Seung-Hyun;Lee, Jieun
    • Journal of the Korea Computer Graphics Society / v.20 no.2 / pp.13-24 / 2014
  • In this study, a graphics-processing-unit (GPU)-based acceleration technique is proposed for feature-based image morphing. The technique uses the depth buffer of the graphics hardware to efficiently compute the shortest distance between each pixel and the control lines. The pairs of control lines between the source and destination images are specified by the user, and the distance function of each control line is rendered using two rectangles and two cones. The distance between each pixel and its nearest control line is stored in the depth buffer through the graphics pipeline and is then used to perform the morphing efficiently. The per-pixel morphing operation is parallelized using the compute unified device architecture (CUDA) to reduce the morphing time. We demonstrate the efficiency of the proposed technique with several experimental results. (A CPU reference for the distance field follows this entry.)
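For readers without the GPU setup, a CPU reference for the quantity being rendered may help: the per-pixel shortest distance to a control line and a Beier-Neely style influence weight, which the paper instead accumulates in the depth buffer with two rectangles and two cones. The constants `a` and `b` are conventional field-morphing parameters, not values from the paper.

```python
import numpy as np

def line_weights(h, w, p, q, a=0.01, b=2.0, eps=1e-6):
    """Distance field and influence weight of control segment p->q.
    p, q: np.float32 arrays of shape (2,) in pixel coordinates."""
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys], axis=-1).astype(np.float32)
    d = q - p
    # Project every pixel onto the segment and clamp to its endpoints.
    t = np.clip(((pix - p) @ d) / (d @ d + eps), 0.0, 1.0)
    nearest = p + t[..., None] * d
    dist = np.linalg.norm(pix - nearest, axis=-1)   # shortest distance field
    length = np.linalg.norm(d)
    weight = (length ** 0.5 / (a + dist)) ** b      # influence of this line
    return dist, weight
```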

Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1477-1486 / 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth-image synthesis. We acquire a reliable depth image with a TOF depth camera and extract the parameters of the reference-view cameras. Once the position of the virtual view-point camera is defined, the optimal reference-view cameras are selected by considering their positions and their distances to the virtual view-point camera. Taking a reference-view camera on the opposite side of the primary reference view as the sub reference view, we generate the depth image of the virtual view-point and compensate the occlusion boundaries of that depth image using the sub reference-view depth image. In this step, the remaining hole boundaries are filled with the minimum values of their neighborhoods, producing the final virtual view-point depth image. Finally, the CGH is generated from the resulting depth image. The experimental results show that the proposed algorithm performs much better than conventional algorithms. (A sketch of the underlying 3-D warping follows this entry.)
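A hedged numpy sketch of the underlying 3-D warping step: back-project pixels with the reference depth, transform them into the virtual camera, and re-project with a z-buffer. Shared intrinsics between the views and the zero hole marker are simplifying assumptions; the paper's sub-reference-view occlusion compensation is not shown.

```python
import numpy as np

def warp_depth(depth, K, R, t):
    """Forward 3-D warping of a metric depth map to a virtual camera.
    K: 3x3 intrinsics; R, t: reference-to-virtual rotation and translation.
    Collisions are resolved by keeping the nearest surface (z-buffer)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # back-project
    cam = R @ pts + t.reshape(3, 1)                        # virtual-view frame
    proj = K @ cam
    z = proj[2]
    valid = z > 1e-6
    u = np.round(proj[0, valid] / z[valid]).astype(int)
    v = np.round(proj[1, valid] / z[valid]).astype(int)
    zv = z[valid]
    out = np.full((h, w), np.inf, np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # z-buffer: nearest depth wins where several sources land on one target.
    np.minimum.at(out, (v[inside], u[inside]), zv[inside])
    out[np.isinf(out)] = 0.0                               # holes marked as 0
    return out
```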

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol;Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.5 / pp.409-419 / 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect also produces holes and noise around object boundaries in the captured images, and boundary flickering occurs. We therefore propose a real-time virtual-view synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around the boundary are filled using the joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are found by analyzing the variation of the intensity and depth images; the flickering is then reduced by replacing those pixels with the maximum pixel value of the previous depth image, after which virtual views are generated by 3D warping (see the sketch after this entry). Holes outside the occlusion regions are filled with the center pixel value of the most reliable block, where block reliability is computed with a block-based gradient-search algorithm. The experimental results show that the proposed algorithm generates the virtual-view image in real time.
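The flicker test described above can be sketched as follows. Reading "maximum pixel value of a previous depth image" as an element-wise maximum is an interpretation, and both thresholds are assumptions.

```python
import numpy as np

def reduce_flicker(depth, prev_depth, intensity, prev_intensity,
                   d_thresh=8, i_thresh=4):
    """Suppress boundary flicker: pixels whose depth changes a lot while
    intensity barely changes are treated as flicker and replaced using the
    previous depth frame."""
    flicker = (np.abs(depth.astype(int) - prev_depth.astype(int)) > d_thresh) & \
              (np.abs(intensity.astype(int) - prev_intensity.astype(int)) < i_thresh)
    out = depth.copy()
    out[flicker] = np.maximum(depth, prev_depth)[flicker]
    return out
```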

A Moving Synchronization Technique for Virtual Target Overlay (가상표적 전시를 위한 이동 동기화 기법)

  • Kim Gye-Young;Jang Seok-Woo
    • Journal of Internet Computing and Services / v.7 no.4 / pp.45-55 / 2006
  • This paper proposes a virtual target overlay technique for realistic training simulation, which projects a virtual target onto ground-based CCD images according to a prescribed scenario. The method creates a realistic 3D model for instructors from high-resolution GeoTIFF (Geographic Tag Image File Format) satellite images and DTED (Digital Terrain Elevation Data), and extracts road areas from the given CCD images for both instructors and trainees. Since satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, feature-based matching is difficult. Hence, we propose a moving synchronization technique that projects the targets onto the sensor images according to the moving paths marked on the 3D satellite images (see the sketch after this entry). Experimental results with satellite and sensor images of Daejeon show the effectiveness of the proposed algorithm.

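A minimal sketch of the moving-synchronization idea: given a path marked on the 3D model and already projected into sensor-image coordinates, the target is advanced along the polyline by elapsed time, so its overlay position stays synchronized with the scenario. The constant-speed motion model is an assumption for illustration.

```python
import numpy as np

def target_position(path_xy, speed, t):
    """path_xy: (N, 2) waypoints in sensor-image coordinates.
    Returns the overlay point after traveling speed*t along the polyline."""
    seg = np.diff(path_xy, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])   # cumulative arc length
    d = min(speed * t, s[-1])                         # clamp to path end
    i = int(np.searchsorted(s, d, side='right') - 1)
    i = min(i, len(seg) - 1)
    alpha = (d - s[i]) / max(seg_len[i], 1e-9)
    return path_xy[i] + alpha * seg[i]                # overlay point at time t
```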

Effects of Composite Couplings on Hub Loads of Hingeless Rotor Blade (무힌지 로터 블레이드의 허브하중에 대한 복합재료 연성거동 연구)

  • Lee, Ju-Young;Jung, Sung-Nam
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.32 no.7 / pp.29-36 / 2004
  • In this work, the effect of composite couplings on the hub loads of a hingeless rotor in forward flight is investigated. The hingeless composite rotor blade is idealized as a laminated thin-walled box beam. Nonclassical effects such as transverse shear and torsional warping are considered in the structural formulation. The nonlinear differential equations of motion are obtained by applying Hamilton's principle, and the blade response and hub loads are calculated using a finite element formulation in space and time. The aerodynamic forces acting on the blade are calculated by quasi-steady strip theory, including the effects of reversed flow and compressibility. The magnitude of the elastic couplings obtained by MSC/NASTRAN is compared with the classical pitch-flap ($\delta_3$) and pitch-lag ($\alpha_1$) couplings. It is found that the elastic couplings have a substantial effect on the behavior of the $N_b$/rev hub loads: roughly 10 to 40% of the hub loads can be removed by appropriately tailoring the fiber orientation angles in the laminae of the composite blade.
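For reference, the classical kinematic couplings cited above are conventionally written as $\Delta\theta = -\tan(\delta_3)\,\beta$ for pitch-flap and $\Delta\theta = -\tan(\alpha_1)\,\zeta$ for pitch-lag, where $\beta$ is the flap angle and $\zeta$ the lag angle; these are standard rotor-dynamics relations, not equations taken from the paper.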

3D Cloud Animation using Cloud Modeling Method of 2D Meteorological Satellite Images (2차원 기상 위성 영상의 구름 모델링 기법을 이용한 3차원 구름 애니메이션)

  • Lee, Jeong-Jin;Kang, Moon-Koo;Lee, Ho;Shin, Byeong-Seok
    • Journal of Korea Game Society / v.10 no.1 / pp.147-156 / 2010
  • In this paper, we propose a 3D cloud animation technique based on cloud modeling from 2D meteorological satellite images. First, we place numerous control points on the satellite images and perform thin-plate spline warping between consecutive frames to model the cloud motion (see the sketch after this entry). In addition, the visible and infrared spectral channels are used to determine the amount and altitude of the clouds for 3D cloud reconstruction. Pre-integrated volume rendering is used to achieve seamless inter-laminar shading in real time with a small number of slices of the volume data. The proposed method successfully constructs continuously moving 3D clouds from 2D satellite images at an acceptable speed and image quality.
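Thin-plate spline warping between frames has a compact closed form; a self-contained numpy sketch is below. Regularization and the subsequent dense image warp are omitted, and the control-point correspondences are assumed given (e.g., tracked between consecutive satellite frames).

```python
import numpy as np

def tps_fit(src, dst):
    """Fit 2-D thin-plate spline coefficients mapping control points src->dst.
    src, dst: (N, 2) arrays of matched cloud control points."""
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])                # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, b)                         # (n+3, 2) coefficients

def tps_apply(params, src, pts):
    """Evaluate the fitted spline at query points pts (M, 2)."""
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:-3] + P @ params[-3:]
```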