• Title/Summary/Keyword: Feature Point Matching (특징점 매칭)


Face Recognition using the Light-EBGM (Elastic Bunch Graph Matching) Method (Light-EBGM(Elastic Bunch Graph Matching) 방법을 이용한 얼굴인식)

  • 권만준;전명근
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.10a
    • /
    • pp.138-141
    • /
    • 2004
  • This paper addresses face recognition using the EBGM (Elastic Bunch Graph Matching) technique. Whereas dimensionality-reduction-based face recognition techniques for large image data, such as principal component analysis and linear discriminant analysis, use information from the entire face image, this paper uses feature data built from jets, the sets of coefficients obtained by convolving the image with multiple Gabor kernels of differing frequency and orientation at facial feature points such as the eyes, nose, and mouth. For each face image, a Face Graph is generated in which every image is represented by feature data of the same size, and recognition is performed by comparing the mutual similarity between the extracted sets of jets. This paper improves recognition speed by reducing the amount of computation through a simplified version of the Face Graph generation process of the conventional EBGM method.

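The jet-based similarity comparison described in the abstract can be sketched in a few lines. This is a minimal illustration under the common magnitude-only EBGM similarity (normalized dot product of Gabor coefficient magnitudes), not the authors' implementation; the names `jet_similarity` and `graph_similarity` are hypothetical:

```python
import math

def jet_similarity(jet_a, jet_b):
    """Normalized dot product (cosine) of two jets, i.e. vectors of
    Gabor coefficient magnitudes, as used in magnitude-only EBGM."""
    dot = sum(a * b for a, b in zip(jet_a, jet_b))
    norm = (math.sqrt(sum(a * a for a in jet_a))
            * math.sqrt(sum(b * b for b in jet_b)))
    return dot / norm if norm else 0.0

def graph_similarity(graph_a, graph_b):
    """Average jet similarity over corresponding face-graph nodes;
    the face with the highest graph similarity is the match."""
    return sum(jet_similarity(a, b)
               for a, b in zip(graph_a, graph_b)) / len(graph_a)
```

Because every Face Graph has the same number of nodes and jet length, comparing two faces reduces to one similarity score per node pair.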

Human Body Tracking And Transmission System Suitable for Mobile Devices (모바일 기기에 적합한 인체 추적 및 전송 시스템)

  • Kwak, Nae-Joung;Song, Teuk-Sob
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06c
    • /
    • pp.437-439
    • /
    • 2011
  • This paper proposes a system that automatically extracts object features from camera input images and transmits them to a mobile device to represent human body motion. The proposed system automatically extracts the human silhouette and joints from consecutive input frames and tracks the object by tracking the joints. The extracted features serve as position information for each joint of the object; a block matching algorithm centered on each feature tracks its position, and the information is transmitted to the mobile device, which reproduces the body motion from the transmitted joint information. Applying the proposed method to test videos, the human silhouette and joints were automatically detected, the extracted joints mapped the body effectively, and the tracked joints were reflected in the mapped body so that its motion was represented appropriately.
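The block matching step used for joint tracking can be sketched as an exhaustive sum-of-absolute-differences (SAD) search around the previous joint position. This is a generic illustration of block matching, not the paper's implementation; `track_feature` and its parameters are hypothetical:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(img, y, x, size):
    """Extract a size x size block whose top-left corner is (y, x)."""
    return [row[x:x + size] for row in img[y:y + size]]

def track_feature(prev_img, next_img, y, x, size=3, search=2):
    """Track the block at (y, x) from prev_img into next_img by
    exhaustive SAD search within +/-search pixels; returns the new
    top-left position of the best-matching block."""
    template = get_block(prev_img, y, x, size)
    h, w = len(next_img), len(next_img[0])
    best_cost, best_pos = None, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny <= h - size and 0 <= nx <= w - size:
                cost = sad(get_block(next_img, ny, nx, size), template)
                if best_cost is None or cost < best_cost:
                    best_cost, best_pos = cost, (ny, nx)
    return best_pos
```

Only the tracked joint positions, not the frames, would need to be sent to the mobile device, which keeps the transmitted data small.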

Rolled Fingerprint Merge Algorithm Using Adaptive Projection Mask (가변 투영마스크를 이용한 회전지문 정합 알고리즘에 관한 연구)

  • Baek, Young Hyun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.12
    • /
    • pp.176-183
    • /
    • 2013
  • We propose a rolled fingerprint merging algorithm that effectively merges plain fingerprints fed in consecutive frame units through rolling and detects more fingerprint minutiae in order to increase the fingerprint recognition rate. The proposed rolled fingerprint merging algorithm uses an adaptive projection mask; it contains a detector that separates plain fingerprints from the background and a projection mask generator that sequentially projects the detected images. In addition, in the merging unit, the pyramid-shaped projection method is used to detect merged rolled fingerprints from the generated adaptive projection mask, starting from the main images. Simulations show that 46.79% more minutiae are extracted than from plain fingerprints, and the proposed algorithm exhibits excellent performance by detecting 52.0% more good fingerprint minutiae needed for matching.

A Method of Constructing Robust Descriptors Using Scale Space Derivatives (스케일 공간 도함수를 이용한 강인한 기술자 생성 기법)

  • Park, Jongseung;Park, Unsang
    • Journal of KIISE
    • /
    • v.42 no.6
    • /
    • pp.764-768
    • /
    • 2015
  • The need for effective image handling methods such as image retrieval has been increasing with the rising production and consumption of multimedia data. In this paper, a method of constructing a more effective descriptor is proposed for robust keypoint-based image retrieval. The proposed method uses information embedded in the first-order and second-order derivative images, in addition to the scale space image, for descriptor construction. The performance of the multi-image descriptor is evaluated in terms of keypoint similarities on a public-domain image database that contains various image transformations. The proposed descriptor shows significant improvement in keypoint matching with only a minor increase in descriptor length.
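The core idea, building one descriptor from the smoothed image plus its derivative images and concatenating the parts, can be sketched as follows. This is a deliberately simplified, x-only version under assumed central-difference derivatives; the helper names are hypothetical and do not reflect the paper's exact descriptor:

```python
def derivative_x(img):
    """Central-difference first derivative along x for a 2D list,
    with clamped borders."""
    w = len(img[0])
    return [[(row[min(x + 1, w - 1)] - row[max(x - 1, 0)]) / 2.0
             for x in range(w)] for row in img]

def patch_vector(img, y, x, size):
    """Flatten a size x size patch at (y, x) into a list."""
    return [v for row in img[y:y + size] for v in row[x:x + size]]

def multi_image_descriptor(img, y, x, size=4):
    """Concatenate patch descriptors from the image, its first
    derivative, and its second derivative, then L2-normalize;
    the derivative parts add information the raw patch lacks."""
    d1 = derivative_x(img)
    d2 = derivative_x(d1)
    vec = (patch_vector(img, y, x, size)
           + patch_vector(d1, y, x, size)
           + patch_vector(d2, y, x, size))
    norm = sum(v * v for v in vec) ** 0.5
    return [v / norm for v in vec] if norm else vec
```

The descriptor is three times the length of a single-image patch descriptor, which matches the paper's observation that the gain comes at only a minor increase in descriptor length.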

Design and Implementation of Video Clip Service System in Augmented Reality Using the SURF Algorithm (SURF 알고리즘을 이용한 증강현실 동영상 서비스 시스템의 설계 및 구현)

  • Jeon, Young-Joon;Shin, Hong-Seob;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.1
    • /
    • pp.22-28
    • /
    • 2015
  • In this paper, a service system is presented that shows linked video clips in augmented reality from static images extracted from newspapers, magazines, photo albums, and so on. First, the system uses the SURF algorithm to extract features from the original photos printed in the media and stores them with the linked video clips. Next, when a photo is taken with a camera on a mobile device such as a smartphone, the system extracts features in real time, searches for the linked video clip matching the original image, and shows it on the smartphone in augmented reality. The proposed system was implemented on Android smartphones, and the test results verify that it operates not only on normal photos but also on partially damaged photos.
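The matching step between the captured photo's descriptors and the stored originals can be sketched with nearest-neighbor search plus Lowe's ratio test, a standard way to match SURF-style descriptors. This is a generic sketch, not the paper's code; `match_descriptors` and the ratio value are assumptions:

```python
def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(query, database, ratio=0.7):
    """Nearest-neighbor matching with the ratio test: accept a match
    only when the best distance is clearly smaller than the
    second-best, which suppresses ambiguous correspondences."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((euclidean(q, d), di)
                       for di, d in enumerate(database))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Robustness to partially damaged photos follows naturally from this scheme: a match only needs enough surviving keypoints, not the whole image.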

Fast Image Stitching Based on Image Edge Line Segmentation Algorithm (이미지 Edge Line Segmentation 알고리즘을 통한 고속 이미지 스티칭 기법)

  • Chae, Hogyun;Park, Healim;Kim, Yunjung;Im, Jiheon;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.06a
    • /
    • pp.309-312
    • /
    • 2018
  • Until now, video content production technology has advanced mainly in terms of image quality, from SD (Standard Definition) through HD (High Definition) and FHD (Full High Definition) to UHD (Ultra High Definition). At UHD, it has become difficult for the naked eye to distinguish content produced at even higher resolutions, so video content production is shifting its direction from image quality toward better shooting methods and wider fields of view with limited equipment. As an extension of this, technology for 360-degree video is being actively developed. In broadcasting, the applicability of real-time streaming of 360-degree video is being explored, which requires technology for stitching and delivering large amounts of video data in real time. Therefore, if high-speed image stitching becomes possible, real-time video stitching is expected to contribute to improved services in the broadcasting and telecommunications fields. This paper proposes an algorithm that is faster than conventional image stitching methods by segmenting the edge information of an image into directional data to extract feature points and then matching the feature points with weights.


Experiment for 3D Coregistration between Scanned Point Clouds of Building using Intensity and Distance Images (강도영상과 거리영상에 의한 건물 스캐닝 점군간 3차원 정합 실험)

  • Jeon, Min-Cheol;Eo, Yang-Dam;Han, Dong-Yeob;Kang, Nam-Gi;Pyeon, Mu-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.1
    • /
    • pp.39-45
    • /
    • 2010
  • For automatic registration between two terrestrial LiDAR point clouds, this study used keypoints observed in both of the two-dimensional intensity images acquired along with the point clouds, and selected matching points using the SIFT algorithm. In addition, the RANSAC algorithm was applied to remove matching errors and improve registration accuracy. The three-dimensional rotation and the vertical/horizontal translation amounts, which are the transformation parameters between the two point clouds, were computed and compared with the result obtained manually. In a test on the College of Science building at Konkuk University, the differences between the transformation parameters from automatic matching and those obtained manually were 0.011 m, 0.008 m, and 0.052 m in the X, Y, and Z directions, suggesting that the method can be used for automatic registration.
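The RANSAC filtering step can be sketched on a simplified problem: estimating a 2D translation from putative point matches while rejecting outliers. The paper estimates a full 3D rigid transform; this stand-in only shows the hypothesize-and-count-inliers loop, and `ransac_translation` with its parameters is hypothetical:

```python
import random

def ransac_translation(src, dst, threshold=1.0, iterations=100, seed=0):
    """Estimate a 2D translation from putative matches with RANSAC:
    repeatedly hypothesize the shift implied by one random match and
    keep the hypothesis supported by the most inliers."""
    rng = random.Random(seed)
    best_shift, best_inliers = (0.0, 0.0), []
    for _ in range(iterations):
        i = rng.randrange(len(src))
        dx, dy = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - dx) <= threshold
                   and abs(d[1] - s[1] - dy) <= threshold]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = (dx, dy), inliers
    return best_shift, best_inliers
```

The surviving inlier matches are then used to compute the final transformation parameters, which is why RANSAC improves registration accuracy even with mismatched SIFT keypoints present.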

Vehicle Detection and Tracking using Billboard Sweep Stereo Matching Algorithm (빌보드 스윕 스테레오 시차정합 알고리즘을 이용한 차량 검출 및 추적)

  • Park, Min Woo;Won, Kwang Hee;Jung, Soon Ki
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.6
    • /
    • pp.764-781
    • /
    • 2013
  • In this paper, we propose a highly precise vehicle detection method with a low false alarm rate using billboard sweep stereo matching and multi-stage hypothesis generation. First, we capture stereo images from cameras mounted in front of the vehicle and obtain a disparity map in which the regions of the ground plane and background are removed using the billboard sweep stereo matching algorithm. Then, we perform vehicle detection and tracking on the labeled disparity map. The vehicle detection and tracking consists of three steps. In the learning step, an SVM (support vector machine) classifier is trained using features extracted with Gabor filters. The second step is vehicle detection, which performs Sobel edge detection on the left camera image and extracts vehicle candidates using the edge image and the billboard sweep stereo disparity map. The final step is vehicle tracking using template matching in the next frame. Removing already-tracked regions from the vehicle candidate regions in succeeding frames improves the system performance.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that includes both 3D head motion tracking and facial expression control. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are adjusted using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
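The RBF step, propagating control-point displacements to nearby non-feature vertices, can be sketched with Gaussian RBF interpolation in 2D. This is an assumed formulation (Gaussian kernel, exact interpolation via a small linear solve), not the paper's exact one; `rbf_deform` and `solve` are hypothetical helpers:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_deform(controls, displacements, point, sigma=1.0):
    """Interpolate a 2D displacement at `point` from control-point
    displacements with Gaussian RBFs: solve for weights so the
    interpolant reproduces the controls exactly, then evaluate."""
    phi = lambda a, b: math.exp(
        -((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / (2 * sigma ** 2))
    A = [[phi(ci, cj) for cj in controls] for ci in controls]
    out = []
    for dim in range(2):
        w = solve(A, [d[dim] for d in displacements])
        out.append(sum(wi * phi(point, ci)
                       for wi, ci in zip(w, controls)))
    return tuple(out)
```

Points at a control location recover that control's displacement exactly, while points in between receive a smooth blend, which is what lets the animation parameters at a few feature points drive the whole face mesh.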

Extended SURF Algorithm with Color Invariant Feature and Global Feature (컬러 불변 특징과 광역 특징을 갖는 확장 SURF(Speeded Up Robust Features) 알고리즘)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.6
    • /
    • pp.58-67
    • /
    • 2009
  • Correspondence matching is one of the important tasks in computer vision, and it is not easy to find corresponding points in variable environments where scale, rotation, viewpoint, and illumination change. The SURF (Speeded Up Robust Features) algorithm has been widely used to solve the correspondence matching problem because it is faster than SIFT (Scale Invariant Feature Transform) while closely maintaining its matching performance. However, because SURF considers only the gray image and local geometric information, it is difficult to match corresponding points in images where similar local patterns are scattered. To solve this problem, this paper proposes an extended SURF algorithm that uses invariant color and global geometric information. The proposed algorithm improves the matching performance since the color information and global geometric information are used to discriminate similar patterns. The superiority of the proposed algorithm is demonstrated by experiments comparing it with conventional methods on images where the illumination and viewpoint change and similar patterns exist.
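The idea of extending a gray-only local descriptor with a color invariant can be sketched with normalized rgb chromaticity, one common color invariant; the exact invariant and weighting in the paper may differ, and `extended_descriptor` is a hypothetical name:

```python
def color_invariant(r, g, b):
    """Normalized rgb chromaticity, a simple color invariant:
    unchanged under uniform scaling of illumination intensity."""
    s = r + g + b
    if not s:
        return (1.0 / 3, 1.0 / 3, 1.0 / 3)
    return (r / s, g / s, b / s)

def extended_descriptor(local_desc, rgb, global_feat):
    """Append the color invariant and global geometric features to a
    local SURF-style descriptor, so that keypoints with similar
    local gray patterns elsewhere in the image can still be told
    apart during matching."""
    return list(local_desc) + list(color_invariant(*rgb)) + list(global_feat)
```

Because the chromaticity is unchanged when the illumination brightens or dims uniformly, the extra dimensions discriminate repeated patterns without sacrificing illumination robustness.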