• Title/Summary/Keyword: feature points

Search results: 1,130

A Background Segmentation and Feature Point Extraction Method of Human Motion Recognition (동작인식을 위한 배경 분할 및 특징점 추출 방법)

  • You, Hwi-Jong;Kim, Tae-Young
    • Journal of Korea Game Society / v.11 no.2 / pp.161-166 / 2011
  • In this paper, we propose a novel background segmentation and feature point extraction method for human motion in augmented reality games. First, our method transforms the input image from the RGB color space to the HSV color space and segments the skin-colored area using double thresholds on the H and S values. It also segments the moving area using time-difference images and removes noise from that area with a Hessian affine region detector. The overlap of the skin-colored area and the moving area is segmented as the human motion. Next, feature points for the human motion are extracted by calculating the center point of each block in the segmented image. Experiments on various input images show that our method performs correct background segmentation and feature point extraction at 12 frames per second.
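
The segmentation pipeline described above (RGB-to-HSV conversion, double thresholds on H and S, and a frame-difference motion mask) can be sketched roughly as follows; the threshold ranges and the helper name `segment_motion` are illustrative assumptions, and the Hessian affine noise-removal step is omitted.

```python
import cv2
import numpy as np

def segment_motion(frame_bgr, prev_gray, h_range=(0, 25), s_range=(40, 255)):
    """Sketch: skin-color mask from H/S double thresholds combined with a
    frame-difference motion mask. Returns the mask and the current gray frame
    (to be passed in as prev_gray for the next call)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)

    # Double threshold on hue and saturation (ranges are placeholders).
    skin = ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1])).astype(np.uint8) * 255

    # Motion mask from the difference with the previous frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    moving = cv2.threshold(cv2.absdiff(gray, prev_gray), 20, 255,
                           cv2.THRESH_BINARY)[1]

    return cv2.bitwise_and(skin, moving), gray
```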

Landmark Recognition Method based on Geometric Invariant Vectors (기하학적 불변벡터기반 랜드마크 인식방법)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.173-182 / 2005
  • In this paper, we propose a landmark recognition method for localization during navigation that is independent of the camera viewpoint. The features used in previous research vary with the camera viewpoint, so, despite the wealth of available information, extracting visual landmarks for positioning is not an easy task. The proposed method has three stages: feature extraction, learning and recognition, and matching. In the feature extraction stage, we set interest areas of the image and extract corner points within them; we then obtain features that are more accurate and resistant to noise through statistical analysis of the smaller eigenvalue. In the learning and recognition stage, we form robust feature models by testing whether a feature model consisting of five corner points is invariant to the viewpoint. In the matching stage, we reduce time complexity and find correspondences accurately by matching with a similarity evaluation function and the Graham search method. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate its superiority.
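
The corner extraction based on the smaller eigenvalue of the local gradient matrix corresponds to the Shi-Tomasi criterion; a minimal sketch of that single step, with an assumed interest region and threshold rather than the paper's actual parameters, might look like this:

```python
import cv2
import numpy as np

# Minimum-eigenvalue (Shi-Tomasi style) corner extraction inside an
# interest area; the image name, ROI bounds, and threshold are placeholders.
img = cv2.imread("indoor_scene.png", cv2.IMREAD_GRAYSCALE)
roi = img[100:300, 150:400]                      # assumed interest area

min_eig = cv2.cornerMinEigenVal(roi, blockSize=3, ksize=3)
threshold = 0.01 * min_eig.max()                 # keep statistically strong corners
ys, xs = np.where(min_eig > threshold)
corners = np.stack([xs, ys], axis=1)             # (x, y) corner candidates
```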


Feature Selection of Fuzzy Pattern Classifier by using Fuzzy Mapping (퍼지 매핑을 이용한 퍼지 패턴 분류기의 Feature Selection)

  • Roh, Seok-Beom;Kim, Yong Soo;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.6 / pp.646-650 / 2014
  • In this paper, we propose a new feature selection method to avoid the deterioration of pattern classification performance that results from the curse of dimensionality. The proposed method is based on the Fuzzy C-Means clustering algorithm, which divides the data points into several clusters, and on the concept of a function of fuzzy numbers. For a function whose independent variables are fuzzy numbers and whose dependent variable is a class label, each fuzzy number should be related to only one class label. Therefore, a good feature is one that can serve as an independent variable of such a function. Under this assumption, we calculate the goodness of each feature for the pattern classification problem. Finally, to evaluate the classification ability of the proposed pattern classifier, machine learning data sets are used.
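
As a rough illustration of the selection criterion (not the authors' exact formulation), each feature can be scored by clustering its values with Fuzzy C-Means and measuring how strongly every fuzzy cluster is dominated by a single class label; the tiny FCM implementation and the scoring function below are assumptions made for the sketch.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    """Minimal 1-D Fuzzy C-Means; returns the membership matrix (n_samples, c)."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(x))           # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)     # weighted cluster centers
        d = np.abs(x[:, None] - centers) + 1e-9          # distances to centers
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)         # standard FCM membership update
    return u

def feature_goodness(x, y, c=3):
    """Score one feature: high when each fuzzy cluster maps to a single class."""
    u = fuzzy_cmeans(x, c)
    score = 0.0
    for k in range(c):
        w = u[:, k]
        purity = max(w[y == label].sum() for label in np.unique(y)) / w.sum()
        score += purity * w.sum() / len(x)               # purity weighted by cluster mass
    return score
```

Under this sketch, features with high scores are kept, while features whose fuzzy clusters mix several class labels are discarded.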

Corresponding Points Tracking of Aerial Sequence Images

  • Ochirbat, Sukhee;Shin, Sung-Woong;Yoo, Hwan-Hee
    • Journal of Korean Society for Geospatial Information Science / v.16 no.4 / pp.11-16 / 2008
  • The goal of this study is to evaluate the KLT (Kanade-Lucas-Tomasi) tracker for extracting and tracking features in various data acquired from a UAV. Sequences of images were collected over the Jangsu-Gun area for the analysis. Four data sets were used to extract and track features with the KLT parameters. The experimental results show that more than 90 percent of the features extracted from the first frame can be tracked successfully into the next frame when the shift between frames is small. However, when the frame-to-frame motion is large, as in non-consecutive frames, the KLT tracker fails to track the corresponding points. Future research will focus on feature tracking over sequences with large shifts and rotations.
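
A minimal KLT sketch with OpenCV, assuming placeholder file names and generic parameter values (not those of the study):

```python
import cv2

# Extract Shi-Tomasi corners in the first frame and track them into the
# next frame with pyramidal Lucas-Kanade optical flow (the KLT tracker).
frame0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=500,
                             qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None,
                                           winSize=(21, 21), maxLevel=3)

tracked_ratio = status.sum() / len(status)   # fraction of features tracked
print(f"{tracked_ratio:.1%} of features tracked into the next frame")
```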


Laser Image SLAM based on Image Matching for Navigation of a Mobile Robot (이동 로봇 주행을 위한 이미지 매칭에 기반한 레이저 영상 SLAM)

  • Choi, Yun Won;Kim, Kyung Dong;Choi, Jung Won;Lee, Suk Gyu
    • Journal of the Korean Society for Precision Engineering / v.30 no.2 / pp.177-184 / 2013
  • This paper proposes an enhanced Simultaneous Localization and Mapping (SLAM) algorithm based on laser image matching and the Extended Kalman Filter (EKF). In general, laser data is one of the most efficient sources for the localization of mobile robots and is more accurate than encoder data. For localization, the moving distance of a robot is often obtained from encoders, and the distance from the robot to landmarks is estimated by various sensors. Although encoders have high resolution, it is difficult to estimate the current position of a robot precisely because of encoder errors caused by wheel slip and backlash. In this paper, the position and angle of the robot are estimated by comparing laser images obtained from a high-accuracy laser scanner. In addition, Speeded Up Robust Features (SURF) is used to extract feature points from the previous and current laser images and to match them. The moving distance and heading angle are then obtained from the matched points. Experimental results using the proposed laser SLAM algorithm show its effectiveness for robot SLAM.
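
The image-matching step can be sketched as below; ORB is substituted for SURF because SURF is only available in the non-free opencv-contrib build, and the file names are placeholders.

```python
import cv2

# Detect and match features between the previous and current laser images;
# the matched pairs would then feed the translation/heading estimate and the EKF.
prev_img = cv2.imread("laser_scan_prev.png", cv2.IMREAD_GRAYSCALE)
curr_img = cv2.imread("laser_scan_curr.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.ORB_create(nfeatures=1000)
kp1, des1 = detector.detectAndCompute(prev_img, None)
kp2, des2 = detector.detectAndCompute(curr_img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:100]]
```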

An Object-based Stereo Matching Method Using Block-based Segmentation (블록 기반 영역 분할을 이용한 객체 기반 스테레오 정합 기법)

  • Kwak No-Yoon
    • Journal of Digital Contents Society / v.5 no.4 / pp.257-263 / 2004
  • This paper presents an object-based stereo matching algorithm that estimates inner-region disparities for each segmented region. First, several sample points are selected to effectively represent the segmented region. Next, stereo matching is applied to the small area within the segmented region in the neighborhood of each sample point. Finally, the inner-region disparities are interpolated using a plane equation fitted to the disparities of the selected sample points. With the proposed method, the limitation of feature-based methods, namely that depth can be estimated only at feature points, is overcome by propagating the disparities at the sample points into the interior of the region. Also, by selecting sample points on the contour of the segmented region, we can effectively suppress the ambiguity that occurs in the depth estimation of monotone regions with area-based methods.
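
The inner-region interpolation amounts to fitting a plane d = a*x + b*y + c to the sample-point disparities; a small sketch with numpy, using an illustrative function name and made-up data rather than the paper's:

```python
import numpy as np

def interpolate_disparities(sample_xy, sample_d, region_xy):
    """Fit d = a*x + b*y + c to the sample disparities and evaluate it
    at every pixel coordinate of the segmented region."""
    A = np.column_stack([sample_xy[:, 0], sample_xy[:, 1], np.ones(len(sample_xy))])
    (a, b, c), *_ = np.linalg.lstsq(A, sample_d, rcond=None)   # least-squares plane
    return a * region_xy[:, 0] + b * region_xy[:, 1] + c

# Three sample points on the region contour and two inner pixels (toy values).
samples = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 48.0]])
disps = np.array([6.0, 5.2, 4.1])
inner = np.array([[20.0, 20.0], [30.0, 30.0]])
print(interpolate_disparities(samples, disps, inner))
```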


Registration of the 3D Range Data Using the Curvature Value (곡률 정보를 이용한 3차원 거리 데이터 정합)

  • Kim, Sang-Hoon;Kim, Tae-Eun
    • Convergence Security Journal / v.8 no.4 / pp.161-166 / 2008
  • This paper proposes a new approach to aligning 3D data sets by using the curvature of feature surfaces. We use Gaussian curvatures and the covariance matrix, which capture the physical characteristics of the model, to register unaligned 3D data sets. First, the local characteristics are obtained from the Gaussian curvature, and the camera position of the 3D range finder system is calculated from the projection matrix between the 3D data set and the 2D image. Then, the global characteristics are obtained from the covariance matrix of the model. Corresponding points in the overlapping region are found with a cross-projection method, and points affected by self-occlusion are removed. By repeating this process, we finally obtain corrected corresponding points in the overlapping region and an optimized registration result.
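
One way to read the covariance step is as a coarse alignment of the two data sets by their principal axes; the sketch below is an interpretation under that assumption, not the paper's exact procedure (the curvature matching and cross-projection stages are not shown).

```python
import numpy as np

def principal_axes(points):
    """Centroid and covariance eigenvectors of an (N, 3) point set."""
    centroid = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - centroid).T))
    return centroid, vecs                      # columns of vecs are principal axes

def coarse_align(src, dst):
    """Rotate and translate src so its principal axes match dst's.
    Eigenvector sign ambiguities are ignored in this sketch."""
    c_src, v_src = principal_axes(src)
    c_dst, v_dst = principal_axes(dst)
    R = v_dst @ v_src.T
    return (src - c_src) @ R.T + c_dst
```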


Feature Classification of Hanguel Patterns by Distance Transformation method (거리변환법에 의한 한글패턴의 특징분류)

  • Koh, Chan;Lee, Dai-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.14 no.6 / pp.650-662 / 1989
  • In this paper, a new algorithm for feature extraction and classification for recognizing Hangul patterns is proposed. Input patterns are classified into six basic formal patterns and divided into subregions of Hangul phonemes, and crook features are extracted from the position information of each subregion. Hangul patterns are defined and stored in an indexed-sequence file using these crook feature points, and they are recognized by retrieving two files: the feature indexed-sequence file and a standard dictionary file. The algorithm is simple, and the software system is easy to construct. Experimental results present the output of feature extraction and the grouping of input patterns. The proposed algorithm extracts crook features using a distance transformation method within the rectangle enclosing each character, based on relative position information, and achieves a recognition ratio of 97%.
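
A sketch of the distance-transformation step with OpenCV; the image name and threshold are placeholders, and the peak-picking at the end only hints at where crook features would be sought.

```python
import cv2
import numpy as np

# Binarize the character image and compute, inside its enclosing rectangle,
# the distance of every stroke pixel to the background.
char_img = cv2.imread("hangul_glyph.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(char_img, 128, 255, cv2.THRESH_BINARY)

x, y, w, h = cv2.boundingRect(cv2.findNonZero(binary))   # enclosing rectangle
dist = cv2.distanceTransform(binary[y:y + h, x:x + w], cv2.DIST_L2, 5)

# Relative positions of strong distance maxima are candidate feature points.
peaks = np.argwhere(dist > 0.8 * dist.max())
```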


Face Identification Method Using Face Shape Independent of Lighting Conditions

  • Takimoto, H.;Mitsukura, Y.;Akamatsu, N.
    • 제어로봇시스템학회: 학술대회논문집 (Proceedings of the Institute of Control, Robotics and Systems Conference) / 2003.10a / pp.2213-2216 / 2003
  • In this paper, we propose a feature-point-based face identification method that is robust to lighting conditions. First, the proposed method extracts the edges of the facial features. Then, using the Hough transform, it determines ellipse parameters for each facial feature from the extracted edges. Finally, the method performs face identification using these parameters. Even if the face image is taken under various lighting conditions, the facial feature edges are easy to extract. Moreover, a subject can be identified even when a feature is not fully visible, because the parameters are approximated through the Hough transform. Therefore, the proposed method is more robust to lighting conditions than conventional methods. To show its effectiveness, computer simulations are performed using real images.
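
The edge-plus-Hough-ellipse step might be sketched with scikit-image as below; the file name and parameter ranges are assumptions, since the abstract does not give them, and `hough_ellipse` can be slow on full-size images.

```python
from skimage.feature import canny
from skimage.io import imread
from skimage.transform import hough_ellipse

# Extract facial-feature edges, then vote for ellipse parameters.
face = imread("face.png", as_gray=True)
edges = canny(face, sigma=2.0)

candidates = hough_ellipse(edges, accuracy=20, threshold=100,
                           min_size=20, max_size=80)
candidates.sort(order="accumulator")                  # strongest candidate last
best = list(candidates[-1])                           # [votes, yc, xc, a, b, angle]
yc, xc, a, b, orientation = best[1:6]
print(f"ellipse center=({xc:.0f},{yc:.0f}), axes=({a:.0f},{b:.0f}), angle={orientation:.2f}")
```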
