• Title/Summary/Keyword: local feature extraction


Face Recognition using Modified Local Directional Pattern Image (Modified Local Directional Pattern 영상을 이용한 얼굴인식)

  • Kim, Dong-Ju; Lee, Sang-Heon; Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering / v.2 no.3 / pp.205-208 / 2013
  • Binary pattern transforms are widely used in face recognition and facial expression analysis because they are robust to illumination. This paper proposes an illumination-robust face recognition system that combines MLDP, which improves the texture component of LDP, with the 2D-PCA algorithm. Unlike conventional approaches, in which binary pattern transforms such as LBP and LDP are used to extract histogram features, the proposed method uses the MLDP image directly as input to 2D-PCA feature extraction. The performance of the proposed method was evaluated against algorithms such as PCA, 2D-PCA, and Gabor wavelet-based LBP on the Yale B and CMU-PIE databases, which were constructed under varying lighting conditions. The experimental results confirm that the proposed method achieves the best recognition accuracy.
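A minimal sketch of the 2D-PCA projection stage described in this abstract, assuming the MLDP-transformed face images are already available as a stack of 2-D arrays; the MLDP encoding itself is not reproduced here, and `mldp_images` is a hypothetical variable.

```python
import numpy as np

def two_d_pca(images, n_components=10):
    """images: array of shape (N, H, W); returns a (W, n_components) projection matrix."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image scatter matrix G = (1/N) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)            # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_components]       # keep the top eigenvectors as columns

def project(images, X):
    """Feature matrices Y_i = A_i X, flattened for a nearest-neighbour matcher."""
    return (images @ X).reshape(len(images), -1)

# Hypothetical usage: mldp_images has shape (N, H, W), one MLDP image per face.
# X = two_d_pca(mldp_images); features = project(mldp_images, X)
```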

AdaBoost-based Gesture Recognition Using Time Interval Window Applied Global and Local Feature Vectors with Mono Camera (모노 카메라 영상기반 시간 간격 윈도우를 이용한 광역 및 지역 특징 벡터 적용 AdaBoost기반 제스처 인식)

  • Hwang, Seung-Jun; Ko, Ha-Yoon; Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.3 / pp.471-479 / 2018
  • Smart TVs and Android/iOS-based set-top boxes have recently become widespread. This paper proposes a new approach that controls the TV with gestures rather than a remote control. The AdaBoost algorithm is applied to gesture recognition using a single mono camera. First, body coordinates are extracted with CamShift-based body tracking and estimation on top of Gaussian background removal. Global and local feature vectors are then used to recognize gestures with varying speed. By tracking time-interval trajectories of the hand and wrist, the AdaBoost algorithm with the CART algorithm is used to train and classify gestures, and the CART algorithm is also used to search for the principal feature vectors with the highest classification success rates. As a result, 24 optimal feature vectors were found, yielding a lower error rate (3.73%) and a higher accuracy (95.17%) than the existing algorithm.
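A rough scikit-learn sketch of the classification stage only: AdaBoost with CART (decision tree) weak learners over trajectory feature vectors. The feature extraction (CamShift tracking, time-interval windows) is assumed to have already produced a feature matrix `X` and gesture labels `y`; parameter values are illustrative.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier   # scikit-learn's CART implementation
from sklearn.model_selection import train_test_split

def train_gesture_classifier(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # Boosted ensemble of shallow CART trees
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=200)
    clf.fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    # feature_importances_ can be used to rank and keep the most discriminative
    # features, loosely analogous to the CART-based selection in the paper.
    return clf
```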

Vehicle Recognition using NMF in Urban Scene (도심 영상에서의 비음수행렬분해를 이용한 차량 인식)

  • Ban, Jae-Min; Lee, Byeong-Rae; Kang, Hyun-Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.7C / pp.554-564 / 2012
  • Vehicle recognition consists of two steps: vehicle region detection and vehicle identification based on features extracted from the detected region. Features obtained through linear transformations reduce dimensionality, capture statistical characteristics, and are robust to translation and rotation of objects. Among linear transformations, NMF (Non-negative Matrix Factorization) yields a parts-based representation. It therefore produces sparse features and can improve the vehicle recognition rate by representing local parts of a car as basis vectors. In this paper, we propose an NMF-based feature extraction suitable for vehicle recognition and verify its recognition rate. We also compared the recognition rate on occluded regions using SNMF (sparse NMF), which places constraints on the basis vectors, together with an LVQ2 neural network. We show that the features obtained through the proposed NMF are robust in urban scenes, where occlusions frequently occur.
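An illustrative scikit-learn sketch of NMF feature extraction over detected vehicle regions, assuming each region has already been cropped and resized to a fixed size; the SNMF sparsity constraint and the LVQ2 classifier are not shown, and the parameter values are guesses.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_features(patches, n_basis=36):
    """patches: array (N, H, W) of non-negative grayscale vehicle regions."""
    V = patches.reshape(len(patches), -1).astype(float)    # one image per row
    model = NMF(n_components=n_basis, init='nndsvd', max_iter=500)
    W = model.fit_transform(V)       # per-image coefficients (the features)
    H = model.components_            # parts-based basis vectors (local parts of a car)
    return W, H, model

# A test region would be projected with model.transform(test_patch.reshape(1, -1))
# and classified, e.g., by nearest neighbour over the coefficient vectors.
```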

Rotation-Invariant Iris Recognition Method Based on Zernike Moments (Zernike 모멘트 기반의 회전 불변 홍채 인식)

  • Choi, Chang-Soo; Seo, Jeong-Man; Jun, Byoung-Min
    • Journal of the Korea Society of Computer and Information / v.17 no.2 / pp.31-40 / 2012
  • Iris recognition is a biometric technology that identifies a person using the iris pattern. It is important for an iris recognition system to extract features that are invariant to changes in the iris pattern, which can be caused by lighting, changes in pupil size, and head tilt. In this paper, we propose a novel method based on Zernike moments that is robust to rotation of the iris pattern. We use a selection of Zernike moments for fast and effective recognition, choosing globally optimal moments together with locally optimal moments for matching each iris class. The proposed method enables high-speed feature extraction and comparison because it requires no additional processing to obtain rotation invariance, and it shows performance comparable to well-known previous methods.
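A minimal sketch of rotation-invariant matching with Zernike moment magnitudes, here using the mahotas library as one possible implementation (an assumption; the paper's own moment-selection scheme is not reproduced). `iris_img` stands for a segmented, size-normalized iris region.

```python
import numpy as np
import mahotas

def zernike_feature(iris_img, radius, degree=12):
    """iris_img: 2-D grayscale array; radius: disk radius (e.g. half the image width)."""
    # mahotas returns the magnitudes |Z_nm|, which are invariant to rotation of
    # the pattern about the disk centre, so no extra alignment step is needed.
    return mahotas.features.zernike_moments(iris_img, radius, degree=degree)

def match(feat_a, feat_b):
    # Simple Euclidean distance between moment-magnitude vectors; the paper
    # additionally selects globally/locally optimal moments per iris class.
    return np.linalg.norm(feat_a - feat_b)
```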

Local Prominent Directional Pattern for Gender Recognition of Facial Photographs and Sketches (Local Prominent Directional Pattern을 이용한 얼굴 사진과 스케치 영상 성별인식 방법)

  • Makhmudkhujaev, Farkhod; Chae, Oksam
    • Convergence Security Journal / v.19 no.2 / pp.91-104 / 2019
  • In this paper, we present a novel local descriptor, Local Prominent Directional Pattern (LPDP), for describing facial images for gender recognition. To achieve a clearly discriminative representation of local shape, the presented method encodes a target pixel with the prominent directional variations of its local structure, derived from the statistics of a histogram of such directional variations. The use of this statistical information comes from the observation that a local neighborhood with an edge passing through it exhibits similar gradient directions; hence, the prominent accumulations of those gradient directions provide a solid basis for representing the shape of the local structure. Unlike existing methods that use only the gradient direction of the target pixel, our coding scheme selects prominent edge directions accumulated from many samples (the surrounding neighboring pixels), which in turn minimizes the effect of noise by suppressing accumulations contributed by only one or a few samples. In this way, the presented encoding strategy yields a more discriminative description of local structure while remaining robust to subtle changes such as local noise. We conduct extensive experiments on gender recognition datasets containing a wide range of challenges, including illumination, expression, age, and pose variations as well as sketch images, and observe that the LPDP descriptor outperforms existing local descriptors.
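A simplified numpy sketch of the coding idea the abstract describes: quantize gradient directions into 8 bins, accumulate a small neighborhood histogram around each pixel, and encode the most prominent direction(s). This follows the abstract's wording, not the authors' exact LPDP formulation, and the bin count and neighborhood size are assumptions.

```python
import numpy as np

def lpdp_like_code(gray, k=2):
    """gray: 2-D grayscale face image; returns an 8-bit code per interior pixel."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)
    ang = np.arctan2(gy, gx)                            # gradient direction per pixel
    bins = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    H, W = gray.shape
    codes = np.zeros((H, W), dtype=np.uint8)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            # Histogram of quantized directions in the 3x3 neighbourhood
            hist = np.bincount(bins[y - 1:y + 2, x - 1:x + 2].ravel(), minlength=8)
            top = np.argsort(hist)[::-1][:k]            # k most prominent directions
            codes[y, x] = sum(1 << d for d in top)      # set one bit per chosen direction
    return codes

# Gender recognition would then pool these codes into region-wise histograms and
# feed them to a classifier such as an SVM (an assumption, not stated above).
```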

Face Recognition Based on Facial Landmark Feature Descriptor in Unconstrained Environments (비제약적 환경에서 얼굴 주요위치 특징 서술자 기반의 얼굴인식)

  • Kim, Daeok; Hong, Jongkwang; Byun, Hyeran
    • Journal of KIISE / v.41 no.9 / pp.666-673 / 2014
  • This paper proposes a scalable face recognition method for unconstrained face databases and presents a simple experimental result. Existing face recognition research has usually focused on improving the recognition rate in constrained environments where illumination, face alignment, facial expression, and background are controlled, and therefore cannot be applied to unconstrained face databases. The proposed system is a face feature extraction algorithm for unconstrained face recognition. First, we extract the regions that represent important facial landmarks, such as the eyes, nose, and mouth. Each landmark is represented by a high-dimensional LBP (Local Binary Pattern) histogram feature vector. The multi-scale LBP histogram vector corresponding to a single landmark is then reduced to a low-dimensional face feature vector with PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). We use rank-based retrieval and precision at k (p@k) to verify the face recognition performance of the low-dimensional features produced by the proposed algorithm. The experiments were carried out on the FERET, LFW, and PubFig83 databases. The face recognition system using the proposed algorithm showed better classification performance than existing methods.
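A hedged sketch of the feature pipeline just described: multi-scale LBP histograms around detected landmarks, reduced with PCA followed by LDA. Landmark detection is assumed to be done elsewhere (`landmarks` is a hypothetical list of integer (x, y) positions well inside the image), and patch size, radii, and component counts are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def landmark_lbp_histogram(gray, landmarks, patch=32, radii=(1, 2, 3)):
    feats = []
    for (x, y) in landmarks:
        p = gray[y - patch // 2:y + patch // 2, x - patch // 2:x + patch // 2]
        for r in radii:                                   # multi-scale LBP
            lbp = local_binary_pattern(p, P=8 * r, R=r, method='uniform')
            n_bins = 8 * r + 2                            # uniform patterns + "other"
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)                          # high-dimensional descriptor

def reduce_features(X, y, n_pca=200):
    """X: stacked landmark descriptors, y: identity labels."""
    X_pca = PCA(n_components=n_pca).fit_transform(X)
    return LinearDiscriminantAnalysis().fit_transform(X_pca, y)
```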

The Line Feature Extraction for Automatic Cartography Using High Frequency Filters in Remote Sensing : A Case Study of Chinju City (위성영상의 형태추출을 통한 지도화 : 고빈도 공간필터 사용을 중심으로)

  • Jung, In-Chul
    • Journal of the Korean association of regional geographers / v.2 no.2 / pp.183-196 / 1996
  • The purpose of this paper is to explore the possibility of automatically extracting line features from satellite imagery. The first part reviews the relationship between spatial filtering and cartographic interpretation; the second part describes the principal operations of high-frequency filters and their properties; the third part presents the results of applying the filters to a SPOT panchromatic image of Chinju city. The experimental results indicate the high feasibility of the filtering technique and are summarized as follows. First, no good all-purpose filter exists: certain Laplacian filters and the Frei-Chen filter were very sensitive to noise and could not detect line features in our case. Second, summary filters and some other filters do an excellent job of identifying edges around urban objects; with the filtered image added to the original image, interpretation becomes easier. Third, compass gradient masks may be used to perform two-dimensional discrete differentiation for directional edge enhancement, but in our case the resulting line extraction was not satisfactory. In general, wide masks detect broad edges and narrow masks detect sharper discontinuities, but in our case the difference between the 3×3 and 7×7 kernel filters was not remarkable, which may be due to the good spatial resolution of the SPOT scene. The filtering effect depends on local circumstances, and band and kernel-size selection must also be considered. For skillful geographical interpretation, more subtle qualitative information also needs to be taken into account.

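A small illustration of the high-frequency filtering discussed in the abstract above: a 3×3 Laplacian mask is convolved with the image and the result is added back to the original to sharpen linear features. The mask values and the weighting are generic textbook choices, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN_3x3 = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)

def enhance_lines(band, weight=1.0):
    """band: 2-D array of pixel values from a single satellite band."""
    highpass = convolve(band.astype(float), LAPLACIAN_3x3, mode='nearest')
    return band + weight * highpass        # filtered image added back to the original

# Directional (compass) masks can be applied the same way to emphasize
# edges with a particular orientation.
```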

Image Based Text Matching Using Local Crowdedness and Hausdorff Distance (지역 밀집도 및 Hausdorff 거리를 이용한 영상기반 텍스트 매칭)

  • Son, Hwa-Jeong; Kim, Ji-Soo; Park, Mi-Seon; Yoo, Jae-Myeong; Kim, Soo-Hyung
    • The Journal of the Korea Contents Association / v.6 no.10 / pp.134-142 / 2006
  • In this paper, we investigate whether the Hausdorff distance, which is used to measure image similarity, is also effective for document retrieval. The proposed method uses local crowdedness and the Hausdorff distance to locate text images by determining whether a pair of images scanned at different times comes from the same text or not. To reduce the processing time, which is one of the disadvantages of the Hausdorff distance algorithm, we adopt local crowdedness for feature point extraction. We apply the proposed method to 190 pairs of the same class and 190 pairs of different classes collected from postal envelope images. The results show that the modified Hausdorff distance proposed in this paper performs well in locating the text region and calculating the degree of similarity between two images. Improvements in accuracy of 2.7% and 9.0% were obtained compared to a binary correlation method and the original Hausdorff distance method, respectively.

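A minimal sketch of the matching measure described above: a modified Hausdorff distance between two sets of feature points extracted from the text images. The local-crowdedness point selection itself is assumed to have already produced the point sets `A` and `B`; this generic formulation is not necessarily the paper's exact modification.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(A, B):
    """A, B: arrays of shape (n, 2) and (m, 2) of feature-point coordinates."""
    d = cdist(A, B)                          # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()              # mean nearest-neighbour distance A -> B
    d_ba = d.min(axis=0).mean()              # mean nearest-neighbour distance B -> A
    return max(d_ab, d_ba)

# Two scanned text images are declared the same text when the distance between
# their point sets falls below a threshold chosen on validation data.
```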

Hardware Design of SURF-based Feature extraction and description for Object Tracking (객체 추적을 위한 SURF 기반 특이점 추출 및 서술자 생성의 하드웨어 설계)

  • Do, Yong-Sig; Jeong, Yong-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.83-93 / 2013
  • The SURF algorithm, used for object tracking in many computer vision applications, is a well-known scale- and rotation-invariant feature detection algorithm. Because of its high computational complexity, a hardware accelerator is essential if SURF is to be used as an IP in an embedded environment. However, SURF requires a large local memory, which increases chip size and decreases the value of the IP in ASIC and SoC designs. In this paper, we propose a hardware design of the SURF algorithm with a greatly reduced local memory, obtained by partitioning the algorithm into several sub-IPs that share external memory through a DMA. To validate the proposed method, we developed a simplified object tracking example. The hardware IP ran at about 31 frames/sec with a logic size of about 74 Kgates in a 30 nm technology and 81 KB of local memory, on an embedded platform consisting of an ARM Cortex-M0 processor, an AMBA bus (AHB-Lite and APB), a DMA, and an SDRAM controller. It can therefore be used as a hardware IP in an SoC chip. If an image processing algorithm similar to SURF is implemented with the method proposed in this paper, an efficient hardware design for the target application can be expected.
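A conceptual software analogy to the memory-partitioning idea above: instead of keeping the whole frame (or integral image) in local memory, the frame is processed in overlapping tiles, which bounds the working memory in roughly the way the sub-IP/DMA scheme does in hardware. This is only an illustration under assumed tile sizes, not the paper's RTL design, and `detect` is a hypothetical per-patch keypoint detector.

```python
def process_in_tiles(frame, detect, tile=128, overlap=32):
    """frame: 2-D array; detect: maps a 2-D patch to a list of (x, y) keypoints."""
    H, W = frame.shape
    keypoints = []
    for y0 in range(0, H, tile):
        for x0 in range(0, W, tile):
            y1 = min(H, y0 + tile + overlap)
            x1 = min(W, x0 + tile + overlap)
            patch = frame[y0:y1, x0:x1]                 # only this patch is "local memory"
            for (px, py) in detect(patch):
                keypoints.append((x0 + px, y0 + py))    # map back to frame coordinates
    return keypoints
```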

BoF based Action Recognition using Spatio-Temporal 2D Descriptor (시공간 2D 특징 설명자를 사용한 BOF 방식의 동작인식)

  • KIM, JinOk
    • Journal of Internet Computing and Services / v.16 no.3 / pp.21-32 / 2015
  • Spatio-temporal local features for video representation have become an important issue in model-free, bottom-up approaches to action recognition, and various methods for feature extraction and description have been proposed. In particular, BoF (bag of features) has yielded consistent recognition results. The most important part of BoF is how to represent the dynamic information of actions in videos. Most existing BoF methods treat the video as a spatio-temporal volume and describe neighborhoods of 3D interest points as complex volumetric patches. To simplify these 3D methods, this paper proposes a novel method that builds the BoF representation by learning 2D interest points directly from video data. The basic idea of the proposed method is to gather feature points not only from the 2D xy spatial planes of traditional frames, but also from 2D spatio-temporal planes along the time axis. Such spatio-temporal features capture dynamic information from action videos and are well suited to recognizing human actions without 3D extensions of the feature descriptors. The spatio-temporal BoF approach using SIFT and SURF descriptors obtains good recognition rates on a well-known action recognition dataset. Compared with the more sophisticated scheme of 3D-based HOG/HOF descriptors, the proposed method is easier to compute and simpler to understand.
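A rough sketch of a BoF pipeline along the lines described above, using OpenCV SIFT on both ordinary xy frames and 2-D slices taken along the time axis (xt planes), then quantizing all descriptors with a k-means codebook. Frame/slice sampling steps and the codebook size are assumptions, not values from the paper.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def video_descriptors(video):
    """video: uint8 array of shape (T, H, W); returns stacked SIFT descriptors."""
    descs = []
    for t in range(0, video.shape[0], 5):               # spatial xy frames
        _, d = sift.detectAndCompute(video[t], None)
        if d is not None:
            descs.append(d)
    for y in range(0, video.shape[1], 20):              # spatio-temporal xt slices
        xt = np.ascontiguousarray(video[:, y, :])
        _, d = sift.detectAndCompute(xt, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs)

def bof_histogram(descs, codebook):
    words = codebook.predict(descs)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()

# codebook = KMeans(n_clusters=400).fit(training_descriptors)
# Each video's normalized histogram is then classified, e.g. with an SVM.
```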