• Title/Summary/Keyword: Feature point extraction

A Novel Face Recognition Algorithm based on the Deep Convolution Neural Network and Key Points Detection Jointed Local Binary Pattern Methodology

  • Huang, Wen-zhun;Zhang, Shan-wen
    • Journal of Electrical Engineering and Technology / v.12 no.1 / pp.363-372 / 2017
  • This paper presents a novel face recognition algorithm that combines a deep convolutional neural network with key point detection and a local binary pattern (LBP) methodology to enhance recognition accuracy. We first propose a modified method for locating key facial feature points that improves on the traditional localization algorithm, so that the original face images are better pre-processed. We then combine grey-level and color information into a composite model of local information. Next, we optimize the multi-layer deep learning network structure, using the Fisher criterion as a reference to adjust the architecture more precisely. Furthermore, we modify the local binary pattern texture descriptor and combine it with the neural network to overcome the drawback that a deep neural network alone does not learn the local characteristics of face images. Simulation results demonstrate that the proposed algorithm is more robust and practical than other state-of-the-art algorithms. It also provides a new paradigm for applying deep learning to face recognition and a basis for further research.
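
A minimal sketch of the classic 3x3 local binary pattern operator referenced above, in Python/NumPy; this illustrates only the standard LBP coding, not the modified descriptor or the deep network proposed in the paper.

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP code for each interior pixel of a grey image."""
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    # Neighbour offsets ordered clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes.astype(np.uint8)

if __name__ == "__main__":
    patch = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    print(lbp_image(patch))        # 6x6 array of LBP codes in [0, 255]
```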

Handwritten Numerals Recognition Using an Ant-Miner Algorithm

  • Phokharatkul, Pisit;Phaiboon, Supachai
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1031-1033 / 2005
  • This paper presents a system for handwritten numeral recognition based on the Ant-miner algorithm (data mining based on ant colony optimization). First, three distinct features (also called attributes) are extracted from each numeral: loop zones, end points, and feature codes. After extraction, the attributes are expressed in the form attribute = value (e.g., EndPoint10 = true). Extraction begins by dividing the numeral into 12 zones, numbered 1-12. The possible values of the Loop-zone attribute in each zone are "true" and "false", where "true" means the zone contains a loop of the numeral. Likewise, an End-point attribute of "true" means the zone contains an end point of the numeral, giving 24 attributes so far. The Feature-code attribute tells how many strokes of the numeral are crossed by a reference line; seven reference lines are used in this experiment, for a total of 31 attributes. All attributes are used by the Ant-miner algorithm to construct classification rules for the 10 numerals. The Ant-miner algorithm is slightly adapted in this experiment to improve the recognition rate. The results show that the system recognizes the entire training set (one thousand samples from 50 people), and on unseen data from 10 people the recognition rate is 98%.
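
A minimal sketch of the zone-based attribute extraction described above, assuming the numeral has already been binarised and thinned to a one-pixel skeleton; the 3x4 zone grid and the single horizontal reference line used here are illustrative choices, not the exact layout of the paper.

```python
import numpy as np

def end_points(skeleton: np.ndarray):
    """Skeleton pixels with exactly one foreground neighbour (stroke ends)."""
    pts = []
    h, w = skeleton.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skeleton[y, x] and skeleton[y - 1:y + 2, x - 1:x + 2].sum() - 1 == 1:
                pts.append((y, x))
    return pts

def zone_of(y, x, h, w, rows=4, cols=3):
    """Map a pixel to one of rows*cols zones, numbered 1..12 row-major."""
    return min(y * rows // h, rows - 1) * cols + min(x * cols // w, cols - 1) + 1

def feature_code(binary: np.ndarray, row: int) -> int:
    """How many separate strokes a horizontal reference line crosses."""
    line = binary[row, :].astype(np.int8)
    return int((np.diff(line) == 1).sum() + (line[0] == 1))

if __name__ == "__main__":
    img = np.zeros((24, 18), dtype=np.uint8)
    img[2:22, 9] = 1                                   # a crude digit "1"
    h, w = img.shape
    print("End-point zones:", {zone_of(y, x, h, w) for y, x in end_points(img)})
    print("Feature code at mid row:", feature_code(img, h // 2))
```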

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video are very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. The system consists of three steps: face detection, facial feature extraction, and feature tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area with a Haar-based classifier. Brightness and color information are used to extract the eye and lip data related to facial expression, and 10 feature points are extracted from the eye and lip areas following the facial animation parameters (FAP) defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
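
A minimal OpenCV sketch of the first two stages described above, skin-pixel detection in YCbCr space followed by Haar-based face verification; the Cb/Cr thresholds and the bundled cascade file are common defaults, not the exact values or classifier used in the paper.

```python
import cv2
import numpy as np

def detect_face(bgr: np.ndarray):
    # 1. Skin-pixel mask in YCbCr space (OpenCV stores it as Y, Cr, Cb).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # 2. Verify candidate regions with a Haar cascade on the skin-masked image.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bitwise_and(gray, gray, mask=skin_mask)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return skin_mask, faces

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")          # any captured video frame
    if frame is not None:
        mask, faces = detect_face(frame)
        print("Face boxes (x, y, w, h):", faces)
```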

Feature Extraction and Fusion for Land-Cover Discrimination with Multi-Temporal SAR Data (다중 시기 SAR 자료를 이용한 토지 피복 구분을 위한 특징 추출과 융합)

  • Park, No-Wook;Lee, Hoonyol;Chi, Kwang-Hoon
    • Korean Journal of Remote Sensing / v.21 no.2 / pp.145-162 / 2005
  • To improve the accuracy of land-cover discrimination in SAR data classification, this paper presents a methodology that includes feature extraction and fusion steps for multi-temporal SAR data. Three features, namely the average backscattering coefficient, temporal variability, and coherence, are extracted from multi-temporal SAR data by considering the temporal behavior of the backscattering characteristics of SAR sensors. Dempster-Shafer theory of evidence (D-S theory) and fuzzy logic are applied to integrate these features effectively. In particular, a feature-driven heuristic approach to mass function assignment is applied in the D-S theory step, and various fuzzy combination operators are tested in the fuzzy logic fusion. In experiments on a multi-temporal Radarsat-1 data set, the features considered in this paper provided complementary information and thus effectively discriminated water, paddy, and urban areas, although it remained difficult to discriminate forest from dry fields. From an information-fusion point of view, D-S theory and the fuzzy combination operators, except the fuzzy Max and Algebraic Sum operators, showed similar land-cover accuracy statistics.
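
A minimal sketch of the per-pixel temporal features named above (average backscattering coefficient and temporal variability) and of two of the fuzzy combination operators, assuming a co-registered stack of backscatter images in dB; the coherence feature and the D-S mass-function assignment are not reproduced here.

```python
import numpy as np

def temporal_features(stack_db: np.ndarray):
    """stack_db: backscatter images in dB with shape (dates, height, width)."""
    mean_backscatter = stack_db.mean(axis=0)      # average sigma-0 per pixel
    temporal_variability = stack_db.std(axis=0)   # change across the dates
    return mean_backscatter, temporal_variability

# Two fuzzy combination operators over per-class membership images in [0, 1].
def fuzzy_min(a, b):
    return np.minimum(a, b)

def algebraic_sum(a, b):
    return a + b - a * b

if __name__ == "__main__":
    stack = np.random.normal(-10.0, 2.0, size=(5, 64, 64))   # 5 acquisition dates
    mean_bs, variability = temporal_features(stack)
    m1 = np.clip((mean_bs + 15.0) / 10.0, 0.0, 1.0)   # toy class memberships
    m2 = np.clip(variability / 4.0, 0.0, 1.0)
    print(fuzzy_min(m1, m2).shape, algebraic_sum(m1, m2).shape)
```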

The Corridor Scene Analysis for a Motorized Wheelchair's Automatic Locomotion (전동휠체어 자동 주행을 위한 복도영상 해석)

  • Moon, Cheol-Hong;Han, Yeong-Hwan;Hong, Seung-Hong
    • Journal of Biomedical Engineering Research / v.15 no.1 / pp.27-34 / 1994
  • In this paper, a method of analyzing a corridor scene for a vehicle's automatic locomotion is presented. In general, the vision system of a vehicle must identify the vehicle's position in its environment. The suggested algorithm determines the base lines of a corridor image by finding the vanishing point, and feature points are then extracted on the base line using a base-line extraction tree. The algorithm is suitable for the self-locomotion of a motorized wheelchair inside a building.
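
A minimal sketch of vanishing-point finding for a corridor image using OpenCV Hough lines, taking a robust median of pairwise line intersections; this is a generic illustration, not the base-line extraction tree of the paper, and the Canny/Hough thresholds are illustrative.

```python
import cv2
import numpy as np
from itertools import combinations

def vanishing_point(gray: np.ndarray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return None
    candidates = []
    for (l1,), (l2,) in combinations(lines[:30], 2):
        r1, t1 = l1
        r2, t2 = l2
        # Each Hough line satisfies x*cos(t) + y*sin(t) = r; intersect the pair.
        A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(A)) < 1e-3:              # nearly parallel lines
            continue
        candidates.append(np.linalg.solve(A, [r1, r2]))
    # Median of the intersections as a robust vanishing-point estimate.
    return tuple(np.median(np.array(candidates), axis=0)) if candidates else None

if __name__ == "__main__":
    img = cv2.imread("corridor.jpg", cv2.IMREAD_GRAYSCALE)
    if img is not None:
        print("Estimated vanishing point:", vanishing_point(img))
```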

Development of Merging Algorithm between 3-D Objects and Real Image for Augmented Reality

  • Kang, Dong-Joong
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.100.5-100 / 2002
  • A core technology for implementing augmented reality is an algorithm that merges 3-D objects of interest with real images. In this paper, we present a 3-D object recognition method that determines the viewing direction from the camera toward the object; this process is the starting point for merging 3-D objects with a real image. Under perspective projection, a line in the image together with the focal point of the camera defines a plane in 3-D space. If the 3-D models were perfect and no errors were introduced during image feature extraction, then the model lines in 3-D space that project onto this image line would lie exactly in this plane. This observa...
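
A minimal sketch of the geometric constraint described above: an image line segment and the camera focal point define an interpretation plane, and a correctly matched 3-D model line should lie approximately in that plane. The pinhole model, the focal length, and the sample coordinates below are illustrative assumptions.

```python
import numpy as np

def interpretation_plane(p1_img, p2_img, f):
    """Unit normal of the plane through the camera centre and an image segment.

    Image points are (u, v) in pixels measured from the principal point; the
    camera centre sits at the origin and the image plane at z = f.
    """
    r1 = np.array([p1_img[0], p1_img[1], f], dtype=float)   # viewing rays
    r2 = np.array([p2_img[0], p2_img[1], f], dtype=float)
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def line_in_plane_error(n, q1, q2):
    """Largest distance of the 3-D line endpoints from the plane n . X = 0."""
    return max(abs(np.dot(n, q1)), abs(np.dot(n, q2)))

if __name__ == "__main__":
    n = interpretation_plane((-40.0, 10.0), (55.0, 12.0), f=800.0)
    err = line_in_plane_error(n, np.array([-0.5, 0.10, 10.0]),
                                 np.array([0.7, 0.12, 10.0]))
    print("Model-line distance from the interpretation plane:", err)
```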

Real-time Lane Violation Detection System using Feature Tracking (특징점 추적을 이용한 실시간 끼어들기 위반차량 검지 시스템)

  • Lee, Hee-Sin;Jeong, Sung-Hwan;Lee, Joon-Whoan
    • The KIPS Transactions: Part B / v.18B no.4 / pp.201-212 / 2011
  • In this paper, we propose a system for detecting lane-violating (cutting-in) vehicles using feature point tracking. The overall algorithm consists of three stages: feature extraction, feature registration and tracking for the target vehicle, and lane-violation detection. Features are extracted from a morphological gradient image, which makes the detection system robust against shadows, weather conditions, headlights, and illumination changes in both day and night. The system shows excellent performance on data captured in daytime, at night, and on rainy nights, with a positive recognition rate of 99.49% and an error rate of 0.51%. It also runs at an average of 91.34 frames per second, which makes real-time processing possible.
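
A minimal OpenCV sketch of the first two stages above: feature extraction on a morphological gradient image, followed by tracking of the registered features with pyramidal Lucas-Kanade optical flow. The corner detector, tracker, file name, and parameter values are illustrative stand-ins rather than the exact components of the paper.

```python
import cv2

KERNEL = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def extract_features(gray):
    # The morphological gradient emphasises object boundaries, which helps keep
    # the detector stable under shadows and illumination changes.
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, KERNEL)
    return cv2.goodFeaturesToTrack(grad, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

def track(prev_gray, next_gray, prev_pts):
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]

if __name__ == "__main__":
    cap = cv2.VideoCapture("road.avi")                 # any roadside video
    ok, frame = cap.read()
    if ok:
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = extract_features(prev)
        ok, frame = cap.read()
        if ok and pts is not None:
            cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            p0, p1 = track(prev, cur, pts)
            print("Tracked", len(p1), "feature points between frames")
```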

The Target Detection and Classification Method Using SURF Feature Points and Image Displacement in Infrared Images (적외선 영상에서 변위추정 및 SURF 특징을 이용한 표적 탐지 분류 기법)

  • Kim, Jae-Hyup;Choi, Bong-Joon;Chun, Seung-Woo;Lee, Jong-Min;Moon, Young-Shik
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.43-52 / 2014
  • In this paper, we propose a target detection method based on image displacement and a classification method based on SURF (Speeded Up Robust Features) feature points and BAS (Beam Angle Statistics) in infrared images. SURF, a typical correspondence-matching method in image processing, has been widely used because it is significantly faster than SIFT (Scale Invariant Feature Transform) while producing similar performance, and most SURF-based object recognition methods consist of a feature point extraction and matching process. The proposed method detects the target area using displacement and classifies the target using the geometry of the SURF feature points. The method was applied to an unmanned target detection/recognition system, and in experiments on virtual and real images it achieved classification performance of approximately 73~85%.
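
A minimal sketch of the SURF feature extraction and correspondence matching that the classification above builds on; SURF is a non-free module, so this assumes an opencv-contrib build with non-free algorithms enabled, and the Hessian threshold and ratio-test value are illustrative. The displacement-based detection and BAS-based geometry classification of the paper are not reproduced.

```python
import cv2

def surf_match(img1_gray, img2_gray, hessian_threshold=400):
    # Requires opencv-contrib built with OPENCV_ENABLE_NONFREE.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp1, des1 = surf.detectAndCompute(img1_gray, None)
    kp2, des2 = surf.detectAndCompute(img2_gray, None)

    # Brute-force matching with Lowe's ratio test to keep distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    return kp1, kp2, good
```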

Study on R-peak Detection Algorithm of Arrhythmia Patients in ECG (심전도 신호에서 부정맥 환자의 R파 검출 알고리즘 연구)

  • Ahn, Se-Jong;Lim, Chang-Joo;Kim, Yong-Gwon;Chung, Sung-Taek
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.10 / pp.4443-4449 / 2011
  • An ECG consists of various electrical signals from the heart, and arrhythmia can be analyzed by detecting the feature points of these signals. Many studies have proposed feature point extraction methods for arrhythmia detection, but their complicated computations make them unsuitable for real-time operation on portable devices. In this paper, R-peaks are extracted using R-R interval and QRS width information from patients. First, low-frequency noise is eliminated with a Butterworth filter, and R-peaks are then extracted using moving averages of the R-R interval and the QRS width. For verification, the R-peaks annotated in the MIT-BIH arrhythmia database were compared with the R-peaks found by the suggested algorithm. The results showed excellent detection of the R-peak feature points, and the computation was confirmed to be efficient.
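
A minimal sketch of the pipeline above: band-limit the ECG with a Butterworth filter, then locate R-peaks while enforcing a refractory period that plays the role of the R-R interval constraint. The pass band, threshold, and minimum peak distance are illustrative values, and the moving-average logic of the paper is simplified to a thresholded peak search.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg: np.ndarray, fs: float):
    # Band-pass filter to suppress baseline wander and high-frequency noise.
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)

    # Emphasise the QRS complex and search for peaks above a threshold,
    # keeping at least ~300 ms between successive detections.
    energy = filtered ** 2
    peaks, _ = find_peaks(energy, height=0.5 * energy.max(),
                          distance=int(0.3 * fs))
    return peaks

if __name__ == "__main__":
    fs = 360.0                                   # MIT-BIH sampling rate
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 21      # crude spiky surrogate signal
    print("Detected R-peak sample indices:", detect_r_peaks(ecg, fs))
```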

Evaluation on Tie Point Extraction Methods of WorldView-2 Stereo Images to Analyze Height Information of Buildings (건물의 높이 정보 분석을 위한 WorldView-2 스테레오 영상의 정합점 추출방법 평가)

  • Kim, Yeji;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.5 / pp.407-414 / 2015
  • Interest points are generally located at pixels where height changes occur, so they can serve as significant pixels for DSM generation and play an important role in producing accurate and reliable matching results. Manual operation is widely used to extract interest points and to match stereo satellite images with them for generating height information, but it is costly and time consuming. In this study, a tie point extraction method using the Harris-affine technique and SIFT (Scale Invariant Feature Transform) descriptors is therefore proposed to analyze the height information of buildings. Interest points on buildings are extracted with the Harris-affine technique, and tie points are collected efficiently with SIFT descriptors, which are scale invariant. A search window is used for each interest point, and the direction of tie point pairs is considered to make tie point extraction more efficient. The tie point pairs estimated by the proposed method were used to analyze the height information of buildings, and the results had RMSE values of less than 2 m compared with the height information estimated by the manual method.
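
A minimal sketch of tie-point collection with SIFT descriptors and a ratio test; it substitutes OpenCV's built-in SIFT detector for the Harris-affine detector of the paper, and the per-point search window and direction constraints are not reproduced.

```python
import cv2
import numpy as np

def tie_points(left_gray, right_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left_gray, None)
    kp2, des2 = sift.detectAndCompute(right_gray, None)

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    pts_left = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_right = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_left, pts_right        # matched tie-point coordinates per image
```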