• Title/Summary/Keyword: Appearance-Based Recognition


A Study on a Biometric Bit Extraction Method for a Cancelable Face Template Based on Helper Data (보조정보에 기반한 가변 얼굴템플릿의 이진화 방법의 연구)

  • Lee, Hyung-Gu;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.83-90 / 2010
  • Cancelable biometrics is a robust and secure biometric recognition method that uses a revocable biometric template in order to prevent possible compromise of the original biometric data. In this paper, we present a new cancelable bit-extraction method for facial data. We use our previous cancelable feature template for the bit extraction. The adopted cancelable template is generated from two different original face feature vectors extracted by two different appearance-based approaches. The elements of each feature vector are re-ordered, and the scrambled features are added. From the added feature, a biometric bit string is extracted using a helper-data-based method. In this technique, the helper data is generated from statistical properties of the added feature vector and can easily be replaced, allowing straightforward revocation. Because the helper data utilizes only partial information of the added feature, the proposed method is more secure than our previous one. The proposed method uses the helper data to reduce feature variance within the same individual and to increase the distinctiveness of bit strings across different individuals, yielding good recognition performance. For the security evaluation, we also consider a scenario in which the system is compromised by an adversary. In our experiments, we analyze the proposed method with respect to performance and security using the Extended Yale B face database.
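The scramble-add-binarize pipeline the abstract describes can be sketched roughly as follows. The toy feature vectors, the permutation seeds, and the zero-mean threshold standing in for the statistical helper data are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def extract_bits(feat_a, feat_b, perm_a, perm_b, helper_mean):
    """Scramble two face feature vectors, add them, and binarize the
    result against helper data (here: a stored per-dimension mean)."""
    combined = feat_a[perm_a] + feat_b[perm_b]   # re-order, then add
    return (combined > helper_mean).astype(np.uint8)

rng = np.random.default_rng(0)
d = 8
# Two feature vectors from two appearance-based extractors (toy values)
fa, fb = rng.normal(size=d), rng.normal(size=d)
# Revocable permutations: replacing these re-issues the template
pa, pb = rng.permutation(d), rng.permutation(d)
helper = np.zeros(d)  # stand-in for helper data from population statistics
bits = extract_bits(fa, fb, pa, pb, helper)
```

Replacing the permutation seeds issues a new template from the same biometric, which is the revocation property the abstract relies on.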

Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction (물체-행동 컨텍스트를 이용하는 확률 그래프 기반 물체 범주 인식)

  • Yoon, Sung-baek;Bae, Se-ho;Park, Han-je;Yi, June-ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2284-2290 / 2015
  • The use of human actions as context for object class recognition is quite effective in enhancing recognition performance despite the large variation in the appearance of objects. We propose an efficient method that integrates human action information into object class recognition using a Bayesian approach based on a simple probabilistic graph model. Our experiments show that using human actions as context information improves object class recognition performance by 8% to 28%.
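The Bayesian combination of appearance and action context can be illustrated with a minimal naive-Bayes-style sketch. The class set, the likelihood values, and the conditional-independence assumption are invented for illustration and are not taken from the paper.

```python
import numpy as np

def posterior(appearance_lik, action_lik, prior):
    """P(class | appearance, action) under a simple graph model where
    appearance and action are conditionally independent given the class."""
    joint = appearance_lik * action_lik * prior
    return joint / joint.sum()

classes = ["cup", "phone", "spray_can"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])
p_app = np.array([0.40, 0.35, 0.25])   # appearance alone is ambiguous
p_act = np.array([0.70, 0.10, 0.20])   # a "drinking" action favors "cup"
post = posterior(p_app, p_act, prior)
```

Action context disambiguates the appearance evidence: the posterior mass on "cup" exceeds what appearance alone supports.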

Towards Effective Entity Extraction of Scientific Documents using Discriminative Linguistic Features

  • Hwang, Sangwon;Hong, Jang-Eui;Nam, Young-Kwang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1639-1658 / 2019
  • Named entity recognition (NER) is an important technique for improving the performance of data mining and big data analytics. In previous studies, NER systems have been employed to identify named-entities using statistical methods based on prior information or linguistic features; however, such methods are limited in that they are unable to recognize unregistered or unlearned objects. In this paper, a method is proposed to extract objects, such as technologies, theories, or person names, by analyzing the collocation relationship between certain words that simultaneously appear around specific words in the abstracts of academic journals. The method is executed as follows. First, the data is preprocessed using data cleaning and sentence detection to separate the text into single sentences. Then, part-of-speech (POS) tagging is applied to the individual sentences. After this, the appearance and collocation information of the other POS tags is analyzed, excluding the entity candidates, such as nouns. Finally, an entity recognition model is created based on analyzing and classifying the information in the sentences.
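The collocation-analysis step, counting which POS tags appear in a window around noun entity candidates, might be sketched as follows. The tag set, window size, and example sentence are illustrative assumptions, not the paper's configuration.

```python
from collections import Counter

def collocation_profile(tagged_sentences, window=2):
    """Count POS tags that co-occur within a window around noun entity
    candidates. Input: sentences as lists of (token, pos) pairs."""
    profile = Counter()
    for sent in tagged_sentences:
        for i, (tok, pos) in enumerate(sent):
            if pos == "NN":  # entity candidate
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i and sent[j][1] != "NN":  # skip other candidates
                        profile[sent[j][1]] += 1
    return profile

sents = [[("We", "PRP"), ("propose", "VB"), ("BERT", "NN"),
          ("for", "IN"), ("tagging", "VBG")]]
prof = collocation_profile(sents)
```

Aggregating such profiles over many abstracts gives the distributional evidence from which an entity recognition model can be trained.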

Face Recognition System Based on Embedded Linux (임베디드 리눅스 기반의 눈 영역 비교법을 이용한 얼굴인식)

  • Bae, Eun-Dae;Kim, Seok-Min;Nam, Boo-Hee
    • Proceedings of the KIEE Conference / 2006.04a / pp.120-121 / 2006
  • In this paper, we design a face recognition system based on embedded Linux, with the aim of recognizing faces more accurately on an embedded platform. First, the contrast of the face image is adjusted with a lighting compensation method, and skin and lip regions are located based on YCbCr values in the compensated image. To take advantage of both feature-based and appearance-based methods, the two are applied to the eyes, the part of the human face with the highest recognition rate. For eye detection, the most important component of face recognition, we compute the horizontal gradient of the face image and its maximum value, and the detected region is resized to fit the stored eye image used for comparison. Feature vectors are then extracted from the resized image using the continuous wavelet transform, and a probabilistic neural network (PNN) decides whether the two images belong to the same person. To minimize the error rate, accuracy is analyzed under rotation and movement of the face. Finally, we present a number of cases to validate the feature-vector extraction algorithm and the accuracy of the comparison method.
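The PNN decision stage can be illustrated with a generic probabilistic neural network classifier: per-class Gaussian kernel density estimates whose averages act as class scores. The kernel width and the toy eye-feature vectors below are assumptions for illustration, not the paper's trained network.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: average a Gaussian kernel over each
    class's training vectors and pick the class with the largest score."""
    scores = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)       # squared distances to class c
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)

# Toy eye-feature vectors for two enrolled persons (illustrative values)
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
pred = pnn_classify(np.array([0.05, 0.0]), X, y)
```

Thresholding the winning score instead of only taking the argmax turns the same machinery into a same-person/different-person verifier.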


Greedy Learning of Sparse Eigenfaces for Face Recognition and Tracking

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.3 / pp.162-170 / 2014
  • Appearance-based subspace models such as eigenfaces have been widely recognized as one of the most successful approaches to face recognition and tracking. The success of eigenfaces mainly has its origins in the benefits offered by principal component analysis (PCA), the representational power of the underlying generative process for high-dimensional noisy facial image data. The sparse extension of PCA (SPCA) has recently received significant attention in the research community. SPCA functions by imposing sparseness constraints on the eigenvectors, a technique that has been shown to yield more robust solutions in many applications. However, when SPCA is applied to facial images, the time and space complexity of PCA learning becomes a critical issue (e.g., real-time tracking). In this paper, we propose a very fast and scalable greedy forward selection algorithm for SPCA. Unlike a recent semidefinite program-relaxation method that suffers from complex optimization, our approach can process several thousands of data dimensions in reasonable time with little accuracy loss. The effectiveness of our proposed method was demonstrated on real-world face recognition and tracking datasets.
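The greedy forward selection idea, growing the support of a sparse eigenvector one coordinate at a time by the gain in the top eigenvalue of the restricted covariance, can be sketched generically as below. The toy data and the stopping rule (a fixed support size k) are assumptions; this outline omits the optimizations that make the paper's algorithm scale to thousands of dimensions.

```python
import numpy as np

def greedy_spca_support(cov, k):
    """Greedy forward selection of a size-k support for the leading sparse
    eigenvector: at each step, add the coordinate that most increases the
    top eigenvalue of the covariance restricted to the support."""
    support = []
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(cov.shape[0]):
            if j in support:
                continue
            idx = support + [j]
            val = np.linalg.eigvalsh(cov[np.ix_(idx, idx)])[-1]
            if val > best_val:
                best_j, best_val = j, val
        support.append(best_j)
    return sorted(support), best_val

rng = np.random.default_rng(1)
# Toy data with strong, correlated variance concentrated in dims 0 and 3
A = rng.normal(size=(200, 5))
A[:, 0] *= 3.0
A[:, 3] = A[:, 0] + 0.1 * rng.normal(size=200)
cov = np.cov(A, rowvar=False)
support, top_eig = greedy_spca_support(cov, k=2)
```

Each step costs one small symmetric eigendecomposition per candidate dimension, which is what makes the forward-selection route cheap compared to semidefinite relaxation.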

FTSnet: A Simple Convolutional Neural Networks for Action Recognition (FTSnet: 동작 인식을 위한 간단한 합성곱 신경망)

  • Zhao, Yulan;Lee, Hyo Jong
    • Annual Conference of KIPS / 2021.11a / pp.878-879 / 2021
  • Most state-of-the-art CNNs for action recognition are based on a two-stream architecture: the RGB-frame stream represents the appearance of an action while the optical-flow stream captures its motion. However, the cost of computing optical flow is very high, which increases action recognition latency. We introduce a design strategy for action recognition inspired by the two-stream network and the teacher-student architecture. Our network has two sub-networks: an optical-flow sub-network as the teacher and an RGB-frame sub-network as the student. In the training stage, we distill features from the teacher as a baseline to train the student sub-network. In the test stage, we use only the student, so latency is reduced because optical flow need not be computed. Our experiments show that this approach has advantages over the two-stream architecture in both speed and performance.
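The training signal in such a teacher-student setup is typically a feature-mimicking loss toward the frozen optical-flow teacher plus a classification loss on the action labels. The NumPy sketch below assumes an equal weighting (alpha=0.5) and random toy features; it illustrates the generic distillation objective, not FTSnet's actual architecture.

```python
import numpy as np

def distill_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """Feature-mimicking MSE toward the (frozen) teacher's features plus
    softmax cross-entropy on the student's action-class logits."""
    mse = ((student_feat - teacher_feat) ** 2).mean()
    # numerically stable softmax cross-entropy
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[np.arange(len(labels)), labels].mean()
    return alpha * mse + (1 - alpha) * ce

rng = np.random.default_rng(0)
s_feat, t_feat = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
logits = rng.normal(size=(4, 3))
labels = np.array([0, 1, 2, 0])
loss = distill_loss(s_feat, t_feat, logits, labels)
```

At test time only the student branch runs, so the expensive optical-flow term disappears from the inference path entirely.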

Multi-Cattle tracking with appearance and motion models in closed barns using deep learning

  • Han, Shujie;Fuentes, Alvaro;Yoon, Sook;Park, Jongbin;Park, Dong Sun
    • Smart Media Journal / v.11 no.8 / pp.84-92 / 2022
  • Precision livestock monitoring promises greater management efficiency for farmers and higher welfare standards for animals. Recent studies on video-based animal activity recognition and tracking have shown promising solutions for understanding animal behavior. To achieve this, surveillance cameras are installed diagonally above the barn in a typical cattle farm setup to monitor the animals constantly. Under these circumstances, tracking individuals requires addressing challenges such as occlusion and similar visual appearance, which are the main causes of track breakage and misidentification of animals. This paper presents a framework for multi-cattle tracking in closed barns with appearance and motion models. To overcome the above challenges, we modify the DeepSORT algorithm to achieve higher tracking accuracy through three contributions. First, we reduce the weight of appearance information. Second, we use an ensemble Kalman filter to predict the random motion of the cattle. Third, we propose a supplementary matching algorithm that compares absolute cattle positions in the barn to reassign lost tracks. The matching algorithm assumes that the number of cattle in the barn is fixed, so the edge of the barn is where new trajectories are most likely to emerge. Experiments are performed on our dataset collected from two cattle farms. Our algorithm achieves 70.37%, 77.39%, and 81.74% on HOTA, AssA, and IDF1, an improvement of 1.53%, 4.17%, and 0.96%, respectively, over the original method.
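The ensemble Kalman filter predict step for the cattle's random motion can be sketched generically as below. The 2-D constant-velocity motion model, the noise scale, and the ensemble size are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def enkf_predict(ensemble, velocity, dt=1.0, q=0.05, rng=None):
    """Ensemble Kalman filter predict step for 2-D positions: propagate
    every ensemble member with the motion model, then add perturbations
    that model the animal's random movement."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(scale=q, size=ensemble.shape)
    return ensemble + dt * velocity + noise

rng = np.random.default_rng(0)
# 50 ensemble members around a cow at (10, 5), moving with velocity (1, 0)
ens = np.array([10.0, 5.0]) + 0.1 * rng.normal(size=(50, 2))
pred = enkf_predict(ens, velocity=np.array([1.0, 0.0]), rng=rng)
```

The ensemble spread carries the position uncertainty forward without linearizing the motion model, which suits erratic animal movement better than a single Gaussian state.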

Robust Face Recognition System using AAM and Gabor Feature Vectors (AAM과 가버 특징 벡터를 이용한 강인한 얼굴 인식 시스템)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Jeon, Seoung-Seon;Kim, Jae-Min;Cho, Seong-Won;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.1-10 / 2007
  • In this paper, we propose a face recognition system using AAM and Gabor feature vectors. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points where the Gabor feature vectors are extracted. However, the localization employed in EBGM is based on Gabor jet similarity and is sensitive to the initial points, and wrong localization of the facial feature points degrades the face recognition rate. AAM is known to be successful at localizing facial feature points. In this paper, we propose a localization method that first roughly estimates the facial feature points using AAM and then refines them with the Gabor jet similarity-based method, using the AAM estimates as initial points, and we build a face recognition system on this localization. Experiments verify that the proposed system using the combined localization performs better than a conventional system that, like EBGM, uses only Gabor jet similarity-based localization.
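The Gabor jet similarity underlying both EBGM and the refinement step is commonly computed as a normalized dot product of the jets' coefficient magnitudes; the sketch below uses that standard magnitude-only form with an assumed 5-scale, 8-orientation jet of 40 complex coefficients.

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Magnitude-based Gabor jet similarity: normalized dot product of the
    coefficient magnitudes (1.0 means identical up to a scale factor)."""
    a, b = np.abs(jet_a), np.abs(jet_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
jet = rng.normal(size=40) + 1j * rng.normal(size=40)  # 5 scales x 8 orientations
same = jet_similarity(jet, 1.5 * jet)   # a scaled copy of the same jet
other = jet_similarity(jet, rng.normal(size=40) + 1j * rng.normal(size=40))
```

Maximizing this similarity over small displacements is what makes the refinement sensitive to its starting point, and why a good AAM initialization helps.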

Learning Similarity between Hand-posture and Structure for View-invariant Hand-posture Recognition (관측 시점에 강인한 손 모양 인식을 위한 손 모양과 손 구조 사이의 학습 기반 유사도 결정 방법)

  • Jang Hyo-Young;Jung Jin-Woo;Bien Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.3 / pp.271-274 / 2006
  • This paper presents a method for deciding the similarity between the shape of a hand posture and its structure in order to improve the performance of vision-based hand-posture recognition. Hand-posture recognition with vision sensors is difficult because the human hand is an object with a high degree of freedom: captured images exhibit complex self-occlusion effects and, even for a single hand posture, varied appearances depending on the viewing direction. Many approaches therefore limit the relative angle between the cameras and the hand, or use multiple cameras. The former restricts the user's operating area, while the latter requires additional consideration of how to merge the results from the individual camera images into a final recognition result. To recognize hand postures, we use both appearance and structural features and learn the similarity between the two types of features.

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.288-296 / 2021
  • Automated face recognition in a runtime environment is gaining more and more importance in the fields of surveillance and urban security. This is a difficult task given the constantly changing image landscape with varying features and attributes. For a system to be beneficial in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition in which the full face is not visible (occlusion). This is a common occurrence, as any person can change his appearance by wearing a scarf or sunglasses, or by merely growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. Although these variations are very common in real-life environments, they have been studied comparatively little in the literature, and only recently have researchers focused on them. Existing state-of-the-art techniques suffer from several limitations, most significantly a low level of usability and poor response time. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are extracted, an incremental learning algorithm (Learn++) selects the more informative features, and a trained stacked autoencoder (SAE) deep learning algorithm then recognizes the face. Overall, the FRS-OCC system introduces algorithms that improve response time to guarantee a benchmark quality of service in any situation. The performance of the proposed system is evaluated on the AR face dataset. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.