• Title/Summary/Keyword: Expression Feature

Search Results: 529

A Study on Secondary School Student's Recognition of Vision-dependent Jump in the Geometry Proof (기하 증명에서 중학생들의 시각의존적 비약 인식에 대한 연구)

  • Kang, JeongGi
    • East Asian mathematical journal
    • /
    • v.30 no.2
    • /
    • pp.223-248
    • /
    • 2014
  • Although a figure serves as a mediator in geometry proofs, arguments that rest on its vision-dependent features are not admissible. This study starts from the problem that, although figures play an important role in geometry proofs, many students do not understand the limits of vision-dependent features in figures. We investigate this problem to understand students' cognitive characteristics and to draw didactical implications. To do this, we assessed a class of middle-school seniors on their awareness of the limits of vision-dependent features. We then interviewed four students who showed a lack of such awareness and two students whose understanding was difficult to judge, and we observed and analyzed the cognitive characteristics of these six students. Based on the analysis, we discuss didactical implications for helping students understand the limits of vision-dependent features in figures.

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.23-31
    • /
    • 2002
  • In this paper, we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited to representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The recognition algorithm has two parts: feature-vector extraction and the recognition process. The feature vector comprises modified 2-D conditional moments based on a Gibbs distribution estimated for the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, recognition experiments on the four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. Experimental results reveal that the proposed scheme achieves a recognition rate above 95%.

  • PDF
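The recognition phase above uses a discrete left-right HMM. As a minimal illustrative sketch (not the authors' implementation; all model parameters below are made up), the forward algorithm scores a quantized observation sequence under a left-right transition structure:

```python
# Minimal sketch of scoring a discrete observation sequence with a
# left-right HMM, as in the recognition phase described above.
# All parameters here are illustrative, not from the paper.

def forward_prob(obs, pi, A, B):
    """Forward algorithm: P(obs | model) for a discrete HMM."""
    n_states = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    # Induction over the remaining observations
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][o]
                 for j in range(n_states)]
    return sum(alpha)

# A 3-state left-right HMM: transitions only stay put or move forward.
pi = [1.0, 0.0, 0.0]
A = [[0.6, 0.4, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
# Two discrete symbols (e.g., quantized feature-vector codes).
B = [[0.9, 0.1],
     [0.5, 0.5],
     [0.1, 0.9]]

score = forward_prob([0, 0, 1, 1], pi, A, B)
```

Classification would pick the expression whose HMM gives the highest such score for the observed sequence.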

Feature-Point Extraction by Dynamic Linking Model based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • 신영숙
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.1
    • /
    • pp.11-16
    • /
    • 2003
  • This paper extracts the edges of the main facial components from facial expression images using a Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts low-dimensional representative feature points from the edges extracted from a neutral face. The feature points of the neutral face are used as a template to extract the feature points of facial expression images. To match each feature point on an expression face against the corresponding feature point on the neutral face point to point, a dynamic linking model is applied in two steps, called coarse mapping and fine mapping. This paper thus presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results were applied to automatic feature extraction in dimension-based facial expression recognition [1].

  • PDF
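The FCM step above reduces many edge points to a few representative feature points. A minimal 1-D sketch of fuzzy C-means (illustrative data and initialization, not the paper's setup):

```python
# Minimal sketch of fuzzy C-means (FCM) on 1-D points, the clustering
# step used above to reduce edge pixels to representative feature points.
# Data, initialization, and parameters are illustrative.

def fcm(points, n_clusters, m=2.0, iters=50):
    """Return cluster centers found by fuzzy C-means."""
    # Crude but adequate initialization for this 1-D, 2-cluster demo.
    centers = [min(points), max(points)]
    for _ in range(iters):
        # Membership update: u_ij depends on relative distances to centers.
        u = []
        for x in points:
            d = [abs(x - c) + 1e-9 for c in centers]  # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[k]) ** (2 / (m - 1))
                                for k in range(n_clusters))
                      for i in range(n_clusters)])
        # Center update: fuzzy-weighted mean of all points.
        centers = [sum((u[j][i] ** m) * points[j] for j in range(len(points)))
                   / sum(u[j][i] ** m for j in range(len(points)))
                   for i in range(n_clusters)]
    return sorted(centers)

# Two well-separated groups of 1-D "edge positions".
pts = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
centers = fcm(pts, 2)
```

The returned centers play the role of the low-dimensional representative feature points extracted from the neutral-face edges.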

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition (얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습)

  • Kang, Hyunwoo;Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.5
    • /
    • pp.748-757
    • /
    • 2017
  • To recognize facial expressions, good features that can express them are essential, as is finding the characteristic regions where expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition, together with a method for selecting the directional LBP operation and the feature regions for expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions found through AdaBoost learning. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
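The directional LBP features above extend the basic LBP operator, which encodes each pixel as an 8-bit number by thresholding its eight neighbors against the center value. A minimal sketch of that base operator (the directional variant and the AdaBoost-learned factors are not reproduced here):

```python
# Minimal sketch of the basic LBP operator that the directional LBP
# features above build on. Each pixel becomes an 8-bit code: one bit
# per neighbor, set when the neighbor is >= the center value.

def lbp_code(patch):
    """LBP code for the center pixel of a 3x3 grayscale patch."""
    center = patch[1][1]
    # Neighbors in clockwise order, starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << i for i, p in enumerate(neighbors) if p >= center)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)
```

A histogram of such codes over an image region is the usual LBP feature vector fed to a classifier.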

Searching for Optimal Ensemble of Feature-classifier Pairs in Gene Expression Profile using Genetic Algorithm (유전알고리즘을 이용한 유전자발현 데이타상의 특징-분류기쌍 최적 앙상블 탐색)

  • 박찬호;조성배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.525-536
    • /
    • 2004
  • A gene expression profile is numerical data on gene expression levels in an organism, measured on a microarray. In general, each tissue type shows different expression levels in its related genes, so diseases can be classified from gene expression profiles. Because not all genes are related to a disease, the relevant genes must be selected (feature selection) and then classified properly. This paper proposes a GA-based method for searching for an optimal ensemble of feature-classifier pairs, composed from seven feature selection methods (based on correlation, similarity, and information theory) and six representative classifiers. In experiments with leave-one-out cross-validation on two cancer-related gene expression profiles, we found ensembles that perform much better than every individual feature-classifier pair on the Lymphoma and Colon datasets.
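The search above evolves candidate ensembles over the 7 × 6 feature-classifier pairs. A toy sketch of such a GA, where a chromosome is a bit string selecting which pairs join the ensemble (the fitness function below is a made-up stand-in for the paper's leave-one-out ensemble accuracy):

```python
# Toy sketch of a GA over feature-classifier pairs, in the spirit of
# the search described above. The fitness function is an illustrative
# stand-in, not the paper's leave-one-out ensemble accuracy.
import random

PAIRS = 7 * 6  # seven feature-selection methods x six classifiers

def fitness(chromosome):
    # Stand-in objective: reward picking pairs from an arbitrary
    # "good" subset, with a small penalty per selected pair.
    good = set(range(0, PAIRS, 5))
    chosen = {i for i, bit in enumerate(chromosome) if bit}
    return len(chosen & good) - 0.1 * len(chosen)

def evolve(pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(PAIRS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(PAIRS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # point mutation
                j = rng.randrange(PAIRS)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting, evaluating a chromosome would mean training the selected pairs and measuring the ensemble's cross-validated accuracy.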

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.2
    • /
    • pp.89-94
    • /
    • 2016
  • This paper proposes a facial expression recognition approach using SIFT features and an SVM classifier. SIFT is generally employed as a feature descriptor at key-points in object recognition. This paper instead applies the SIFT descriptor as a feature vector for facial expression recognition: facial features are extracted by applying the SIFT descriptor to each sub-block of the image, without a key-point detection step, and recognition is performed with an SVM classifier. Performance was evaluated against binary-pattern feature-based approaches such as LBP and LDP, using the CK and JAFFE facial expression databases. The experimental results show that the proposed SIFT-based method improves performance by 6.06% and 3.87% over the previous approaches on the CK and JAFFE databases, respectively.
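The key idea above is computing SIFT-style descriptors on a fixed grid of sub-blocks rather than at detected key-points. A simplified stand-in (a plain gradient-orientation histogram per sub-block, without SIFT's Gaussian weighting or trilinear interpolation):

```python
# Simplified stand-in for the dense, sub-block descriptor idea above:
# one gradient-orientation histogram per fixed sub-block, concatenated
# into a single feature vector. Not a full SIFT implementation.
import math

def orientation_histogram(block, bins=8):
    """8-bin histogram of gradient orientations over one sub-block."""
    h = [0.0] * bins
    rows, cols = len(block), len(block[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            dx = block[y][x + 1] - block[y][x - 1]
            dy = block[y + 1][x] - block[y - 1][x]
            mag = math.hypot(dx, dy)
            angle = math.atan2(dy, dx) % (2 * math.pi)
            h[int(angle / (2 * math.pi) * bins) % bins] += mag
    return h

def dense_descriptor(image, block_size=4):
    """Concatenate per-sub-block histograms into one feature vector."""
    feat = []
    for by in range(0, len(image) - block_size + 1, block_size):
        for bx in range(0, len(image[0]) - block_size + 1, block_size):
            block = [row[bx:bx + block_size]
                     for row in image[by:by + block_size]]
            feat.extend(orientation_histogram(block))
    return feat

# An 8x8 "image" whose vertical edges fall inside each 4x4 sub-block.
img = [[0, 0, 9, 9, 0, 0, 9, 9] for _ in range(8)]
vec = dense_descriptor(img)  # 4 sub-blocks x 8 bins = 32 values
```

The resulting vector would then be the input to the SVM classifier.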

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.667-674
    • /
    • 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing them effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion-energy analysis for recognizing facial expressions. Our method optimizes the information-gain heuristic of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information-gain heuristic of the ID3 tree, to represent the geometric face model. Feature extraction proceeds as follows: features are first detected and then carefully "selected." Feature "selection" means finding the features with high variability, distinguishing them from those with low variability so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature: its motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First, it is simple but effective: it reliably estimates the expressive facial features by distinguishing high-variability features from low-variability ones. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion-energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, our method gives reliable results, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.

  • PDF
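The feature set above is chosen by the information-gain heuristic of the ID3 tree. A minimal sketch of that heuristic on made-up binary facial-motion features:

```python
# Minimal sketch of the ID3 information-gain heuristic used above to
# pick a minimal set of expressive facial features. Feature names,
# samples, and labels are illustrative.
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(samples, labels, feature):
    """Reduction in label entropy after splitting on one feature."""
    base = entropy(labels)
    splits = {}
    for sample, label in zip(samples, labels):
        splits.setdefault(sample[feature], []).append(label)
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in splits.values())
    return base - remainder

# Two binary facial-motion features vs. an expression label:
# "brow_raised" predicts the label perfectly, "mouth_open" does not.
samples = [{"brow_raised": 1, "mouth_open": 0},
           {"brow_raised": 1, "mouth_open": 1},
           {"brow_raised": 0, "mouth_open": 0},
           {"brow_raised": 0, "mouth_open": 1}]
labels = ["surprise", "surprise", "neutral", "neutral"]

gain_brow = information_gain(samples, labels, "brow_raised")
gain_mouth = information_gain(samples, labels, "mouth_open")
```

ID3 greedily splits on the feature with the highest gain, which is how a minimal discriminative feature set falls out of the tree construction.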

Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference
    • /
    • 2004.05a
    • /
    • pp.132-134
    • /
    • 2004
  • Emotion recognition between humans draws on a composite of features such as the face, voice, and gestures. Among them, the face reveals emotional expression most clearly. Humans express and recognize emotion using the complex and varied features of the face. This paper proposes hybrid feature extraction for emotion recognition from facial expressions. The hybrid feature extraction imitates the human emotion-recognition system by combining geometric feature-based extraction with a color-distribution histogram; by extracting many features of a facial expression, it can perform emotion recognition robustly.

  • PDF

New Rectangle Feature Type Selection for Real-time Facial Expression Recognition (실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법)

  • Kim Do Hyoung;An Kwang Ho;Chung Myung Jin;Jung Sung Uk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.2
    • /
    • pp.130-137
    • /
    • 2006
  • In this paper, we propose a method of selecting new types of rectangle features suitable for facial expression recognition. The basic concept is similar to Viola's approach to face detection, but instead of the previous Haar-like features we choose rectangle features for facial expression recognition from among all possible rectangle types in a 3×3 matrix form, using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared with one using the previous rectangle features. The simulation and experimental results show that the proposed approach performs better in facial expression recognition.
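Rectangle features in Viola-style frameworks are evaluated quickly via an integral image, which yields any rectangle sum in four lookups. A minimal sketch of that machinery (illustrative data; the proposed 3×3 feature types themselves are not reproduced here):

```python
# Minimal sketch of the machinery behind Viola-style rectangle
# features: an integral image makes any rectangle sum (and hence any
# rectangle feature value) a four-lookup operation.

def integral_image(img):
    """ii[y][x] = sum of img over rows 0..y-1 and columns 0..x-1."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
s = rect_sum(ii, 1, 1, 2, 2)  # sum of the bottom-right 2x2 block
```

A rectangle feature value is then just a signed combination of such sums, e.g. one sub-rectangle's sum minus its neighbor's.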

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.784-794
    • /
    • 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry, and we analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression databases. The better classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
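A sketch of the LDP code construction described above, using the standard Kirsch edge masks for the eight directional responses and setting the bits of the k strongest responses (k = 3 is the common choice); the image data are illustrative:

```python
# Sketch of the LDP code described above: convolve a pixel's 3x3
# neighborhood with the eight Kirsch edge masks, then set the bits of
# the k strongest absolute responses. Illustrative data; k = 3 is the
# commonly used choice.

KIRSCH = [
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],   # east
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],   # north-east
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],   # north
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],   # north-west
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],   # west
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],   # south-west
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],   # south
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],   # south-east
]

def ldp_code(patch, k=3):
    """LDP code for the center pixel of a 3x3 grayscale patch."""
    responses = [abs(sum(mask[i][j] * patch[i][j]
                         for i in range(3) for j in range(3)))
                 for mask in KIRSCH]
    top = sorted(range(8), key=lambda i: responses[i], reverse=True)[:k]
    return sum(1 << i for i in top)

# A bright vertical edge on the right: the east response dominates.
patch = [[10, 10, 90],
         [10, 10, 90],
         [10, 10, 90]]
code = ldp_code(patch)  # an 8-bit code with exactly k bits set
```

The descriptor for an image patch is the histogram of these codes, which is what the PCA/AdaBoost reduction and the classifiers operate on.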