• Title/Summary/Keyword: Facial State Vector

Robust Facial Expression Recognition Based on Signed Local Directional Pattern (Signed Local Directional Pattern을 이용한 강력한 얼굴 표정인식)

  • Ryu, Byungyong;Kim, Jaemyun;Ahn, Kiok;Song, Gihun;Chae, Oksam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.6
    • /
    • pp.89-101
    • /
    • 2014
  • In this paper, we propose a new local micro pattern, the Signed Local Directional Pattern (SLDP). SLDP uses edge information to represent the texture of the face, producing a more discriminating and efficient code than other state-of-the-art methods. Each SLDP micro pattern is encoded by the sign and the major directions in which the maximum edge responses occur, which allows it to distinguish among similar edge patterns that have different intensity transitions. We divide the face image into several regions and compute the distribution of SLDP codes in each region; each distribution describes the features of its region, and these features are concatenated into a feature vector. We carried out facial expression recognition with these feature vectors and an SVM (Support Vector Machine) on the Cohn-Kanade and JAFFE databases. SLDP shows better classification accuracy than other existing methods.
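
The abstract outlines a generic pipeline: directional edge responses are encoded into a per-pixel code, the face is split into regions, per-region code histograms are concatenated, and an SVM classifies the resulting vector. The sketch below illustrates that pipeline under stated assumptions; the Kirsch-style compass masks, the choice of two major directions, the way the sign enters the code, and the 7×6 region grid are illustrative guesses, not the paper's exact SLDP encoding.

```python
# A minimal sketch, in NumPy/SciPy, of the pipeline the abstract describes:
# directional edge responses -> per-pixel signed directional code ->
# per-region code histograms -> concatenated feature vector.
# The compass masks, the top-2 direction choice, how the sign enters the code,
# and the region grid are assumptions, NOT the paper's exact SLDP definition.
import numpy as np
from scipy.ndimage import convolve


def compass_masks():
    """Eight Kirsch-style 3x3 compass masks, generated by rotating the border
    elements of a base mask (an assumed stand-in for the paper's edge masks)."""
    base = np.array([[5.0, 5.0, 5.0],
                     [-3.0, 0.0, -3.0],
                     [-3.0, -3.0, -3.0]])
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    border = np.array([base[i, j] for i, j in idx])
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(idx, np.roll(border, k)):
            m[i, j] = v
        masks.append(m)
    return masks


def sldp_codes(image, k=2):
    """Per-pixel code built from the k directions with the largest |edge
    response| plus the sign of the strongest response (assumed encoding)."""
    resp = np.stack([convolve(image.astype(float), m) for m in compass_masks()])
    order = np.argsort(-np.abs(resp), axis=0)            # strongest first
    top = np.sort(order[:k], axis=0)                     # k major directions
    sign = (np.take_along_axis(resp, order[:1], axis=0)[0] >= 0).astype(int)
    return (top[0] * 8 + top[1]) * 2 + sign              # integer code label


def sldp_feature(image, grid=(7, 6), n_codes=8 * 8 * 2):
    """Split the face into a grid of regions, histogram the codes per region,
    and concatenate the normalized histograms into one feature vector."""
    codes = sldp_codes(image)
    feats = []
    for rows in np.array_split(np.arange(codes.shape[0]), grid[0]):
        for cols in np.array_split(np.arange(codes.shape[1]), grid[1]):
            block = codes[np.ix_(rows, cols)].ravel()
            hist, _ = np.histogram(block, bins=np.arange(n_codes + 1))
            feats.append(hist / block.size)
    return np.concatenate(feats)


# Usage sketch: an SVM over these vectors, as in the paper's experiments.
# train_faces / train_labels would come from Cohn-Kanade or JAFFE face crops.
# from sklearn.svm import SVC
# clf = SVC(kernel="rbf").fit([sldp_feature(f) for f in train_faces], train_labels)
```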

Comparing automated and non-automated machine learning for autism spectrum disorders classification using facial images

  • Elshoky, Basma Ramdan Gamal;Younis, Eman M.G.;Ali, Abdelmgeid Amin;Ibrahim, Osman Ali Sadek
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.613-623
    • /
    • 2022
  • Autism spectrum disorder (ASD) is a developmental disorder associated with cognitive and neurobehavioral impairments. It affects a person's behavior and performance, including verbal and non-verbal communication in social interactions. Early screening and diagnosis of ASD are essential and helpful for early educational planning and treatment, the provision of family support, and the timely provision of appropriate medical support for the child. Thus, developing automated methods for diagnosing ASD is becoming an essential need. Herein, we investigate various machine learning methods for building predictive models that diagnose ASD in children from facial images. To achieve this, we used an autistic children dataset containing 2936 facial images of children with autism and typically developing children. We applied classical machine learning methods, such as support vector machines and random forests, as well as deep-learning methods and a state-of-the-art approach, automated machine learning (AutoML). We compared the results obtained from these techniques and found that AutoML achieved the highest performance, approximately 96% accuracy, via Hyperopt and the tree-based pipeline optimization tool (TPOT). Furthermore, the AutoML methods enabled us to find the best parameter settings easily, without any human effort for feature engineering.
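
The comparison described above is straightforward to reproduce in outline: fit classical models and an AutoML pipeline search on the same features and compare held-out accuracy. Below is a hedged Python sketch of that protocol; the feature representation, the train/test split, and every hyperparameter are assumptions, and TPOT stands in for the paper's tree-based pipeline optimization step (the reported ~96% accuracy depends on the authors' actual setup and data).

```python
# A hedged sketch of the comparison protocol: classical models (SVM, random
# forest) versus an AutoML pipeline search (TPOT) on the same facial-image
# features. Feature extraction, the split, and all hyperparameters are
# assumptions; the paper's exact setup is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from tpot import TPOTClassifier  # tree-based pipeline optimization tool


def compare_models(X, y, seed=0):
    """X: (n_samples, n_features) image features; y: binary labels (ASD / typical)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed
    )
    models = {
        "svm": SVC(kernel="rbf", C=1.0),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=seed),
        "automl_tpot": TPOTClassifier(generations=5, population_size=20,
                                      random_state=seed, verbosity=0),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)                 # same split for every method
        scores[name] = model.score(X_te, y_te)  # held-out accuracy
    return scores


# Usage sketch (load_face_features is hypothetical; the paper's dataset has
# 2936 facial images of autistic and typically developing children):
# X, y = load_face_features()   # e.g. resized grayscale crops, flattened
# print(compare_models(X, y))
```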

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.3
    • /
    • pp.9-16
    • /
    • 2007
  • This paper describes a methodology that distributes high-dimensional facial motion data onto a 2-dimensional plane using the Isomap algorithm, together with a user interface technique for controlling facial expressions by selecting them while the user navigates this space in real time. The Isomap algorithm proceeds in three steps. First, define the adjacent expressions of each expression datum; adjacency is determined by the smallest adjacency distance, measured with the Pearson Correlation Coefficient. Second, compute the manifold distance between expressions and compose the expression space; the manifold distance between any two expressions is the shortest path between them, computed with Floyd's algorithm. Third, materialize the multi-dimensional expression space with Multidimensional Scaling and project it onto a 2-dimensional plane. Users can control the facial expressions of a 3-dimensional avatar through the user interface while navigating the 2-dimensional space in real time.
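
The three steps listed in the abstract (Pearson-correlation adjacency, Floyd shortest-path manifold distances, MDS projection to a plane) map directly onto standard NumPy/SciPy/scikit-learn calls. The sketch below follows those steps; the k-nearest-neighbour adjacency rule, k = 6, and the assumption that the resulting graph is connected are illustrative choices, not details from the paper.

```python
# A minimal sketch of the abstract's three steps: (1) adjacency between
# expression frames from a Pearson-correlation distance, (2) manifold
# (geodesic) distances via the Floyd-Warshall shortest-path algorithm,
# (3) Multidimensional Scaling down to a 2-D plane. The k-nearest-neighbour
# rule, k = 6, and a connected adjacency graph are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from sklearn.manifold import MDS


def expression_plane(frames, k=6):
    """frames: (n_frames, n_dims) facial motion vectors -> (n_frames, 2) layout."""
    # Step 1: Pearson correlation between every pair of expression frames;
    # highly correlated expressions get a small adjacency distance.
    dist = 1.0 - np.corrcoef(frames)

    # Keep only each frame's k nearest neighbours as graph edges (np.inf = no edge).
    n = dist.shape[0]
    graph = np.full((n, n), np.inf)
    for i in range(n):
        nearest = np.argsort(dist[i])[1:k + 1]      # skip the frame itself
        graph[i, nearest] = dist[i, nearest]
        graph[nearest, i] = dist[i, nearest]        # keep the graph symmetric

    # Step 2: manifold distance = shortest path through the adjacency graph.
    manifold = floyd_warshall(graph, directed=False)

    # Step 3: project the precomputed manifold distances onto a 2-D plane,
    # which the avatar interface can then navigate in real time.
    return MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(manifold)
```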