• Title/Summary/Keyword: Image Expression

Search results: 1,342

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.23-31
    • /
    • 2002
  • In this paper, we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited to representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The recognition algorithm consists of two parts: feature-vector extraction and the recognition process. The feature vectors comprise modified 2-D conditional moments based on a Gibbs distribution estimated for the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing four universal expressions (anger, fear, happiness, surprise) were conducted on facial image sequences on a workstation. The experimental results reveal that the proposed scheme achieves a high recognition rate of over 95%.
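
As a rough illustration of the moment-based feature idea (not the paper's Gibbs-distribution formulation), the following sketch computes translation- and scale-normalized 2-D central moments of a face image in Python; the chosen moment orders and the placeholder frame are assumptions for illustration only.

```python
# Minimal sketch: translation- and scale-normalized 2-D central moments of a
# grayscale face image, a stand-in for the "modified 2-D conditional moments"
# described in the abstract. Moment orders below are illustrative assumptions.
import numpy as np

def central_moment_features(img, orders=((2, 0), (0, 2), (1, 1), (2, 1), (1, 2))):
    img = img.astype(np.float64)
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    # The centroid removes dependence on translation.
    x_bar = (xs * img).sum() / total
    y_bar = (ys * img).sum() / total
    feats = []
    for p, q in orders:
        mu_pq = (((xs - x_bar) ** p) * ((ys - y_bar) ** q) * img).sum()
        # Dividing by mu_00^(1 + (p+q)/2) removes dependence on scale.
        feats.append(mu_pq / total ** (1 + (p + q) / 2.0))
    return np.array(feats)

# One feature vector per frame of an expression sequence; a discrete
# left-right HMM (e.g., from the hmmlearn package) could then model the
# sequence, as in the recognition phase described above.
frame = np.random.rand(64, 64)  # placeholder for a face image
print(central_moment_features(frame))
```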


Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.173-188
    • /
    • 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
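
For readers who want to experiment with this general pipeline, the sketch below combines OpenCV's stock frontal-face Haar cascade, uniform LBP histograms from scikit-image, and an RBF-kernel SVM from scikit-learn. The fixed sub-windows that stand in for the eye/nose/mouth detectors, the LBP parameters, and the classifier settings are assumptions, not the authors' configuration.

```python
# Rough sketch of a region-based LBP + RBF-SVM pipeline. Only the face
# cascade is used here; region crops and parameters are placeholders.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbours / radius (assumed values)

def lbp_histogram(gray_region):
    lbp = local_binary_pattern(gray_region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_feature(gray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = cascade.detectMultiScale(gray, 1.3, 5)[0]  # first detected face
    face = gray[y:y + h, x:x + w]
    # Crude stand-ins for the eye, nose, and mouth regions: fixed horizontal bands.
    regions = [face[: h // 2], face[h // 3: 2 * h // 3], face[h // 2:]]
    return np.concatenate([lbp_histogram(r) for r in regions])

# With X holding one concatenated histogram per image and y the labels:
# clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
```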

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image, and two deep convolutional neural networks are used to extract the temporal-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. Multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
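
The fusion stage can be illustrated in isolation. In the sketch below, a spatial-stream feature vector and a temporal-stream (optical-flow) feature vector are fused element-wise and passed to an SVM; the two CNN feature extractors are replaced by random placeholders, and all dimensions and labels are assumptions.

```python
# Minimal sketch of multiplicative (element-wise) feature fusion followed by
# an SVM classifier. CNN features are simulated with random data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, dim = 200, 128

spatial_feats = rng.normal(size=(n_clips, dim))   # per-clip spatial CNN features (placeholder)
temporal_feats = rng.normal(size=(n_clips, dim))  # per-clip optical-flow CNN features (placeholder)
labels = rng.integers(0, 6, size=n_clips)         # six basic expressions (assumed)

fused = spatial_feats * temporal_feats            # multiplicative feature fusion

clf = SVC(kernel="rbf").fit(fused, labels)
print(clf.score(fused, labels))                   # sanity check on the training data
```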

A Study on Space Expression According to the Production Characteristics of Reflex Media (영상미디어 연출 특성에 따른 공간 표현에 관한 연구)

  • Yoo Jae-Yeup
    • Korean Institute of Interior Design Journal
    • /
    • v.13 no.6
    • /
    • pp.175-183
    • /
    • 2004
  • As the information-oriented society progresses, the image, as a medium for information delivery and a means of artistic communication, exerts a growing influence on human beings in space. These influences appear as expressive characteristics of the image, such as the reproduction of reality and unreality of the real world, synchrony in the expression of time, visual formality, signs, and the transmission of meaning. To this end, the investigator examined the meaning of the image in a space, taking into consideration the interrelationship of image, space, and human beings. As the study findings show, the expressive characteristics of the image in space produce such visual effects as a space in which pictorial formality and objects exist, in which mutual understanding through communication exists, and which realizes an immaterial membrane in the aspects of time and space, according to the electronic light, color, and formation of the image media. In addition, it became clear that these characteristics can be staged in various circumstances by interactively constructing the relationship between an object and a point in time, with bidirectional communication achieved through the combination of technology and art. This suggests that the image develops as a form of sensory communication via the interaction of space and human beings.

The Aspects of Sex Identity Expression in Contemporary Men's Fashion (현대 남성패션에 나타난 성 정체성의 표현양상)

  • 송명진;채금석
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.25 no.2
    • /
    • pp.327-338
    • /
    • 2001
  • The purpose of this thesis is to study the aspects of sex identity in contemporary men's fashion expressed through sexual image and taste in the second half of the twentieth century. The aspects of sex identity expression in contemporary men's fashion can be classified by image, namely homosexual, heroic, bisexual, and fetish. 1. The homosexual image has shown a tendency to emphasize masculinity since the 1950s-60s. It can be found in the "cowboy costume," which is typical of traditional American fashion, and the jeans and underwear fashion expressed by muscular men has homosexual characteristics that contain narcissism. 2. Based on men's traditional gender role, the heroic image emphasizes men's physical characteristics and expresses a tough and aggressive masculine beauty in men's suits freed from authority and formality. 3. The bisexual image denied the division of gender roles by costume and destroyed the traditional sex model by resolutely applying items of women's costume, such as skirts, to men's fashion. 4. The fetish image is similar to the bisexual image in that women's costume is worn, but differs in that it expresses sexual desire or fantasy. It is expressed through brilliant color, leather and metal ornaments, and sensual elements of women that emphasize the "body." This reflects the sex identity of contemporary men who want a more sensible and free life.


Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.754-759
    • /
    • 2007
  • In this paper, we propose the Bi-Modal Sensor Fusion Algorithm, an emotion recognition method that can classify four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features, without language features, and classify the emotional pattern using a neural network. We also select features of the mouth, eyes, and eyebrows from the facial image, and the extracted feature vectors are reduced to low-dimensional feature vectors by Principal Component Analysis (PCA). Finally, we propose a method to fuse the emotion recognition results obtained from the facial image and the speech signal.
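
A loose sketch of the bi-modal idea follows: PCA-reduced facial features and acoustic features each drive a small neural network, and the two probability vectors are fused, here by simple averaging. The feature dimensions, network sizes, and fusion rule are assumptions rather than the paper's settings.

```python
# Sketch of decision-level fusion of a facial-feature classifier (after PCA)
# and a speech-feature classifier. All data here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n, n_classes = 300, 4                         # happy, sad, angry, surprise
face_raw = rng.normal(size=(n, 60))           # mouth/eye/eyebrow measurements (placeholder)
speech = rng.normal(size=(n, 20))             # acoustic features (placeholder)
y = rng.integers(0, n_classes, size=n)

face = PCA(n_components=10).fit_transform(face_raw)   # low-dimensional facial vectors

face_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(face, y)
speech_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(speech, y)

# Fuse the two modalities by averaging their class probabilities.
fused_prob = (face_net.predict_proba(face) + speech_net.predict_proba(speech)) / 2
pred = fused_prob.argmax(axis=1)
print((pred == y).mean())
```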

Relationships between preferences of sensibility expression factors for utilized fabrics and preferences of fashion images (패션소재의 감성표현요소 선호도와 패션이미지 선호도의 관련성)

  • Kim, Yeo Won;Park, Yong;Choi, Jong Myoung
    • The Research Journal of the Costume Culture
    • /
    • v.24 no.1
    • /
    • pp.27-40
    • /
    • 2016
  • This study investigated the preference of sensibility expression factors regarding fashion materials, such as the color, pattern, and texture of the fabric. Moreover, this study analyzed the relationship between the preference of sensibility expression factors and the preference of fashion images by identifying the preference of fashion images. The survey subjects were 312 women ranging in age from 20 to 40 years, and a questionnaire was used as the measurement tool. First, this study performed a factor analysis on the preference of sensibility expression factors of fashion materials. In regard to color preference, it considered color tones such as light, moderate, dark, and vivid tones. In regard to pattern preference, it examined geometric, floral, animal-skin, check, and symbolic patterns. In regard to texture preference, it assessed roughness, luster, flatness, and lightness. Second, this study performed a factor analysis on the preference of fashion images, examining five factors: dignity, uniqueness, femininity, activity, and simplicity. Third, this study analyzed the effects of the preference of sensibility expression factors of fashion materials on the preference of fashion images. As a result, color preference was related to image preferences associated with dignity, femininity, and simplicity, whereas pattern preference was related to the images of uniqueness, femininity, activity, and simplicity. Moreover, texture preference was related to the images of dignity, uniqueness, femininity, and activity.

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.806-813
    • /
    • 2010
  • Human facial expressions convey human emotion most exactly, so they can be used as an efficient tool for delivering a person's intention to a computer. For fast and exact recognition of a facial expression in a 2D image, this paper proposes a new method that integrates a discrete AdaBoost classification algorithm and a neural network based recognition algorithm. In the first step, the AdaBoost algorithm finds the position and size of a face in the input image. Second, the detected face image is input into five AdaBoost strong classifiers, each of which has been trained for one facial expression. Finally, a neural network based recognition algorithm, trained with the outputs of the AdaBoost strong classifiers, determines the final facial expression. The proposed algorithm achieves real-time operation and enhanced accuracy by combining the speed and accuracy of the AdaBoost classification algorithm with the reliability of the neural network based recognition algorithm. The proposed algorithm recognizes five facial expressions, namely neutral, happiness, sadness, anger, and surprise, and achieves 86-95% accuracy in real time, depending on the expression type.
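
The two-stage structure can be approximated as follows: one binary AdaBoost classifier per expression produces a confidence score, and a small neural network maps the five scores to the final label. scikit-learn classifiers on placeholder features stand in for the paper's Haar-feature strong classifiers; all sizes are assumptions.

```python
# Sketch: per-expression AdaBoost scores combined by a neural network.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
expressions = ["neutral", "happiness", "sadness", "anger", "surprise"]
X = rng.normal(size=(400, 50))                  # placeholder face-image features
y = rng.integers(0, len(expressions), size=400)

# One binary AdaBoost "strong classifier" per expression (one-vs-rest).
boosters = [AdaBoostClassifier(n_estimators=50).fit(X, (y == k).astype(int))
            for k in range(len(expressions))]
scores = np.column_stack([b.decision_function(X) for b in boosters])

# A neural network trained on the five AdaBoost scores picks the final label.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(scores, y)
print(net.score(scores, y))
```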

An Algorithm of the Sketch Effect on an Image (영상의 스케치 효과 알고리즘)

  • 김봉민;김진헌
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2000.11a
    • /
    • pp.163-166
    • /
    • 2000
  • This paper presents an image processing algorithm that gives a sketch effect to a real image taken with a CCD camera. The essence of the technique is the expression of shading by line touches. Several line patterns are developed to emulate pencil drawings, charcoal (fusain) drawings, and Indian ink paintings. The algorithm is expected to be used for computer portraits, character development, and souvenirs.
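
The paper builds its effect from line-touch patterns, which are not reproduced here. As a point of comparison, the sketch below shows a much simpler, widely used pencil-sketch effect in OpenCV (grayscale, invert, blur, colour-dodge blend); the file names are placeholders.

```python
# A common alternative sketch effect (not the paper's line-touch method):
# colour-dodge blend of a grayscale image with its blurred negative.
import cv2

img = cv2.imread("portrait.jpg")                       # placeholder input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred_negative = cv2.GaussianBlur(255 - gray, (21, 21), 0)
# Bright output where the image is much lighter than the blurred negative.
sketch = cv2.divide(gray, 255 - blurred_negative, scale=256)
cv2.imwrite("portrait_sketch.jpg", sketch)
```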
