• Title/Summary/Keyword: Facial Expression Space


Design of Seoul Park in Paris (파리 서울공원 설계)

  • 김도경
    • Journal of the Korean Institute of Landscape Architecture / v.28 no.4 / pp.132-137 / 2000
  • In June, the City of Seoul held a design competition for 「Seoul Park」 in Paris to promote friendly relations with its sister city. The purpose of this paper is to articulate the design concept of the scheme submitted by the author. The author interpreted the object of the competition as follows: if we regard a park not as just another urban-planning facility but as a space for expressing a culture, then 「Seoul Park」 in Paris is a space expressing Korean culture, or the culture of the City of Seoul, in Paris, France. Three points were emphasized in the scheme: 1. The physical and non-physical aspects of Korean culture, or the culture of the City of Seoul, were expressed separately. In the physical part, a traditional Korean garden was recreated to express its authenticity in contrast to its counterpart, the formal and grand style of the French classical garden. In the non-physical part, Seoul's features and its citizens' facial expressions were engraved on free-standing walls named 'Seoul Expression'. In addition, Korean traditional and modern performing arts are to be performed in a square named 'Seoul Madang', surrounded by the free-standing walls. 2. A space clearly divided by a fence was necessary to distinguish the traditional Korean garden from the surrounding area, which resembles an amusement park. A traditional wall, mounding, and pine tree groves were included. 3. A bamboo grove with a walking path was introduced. The author expected that Parisians would feel oriental mystery, the sound of the wind, and the time lag between past and present in this sounding bamboo grove.


Driver's Face Detection Using Space-time Restrained Adaboost Method

  • Liu, Tong;Xie, Jianbin;Yan, Wei;Li, Peiqin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.9 / pp.2341-2350 / 2012
  • Face detection is the first step of vision-based driver-fatigue detection. Traditional face detection methods suffer from high false-detection rates and long detection times. This paper presents a space-time restrained Adaboost method that resolves these problems. Firstly, the possible position of the driver's face in a video frame is estimated relative to the previous frame. Secondly, a space-time restriction strategy restrains the detection window and scale of the Adaboost method, reducing both the time consumption and the false detections of face detection. Finally, a face-knowledge restriction strategy confirms the faces detected by the Adaboost method. Comparative experiments confirm that a driver's face can be detected rapidly and precisely.
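The space-time restriction the abstract describes can be sketched in a few lines: clamp the detection window and the admissible face scales to a neighborhood of the face found in the previous frame, so the detector scans far fewer positions and sizes. The function below is a hedged illustration of that idea only; the parameter names (`margin`, `scale_tol`) and their values are assumptions, not taken from the paper.

```python
def restrict_search(prev_box, frame_w, frame_h, margin=0.5, scale_tol=0.3):
    """Space-time restriction sketch: given the face box (x, y, w, h)
    from the previous frame, return a clamped search window and a
    min/max face-size range for the current frame."""
    x, y, w, h = prev_box
    mx, my = round(w * margin), round(h * margin)
    # Search window: previous box expanded by a margin, clamped to the frame.
    x0 = max(0, x - mx)
    y0 = max(0, y - my)
    x1 = min(frame_w, x + w + mx)
    y1 = min(frame_h, y + h + my)
    # Scale restriction: only accept faces close to the previous size.
    min_size = (round(w * (1 - scale_tol)), round(h * (1 - scale_tol)))
    max_size = (round(w * (1 + scale_tol)), round(h * (1 + scale_tol)))
    return (x0, y0, x1 - x0, y1 - y0), min_size, max_size
```

A cascade detector would then be run only inside the returned window with the returned size bounds, which is where the reported savings in time and false detections come from.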

Intuitive Quasi-Eigenfaces for Facial Animation (얼굴 애니메이션을 위한 직관적인 유사 고유 얼굴 모델)

  • Kim, Ig-Jae;Ko, Hyeong-Seok
    • Journal of the Korea Computer Graphics Society / v.12 no.2 / pp.1-7 / 2006
  • Methods for generating a basis (expression basis) for blendshape-based facial animation fall broadly into two categories: manual modeling by an animator and statistical modeling. A basis created manually by an animator has the advantage that each basis expression is intuitively recognizable, which permits traditional keyframe control. However, because such a basis covers only a portion of the expression space, it incurs large errors when expressions are reconstructed from motion data. In contrast, a statistically derived basis (eigenfaces) covers almost the entire expression space and thus yields minimal reconstruction error, but it produces expression models that are not visually intuitive. This paper therefore proposes a method for deforming a manually created basis into quasi-eigenfaces. The resulting basis retains visually intuitive facial expressions while its coverage of the facial expression space is extended to approximate that of the statistical approach.

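The statistical basis that the abstract contrasts with manual modeling is the classic eigenface construction: principal components of mean-centered face vectors. The NumPy sketch below illustrates that baseline, assuming row-stacked face (or expression-offset) vectors; it shows the eigenface idea, not the authors' quasi-eigenface deformation itself.

```python
import numpy as np

def eigenfaces(faces, k):
    """Compute the mean face and the top-k eigenfaces (principal
    components) of row-stacked face vectors via SVD."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are orthonormal eigenfaces, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coordinates of a face in the low-dimensional expression space."""
    return basis @ (face - mean)

def reconstruct(coords, mean, basis):
    """Map expression-space coordinates back to a face vector."""
    return mean + coords @ basis
```

The reconstruction error the abstract discusses is exactly the gap between a motion-capture frame and its projection onto the span of the chosen basis; an eigenface basis minimizes it, a hand-modeled basis generally does not.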

Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.12a / pp.299-302 / 2001
  • Statistical feature extraction and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin-color map with Gaussian functions is employed to extract the face-candidate region. Feature information within the face-candidate region is then used to detect the face region. In the recognition step, as a test, 120 images of 10 persons are trained with the backpropagation algorithm. The images of each person are obtained from various directions, poses, and facial expressions. The input variables of the neural networks are the geometrical feature information and the feature information that comes from the eigenface spaces. The simulation results for 10 persons show that the proposed method yields high recognition rates.


Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.9 / pp.822-826 / 2001
  • Statistical feature extraction and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin-color map with Gaussian functions is employed to extract the face-candidate region. Feature information within the face-candidate region is then used to detect the face region. In the recognition step, as a test, 120 images of 10 persons are trained with the backpropagation algorithm. The images of each person are obtained from various directions, poses, and facial expressions. The input variables of the neural networks are the geometrical feature information and the feature information that comes from the eigenface spaces. The simulation results for 10 persons show that the proposed method yields high recognition rates.

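The recognition step in both versions of this paper trains a neural network by backpropagation on geometric and eigenface features, but the abstracts do not specify the architecture. The sketch below uses the simplest possible stand-in, a single softmax layer trained by full-batch gradient descent on toy feature vectors; the layer sizes and learning rate are assumptions, not the authors' settings.

```python
import numpy as np

def train_softmax(X, y, classes, lr=0.5, epochs=200, seed=0):
    """Train a one-layer softmax classifier on feature vectors X with
    integer labels y, using the cross-entropy gradient (the core of
    backpropagation in the single-layer case)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], classes))
    b = np.zeros(classes)
    onehot = np.eye(classes)[y]
    for _ in range(epochs):
        z = X @ W + b
        p = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(X)                  # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, W, b):
    """Assign each feature vector to the highest-scoring identity."""
    return (X @ W + b).argmax(axis=1)
```

In the paper's setting, X would hold the geometric and eigenface-space features extracted per image, with one class per enrolled person.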

Representation of Facial Expressions of Different Ages: A Multidimensional Scaling Study (다양한 연령의 얼굴 정서 표상: 다차원척도법 연구)

  • Kim, Jongwan
    • Science of Emotion and Sensibility / v.24 no.3 / pp.71-80 / 2021
  • Previous studies using facial expressions have revealed valence and arousal as the two core dimensions of affective space. However, it remains unknown whether this two-dimensional structure is consistent across ages. This study investigated affective dimensions using six facial expressions (angry, disgusted, fearful, happy, neutral, and sad) at three ages (young, middle-aged, and old). Earlier studies typically asked participants to rate the subjective similarity between pairs of facial expressions directly. In this study, we collected indirect measures by asking participants to decide whether a pair of stimuli conveyed the same emotion. Multidimensional scaling showed that the "angry-disgusted" and "sad-disgusted" pairs are similar at all three ages. In addition, the "angry-sad," "angry-neutral," "neutral-sad," and "disgusted-fearful" pairs were similar at old age. When two faces in a pair reflected the same emotion, "sad" was recognized least accurately at old age, suggesting that the ability to recognize "sad" declines in old age. This study suggests that the general two-dimensional core-affect structure is robust across all age groups, with the exception of specific emotions.
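Multidimensional scaling, as used in this study, embeds a matrix of pairwise dissimilarities into a low-dimensional space whose inter-point distances approximate them. The paper derives dissimilarities from same/different judgments; the classical (Torgerson) variant below sketches the core computation in NumPy and is only a metric stand-in for whatever MDS flavor the author used.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed the n x n dissimilarity matrix D
    into `dims` dimensions via double centering and eigendecomposition."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]     # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

Applied to a 6 x 6 dissimilarity matrix over the emotion categories, the two returned coordinates are what get interpreted as the valence and arousal dimensions.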

Photon-counting linear discriminant analysis for face recognition at a distance

  • Yeom, Seok-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.3 / pp.250-255 / 2012
  • Face recognition has wide applications in security and surveillance systems as well as in robot vision and machine interfaces. Conventional challenges in face recognition include pose, illumination, and expression; face recognition at a distance involves additional challenges because long-distance images are often degraded by poor focusing and motion blurring. This study investigates the effectiveness of applying photon-counting linear discriminant analysis (Pc-LDA) to face recognition in harsh environments. A related technique, Fisher linear discriminant analysis, is known to be optimal, but it often suffers from the singularity problem because the number of available training images is generally much smaller than the number of pixels. Pc-LDA, on the other hand, realizes the Fisher criterion in high-dimensional space without any dimensionality reduction and therefore provides solutions that are more invariant to image distortion and degradation. Two decision rules are employed: one based on Euclidean distance, the other on normalized correlation. In the experiments, the asymptotic equivalence of the photon-counting method to the Fisher method is verified with simulated data. Degraded facial images are employed to demonstrate the robustness of the photon-counting classifier in harsh environments. Four types of blurring point-spread functions are applied to the test images to simulate long-distance acquisition. The results are compared with those of the conventional Eigenface and Fisherface methods and indicate that Pc-LDA outperforms these conventional facial recognition techniques.
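For context, the Fisher criterion that Pc-LDA realizes reduces, in the two-class case, to the discriminant direction w = Sw⁻¹(m₁ − m₀), and the singularity problem the abstract mentions arises because the within-class scatter Sw cannot be inverted when pixels outnumber training images. The sketch below shows this conventional formulation with a pseudo-inverse as one common workaround; it does not reproduce Pc-LDA itself, which avoids the inversion altogether.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant direction w = pinv(Sw) (m1 - m0),
    where Sw is the pooled within-class scatter of the row-stacked
    training images X0 and X1. The pseudo-inverse sidesteps the
    singularity that occurs when features outnumber samples."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    return np.linalg.pinv(Sw) @ (m1 - m0)
```

Projecting images onto w and thresholding the scores gives the Euclidean-distance style decision rule; the paper's second rule replaces the distance with normalized correlation.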

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak;Yeon, Heeyeon;Eun, Taeyoung;Jung, Moonryul
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.23-29 / 2022
  • Virtual humans are implemented with dedicated modeling tools, such as the Unity 3D engine, in virtual spaces (virtual reality, mixed reality, the metaverse, etc.). Various human modeling tools have been introduced to give virtual humans an appearance, voice, expressions, and behavior similar to those of real people, and virtual humans implemented with these tools can communicate with users to some extent. However, most virtual humans so far have remained unimodal, using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is now changing into a human-centered, natural multi-modal system. By using several pre-trained networks, we implemented an emotion-based multi-modal dialogue system that generates human-like utterances and displays appropriate facial expressions in real time.

Comparison Between Core Affect Dimensional Structures of Different Ages using Representational Similarity Analysis (표상 유사성 분석을 이용한 연령별 얼굴 정서 차원 비교)

  • Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.1 / pp.33-42 / 2023
  • Previous emotion studies employing facial expressions have focused on the differences between age groups for each emotion category. Instead, Kim (2021) compared representations of facial expressions in a lower-dimensional emotion space, but reported descriptive comparisons without statistical significance testing. This research used representational similarity analysis (Kriegeskorte et al., 2008) to directly compare the empirical datasets from young, middle-aged, and old groups and the conceptual models. In addition, individual-differences multidimensional scaling (Carroll & Chang, 1970) was conducted to explore individual weights on the emotional dimensions for each age group. The results revealed that the old group was the least similar to the other age groups in both the empirical datasets and the valence model. In addition, the arousal dimension was weighted least for the old group compared with the other groups. This study directly tested the differences between the three age groups in terms of empirical datasets, conceptual models, and weights on the emotion dimensions.
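Representational similarity analysis, at its core, compares two representational dissimilarity matrices (RDMs) by correlating their off-diagonal entries, which is what allows an empirical RDM from one age group to be tested against another group's RDM or against a conceptual model. A minimal sketch follows (Pearson correlation here; Kriegeskorte et al. often recommend Spearman), not the author's code.

```python
import numpy as np

def rdm_similarity(rdm_a, rdm_b):
    """Compare two symmetric representational dissimilarity matrices by
    correlating their upper-triangle (off-diagonal) entries."""
    iu = np.triu_indices_from(rdm_a, k=1)   # upper triangle, no diagonal
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

Significance testing in RSA is typically done by permuting condition labels of one RDM and rebuilding this correlation to form a null distribution.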

Face Recognition using Karhunen-Loeve projection and Elastic Graph Matching (Karhunen-Loeve 근사 방법과 Elastic Graph Matching을 병합한 얼굴 인식)

  • 이형지;이완수;정재호
    • Proceedings of the IEEK Conference / 2001.06d / pp.231-234 / 2001
  • This paper proposes a face recognition technique that effectively combines elastic graph matching (EGM) and the Fisherface algorithm. EGM, a dynamic link architecture method, uses not only face shape but also the gray-level information of the image, while the Fisherface algorithm, a class-specific method, is robust to variations such as lighting direction and facial expression. In the proposed face recognition method, which adopts both approaches, the linear projection per node of an image graph reduces the dimensionality of the labeled graph vector and provides a feature space that can be used effectively for classification. In comparison with a conventional method, the proposed approach obtained satisfactory results in terms of both recognition rate and speed. In particular, we obtained a maximum recognition rate of 99.3% with the leave-one-out method in experiments on the Yale Face Database.
