• Title/Summary/Keyword: Face expression

Search Results: 453

Behavior-classification of Human Using Fuzzy-classifier (퍼지분류기를 이용한 인간의 행동분류)

  • Kim, Jin-Kyu;Joo, Young-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.12
    • /
    • pp.2314-2318
    • /
    • 2010
  • For human-robot interaction, a robot should recognize the meaning of human behavior. In the case of static behavior such as facial expressions and sign language, the information contained in a single image is sufficient to deliver the meaning to the robot. In the case of dynamic behavior such as gestures, however, information from sequential images is required. This paper proposes a behavior classification method that uses a fuzzy classifier to deliver the meaning of dynamic behavior to the robot. The proposed method extracts feature points from input images with a skeleton model, generates a vector space from the differential image of the extracted feature points, and uses this information as the learning data for the fuzzy classifier. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
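
The abstract above outlines a pipeline in which skeleton feature points are extracted from image sequences, a vector space is built from their frame-to-frame (differential) values, and the resulting vectors are fed to a fuzzy classifier. The sketch below is a minimal, generic fuzzy rule-based classifier over such motion feature vectors, assuming Gaussian membership functions and a product T-norm; the names `FuzzyClassifier` and `gaussian_membership` are illustrative and not the paper's formulation.

```python
# Minimal sketch of a fuzzy rule-based classifier over motion feature vectors.
# The membership functions, T-norm, and training scheme are generic choices,
# not the specific formulation used in the paper.
import numpy as np

def gaussian_membership(x, mean, sigma):
    """Degree to which feature value x belongs to a fuzzy set centered at mean."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

class FuzzyClassifier:
    def fit(self, X, y):
        # One fuzzy rule per class: antecedents are Gaussian sets fitted to the
        # per-feature mean and spread of that class's training data.
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.sigmas_ = np.array([X[y == c].std(axis=0) + 1e-6 for c in self.classes_])
        return self

    def predict(self, X):
        labels = []
        for x in X:
            # Rule firing strength: product T-norm over per-feature memberships.
            strengths = [gaussian_membership(x, m, s).prod()
                         for m, s in zip(self.means_, self.sigmas_)]
            labels.append(self.classes_[int(np.argmax(strengths))])
        return np.array(labels)

# Toy usage: feature vectors built from differences of skeleton feature points
# across consecutive frames (here random stand-ins for two gesture classes).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 0.3, (20, 6)), rng.normal(1.0, 0.3, (20, 6))])
y_train = np.array([0] * 20 + [1] * 20)
clf = FuzzyClassifier().fit(X_train, y_train)
print(clf.predict(rng.normal(1.0, 0.3, (3, 6))))  # expected: mostly class 1
```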

A Study on the Creative Inculturation of Dwelling Space - Based on the Thought of Heidegger′s ″Dwelling - (거주공간의 ′창의적 토착화′에 관한 연구 - 하이데거의 ″거주함″ 사유를 바탕으로 -)

  • 이승헌
    • Korean Institute of Interior Design Journal
    • /
    • v.13 no.3
    • /
    • pp.104-111
    • /
    • 2004
  • The purpose of this paper is to draw out a theoretical frame for dwelling space from Heidegger's thought on "Dwelling" and to analyze Luis Barragán's housing designs as instances of its practical expression. The 'creative inculturation' of dwelling space becomes possible through familiarity, by disclosure in time and place. Heidegger suggests that place, as existential space, is the occasion for the revelation of events to Dasein. He interprets dwelling as a creative openness in which the elements comprising the world face and interact with one another as one. Openness may be understood as a dynamic coordination in which each thing and the world sustain each other under incessant mutual tension without clinging to one another. Creative inculturation is determined by how deep a conversation is carried on internally. Completing the process properly requires working out the several relationships involved in the land area.

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.1-6
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions, including happiness, sadness, anger, surprise, fear and dislike, are investigated. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, and the feature vectors are obtained through ICA (Independent Component Analysis). Emotion recognition from the speech signal, on the other hand, is structured so that the recognition algorithm is performed independently for each wavelet subband and the final recognition is obtained from a multi-decision making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
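
The abstract above describes extracting facial emotion features by discrete-wavelet multi-resolution analysis followed by ICA. Below is a minimal sketch of that image branch only, assuming a Haar wavelet, a single decomposition level, and ten independent components (none of which are stated in the abstract); the speech branch and the multi-decision fusion are omitted.

```python
# Sketch of the facial-image branch: a discrete wavelet decomposition followed
# by ICA to obtain feature vectors. Wavelet family, decomposition level, and
# component count are illustrative assumptions, not the paper's settings.
import numpy as np
import pywt                               # PyWavelets
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
faces = rng.random((60, 64, 64))          # stand-in for 60 grayscale face images

# Multi-resolution analysis: keep the low-frequency approximation subband.
approx = np.array([pywt.dwt2(img, 'haar')[0] for img in faces])   # (60, 32, 32)
flattened = approx.reshape(len(faces), -1)

# ICA projects each subband image onto statistically independent components;
# the resulting coefficients serve as the emotion feature vector.
ica = FastICA(n_components=10, random_state=0)
features = ica.fit_transform(flattened)   # (60, 10) feature vectors
print(features.shape)
```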

A Hybrid Nonsmooth Nonnegative Matrix Factorization for face representation (다양한 얼굴 표현을 위한 하이브리드 nsNMF 방법)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.957-958
    • /
    • 2008
  • Human facial appearance varies globally and locally according to identity, pose, illumination, and expression. In this paper, we propose a hybrid nonsmooth nonnegative matrix factorization (hybrid-nsNMF) based appearance model to represent facial appearances that vary both globally and locally. Instead of using a single smoothing matrix as in nsNMF, we use two different smoothing matrices and combine them to extract global and local bases at the same time.
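
For context, nsNMF factorizes a nonnegative data matrix V as W S H, where the smoothing matrix S(theta) = (1 - theta) I + (theta / q) 11^T controls how localized the basis images become. The sketch below shows that smoothing matrix and a plain multiplicative-update factorization run with two different theta values; how the authors combine two smoothing matrices into one hybrid model is not specified in the abstract, so this illustrates only the surrounding nsNMF machinery, not their method.

```python
# Sketch of the nsNMF smoothing matrix and a basic multiplicative-update
# factorization V ~= W S H. Running it with two theta values only loosely
# echoes the hybrid idea; it is not the authors' combination scheme.
import numpy as np

def smoothing_matrix(q, theta):
    """nsNMF smoothing: theta=0 is plain NMF, theta->1 forces sparser, local bases."""
    return (1 - theta) * np.eye(q) + (theta / q) * np.ones((q, q))

def nsnmf(V, q, theta, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W, H = rng.random((n, q)), rng.random((q, m))
    S = smoothing_matrix(q, theta)
    for _ in range(iters):
        WS = W @ S
        H *= (WS.T @ V) / (WS.T @ WS @ H + eps)      # update H with smoothed W
        SH = S @ H
        W *= (V @ SH.T) / (W @ SH @ SH.T + eps)      # update W with smoothed H
    return W, H

V = np.abs(np.random.default_rng(1).random((100, 40)))  # stand-in face-image matrix
W_local, _ = nsnmf(V, q=8, theta=0.7)   # larger theta -> more localized basis images
W_global, _ = nsnmf(V, q=8, theta=0.1)  # smaller theta -> more holistic basis images
print(W_local.shape, W_global.shape)
```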

Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y Chan;Edmund M-K. Lai
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.153.6-153
    • /
    • 2001
  • Three-dimensional (3D) digitization of the human head is a technology that is still relatively new. It has present uses such as radiotherapy, identification systems and commercial applications, as well as potential future applications. In this paper, we analyzed and experimented to determine the easiest and most efficient method, which would give us the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the models obtained on a Silicon Graphics workstation. Research was done into the present and potential uses of 3D digitized models of the human head, and we develop ideas for ...
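
The entry above is about assembling digitized head scans and expressions into a database. As a purely illustrative aside, a record in such a catalog might look like the sketch below; the fields (subject id, expression label, scanner, mesh file path) are assumptions, since the abstract does not describe the actual schema.

```python
# Hypothetical catalog record for one digitized facial-expression scan.
from dataclasses import dataclass

@dataclass
class ScanRecord:
    subject_id: str
    expression: str        # e.g. "neutral", "smile", "surprise"
    scanner: str           # e.g. "Minolta Vivid 700"
    mesh_path: str         # path to the edited 3D model file

catalog = [
    ScanRecord("S001", "neutral", "Minolta Vivid 700", "models/S001_neutral.obj"),
    ScanRecord("S001", "smile", "Minolta Vivid 700", "models/S001_smile.obj"),
]
smiles = [r for r in catalog if r.expression == "smile"]
print(len(smiles))
```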

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3473-3487
    • /
    • 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two example models called key visemes and expressions are used for lip-synchronization and facial expressions, respectively. The key visemes represent lip shapes of phonemes such as vowels and consonants while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on a phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesizing process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
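
The synthesis step described above, interpolating key visemes along a phonetic transcript and blending in a key expression, can be illustrated roughly as follows. Linear interpolation between two visemes and a single global emotion weight are simplifying assumptions; the paper uses scattered data interpolation over an input parameter vector and an importance-based per-region scheme.

```python
# Rough sketch of per-frame synthesis: blend two key visemes by a parameter t,
# then add a weighted key expression. All mesh data here are random stand-ins.
import numpy as np

N_VERTS = 500                                   # stand-in face mesh size
rng = np.random.default_rng(0)
key_visemes = {p: rng.random((N_VERTS, 3)) for p in ("sil", "AA", "M")}
key_expressions = {"neutral": np.zeros((N_VERTS, 3)),
                   "happy": 0.01 * rng.random((N_VERTS, 3))}

def synthesize_frame(phone_a, phone_b, t, emotion="neutral", emotion_weight=0.5):
    """Face for one frame: blend two visemes by t in [0, 1], then add emotion."""
    lips = (1 - t) * key_visemes[phone_a] + t * key_visemes[phone_b]
    return lips + emotion_weight * key_expressions[emotion]

# A phonetic transcript (phone, duration in frames), e.g. from a TTS front end.
transcript = [("sil", 5), ("M", 4), ("AA", 8), ("sil", 5)]
frames = []
for (pa, dur), (pb, _) in zip(transcript, transcript[1:]):
    for i in range(dur):
        frames.append(synthesize_frame(pa, pb, i / dur, emotion="happy"))
print(len(frames), frames[0].shape)
```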

Facial Feature Extraction by using a Genetic Algorithm (유전자 알고리즘을 이용한 얼굴의 특징점 추출)

  • Kim, Sang-Kyoon;Oh, Seung-Ha;Lee, Myoung-Eun;Park, Soon-Young
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1053-1056
    • /
    • 1999
  • In this paper, we propose a facial feature extraction method using a genetic algorithm. The method uses a facial feature template to model the locations of the eyes and mouth, and a genetic algorithm is employed to find the optimal solution for a fitness function built from invariant moments. The simulation results show that the proposed algorithm can effectively extract facial features from face images with variations in position, size, rotation and expression.
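
The abstract above pairs a facial feature template with a genetic search whose fitness is built from invariant moments. The sketch below shows only the genetic-search skeleton (selection, uniform crossover, Gaussian mutation) over candidate template positions, with a toy template-matching fitness standing in for the paper's invariant-moment fitness.

```python
# Genetic search over candidate feature-template positions on a synthetic image.
# The fitness is a simple patch-matching score, not the paper's moment-based one.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((120, 120))
image[40:48, 30:38] += 2.0                 # bright patch standing in for an "eye"
template_mean = image[40:48, 30:38].mean() # what a correct match should look like

def fitness(pos):
    x, y = int(pos[0]) % 112, int(pos[1]) % 112
    return -abs(image[x:x + 8, y:y + 8].mean() - template_mean)

pop = rng.integers(0, 112, size=(30, 2)).astype(float)   # candidate (x, y) genes
for _ in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the 10 fittest
    moms = parents[rng.integers(0, 10, 30)]
    dads = parents[rng.integers(0, 10, 30)]
    mask = rng.random((30, 2)) < 0.5
    pop = np.where(mask, moms, dads) + rng.normal(0, 3, (30, 2))  # crossover + mutation
best = pop[np.argmax([fitness(p) for p in pop])]
print(best.round())   # should converge near (40, 30)
```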

The Interchange in Drawing Styles between Cartoon and Fashion illustration (만화와 패션 일러스트레이션의 그림체적 교류)

  • Sung, Kwang-Sook
    • Journal of the Korean Society of Costume
    • /
    • v.59 no.4
    • /
    • pp.82-97
    • /
    • 2009
  • This study identifies how the drawing styles of cartoon and fashion illustration are mutually linked and interchanged. The common background of drawing style shared by cartoon and fashion illustration is as follows: 1. a means of image communication through mass media; 2. similarities as visual signs; 3. the blurred border between painting, illustration and cartoon; 4. the use of common drawing expressions such as deformation, distortion, exaggeration, metaphor and metonymy. The interchange of drawing style between cartoon and fashion illustration appears as follows: 1. in the rendering of figure and face, as contemporary style, similar figures, anime style and humorous style; 2. in the manner of expression, as a focus on line, simplification, computer graphics mixed with hand drawing, artistic expression and multimedia techniques.

The Prediction of Chip Flow Angle on Chip Breaker Shape Parameters (칩브레이커 형상변수에 의한 칩유동각 예측)

  • 박승근
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1999.10a
    • /
    • pp.381-386
    • /
    • 1999
  • In machining with cutting tool inserts having a complex chip-groove shape, the flow, curl and breaking patterns of the chip differ from those with flat-faced inserts. In the present work, an effort is made to understand the three basic phenomena occurring in a chip after its formation when machining with groove-type and pattern-type inserts: the initial chip flow, the subsequent development of up-curl and side-curl, and the final chip breaking due to the development of torsional and bending stresses. In this paper, the chip flow angle in groove-type and pattern-type inserts is predicted from the chip breaker shape parameters. The expression for the chip flow angle in groove-type and pattern-type inserts is also verified experimentally using high-speed filming techniques.

Facial Expression Recognition using the geometric features of the face (얼굴의 기하학적 특징을 이용한 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Song, Yeong-Jun;Ahn, Jae-Hyeong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2013.05a
    • /
    • pp.289-290
    • /
    • 2013
  • This paper proposes a facial expression recognition system that uses the geometric features of the face. First, a face detection method based on Haar-like feature masks is applied. The detected face is then split into an upper region containing the eyes and a lower region containing the mouth, which makes extracting the facial components easier. The facial components are extracted by template matching against the eigen-eyes and eigen-mouths of eigenfaces obtained through PCA. The facial components are the eyes and the mouth, and the expression is recognized from the geometric features of these two components. The feature values of the eyes and mouth are compared with per-expression threshold values determined through experiments to recognize the expression. By applying the pupil ratio, which is rarely used in previous work, this paper aims to raise the recognition rate above that of existing facial expression recognition algorithms. Experimental results confirm that the recognition rate is improved compared with previous work.
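
The final decision step described above compares geometric features of the detected eyes and mouth against per-expression thresholds. The sketch below illustrates that idea with assumed ratios, threshold values, and expression labels; the paper determines its thresholds experimentally, so none of these numbers come from it.

```python
# Illustrative threshold-based expression decision from eye/mouth geometry.
# Ratios, thresholds, and labels are assumptions, not the paper's values.
def classify_expression(eye_open_ratio, mouth_open_ratio,
                        eye_thresh=0.35, mouth_thresh=0.45):
    """eye_open_ratio: eye height / eye width; mouth_open_ratio: mouth height / width."""
    if mouth_open_ratio > mouth_thresh and eye_open_ratio > eye_thresh:
        return "surprise"        # wide-open eyes and mouth
    if mouth_open_ratio > mouth_thresh:
        return "happy"           # open mouth, ordinary eyes
    if eye_open_ratio < 0.15:
        return "sad"             # nearly closed eyes
    return "neutral"

print(classify_expression(eye_open_ratio=0.5, mouth_open_ratio=0.6))  # surprise
```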
