• Title/Summary/Keyword: 얼굴표정 (facial expression)

Search Results: 518, Processing Time: 0.027 seconds

The Implementation of Web-based Language Learning System for the Hearing Impaired Children Reflecting their Learning Characteristics (청각장애 아동의 언어학습 특성을 반영한 웹 기반 언어학습 시스템의 구현)

  • Keum, Kyung-Ae;Kwon, Oh-Jun;Kim, Tae-Seok
    • The Journal of Korean Association of Computer Education
    • /
    • v.7 no.4
    • /
    • pp.93-102
    • /
    • 2004
  • For children with hearing impairment, the innate mechanism for language acquisition does not operate because of the loss of hearing, unlike hearing children, who can reconstruct their language through the cycle of hearing and uttering. Therefore, to help hearing-impaired children develop their language ability, a web-based language learning system should be built around the particular characteristics these children show in the language learning process. When the system is designed, words and expressions describing actions or situations should be animated, and an active, situation-based learning environment should be provided to develop the children's powers of observation. Moreover, the system needs to make use of alternative thinking strategies, antonyms and contrastive words, and an emphasis on facial expressions. This paper presents a web-based language learning system suited to hearing-impaired children that reduces the grammatical errors they make and improves their language learning.

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.34-39
    • /
    • 2009
  • Human emotion is a subjective response with an impulsive character: it expresses intentions and needs unconsciously, and it therefore carries contextual information about the users of ubiquitous computing environments and intelligent robot systems. Cues from which a user's emotion can be recognized include facial images, voice signals, and biological signal spectra. In this paper, we generate separate facial and vocal emotion recognition results from facial images and voice, for greater convenience and efficiency of emotion recognition. We also extract the best-fitting features from the image and sound to raise the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the recognition results, we propose a service reasoning method for the ubiquitous computing environment based on a Bayesian network and a ubiquitous context scenario.
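The abstract above describes decision-level fusion of facial and vocal emotion classifiers. A minimal sketch of one common fusion scheme, assuming a weighted sum of per-modality posteriors (the class labels and weights here are illustrative assumptions, not taken from the paper):

```python
# Hypothetical late-fusion sketch: combine the posterior probability
# distributions from a facial classifier and a voice classifier.
# Labels and fusion weights are assumptions for illustration.

EMOTIONS = ["neutral", "happy", "sad", "surprise", "anger"]

def fuse(face_probs, voice_probs, w_face=0.6, w_voice=0.4):
    """Weighted-sum fusion of per-modality posterior probabilities."""
    fused = [w_face * f + w_voice * v for f, v in zip(face_probs, voice_probs)]
    total = sum(fused)                      # renormalize to a distribution
    fused = [p / total for p in fused]
    best = max(range(len(fused)), key=fused.__getitem__)
    return EMOTIONS[best], fused

label, dist = fuse([0.1, 0.6, 0.1, 0.1, 0.1], [0.2, 0.4, 0.2, 0.1, 0.1])
print(label)  # -> happy
```

The fused distribution could then serve as an evidence node in the Bayesian network the paper uses for service reasoning.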

Simultaneous Simplification of Multiple Triangle Meshes for Blend Shape (블렌드쉐입을 위한 다수 삼각 메쉬의 동시 단순화 기법)

  • Park, Jung-Ho;Kim, Jongyong;Song, Jonghun;Park, Sanghun;Yoon, Seung-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.75-83
    • /
    • 2019
  • In this paper we present a new technique for simultaneously simplifying N triangle meshes that share the same number of vertices and the same connectivity. Applying an existing simplification technique to each of the N triangle meshes individually yields simplified meshes with the same number of vertices but different connectivity. This limitation makes it difficult to construct a simplified blend-shape model from a high-resolution blend-shape model. The technique presented in this paper considers the N meshes simultaneously and performs simplification by selecting the edge with minimal removal cost. As a result, the N simplified meshes retain the same number of vertices and the same connectivity. The efficiency and effectiveness of the proposed technique are demonstrated by applying simultaneous simplification to multiple triangle meshes.
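The key idea above is that, because the N blend-shape meshes share one connectivity, each edge is collapsed once, using a removal cost accumulated over every mesh. A toy sketch of that edge selection, using edge length as a stand-in for the real error metric (the paper's actual cost function is not given in the abstract):

```python
# Sketch of shared edge selection across N meshes with identical
# connectivity: pick the edge whose total collapse cost, summed over
# all meshes, is minimal. Edge length is an assumed proxy cost.
import math

def edge_cost(edge, meshes):
    """Cost of collapsing `edge`, accumulated across all meshes."""
    a, b = edge
    return sum(math.dist(m[a], m[b]) for m in meshes)

def pick_edge(edges, meshes):
    """Choose the edge that is cheapest for the whole mesh set."""
    return min(edges, key=lambda e: edge_cost(e, meshes))

# Two toy "meshes": same vertex indices, different positions.
mesh_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
mesh_b = [(0, 0, 0), (2, 0, 0), (0, 1, 0)]
edges = [(0, 1), (0, 2), (1, 2)]
print(pick_edge(edges, [mesh_a, mesh_b]))  # -> (0, 2)
```

Collapsing the chosen edge in every mesh at once is what keeps the simplified meshes consistent for blend-shape interpolation.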

A study on baby face makeup to create a baby face image (동안이미지 연출을 위한 동안 메이크업에 관한 연구)

  • Yong-Shin Kim
    • Journal of the Korean Applied Science and Technology
    • /
    • v.40 no.1
    • /
    • pp.146-159
    • /
    • 2023
  • This study examined makeup techniques for creating a baby-faced image. Two hypotheses were supported: that perception of baby-face makeup expression techniques differs according to general characteristics, and that the makeup technique for creating a baby-faced image is an important function, beyond appearance, for both men and women. As a 'physical resource' for social activities, it was confirmed to improve the efficiency of body and mind and to bring a marked improvement in mental well-being in daily life. The results on the expression of baby-face image makeup show that awareness of and interest in baby-faced images are high, but that research on producing them is still needed. The facial expression elements required for baby-face makeup are expected to serve as basic data for developing baby-faced images, and this study focuses on external face management for the baby-faced image and baby-face makeup.

Implicit Self-anxious and Self-depressive Associations among College Students with Posttraumatic Stress Symptoms (외상 경험자의 암묵적 자기-불안 및 자기-우울의 연합)

  • Yun Kyeung, Choi;Jae Ho, Lee
    • Korean Journal of Culture and Social Issues
    • /
    • v.24 no.3
    • /
    • pp.451-472
    • /
    • 2018
  • The purpose of this study was to examine implicit associations between negative emotion (i.e., anxiety and depression) and the self among college students with posttraumatic stress symptoms. The participants were 61 college students (16 male, 45 female), classified into a trauma group (n=35) and a control group (n=26) according to their scores on the Korean version of the Impact of Events Scale-Revised. The two groups were compared on automatic self-anxious and self-depressive associations measured with the Implicit Association Test, using words and facial expression pictures respectively. The trauma group showed a stronger self-anxious association in the word condition, and stronger self-anxious and self-depressive associations in the picture condition, than the control group, whereas there were no significant group differences in explicit cognition and depression. These results suggest that traumatic experiences can influence self-concepts at the level of automatic processing. Limitations of the current study and suggestions for future research are discussed.
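Implicit Association Test effects such as those above are commonly quantified as a latency difference scaled by the pooled standard deviation (a Greenwald-style D score). A hedged sketch with invented reaction times; the paper's exact scoring procedure is not stated in the abstract:

```python
# Sketch of a D-score-style IAT measure: mean reaction-time difference
# between incompatible and compatible blocks, divided by the pooled
# standard deviation. All reaction times (ms) below are invented.
import statistics

def iat_d(compatible_rts, incompatible_rts):
    """D = (mean incompatible RT - mean compatible RT) / pooled SD."""
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return (statistics.mean(incompatible_rts)
            - statistics.mean(compatible_rts)) / pooled_sd

d = iat_d([620, 650, 640, 610], [780, 820, 800, 760])
print(round(d, 2))  # positive -> stronger implicit self-anxious association
```

A larger positive D for the trauma group than the control group would correspond to the stronger self-anxious association the study reports.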

Emotion fusion video communication services for real-time avatar matching technology (영상통신 감성융합 서비스를 위한 실시간 아바타 정합기술)

  • Oh, Dong Sik;Kang, Jun Ku;Sin, Min Ho
    • Journal of Digital Convergence
    • /
    • v.10 no.10
    • /
    • pp.283-288
    • /
    • 2012
  • 3D is in the spotlight as a future growth area of the business sector. Moving from flat 2D to stereoscopic 3D, with shape and texture, lets the real world and the virtual world coexist convincingly. Public interest in 3D spread widely with films built around 3D avatars, and the 3D TV market, pioneered by large firms, has since taken a further leap forward. At the same time, the smartphone boom has driven a new wave of innovation in the IT and mobile phone markets: the smartphone is effectively a small computer, and the speed and scale of its spread rival those of the telephone and the Internet. Smartphones running iPhone, Android, and Windows Phone platforms combine many functions in a single mobile phone. Against this background, and with a view to future business service models, we develop an application for an emotion-fused, real-time video communication service: the smartphone camera recognizes the user's facial expression, the expression is synthesized onto a virtual 3D avatar character in real time, and the matched avatar is transmitted to other mobile phone users, enabling real-time emotional communication.

Development of Emotion Recognition and Expression module with Speech Signal for Entertainment Robot (엔터테인먼트 로봇을 위한 음성으로부터 감정 인식 및 표현 모듈 개발)

  • Mun, Byeong-Hyeon;Yang, Hyeon-Chang;Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.82-85
    • /
    • 2007
  • The use of service robots (cleaning robots, pet robots, multimedia robots, etc.) is increasing in homes and many other fields. A personal service robot becomes more appealing the more human-friendly it is, and for this, the ability to recognize and express the user's emotion is essential. Many researchers recognize human emotion through speech, facial expressions, biological signals, and gestures; research on recognizing and applying speech in particular is very active. This paper proposes an emotion recognition system in two forms. The first uses a widely available speech recognition module to classify emotions by word and applies the result to an emotion expression system. The second extracts features from the speech signal acquired through a microphone, applies Bayesian Learning (BL) to classify the pattern into five emotional states (normal, happy, sad, surprise, anger), and feeds the result into a dynamic emotion expression algorithm so that the emotion can be expressed in a dynamic emotion space. The resulting speech-based emotion recognition and expression system runs on an ARM platform.
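The second method above classifies speech features into five emotions with Bayesian learning. A minimal sketch of one such classifier, assuming per-class Gaussian models over two speech features (pitch and energy); all feature choices and class statistics here are invented for illustration:

```python
# Hedged sketch of Bayesian pattern classification for speech emotion:
# assign a feature vector to the emotion class maximizing a (naive)
# Gaussian log-likelihood. Class parameters below are toy values.
import math

CLASSES = {
    # emotion: (mean pitch Hz, std), (mean energy, std) -- toy parameters
    "normal":   ((120.0, 15.0), (0.40, 0.10)),
    "happy":    ((180.0, 20.0), (0.70, 0.10)),
    "sad":      ((100.0, 12.0), (0.25, 0.08)),
    "surprise": ((200.0, 25.0), (0.80, 0.12)),
    "anger":    ((170.0, 18.0), (0.85, 0.10)),
}

def log_gauss(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x, up to a constant."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def classify(pitch, energy):
    """Pick the emotion maximizing the naive Gaussian log-likelihood."""
    return max(CLASSES, key=lambda c: log_gauss(pitch, *CLASSES[c][0])
                                    + log_gauss(energy, *CLASSES[c][1]))

print(classify(178.0, 0.72))  # -> happy
```

In the paper's pipeline, the predicted class would then drive the dynamic emotion expression algorithm.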

Development of Gamsung Index Database for Web (웹 기반 감성지표 DB 구축에 관한 연구)

  • 서해성;최정아;최경주;이혜은;박수찬;채균식;이상태
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2001.05a
    • /
    • pp.204-209
    • /
    • 2001
  • The Internet is a very promising medium for sharing the documents, data, images, pictures, and graphics produced in the physiological, psychological, and motion-analysis stages of emotion (gamsung) research. In this study, considering the current Internet (or intranet) environment, we built a system with an Ultra 2 database server, the Sun Solaris (UNIX) operating system, the UniSQL DBMS, and the Apache web server. Using this system, we completed the first phase of a web-based emotion database prototype for distributing emotion-related data, registered the domain www.gamsung.or.kr, and put the service into operation. The system holds, in database form, the 35 research reports of the first-phase emotion engineering technology development program; the 262 emotion indices produced in the first phase and the 78 from the first half of the second phase; 100 emotion indices produced by Japan's human sensory measurement and application technology development program; emotion data such as voice data, anthropometric data, facial expression data, and survey data; and information on emotion engineering researchers, domestic manufacturers, references, and related materials, all browsable and searchable over the Internet. It also provides information on ongoing emotion engineering technology development and introduces technical trends in the field. This lays the groundwork for an emotion engineering database system suited to research on Korean emotion and, by making it possible to predict consumers' emotional responses and reflect them in system design, is expected to contribute substantially to human-centered product development.

The Influence of Nonconscious Affective Priming on Object Rating (의식되지 않는 정서 점화자극이 대상의 호감도에 미치는 영향)

  • 이수정
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.4
    • /
    • pp.11-25
    • /
    • 1999
  • This study replicated the affective primacy hypothesis of Murphy and Zajonc (1983). The results of experiments 1 and 2 extended the affective priming effect on object rating to facial valence as well as affective events. Experiment 3 explored the affective priming effect in schizophrenics at the supraliminal level and compared their results with those of normal subjects. For normal subjects the affective priming effect was found only at the subliminal level, whereas schizophrenics showed assimilation effects from affective priming even at the supraliminal level. Finally, principles of affective processing are discussed.

A Study on the Emoticon Extraction based on Facial Expression Recognition using Deep Learning Technique (딥 러닝 기술 이용한 얼굴 표정 인식에 따른 이모티콘 추출 연구)

  • Jeong, Bong-Jae;Zhang, Fan
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.2
    • /
    • pp.43-53
    • /
    • 2017
  • In this paper, a method of extracting an emoticon matching the user's facial expression is proposed, using an Android intelligent device to identify the expression. Understanding and expressing facial expressions is very important to human-computer interaction, and technology for identifying human expressions is widely studied. Instead of searching for the emoticons they often use, users can have their facial expression identified with a camera, a technique that is readily usable today. This thesis uses a third-party dataset available on the web to improve facial expression recognition accuracy, and an improved convolutional neural network algorithm to build a facial expression recognition model; matching the user's facial expression to similar expressions reached 66% accuracy. No emoticon search is needed: when the camera recognizes the expression, the matching emoticon appears immediately, which makes the service very convenient when people send messages to others. With countless emoticons available, removing the need to find one by hand follows the growing trend in deep learning; future work should adopt an algorithm better suited to expression recognition and further improve accuracy.
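The pipeline above maps a CNN's expression prediction directly to an emoticon, with no manual search. A minimal sketch of that last mapping step; the label set, emoticon table, and score dictionary are assumptions standing in for the trained network's softmax output:

```python
# Illustrative sketch: take the per-class scores a facial-expression
# CNN would produce (assumed softmax output) and return the matching
# emoticon immediately. The label/emoticon mapping is invented.

EMOTICON = {"happy": "😊", "sad": "😢", "angry": "😠", "surprise": "😮"}

def emoticon_for(class_scores):
    """Map CNN class scores to an emoticon -- no emoticon search."""
    label = max(class_scores, key=class_scores.get)
    return label, EMOTICON[label]

label, emo = emoticon_for(
    {"happy": 0.66, "sad": 0.10, "angry": 0.08, "surprise": 0.16})
print(label, emo)  # -> happy 😊
```

In a real app, `class_scores` would come from running the trained recognition model on a camera frame.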