• Title/Summary/Keyword: 상호 표정 (mutual facial expression)

Handy Robot that Conveys User's Emotion (사용자의 감정을 표현하는 소형 로봇)

  • Kim, Sung-Sik;Kim, Sang-Ho;Park, Jin-Kyu;Han, Chang-Hee;Kim, Wan-Il
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.48-53 / 2009
  • In this paper, we propose an efficient method of representing the human emotions that are conveyed during conversations. Much research has been conducted to develop robots that come close to thinking, acting, and expressing themselves like humans; the proposed method builds on systems that identify six emotions. It first analyzes conversations between humans, decides on an emotion based on that analysis, and expresses the emotion through an action, an image, and a sound. We implemented the proposed method on a hand-sized robot.
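
As a loose illustration of the analyze-decide-express pipeline this abstract describes, here is a minimal Python sketch. The keyword lexicon, emotion labels, and output file names are invented for illustration; the paper's actual conversation analysis and six-emotion identification are not specified in the abstract.

```python
# Hypothetical sketch of an analyze -> decide -> express loop.

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

# Made-up keyword lexicon standing in for the paper's conversation analysis.
LEXICON = {"great": "happiness", "love": "happiness",
           "miss": "sadness", "cry": "sadness",
           "hate": "anger", "scared": "fear"}

def decide_emotion(utterance: str) -> str:
    """Pick the emotion whose lexicon words occur most often in the utterance."""
    scores = {e: 0 for e in EMOTIONS}
    for word in utterance.lower().split():
        if word in LEXICON:
            scores[LEXICON[word]] += 1
    return max(scores, key=scores.get)

def express(emotion: str) -> dict:
    """Map the decided emotion onto the robot's three output channels."""
    return {"action": f"gesture_{emotion}",
            "image": f"face_{emotion}.png",
            "sound": f"tone_{emotion}.wav"}

print(express(decide_emotion("I love this robot, it is great")))
```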

The interaction between emotion recognition through facial expression based on cognitive user-centered television (이용자 중심의 얼굴 표정을 통한 감정 인식 TV의 상호관계 연구 -인간의 표정을 통한 감정 인식기반의 TV와 인간의 상호 작용 연구)

  • Lee, Jong-Sik;Shin, Dong-Hee
    • Journal of the HCI Society of Korea / v.9 no.1 / pp.23-28 / 2014
  • In this study we focus on the interaction between humans and a reactive television that uses emotion recognition through facial expression. Most user interfaces in today's electronic products are passive and poorly fitted to users' needs. In terms of user-centered devices, we propose that an emotion-based reactive television is more effective in interaction than other passive input products. We have developed and researched next-generation, user-centered cognitive TV models. In this paper we present the results of an experiment that used the Fraunhofer IIS SHORE™ demo software to measure emotion recognition. This approach was based on real-time cognitive TV models, and through it we studied the relationship between humans and cognitive TV. The study follows these steps: 1) the cognitive TV switches automatically between ON and OFF modes in response to people's motions; 2) the cognitive TV selects channels directly as the face changes (e.g., Neutral, Happy, Sad, and Angry modes); 3) the cognitive TV detects emotion from people's facial expressions within a fixed time, and then, if Happy mode is detected, shifts to funny or interesting shows, and if Angry mode is detected, changes to moving or touching shows, as sketched below. In addition, we focus on improving emotion recognition through facial expression. Furthermore, cognitive TV should be adapted to personal characteristics to accommodate the different personalities of users in human-computer interaction. In this manner, how people feel, how the cognitive TV responds accordingly, and the effects of media as a cognitive mechanism are thoroughly discussed.
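
The channel-switching rule in steps 1)-3) can be pictured with a small Python sketch. The genre mapping and the detector stub below are assumptions; the paper used the Fraunhofer IIS SHORE demo software for the actual recognition.

```python
import random

# Mode -> genre mapping assumed from the abstract's description.
MODE_TO_GENRE = {"Happy": "comedy",   # funny or interesting shows
                 "Angry": "drama",    # moving or touching shows
                 "Sad": "drama",
                 "Neutral": None}     # keep the current channel

def detect_mode() -> str:
    """Stub standing in for a facial-expression recognizer such as SHORE."""
    return random.choice(list(MODE_TO_GENRE))

def react(current_channel: str) -> str:
    """Switch the channel according to the detected emotional mode."""
    genre = MODE_TO_GENRE[detect_mode()]
    return current_channel if genre is None else f"channel:{genre}"

print(react("channel:news"))
```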

Emotional System Applied to Android Robot for Human-friendly Interaction (인간 친화적 상호작용을 위한 안드로이드 로봇의 감성 시스템)

  • Lee, Tae-Geun;Lee, Dong-Uk;So, Byeong-Rok;Lee, Ho-Gil
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.95-98 / 2007
  • This paper presents the emotional system applied to the android robot platform (EveR series) developed at the Korea Institute of Industrial Technology. The EveR platform can perform facial expressions, gestures, and speech synthesis, and the emotional system is applied to it to facilitate human-friendly interaction. The emotional system consists of a Motivation Module that gives the robot motivation, an Emotion Module that holds a variety of emotions, a Personality Module that influences the emotions, gestures, and voice, and a Memory Module that assigns weights to incoming stimuli and situations. The system takes speech, text, vision, touch, and context information as input, and outputs the selected emotion and its weight together with behaviors and gestures, inducing naturalness in conversation with humans.
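
A minimal Python sketch of the four-module architecture named in the abstract follows. All class and attribute names here are illustrative assumptions; the real EveR software is not public.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalSystem:
    motivation: float = 0.5                     # Motivation Module: drive level
    personality: dict = field(default_factory=lambda: {"extraversion": 0.7})
    memory: dict = field(default_factory=dict)  # Memory Module: stimulus weights
    emotions: dict = field(default_factory=lambda: {e: 0.0 for e in
        ("joy", "sadness", "anger", "fear", "surprise", "disgust")})

    def update(self, stimulus: str, emotion: str, intensity: float):
        """Weight an input stimulus by memory and personality, accumulate it
        onto one emotion, and return the currently dominant emotion + weight."""
        weight = self.memory.get(stimulus, 1.0)
        bias = self.personality.get("extraversion", 0.5)
        self.emotions[emotion] += intensity * weight * bias * self.motivation
        top = max(self.emotions, key=self.emotions.get)
        return top, self.emotions[top]

system = EmotionalSystem()
print(system.update("greeting", "joy", 0.8))    # -> ('joy', 0.28)
```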


A Preliminary Study for Emotional Expression of Software Robot -Development of Hangul Processing Technique for Inference of Emotional Words- (소프트웨어 로봇의 감성 표현을 위한 기반연구 - 감성어 추론을 위한 한글 처리 기술 개발 -)

  • Song, Bok-Hee;Yun, Han-Kyung
    • Proceedings of the Korea Contents Association Conference / 2012.05a / pp.3-4 / 2012
  • User-centered man-machine interface technology has advanced considerably through the combination of user interface techniques and ergonomics, and it continues to progress. Information today is conveyed through sound, text, or images, but research on conveying it from an emotional perspective remains inactive. In particular, in the field of Human-Computer Interaction, emotion research on conveying voice or facial expression is at an early stage: emoticons and flashcons are used to convey feelings, but they remain unnatural and mechanical. This study is foundational work for enabling a computer or application software to provide human-friendly interaction through its own virtual object (Software Robot, Sobot); we develop a technique that extracts, classifies, and processes emotional words from Hangul text, so that artificial emotion can be infused into the information a computer conveys and users' emotional satisfaction thereby improved.
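
As a rough picture of lexicon-based extraction of emotional words from Hangul text, here is a minimal Python sketch. The tiny stem lexicon is a made-up stand-in for the paper's inference technique, which the abstract does not specify; real processing would need morphological analysis (for example with a Korean tokenizer such as those provided by KoNLPy).

```python
# -*- coding: utf-8 -*-
# Hypothetical stem -> emotion lexicon; not the paper's actual resource.
EMOTION_LEXICON = {"기쁘": "joy", "행복": "joy",
                   "슬프": "sadness", "우울": "sadness",
                   "화나": "anger", "무섭": "fear"}

def extract_emotional_words(text: str):
    """Return (stem, emotion) pairs whose stems occur in the text."""
    return [(stem, emo) for stem, emo in EMOTION_LEXICON.items() if stem in text]

print(extract_emotional_words("오늘은 행복하지만 조금 우울하다"))
# -> [('행복', 'joy'), ('우울', 'sadness')]
```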


Nondestructive Interfacial Evaluation and Fiber Fracture Source Location of Single-Fiber/Epoxy Composite using Micromechanical Technique and Acoustic Emission (음향방출과 미세역학적시험법을 이용한 단일섬유강화 에폭시 복합재료의 비파괴적 섬유파단 위치표정 및 계면물성 평가)

  • Park, Joung-Man;Kong, Jin-Woo;Kim, Dae-Sik;Yoon, Dong-Jin
    • Journal of the Korean Society for Nondestructive Testing / v.23 no.5 / pp.418-428 / 2003
  • Fiber fracture is one of the dominant failure phenomena affecting the overall mechanical performance of composites. Fiber fracture locations were measured with a conventional optical microscope and with the nondestructive acoustic emission (AE) technique, and the two were compared as a function of the epoxy matrix modulus and of fiber surface treatment by the electrodeposition (ED) method. Interfacial shear strength (IFSS) was measured using the tensile fragmentation test combined with the AE method. ED treatment of the fiber surface increased the number of fiber fracture locations compared with the untreated case. The number of fiber fracture events measured by the AE method was smaller than the optically obtained one; however, the fracture locations determined by AE detection corresponded to those from optical observation with small errors. AE source location of fiber breaks can thus serve as a valuable nondestructive method for measuring the IFSS in non-, semi-, and fully transparent polymer composites.
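
For linear (one-dimensional) AE source location along a single fiber, the standard relation is simple: with two sensors a distance L apart and wave speed v, an arrival-time difference Δt = t1 − t2 places the source at x = (L + v·Δt)/2 from sensor 1. A minimal sketch with illustrative numbers (not the paper's data):

```python
def locate_source(L_mm: float, v_mm_per_us: float, dt_us: float) -> float:
    """Distance of the AE source from sensor 1.
    dt_us = t1 - t2 is negative when sensor 1 hears the event first."""
    return (L_mm + v_mm_per_us * dt_us) / 2.0

# 50 mm gauge length, 5 mm/us wave speed, sensor 1 triggers 2 us earlier:
print(locate_source(50.0, 5.0, -2.0))  # -> 20.0 mm from sensor 1
```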

An Efficient Face Recognition by Using Centroid Shift and Mutual Information Estimation (중심이동과 상호정보 추정에 의한 효과적인 얼굴인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.4 / pp.511-518 / 2007
  • This paper presents an efficient face recognition method that uses both a centroid shift and mutual information estimation on images. The centroid shift moves an image to the center coordinate calculated from its first moment, which improves recognition performance by excluding needless background in the face image. Mutual information, a measure of correlation, is applied to measure the similarity between images efficiently. In particular, adaptive partition mutual information (AP-MI) estimation is applied to find accurate dependence information by equally partitioning the samples of the input image when calculating the probability density function (PDF). The proposed method was applied to the problem of recognizing 48 face images (12 persons × 4 scenes) of 64×64 pixels. The experimental results show that the proposed method achieves superior recognition performance (speed and rate) compared with a conventional method without the centroid shift. The proposed method is also robust to changes in facial expression, position, and angle.
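
The two ingredients named here, a centroid shift from first moments and mutual information from a joint histogram, can be sketched in plain numpy. Note that the fixed-bin histogram below is a simplification of the paper's adaptive partitioning (AP-MI); array sizes and bin counts are arbitrary choices.

```python
import numpy as np

def centroid_shift(img: np.ndarray) -> np.ndarray:
    """Shift the image so its intensity centroid (first moment) is centered."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    dy = int(round(img.shape[0] / 2 - cy))
    dx = int(round(img.shape[1] / 2 - cx))
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """I(A;B) estimated from a joint intensity histogram (fixed bins)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

img = np.random.rand(64, 64)
print(mutual_information(centroid_shift(img), img))
```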

Efficient Emotional Relaxation Framework with Anisotropic Features Based Dijkstra Algorithm

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.79-86 / 2020
  • In this paper, we propose an efficient emotional relaxation framework using a Dijkstra algorithm based on anisotropic features. Emotional relaxation is as important as emotional analysis: it is a framework that can automatically alleviate a person's depression or loneliness, which is very important for HCI (Human-Computer Interaction). In this paper, 1) the emotion value changing with facial expression is calculated using Microsoft's Emotion API; 2) using the differences in these emotion values, abnormal feelings such as depression or loneliness are recognized; 3) finally, an emotional-mesh-based matching process that considers the emotional histogram and anisotropic characteristics is proposed, which suggests emotional relaxation to the user. Overall, we propose a system that easily recognizes changes of emotion from face images and trains personal emotion through emotion relaxation.
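
Dijkstra's algorithm itself, the graph-search core this framework builds on, fits in a few lines of Python with heapq. The toy "emotion graph" is purely illustrative (one could imagine nodes as emotional states and edge weights as anisotropic transition costs); it is not the paper's actual mesh construction.

```python
import heapq

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest-path costs from start; graph maps node -> [(neighbor, weight)]."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

emotion_graph = {"depressed": [("neutral", 2.0), ("lonely", 1.0)],
                 "lonely":    [("neutral", 2.5)],
                 "neutral":   [("content", 1.5)]}
print(dijkstra(emotion_graph, "depressed"))
```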

Development of Smart Active Layer Sensor (I) : Theory and Concept Study (스마트 능동 레이어 센서 개발 (I): 이론 및 개념 연구)

  • Yoon, Dong-Jin;Lee, Young-Sup;Kwon, Jae-Hwa;Lee, Sang-Il
    • Journal of the Korean Society for Nondestructive Testing / v.24 no.5 / pp.465-475 / 2004
  • This paper is the first of a two-part study on the development of a smart active layer (SAL) sensor, which is designed to detect the elastic waves caused by internal cracks and damage in structures. In this first part, the theory and concept of the SAL sensor are investigated: (i) the basic theory of elastic waves was studied; (ii) a feasibility study of the SAL as an elastic-wave detection sensor was performed using finite element analysis (FEA) of a piezoceramic disc; and (iii) the performance of several piezoceramic sensors was compared experimentally with that of a commercial acoustic emission (AE) sensor, for example by a pencil lead break test, to confirm applicability. The concept of using regularly distributed SAL sensors for the effective detection and location of defects is also discussed.
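
The most basic step behind detecting an elastic wave with any such sensor is picking its arrival time. A minimal sketch of threshold-based arrival picking on a synthetic burst follows; the signal shape, sampling rate, and threshold are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def pick_arrival(signal: np.ndarray, fs_hz: float, threshold: float) -> float:
    """Time (s) of the first sample whose magnitude crosses the threshold."""
    idx = np.flatnonzero(np.abs(signal) >= threshold)
    return idx[0] / fs_hz if idx.size else float("nan")

# Synthetic record: noise, then a decaying 100 kHz tone starting at 0.2 ms.
fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
sig = 0.01 * np.random.randn(t.size)
burst = t >= 2e-4
sig[burst] += np.exp(-(t[burst] - 2e-4) / 1e-4) * np.sin(2e5 * np.pi * t[burst])

print(pick_arrival(sig, fs, threshold=0.1))  # ~2.0e-4 s
```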

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED의 색과 깜빡임 제어)

  • Kim, Min-Gyu;Lee, Hui-Sung;Park, Jeong-Woo;Jo, Su-Hun;Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.547-552 / 2008
  • Humans and robots will have a closer relationship in the future, and we can expect the interaction between them to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of an emotion can be expressed with color and blinking, so that the result can be applied to LEDs. Color and emotion clearly have a relationship, but previous results are difficult to implement due to the lack of quantitative data. In this paper, we determined the color and blinking period for expressing the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). These were implemented on an avatar, and the perceived intensities of the emotions were evaluated through a survey. We found that color and blinking helped to express the intensity of sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role; however, they may be improved by adjusting the color or the blinking.
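
A minimal sketch of driving an RGB LED from an emotion and its intensity follows. The base colors and the blink-period range are illustrative assumptions; the paper derived its own values and evaluated them by survey.

```python
# Hypothetical base color per basic emotion (not the paper's mapping).
BASE_COLOR = {"anger": (255, 0, 0), "sadness": (0, 0, 255),
              "disgust": (0, 128, 0), "surprise": (255, 255, 0),
              "happiness": (255, 165, 0), "fear": (128, 0, 128)}

def led_signal(emotion: str, intensity: float):
    """intensity in [0, 1] -> ((r, g, b), blink_period_s).
    Brightness scales up and blinking speeds up as intensity rises."""
    r, g, b = BASE_COLOR[emotion]
    scale = 0.3 + 0.7 * intensity          # keep the LED visibly on
    period = 2.0 - 1.8 * intensity         # 2.0 s (calm) .. 0.2 s (intense)
    return (int(r * scale), int(g * scale), int(b * scale)), period

print(led_signal("sadness", 0.8))          # -> ((0, 0, 219), ~0.56)
```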


A Study on Tracking a Moving Object using Photogrammetric Techniques - Focused on a Soccer Field Model - (사진측량기법을 이용한 이동객체 추적에 관한 연구 - 축구장 모형을 중심으로 -)

  • Bae Sang-Keun;Kim Byung-Guk;Jung Jae-Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.2 / pp.217-226 / 2006
  • Extracting and tracking objects are fundamental and important steps in digital image processing and computer vision, and many algorithms for them have been developed. In this research, a method is suggested for tracking a moving object using a pair of CCD cameras and calculating the coordinates of the moving object. A 1/100 miniature of a soccer field was made to apply the developed algorithms. Candidates were first selected from the acquired images using the RGB values of the moving object (a soccer ball); the object was then extracted from among the candidates using its size (MBR size), and its image coordinates were obtained. The real-time position of the moving object is tracked within the boundary of its expected motion, which is determined by centering on the moving object. The 3D position of the moving object can then be obtained by conducting relative orientation, absolute orientation, and space intersection on the pair of CCD camera images.
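
The candidate-selection step described here (threshold by the ball's RGB values, then filter by MBR size) can be sketched in a few lines of numpy. The color range and size limits are assumptions, and for simplicity all matched pixels are treated as one blob rather than as per-candidate connected components.

```python
import numpy as np

def find_ball(frame: np.ndarray, lo, hi, min_px=20, max_px=400):
    """frame: HxWx3 uint8 image. Return the centroid (row, col) of the
    color-matched region if its bounding box (MBR) area is plausible."""
    mask = np.all((frame >= lo) & (frame <= hi), axis=2)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    mbr_area = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
    if not (min_px <= mbr_area <= max_px):
        return None                        # wrong-sized candidate
    return rows.mean(), cols.mean()

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[50:58, 70:78] = (250, 250, 250)      # synthetic white ball
print(find_ball(frame, lo=(200, 200, 200), hi=(255, 255, 255)))
# -> (53.5, 73.5); each camera's centroid then feeds the space intersection.
```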