• Title/Summary/Keyword: facial gestures

Search results: 46 items (processing time: 0.028 seconds)

독거노인용 가상 휴먼 제작 툴킷 (Virtual Human Authoring ToolKit for a Senior Citizen Living Alone)

  • Shin, Eunji;Jo, Dongsik
    • 한국정보통신학회논문지 / v.24 no.9 / pp.1245-1248 / 2020
  • Elderly people living alone need smart care to support independent living. Recent advances in artificial intelligence have made interaction with a computer-controlled virtual human easier, enabling services such as medicine-intake guidance for the elderly living alone. In this paper, we present an intelligent virtual human and our authoring toolkit for controlling virtual humans that assist a senior citizen living alone. To create the virtual human's motion, the toolkit maps gestures, emotions, and voices onto the virtual human; configured to author virtual human interactions, it lets a suitable virtual human respond with facial expressions, gestures, and voice.
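
As an illustration of the kind of mapping such an authoring toolkit manages, the sketch below pairs an emotion label with the facial expression, gesture, and voice clip a virtual human could play back in response. All names and asset files are hypothetical; this is not the authors' toolkit.

```python
# Hypothetical sketch of an authoring entry that maps a detected emotion to the
# facial expression, gesture, and voice clip the virtual human responds with.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseMapping:
    emotion: str             # e.g. "concern", "joy" (illustrative labels)
    facial_expression: str   # name of a blendshape or animation preset
    gesture: str             # name of a body-motion clip
    voice_clip: str          # path to a recorded or synthesized utterance

# Example entries for a medicine-intake guidance scenario (assumed asset names)
entries = [
    ResponseMapping("concern", "brow_raise_soft", "lean_forward", "take_medicine_reminder.wav"),
    ResponseMapping("joy", "smile_wide", "nod", "well_done.wav"),
]

def respond_to(emotion: str) -> Optional[ResponseMapping]:
    """Return the authored response for a detected user emotion, if one exists."""
    return next((e for e in entries if e.emotion == emotion), None)
```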

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2363-2374 / 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors. This means that gestures can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have therefore been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on the depth information and geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and the number of fingers is identified by comparing the distances between the contour and the center of the hand region. The contour is detected using the Suzuki85 algorithm, and fingers are counted by detecting fingertips at locations where the distance from the hand's center point is a local maximum over three consecutive contour points. The average recognition rate for the number of fingers is 98.6%, and the execution time of the proposed algorithm is 0.065 ms. The method is fast and of low complexity, yet it shows a higher recognition rate and faster recognition speed than other methods. As an application example, this paper describes a Secret Door that recognizes a password from the number of fingers held up by a user.
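
A minimal sketch of the counting idea described above is shown below, assuming a binary mask of the Kinect-segmented hand region. It uses OpenCV's `cv2.findContours` (which implements the Suzuki85 border-following algorithm) and counts fingertips as contour points whose distance to the hand centroid is a local maximum over three consecutive points; the distance threshold is an assumed heuristic, not a value from the paper.

```python
import cv2
import numpy as np

def count_fingers(hand_mask: np.ndarray, min_ratio: float = 0.6) -> int:
    """hand_mask: 8-bit binary image of the segmented hand region (e.g. from Kinect depth)."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0
    contour = max(contours, key=cv2.contourArea)          # largest blob = hand region
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return 0
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    points = contour.reshape(-1, 2).astype(np.float64)
    dist = np.linalg.norm(points - center, axis=1)        # contour-to-center distances
    threshold = dist.max() * min_ratio                    # assumed heuristic, not from the paper

    fingers = 0
    n = len(dist)
    for i in range(n):
        prev_d, cur_d, next_d = dist[i - 1], dist[i], dist[(i + 1) % n]
        # fingertip: local maximum of the distance over three consecutive contour points
        if cur_d > prev_d and cur_d > next_d and cur_d > threshold:
            fingers += 1
    return fingers
```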

가상 캐릭터의 몸짓과 얼굴표정의 일치가 감성지각에 미치는 영향: 어떤 얼굴표정이 중요한가? (The Congruent Effects of Gesture and Facial Expression of Virtual Character on Emotional Perception: What Facial Expression is Significant?)

  • 류지헌;유승범
    • 한국콘텐츠학회논문지 / v.16 no.5 / pp.21-34 / 2016
  • To develop effective virtual characters for digital content, it is important that their emotional states (joy, sadness, fear, anger) are conveyed correctly. This study examined how users perceive a virtual character's emotional state, and how they evaluate its gestures, depending on whether the emotions expressed by the character's facial expression and gestures are congruent. For this purpose, we constructed a congruent condition in which gesture and facial expression expressed the same emotion, an incongruent condition in which they expressed opposite emotions, and a control group with a neutral facial expression. According to the results, in the sadness condition the intended emotional state was not only poorly conveyed but was actually perceived as anger. For the remaining emotional states, however, the congruence of gesture and facial expression did not affect the transmission of the intended emotion. In the evaluation of the character's overall gestures for expressing emotional states, gestures expressing joy received lower ratings when paired with an incongruent facial expression. These findings confirm that, although facial expression matters when expressing a virtual character's emotion, the emotional expression of the gesture itself plays the decisive role overall. The need for further research on social cues such as the virtual character's gender and age is discussed.

A Gesture-Emotion Keyframe Editor for Sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 ITC-CSCC -2 / pp.831-834 / 2000
  • The sign language can be used as an auxiliary communication means between avatars of different languages. An intelligent communication method can also be utilized to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper we design a gesture-emotion keyframe editor that provides an easy means of obtaining these parameter values. To calculate the joint angles of the arms and hands and to generate the in-between keyframes realistically, an inverse-kinematics transformation matrix and several kinds of constraints are applied. Also, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show that the editor could be used for intelligent sign-language image communication between different languages.
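
The joint-angle computation such an editor needs can be illustrated with a closed-form inverse-kinematics sketch for a two-link planar arm; this is not the original editor's code, and the link lengths and branch choice are illustrative assumptions. In-between keyframes could then be produced by interpolating the resulting joint angles between authored key poses.

```python
import math

def two_link_ik(x: float, y: float, l1: float = 0.30, l2: float = 0.25):
    """Return (shoulder, elbow) angles in radians for a target (x, y), or None if unreachable.

    l1, l2: assumed upper-arm and forearm lengths in metres (illustrative values).
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(cos_elbow) > 1.0:          # target outside the reachable workspace
        return None
    elbow = math.acos(cos_elbow)      # one of the two solution branches ("elbow-down")
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: joint angles for a hand target 40 cm forward, 10 cm up
# print(two_link_ik(0.40, 0.10))
```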


Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.631-653 / 2020
  • Sign language is a natural, visually oriented, non-verbal communication channel between people that facilitates communication through facial/bodily expressions, postures, and a set of gestures. It is mainly used for communication with people who are deaf or hard of hearing. To understand such communication quickly and accurately, this paper considers the design of a sign language translation system. The proposed system includes object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection; then a deep learning structure based on Inception v3 plus a Support Vector Machine (SVM), which combines the feature extraction and classification stages, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used for the design of the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
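
A hedged sketch of the feature-extraction-plus-classification stage is given below, assuming the SSD detector has already produced cropped hand images: a pretrained Inception v3 serves as a fixed feature extractor and a linear SVM as the classifier. Variable names and hyperparameters are illustrative; this is a pipeline illustration, not the authors' implementation.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from sklearn.svm import SVC

# Pretrained Inception v3 without its top layer, global-average-pooled to a 2048-d feature vector
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(crops: np.ndarray) -> np.ndarray:
    """crops: (N, 299, 299, 3) array of detected hand regions (assumed already cropped by SSD)."""
    return extractor.predict(preprocess_input(crops.astype("float32")))

# Hypothetical usage, assuming X_train/X_test hold hand crops and y_train fingerspelling labels:
# clf = SVC(kernel="linear").fit(extract_features(X_train), y_train)
# predictions = clf.predict(extract_features(X_test))
```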

감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발 (Design and Implementation of the Educational Humanoid Robot D2 for an Emotional Interaction System)

  • 김도우;정기철;박원성
    • 대한전기학회:학술대회논문집 / 대한전기학회 2007년도 제38회 하계학술대회 / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to capture the emotional status of the user and respond appropriately; as a result, the robot can engage in natural dialogue with a human. To support interaction with a human through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware with vision and speech capability, including various control boards such as motion control boards and a signal processing board that handles several types of sensors. Using the educational humanoid robot D2, we present successful demonstrations consisting of manipulation tasks with the two arms, object tracking using the vision system, and communication with a human through the emotional interface, synthesized speech, and recognition of speech commands.


수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역 (Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints)

  • 김민채;김정은;김하영
    • 한국멀티미디어학회논문지 / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language has a completely different meaning depending on the direction of the hand or a change in facial expression, even for the same gesture. In this respect, it is crucial to capture the spatial-temporal structure information of each movement. However, sign language translation studies based on Sign2Gloss2Text only convey comprehensive spatial-temporal information about the entire sign language movement; consequently, detailed information (facial expressions, gestures, etc.) of each movement that is important for sign language translation is not emphasized. Accordingly, in this paper, we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by applying a Bi-GRU to the extracted sign language keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the Development (DEV) and Testing (TEST) sets; in particular, it proved the effectiveness of the proposed methodology by achieving a TEST BLEU-4 of 23.19, an improvement of 1.87 over the baseline.
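
A minimal sketch of the two embedding steps named above, written in PyTorch under assumed shapes (121 keypoints with (x, y) coordinates per frame): a per-frame linear spatial embedding of the keypoints followed by a bidirectional GRU that encodes their sequential information. Layer sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    def __init__(self, num_keypoints: int = 121, coord_dim: int = 2, hidden: int = 256):
        super().__init__()
        # Spatial Keypoints Embedding: one feature vector per frame from its keypoints
        self.spatial = nn.Linear(num_keypoints * coord_dim, hidden)
        # Temporal Keypoints Embedding: Bi-GRU over the frame sequence
        self.temporal = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (batch, frames, num_keypoints, coord_dim)
        b, t = keypoints.shape[:2]
        frame_feats = torch.relu(self.spatial(keypoints.reshape(b, t, -1)))
        seq_feats, _ = self.temporal(frame_feats)     # (batch, frames, 2 * hidden)
        return seq_feats                              # would feed the Sign2Gloss2Text translator

# Example: 8 clips of 64 frames, each with 121 (x, y) keypoints -> (8, 64, 512) features
# feats = KeypointEncoder()(torch.randn(8, 64, 121, 2))
```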

Human Robot Interaction Using Face Direction Gestures

  • Kwon, Dong-Soo;Bang, Hyo-Choong
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.171.4-171 / 2001
  • This paper proposes a method of human-robot interaction (HRI) using face-directional gestures. A single CCD color camera is used to capture the face region, and the robot recognizes the face-directional gesture from the positions of the facial features. Using this gesture, one can give the robot commands such as stop, go, turn left, and turn right. Since the robot also has ultrasonic sensors, it can detect obstacles and determine a safe direction from its current position. By combining the user's command with the sensed obstacle configuration, the robot selects a safe and efficient motion direction. Simulation results show that HRI makes the robot's navigation more reliable.
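
The combination step can be sketched as below (not the authors' controller): the face-direction command proposes a heading, and ultrasonic clearance readings veto headings that fall below an assumed safety margin, falling back to the direction with the most free space.

```python
SAFE_DISTANCE_M = 0.5   # assumed safety margin, not a value from the paper

def select_direction(command: str, clearance: dict) -> str:
    """command: 'stop', 'go', 'left' or 'right'; clearance: metres of free space per heading."""
    if command == "stop":
        return "stop"
    heading = {"go": "forward", "left": "left", "right": "right"}[command]
    if clearance.get(heading, 0.0) >= SAFE_DISTANCE_M:
        return heading                       # commanded heading is safe
    return max(clearance, key=clearance.get)  # otherwise pick the heading with most clearance

# Example: forward is blocked, so the robot turns toward the freest side
# select_direction("go", {"forward": 0.3, "left": 1.2, "right": 0.8})  # -> "left"
```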


Method for Inference of Operators' Thoughts from Eye Movement Data in Nuclear Power Plants

  • Ha, Jun Su;Byon, Young-Ji;Baek, Joonsang;Seong, Poong Hyun
    • Nuclear Engineering and Technology / v.48 no.1 / pp.129-143 / 2016
  • Sometimes, we need or try to figure out somebody's thoughts from his or her behaviors such as eye movement, facial expression, gestures, and motions. In safety-critical and complex systems such as nuclear power plants, the inference of operators' thoughts (understanding or diagnosis of a current situation) might provide a lot of opportunities for useful applications, such as development of an improved operator training program, a new type of operator support system, and human performance measures for human factor validation. In this experimental study, a novel method for inference of an operator's thoughts from his or her eye movement data is proposed and evaluated with a nuclear power plant simulator. In the experiments, about 80% of operators' thoughts can be inferred correctly using the proposed method.

키넥트 방식을 활용한 얼굴모션인식 데이터 제어에 관한 연구 (A Study on the Correction of Face Motion Recognition Data Using Kinect Method)

  • 이준상;박준홍
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2019년도 춘계학술대회 / pp.513-515 / 2019
  • Technology for perceiving depth values with the Kinect infrared projector continues to advance, and human motion tracking is evolving from marker-based to markerless methods. Capturing facial movements with Kinect, however, has the drawback of being imprecise, and methods for controlling facial gestures and movements in real time still require considerable research. Therefore, this paper studies a technique that applies blending to the facial recognition data extracted with the Kinect infrared method and controls the result, and proposes a technique for producing natural 3D video content.
