• Title/Summary/Keyword: Gaze-Tracking

166 search results

Effective Real-Time Gaze Identification Using a Bayesian Statistical Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.3
    • /
    • pp.331-338
    • /
    • 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new eye- and face-recognition identification system, addressing the problem in existing approaches that facial movements make the user's gaze difficult to identify. A Kalman filter uses the structural information of facial features to determine the authenticity of the face and estimates the future head location from the current location information, while the horizontal and vertical elements of the face are detected through a histogram analysis with relatively fast processing time. An infrared illuminator is configured so that the pupil can be detected in real time under the obtained lighting, and the pupil is then tracked to extract the feature vector.
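The future-location estimation described in this abstract can be sketched with a toy constant-velocity Kalman filter. This is illustrative only; the 1D state model, noise values, and measurement sequence below are assumptions, not the paper's implementation:

```python
# Minimal sketch: a constant-velocity Kalman filter predicting the next
# pupil x-coordinate from noisy measurements, mirroring the abstract's idea
# of estimating the future location from the current one.

class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate."""
    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        # x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        # measurement of position only: H = [1, 0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        y = z - self.x[0]                              # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01],
        ]

kf = Kalman1D()
for z in [10.0, 12.1, 13.9, 16.2]:   # noisy pupil x-positions per frame
    kf.predict()
    kf.update(z)
pred = kf.predict()                   # predicted position for the next frame
```

After the filter has seen a steadily drifting pupil, the prediction extrapolates past the last measurement, which is what lets a tracker keep its search window ahead of the motion.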

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eun-ju;Yoo, In-jin;Park, Do-Hyung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.409-414
    • /
    • 2021
  • This study applies eye tracking to robot appearance, one strand of social robot design research. A research model for social robot design was constructed by linking users' eye-tracking metrics, measured over areas of interest such as the social robot's whole body, face, eyes, and lips, with user attitudes captured through a design evaluation questionnaire. The aim was to uncover the mechanism by which users form attitudes toward robots and to derive concrete insights that can be referenced when designing robots. Specifically, the eye-tracking metrics used in this study were Fixation, First Visit, Total Viewed, and Revisits, and the AOIs (Areas of Interest) were defined as the social robot's face, eyes, lips, and body. Through the design evaluation questionnaire, consumer beliefs such as the social robot's emotional expressiveness, human-likeness, and face prominence were collected, and attitude toward the robot was set as the dependent variable.


Communication Support System for ALS Patient Based on Text Input Interface Using Eye Tracking and Deep Learning Based Sound Synthesis (눈동자 추적 기반 입력 및 딥러닝 기반 음성 합성을 적용한 루게릭 환자 의사소통 지원 시스템)

  • Park, Hyunjoo;Jeong, Seungdo
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.20 no.2
    • /
    • pp.27-36
    • /
    • 2024
  • Accidents or disease can lead to acquired dysphonia. For such patients, we propose a new eye-movement-based input interface to facilitate communication. Unlike existing methods that present the English alphabet as is, we reorganized the keyboard layout to support the Korean alphabet and designed it so that patients can enter words by themselves using only eye movements, gaze, and blinking. The proposed interface not only reduces fatigue by minimizing eye movements, but also allows easy and quick input through an intuitive arrangement. For natural communication, we also implemented a system that allows patients who are unable to speak to communicate in their own voice: the system tracks eye movements to record what the patient is trying to say, then uses Glow-TTS and Multi-band MelGAN to reconstruct the patient's learned voice and output the speech.
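A gaze-driven keyboard of the kind this abstract describes typically selects a key once the gaze rests on it long enough. The sketch below shows such a dwell-time selection rule; the layout, threshold, and sample stream are illustrative assumptions, not the paper's actual interface:

```python
# Illustrative sketch: dwell-time key selection for a gaze-driven keyboard.
# A key is entered once the gaze has rested on it longer than a threshold.

def dwell_select(samples, dwell_ms=800):
    """samples: time-ordered list of (key, timestamp_ms) gaze samples.
    Returns the sequence of keys selected by dwelling."""
    typed = []
    current, since = None, None
    emitted = False
    for key, t in samples:
        if key != current:
            current, since, emitted = key, t, False   # gaze moved to a new key
        elif not emitted and t - since >= dwell_ms:
            typed.append(key)                          # dwell threshold reached
            emitted = True                             # no repeats until gaze leaves
    return typed

gaze = [("ㄱ", 0), ("ㄱ", 400), ("ㄱ", 900),       # long dwell -> selected
        ("ㅏ", 1000), ("ㅏ", 1300),                # short glance -> ignored
        ("ㄴ", 1400), ("ㄴ", 2000), ("ㄴ", 2300)]  # long dwell -> selected
print(dwell_select(gaze))  # ['ㄱ', 'ㄴ']
```

Suppressing repeats until the gaze leaves the key is what keeps a long stare from typing the same character twice, which is one source of the fatigue reduction the abstract mentions.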

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eunju;Yoo, In-jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.243-262
    • /
    • 2022
  • In this study, eye tracking was applied to the robot's appearance as part of social robot design research. Each part of the social robot was designated as an AOI (Area of Interest), and user attitudes were measured through a design evaluation questionnaire to construct a design research model for social robots. The eye-tracking metrics used in this study were Fixation, First Visit, Total Viewed, and Revisits, and the AOIs were defined as the face, eyes, lips, and body of the social robot. The design evaluation questionnaire collected consumer beliefs about the social robot, such as face prominence, human-likeness, and expressiveness, with attitude toward the robot as the dependent variable. Through this, we sought to uncover the specific mechanism by which users form attitudes toward robots and to derive concrete insights that can be referenced when designing robots.

Utilizing Usability Metrics to Evaluate a Subway Map Design

  • Jung, Kwang Tae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.4
    • /
    • pp.343-353
    • /
    • 2017
  • Objective: This study aims to evaluate the efficiency of two representative subway map design types, namely a diagram type and a geographical type, using physiological metrics, performance metrics, and self-reported metrics, which are representative usability metrics. Background: Subway maps need to be designed so that users can quickly search and recognize subway line information. Although most cities' subway maps currently use the diagram type designed by Henry Beck, New York City's subway map has recently been changed to a type combined with the geographical design by Michael Hertz. However, few studies on its efficiency exist, and the available studies mainly depend on questionnaire surveys or take the form of subjective behavioral studies based on experts' experiences. In this regard, evaluation through a more objective method is needed. Method: This study employed usability metrics to evaluate the efficiency of information search on the diagram-type and geographical-type subway maps that are most widely used. To this end, physiological metrics obtained through eye tracking, task completion time as a representative performance metric, and subjective evaluation metrics were used to evaluate the suitability of the subway map designs. Results: In the gaze movement distance analysis, no significant difference was found between the two design types for the process of finding a departure station from the starting point or the process of finding a transfer station between the departure and arrival (destination) stations. However, in the process of finding the arrival station from the departure station, gaze movement distance was significantly shorter for the geographical type than for the diagram type. The analysis of task completion time showed a similar result: task completion time was significantly shorter for the geographical type in the process of finding the arrival station from the departure station, while no significant difference appeared in the other information search processes. The subjective evaluation metrics revealed no significant difference between the two design types. Conclusion: The two representative subway map design types were analyzed by adopting usability metrics. Although no significant difference was found in some information search processes, information search was easier with the geographical type overall. The study also found that usability metrics can be effectively used to evaluate subway map design types. Application: The results can be used to set a design direction that eases information search on subway lines, and the approach can serve as a method for evaluating a subway map's design type.
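The "gaze movement distance" metric in this study is essentially the length of the scanpath between successive fixations. A minimal sketch (the coordinates below are made up for illustration; the study's actual fixation data came from an eye tracker):

```python
# Minimal sketch: gaze movement distance as the summed Euclidean distance
# between successive fixation points on the screen.

from math import hypot

def scanpath_length(fixations):
    """fixations: list of (x, y) fixation coordinates in screen pixels."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]))

# e.g. a gaze that finds a departure station, then jumps to the arrival area
path = [(100, 100), (400, 100), (400, 500)]
print(scanpath_length(path))  # 300 + 400 = 700.0
```

A shorter scanpath for the same task, as the geographical type showed in the arrival-station search, indicates that less visual search was needed to locate the target.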

Analysis of Players' Eye-Movement Patterns by Playing Experience in FPS Game (FPS게임 플레이경험에 따른 플레이어의 시선경로 패턴 분석)

  • Choi, GyuHyeok;Kim, Mijin
    • Smart Media Journal
    • /
    • v.5 no.2
    • /
    • pp.33-41
    • /
    • 2016
  • FPS games are usually centered on combat game play in which the player, through a first-person perspective as the in-game character, strikes opponents in accordance with each level's objective. In this type of game play, the decision making that leads the player to take certain actions is based on the player's visual cognitive information and on information collected directly or indirectly from previous play experience. Particularly in an FPS game, where the interaction between the player and each game level is key, analyzing a player's visual cognitive information can provide intelligence that helps design or adjust the structure of a game level. For this study, a sample group was collected and divided into a novice group and an expert group based on their level of FPS experience. Using eye-tracking equipment, the point of gaze of players in each group was recorded while they played levels of a well-known FPS title. The point of gaze at the moment the player starts to take action, right before and after the start of combat, was recorded in 500 play videos, and as a result each group's characteristic gaze pattern could be identified. Based on these results, the authors plan to develop a methodology that can improve the difficulty settings and playability of FPS game levels.

Robust Eye Region Discrimination and Eye Tracking to the Environmental Changes (환경변화에 강인한 눈 영역 분리 및 안구 추적에 관한 연구)

  • Kim, Byoung-Kyun;Lee, Wang-Heon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.5
    • /
    • pp.1171-1176
    • /
    • 2014
  • Eye tracking (ET) is used in human-computer interaction (HCI) to analyze movement status and to find the gaze direction of the eye by tracking pupil movement on the human face. Nowadays ET is widely used not only in market analysis, taking advantage of pupil tracking, but also in grasping user intention, and there has been much research on it. Although vision-based ET is known to be convenient from an application point of view, it is not robust to environmental changes such as illumination, geometric rotation, occlusion, and scale changes. This paper proposes a two-step ET method: first, the face and eye regions are discriminated by a Haar classifier on the face, and then the pupils within the discriminated eye regions are tracked by CAMShift combined with template matching. We demonstrated the usefulness of the proposed algorithm through extensive real experiments under changing environments, including illumination, rotation, and scale changes.
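The template-matching step named in this abstract can be sketched in isolation. The paper applies it to real images alongside a Haar cascade and CAMShift; the tiny grayscale arrays and SSD criterion below are illustrative assumptions:

```python
# Sketch of the template-matching step only: find where a small pupil
# template best matches inside a grayscale search window by minimizing
# the sum of squared differences (SSD).

def match_template(image, template):
    """image, template: 2D lists of grayscale values.
    Returns (row, col) of the best-matching top-left position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):          # slide template over the window
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

eye = [[200, 200, 200, 200],
       [200,  30,  40, 200],
       [200,  35,  25, 200],
       [200, 200, 200, 200]]          # bright sclera, dark pupil
pupil = [[30, 40],
         [35, 25]]                    # 2x2 dark pupil template
print(match_template(eye, pupil))  # (1, 1)
```

In a full tracker, a fast region proposal (here, CAMShift on the eye region) keeps this exhaustive search confined to a small window, which is what makes the combination real-time.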

Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.171-177
    • /
    • 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these potential benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, known as one of the key non-verbal signals for interaction, can also be inferred from his or her head pose. To develop an efficient head-tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting 3D motion information from the model. In particular, a skin-color-based face detection algorithm is proposed to achieve robust performance under varying head directions while maintaining reasonable computational cost. The performance of the proposed head-tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, as well as the design of supporting systems such as video transmission and robot control interfaces.
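The Harris corner detector mentioned here scores how "trackable" an image patch is from its gradient structure, which is why it pairs well with Lucas-Kanade flow. A minimal sketch over a synthetic patch (the 5x5 images, central-difference gradients, and single-response simplification are assumptions, not the paper's pipeline):

```python
# Illustrative sketch of the Harris corner response used to pick trackable
# points: R = det(M) - k * trace(M)^2, with M the summed outer product of
# image gradients over the patch. Flat patches score ~0; corners score high.

def harris_response(img, k=0.04):
    """Return the Harris response of a small grayscale patch (2D list)."""
    h, w = len(img), len(img[0])
    sxx = sxy = syy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central differences
            iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

flat = [[10] * 5 for _ in range(5)]            # uniform patch: no structure
corner = [[10] * 5 for _ in range(5)]
for y in range(2, 5):
    for x in range(2, 5):
        corner[y][x] = 200                     # bright square: corner at (2, 2)

print(harris_response(flat))    # 0.0
print(harris_response(corner))  # large positive response
```

Points with strong gradient structure in both directions are exactly those where Lucas-Kanade can solve for motion unambiguously, which motivates the Harris-plus-LK combination.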

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.756-760
    • /
    • 2010
  • This paper proposes a novel facial behavior recognition system for driver fatigue detection. Facial behavior is reflected in various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to clearly discriminate a particular behavior from the obtained facial features alone, because human behavior is complicated and the face provides only vague, incomplete information about it. The proposed system first detects facial features through eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and encodes the obtained features as AUs (action units) of FACS. On the basis of the obtained AUs, it infers the probability of each state occurring through a Bayesian network.
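The inference step this abstract describes can be illustrated with a deliberately simplified model. The paper uses a full Bayesian network over FACS action units; the naive-Bayes approximation, AU names, and conditional probabilities below are made-up illustrations of the same idea:

```python
# Simplified sketch: posterior probability of driver fatigue given observed
# FACS action units, using a naive-Bayes approximation of the paper's
# Bayesian-network inference. All probabilities here are invented.

def fatigue_posterior(observed_aus, prior=0.2):
    # P(AU present | fatigued) and P(AU present | alert) -- illustrative only
    p_given_fatigue = {"AU43_eye_closure": 0.8, "AU26_jaw_drop": 0.6}
    p_given_alert   = {"AU43_eye_closure": 0.1, "AU26_jaw_drop": 0.2}
    lf, la = prior, 1.0 - prior            # joint likelihoods of each state
    for au in p_given_fatigue:
        seen = au in observed_aus
        lf *= p_given_fatigue[au] if seen else 1 - p_given_fatigue[au]
        la *= p_given_alert[au] if seen else 1 - p_given_alert[au]
    return lf / (lf + la)                  # Bayes' rule, normalized

# Both drowsiness cues observed -> fatigue becomes the likely explanation
p = fatigue_posterior({"AU43_eye_closure", "AU26_jaw_drop"})
```

A real Bayesian network additionally models dependencies between AUs (e.g. eye closure and head nodding co-occurring), which naive Bayes ignores; the abstract's point is that combining weak, ambiguous cues probabilistically yields a usable fatigue estimate.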

Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee;Joh, Hwayeon;Kim, Yeojin;Park, Sohyeon;Oh, Uran
    • International journal of advanced smart convergence
    • /
    • v.11 no.4
    • /
    • pp.120-128
    • /
    • 2022
  • While online social interactions have become more prevalent with the increasing popularity of Metaverse platforms, little has been studied about the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts. To understand the importance of presenting a virtual avatar's facial expressions under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar, both with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation or trying to guess emotion-related keywords during charades, regardless of the presence of facial expressions. Yet we confirmed that participants prefer to see facial expressions in virtual reality, as in real-world scenarios, because they help them better understand the context and have a more immersive and focused experience.