• Title/Summary/Keyword: Eye Tracking


Difference in visual attention during the assessment of facial attractiveness and trustworthiness (얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이)

  • Sung, Young-Shin;Cho, Kyung-Jin;Kim, Do-Yeon;Kim, Hack-Jin
    • Science of Emotion and Sensibility / v.13 no.3 / pp.533-540 / 2010
  • This study was designed to examine the difference in visual attention between the evaluation of facial attractiveness and that of facial trustworthiness, which may be the two most fundamental social evaluations in forming first impressions across various types of social interaction. In Study 1, participants evaluated the attractiveness and trustworthiness of 40 novel faces while their gaze directions were recorded with an eye-tracker. The analysis revealed that participants spent significantly longer gaze fixation time on certain facial features, such as the eyes and nose, during the evaluation of facial trustworthiness than during that of facial attractiveness. In Study 2, participants performed the same face evaluation tasks, except that a word was briefly displayed on a certain facial feature in each trial, followed by unexpected recall tests of the previously viewed words. The analysis demonstrated that the recognition rate of words that had been presented on the nose was significantly higher for the facial trustworthiness task than for the facial attractiveness task. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness in terms of the allocation of attentional resources.

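The per-feature fixation analysis described in Study 1 can be sketched as summing fixation durations inside areas of interest (AOIs) such as the eyes and nose. The AOI rectangles and fixation tuples below are hypothetical illustrations, not the study's actual stimuli or recordings.

```python
# Sketch: summing gaze fixation durations per facial area of interest (AOI).
# AOI boundaries and fixation data are hypothetical, not the study's.

AOIS = {
    "eyes": (30, 70, 20, 40),   # (x_min, x_max, y_min, y_max) in stimulus pixels
    "nose": (45, 55, 40, 60),
    "mouth": (40, 60, 60, 75),
}

def fixation_time_per_aoi(fixations, aois=AOIS):
    """Sum fixation durations (ms) falling inside each AOI rectangle."""
    totals = {name: 0.0 for name in aois}
    for x, y, duration_ms in fixations:
        for name, (x0, x1, y0, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration_ms
                break  # each fixation is counted in at most one AOI
    return totals

# Four hypothetical fixations: (x, y, duration in ms)
fixations = [(50, 30, 200), (50, 50, 350), (48, 65, 120), (10, 10, 90)]
print(fixation_time_per_aoi(fixations))
```

Comparing such per-AOI totals between the attractiveness and trustworthiness tasks is the kind of contrast the abstract reports.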

How do Formats of Health Related Facebook Posts Effect on Eye Movements and Cognitive Outcomes? (페이스북 건강정보 게시물 형식이 시각적 주의와 인지결과에 미치는 영향)

  • Yoon, JungWon;Syn, Sue Yeon
    • Journal of the Korean Society for Library and Information Science / v.55 no.3 / pp.219-237 / 2021
  • Visual information is widely used to deliver health information more effectively on social media, but there is a lack of research on how effectively it does so. This study reports Facebook users' reading patterns and the results of cognitive tests (recall and recognition) using health-related Facebook posts. Twenty-one college students participated in an online questionnaire, an eye-tracking experiment, and recall and recognition tests. First, users paid attention to the areas that contained information (i.e., they focused on the main text rather than on photos that carried no information). Second, for Facebook posts containing infographics, users paid attention to the infographics, but the recall and recognition scores for these posts were lower than for posts containing photos; in particular, infographics in a complex collage format yielded lower scores. Third, regarding text length, posts with short text produced higher recall and recognition scores than posts with medium or long text. The study suggests to Facebook health information providers and distributors how to design posts for delivering health information more effectively.

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune;Jang, Myong-Soo;Kim, Seung-Jong;Park, Shin-Suk
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.270-277 / 2008
  • The development of a face robot basically targets natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a lifelike face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. In addition, its structure and mechanism should be simple and its production cost low. This paper introduces the mechanisms and functions of Buddy, which can produce natural and precise facial expressions and make dynamic gestures driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. Buddy's interaction performance is successfully demonstrated by experiments and simulations.


ROS-based control for a robot manipulator with a demonstration of the ball-on-plate task

  • Khan, Khasim A.;Konda, Revanth R.;Ryu, Ji-Chul
    • Advances in Robotics Research / v.2 no.2 / pp.113-127 / 2018
  • Robotics and automation are rapidly growing in industry, replacing human labor, and the idea of robots replacing humans is positively influencing business, thereby broadening the scope of research. This paper discusses the development of an experimental platform in which a robotic arm is controlled through the Robot Operating System (ROS). ROS is an open-source platform that runs on top of an existing operating system and provides various types of robots with advanced capabilities, from operating-system services to low-level control. In this work we control a 7-DOF manipulator arm (Robai Cyton Gamma 300), equipped with an external vision camera system, through ROS and demonstrate the task of balancing a ball on a plate-type end effector. To perform feedback control of the balancing task, the ball is tracked by a camera (Sony PlayStation Eye) through a tracking algorithm written in C++ using OpenCV libraries. The joint actuators of the robot are servo motors (Dynamixel), which are directly controlled through a low-level control algorithm. To simplify the control, the system is modeled such that the plate has two-axis linearized motion. The developed system and the proposed approaches could be used for more complicated tasks requiring control of more joints, as well as a testbed for students to learn ROS together with control theories in robotics.
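The two-axis linearized feedback loop described in the abstract can be sketched, per axis, as a PD controller that converts the camera-measured ball position error into a plate tilt command. The gains, tilt limit, and simulation parameters below are hypothetical, not the paper's values.

```python
# Sketch of one axis of the linearized ball-on-plate loop: a PD controller
# maps ball position error to a plate tilt command. Gains are hypothetical.

def pd_tilt_command(error, error_prev, dt, kp=2.0, kd=0.6, max_tilt=0.3):
    """Return a plate tilt angle (rad) from position error (m) via PD control."""
    derivative = (error - error_prev) / dt
    tilt = kp * error + kd * derivative
    return max(-max_tilt, min(max_tilt, tilt))  # saturate at servo limits

def simulate(steps=200, dt=0.02, g=9.81):
    """Linearized dynamics: a solid ball on a tilted plate accelerates at
    roughly -(5/7) * g * tilt; integrate with semi-implicit Euler."""
    x, v, e_prev = 0.10, 0.0, 0.10  # ball starts 10 cm from plate center
    for _ in range(steps):
        tilt = pd_tilt_command(x, e_prev, dt)
        e_prev = x
        a = -(5.0 / 7.0) * g * tilt
        v += a * dt
        x += v * dt
    return x

print(abs(simulate()))  # residual distance from center after 4 s
```

With these gains the closed loop is well damped, so the ball settles near the plate center within the simulated window.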

Dynamic Characteristics Estimation of the Oculomotor control System using Band-Limited Pseudo Random Signals (의사 랜덤 신호에 의한 동안계의 동특성 추정)

  • 김성환;박상예
    • Journal of the Korean Institute of Telematics and Electronics / v.18 no.4 / pp.12-20 / 1981
  • In this paper, band-limited Gaussian random noise and a PRBS (pseudo-random binary sequence) are used as test signals to estimate the dynamic characteristics of the oculomotor system. Eye movements of the human subject are measured by E.O.G. (electro-oculography), and the control characteristics of the oculomotor system are studied by random-signal analysis based on statistical communication theory. The conclusions are summarized as follows. (1) In the frequency response, the gain curve rises slightly in the regions of 0.7~0.9 Hz and 1.8~2 Hz due to the saccades that occur during usual tracking. (2) The average rate of information transfer by the oculomotor control system is 1.24 bits/sec, calculated from the power spectral density and the cross spectral density for the Gaussian random input.

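A common way to compute such a transfer rate from the power and cross spectral densities is to integrate log2(1/(1-γ²(f))) over frequency, where γ²(f) = |Sxy(f)|² / (Sxx(f)·Syy(f)) is the coherence. Whether the paper uses exactly this formula is an assumption; the spectra below are synthetic illustrations, not the E.O.G. measurements.

```python
import math

# Sketch: information transfer rate integrated from the coherence
# gamma^2(f) = |Sxy|^2 / (Sxx * Syy). Spectra here are synthetic.

def info_transfer_rate(freqs, sxx, syy, sxy_mag):
    """Integrate log2(1 / (1 - coherence)) over frequency (trapezoidal rule)."""
    rate = 0.0
    prev_f = prev_val = None
    for f, pxx, pyy, pxy in zip(freqs, sxx, syy, sxy_mag):
        coh = (pxy ** 2) / (pxx * pyy)
        coh = min(coh, 0.999999)          # guard against log2 of zero
        val = math.log2(1.0 / (1.0 - coh))
        if prev_f is not None:
            rate += 0.5 * (val + prev_val) * (f - prev_f)
        prev_f, prev_val = f, val
    return rate  # bits per second

freqs = [0.0, 0.5, 1.0, 1.5, 2.0]           # Hz
sxx = [1.0] * 5                             # input power spectral density
syy = [1.0] * 5                             # output power spectral density
sxy = [0.9, 0.8, 0.6, 0.4, 0.2]             # |cross spectrum|: coherence falls off
print(round(info_transfer_rate(freqs, sxx, syy, sxy), 3))
```

The rate is dominated by the low-frequency band where the eye tracks the target coherently, consistent with the abstract's few-bits-per-second figure.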

Implementation to human-computer interface system with motion tracking using OpenCV and FPGA (FPGA와 OpenCV를 이용한 눈동자 모션인식을 통한 의사소통 시스템)

  • Lee, Hee Bin;Heo, Seung Won;Lee, Seung Jun;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.696-699 / 2018
  • This paper introduces a system that enables pupil tracking and communication for patients with amyotrophic lateral sclerosis (ALS) who cannot move freely. The face and pupil are tracked using OpenCV, and eye movements are detected using a DE1-SoC board. Using a webcam, the system tracks the pupil, identifies its movement from the pupil coordinate values, and selects a character according to the user's intention. The proposed system has a relatively low development cost, its FPGA design is reusable, and the selected text can easily be sent to a mobile phone via Bluetooth.

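The abstract's step from pupil coordinate values to character selection can be sketched as mapping a gaze point in the camera frame onto a grid of character cells. The 3x3 layout, frame size, and characters below are hypothetical, not the DE1-SoC implementation.

```python
# Sketch: map a tracked pupil coordinate to an on-screen character cell.
# Grid layout and frame dimensions are hypothetical.

GRID = [["A", "B", "C"],
        ["D", "E", "F"],
        ["G", "H", "I"]]

def select_char(px, py, frame_w=640, frame_h=480, grid=GRID):
    """Map a pupil coordinate (pixels) in the camera frame to a grid cell."""
    rows, cols = len(grid), len(grid[0])
    col = min(int(px * cols / frame_w), cols - 1)  # clamp at the frame edge
    row = min(int(py * rows / frame_h), rows - 1)
    return grid[row][col]

print(select_char(320, 240))  # center of the frame -> middle cell "E"
```

In practice a dwell-time threshold would confirm the selection before the character is sent over Bluetooth.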

Implementation to eye motion tracking system using convolutional neural network (Convolutional neural network를 이용한 눈동자 모션인식 시스템 구현)

  • Lee, Seung Jun;Heo, Seung Won;Lee, Hee Bin;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.703-704 / 2018
  • An artificial neural network design that traces the pupil for patients suffering from Lou Gehrig's disease (ALS) is introduced. It determines the pupil position required for a communication system. TensorFlow is used to build and train the neural network, and the pupil position is determined through the trained network. A convolutional neural network (CNN) consisting of two convolutional layers and two fully connected layers is implemented for the system.


Analysis of text entry task pattern according to the degree of skillfulness (숙련도 차이에 따른 문자 입력 작업 행태 분석)

  • Kim, Jung-Hwan;Lee, Suk-Jae;Myung, Ro-Hae
    • Proceedings of the HCI Society of Korea Conference / 2007.02b / pp.1-6 / 2007
  • Recently, the demand for text entry has been growing across a variety of devices and environments, so evaluation of text entry interfaces is needed for efficient interface design. Previous studies divided text entry time into visual search time and finger movement time, and predicted and evaluated it using the information-processing theories of the Hick-Hyman Law and Fitts' Law. However, those two processes were treated as serial, which overlooks eye-hand coordination. In addition, because existing text entry time prediction models assumed a specific skill level (expert), they have overestimated actual text entry time. This study therefore measured eye-hand coordination time and analyzed its behavior in order to introduce an eye-hand coordination parameter into the text entry time prediction model. It also analyzed how visual search time, hand movement time, and eye-hand coordination time and behavior change between unskilled and skilled users. The results showed that eye-hand coordination time was closely related to text entry time and accounted for 22% of it regardless of skill level. Comparing skilled and unskilled users, the proportions of hand movement and coordination time in total entry time did not differ, but the proportion of eye movement time differed greatly. These results will serve as basic data for applying eye-hand coordination and skill-level differences as parameters in existing text entry prediction models.

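The serial baseline model the abstract critiques (visual search time from the Hick-Hyman Law plus movement time from Fitts' Law, with no eye-hand overlap) can be sketched as follows. All coefficients are hypothetical placeholders, not fitted values from any study.

```python
import math

# Sketch of the serial prediction model: Hick-Hyman search time plus Fitts'
# Law movement time. Coefficients a, b are hypothetical, not fitted values.

def hick_hyman_time(n_alternatives, a=0.05, b=0.15):
    """Search/reaction time grows with the log of the number of choices."""
    return a + b * math.log2(n_alternatives + 1)

def fitts_time(distance, width, a=0.10, b=0.12):
    """Movement time grows with the index of difficulty log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def predicted_entry_time(n_keys, distance, key_width):
    """Serial model: search then movement, assuming no eye-hand overlap."""
    return hick_hyman_time(n_keys) + fitts_time(distance, key_width)

# One hypothetical keystroke on a 12-key pad, 40 mm travel, 8 mm keys
t = predicted_entry_time(n_keys=12, distance=40.0, key_width=8.0)
print(round(t, 3))
```

Because the two terms are simply summed, any time the eye and hand actually overlap is double-counted, which is exactly the overestimation the study attributes to the serial assumption.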

A Study on the Visual Concentration and EEG Concentration on Cafe Facade (카페 파사드의 선호도에 따른 시각적 주의집중 및 뇌파 주의집중도 분석)

  • Kim, Sang-Hee;Lee, Jeong-Ho
    • Korean Institute of Interior Design Journal / v.25 no.3 / pp.60-69 / 2016
  • This experimental study measures the emotional and physiological responses of customers to cafe facade designs through eye-tracking and EEG experiments. Specifically, visual concentration and EEG concentration are analyzed in relation to facade preferences. The findings are as follows. First, regarding the correlation between facade preferences and visual concentration, highly preferred facades have a lower visual concentration frequency than less preferred facades. Second, an analysis based on a 12×12 lattice division of the facades shows that all facades except F(6), F(7), F(8), and F(10) have high visual concentration on signs; there is no correlation between facade preferences and visual concentration on particular facade elements. Third, an analysis of prefrontal-lobe concentration shows no correlation between preferences and EEG concentration. However, there are large differences in the prefrontal-lobe activity of the 12 subjects depending on the facade. In particular, nine of them (3, 9, 13, 14, 15, 28, 36, 38, 43) show an activated prefrontal lobe for the highly preferred facades F(1), F(2), F(3), and F(4), whereas such activation is not detected for the less preferred facades F(9), F(10), F(11), and F(12).

A Study on Multiplication Expression Method by Visual Model (시각적 모델에 따른 곱셈식 표현 방법에 대한 연구)

  • Kim, Juchang;Lee, Kwangho
    • Education of Primary School Mathematics / v.22 no.1 / pp.65-82 / 2019
  • In this study, students' methods of expressing multiplication according to visual models were analyzed through a paper-and-pencil test and an eye-tracking test. In the paper-and-pencil test, students tended to express the group model as (number of individual pieces in a group) × (number of groups) and the array model as (column) × (row), but for the array model a substantial proportion of students instead answered with (row) × (column). From these results, we derived an appropriate model presentation method for multiplication instruction and a multiplication expression method for each visual model.