• Title/Summary/Keyword: Gaze Recognition

An Implementation of Gaze Recognition System Based on SVM (SVM 기반의 시선 인식 시스템의 구현)

  • Lee, Kue-Bum; Kim, Dong-Ju; Hong, Kwang-Seok
    • The KIPS Transactions:PartB, v.17B no.1, pp.1-8, 2010
  • Research on gaze recognition, which identifies the location a user is currently gazing at, has advanced steadily and now has many applications. Most existing gaze recognition studies depend on expensive equipment such as infrared (IR) LEDs, IR cameras, and head-mounted devices. This study proposes and implements an SVM-based gaze recognition system that uses a single PC web camera. The proposed system divides 36 gaze locations into groups of 9 and 4 to recognize the user's gaze in 9 and 4 directions, respectively. It also applies an image filtering method based on difference image entropy to improve recognition performance. To evaluate the proposed system, we compared it against a gaze recognition system using the eye corners and eye centers and a PCA-based gaze recognition system. In the experiments, the proposed SVM-based system achieved recognition rates of 94.42% for 4 directions and 81.33% for 9 directions; with the difference-image-entropy filtering applied, the rates rose to 95.37% and 82.25%. These results show higher performance than existing gaze recognition systems.
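
To make the classification step concrete, here is a minimal sketch of SVM-based gaze direction classification, assuming cropped, grayscaled eye-region images; the raw-pixel features and the stand-in data are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: SVM classification of gaze direction from eye-region pixels.
# The features (raw intensities) and labels are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 fake 32x32 eye crops, each labeled with one of 4 directions.
X = rng.random((200, 32 * 32))        # flattened eye-region pixels
y = rng.integers(0, 4, size=200)      # 0..3 = four gaze directions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("4-direction accuracy:", clf.score(X_test, y_test))
```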

Robot Control Interface Using Gaze Recognition (시선 인식을 이용한 로봇 인터페이스 개발)

  • Park, Se Hyun
    • IEMEK Journal of Embedded Systems and Applications, v.7 no.1, pp.33-39, 2012
  • In this paper, we propose a robot control interface based on gaze recognition that is not restricted by head motion. Most existing gaze recognition methods work well only while the head is fixed, and they also require a calibration step for each user. The interface presented here uses a camera with a built-in infrared filter and two LED light sources to determine which direction the pupils are turned and sends command codes to control the system, so no per-user calibration is needed. The experimental results show that the proposed interface can control the system accurately by recognizing the user's gaze direction.
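
As a rough illustration of the command-code idea, the sketch below quantizes an estimated pupil offset into one of a few gaze directions and emits a robot command; the thresholds and command names are assumptions, not the paper's actual codes.

```python
# Quantize a normalized pupil offset (relative to the eye center) into a
# robot command code. Dead zone and command names are illustrative.
def gaze_to_command(dx: float, dy: float, dead_zone: float = 0.2) -> str:
    """Map a normalized pupil offset (dx, dy) in [-1, 1] to a command."""
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "STOP"                    # looking straight ahead
    if abs(dx) >= abs(dy):
        return "TURN_RIGHT" if dx > 0 else "TURN_LEFT"
    return "FORWARD" if dy < 0 else "BACKWARD"

print(gaze_to_command(0.6, 0.1))    # TURN_RIGHT
print(gaze_to_command(0.05, -0.5))  # FORWARD
```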

Gaze Recognition Interface Development for Smart Wheelchair (지능형 휠체어를 위한 시선 인식 인터페이스 개발)

  • Park, S.H.
    • Journal of rehabilitation welfare engineering & assistive technology, v.5 no.1, pp.103-110, 2011
  • In this paper, we propose a gaze recognition interface for a smart wheelchair. The interface recognizes commands through gaze recognition and, while driving, avoids detected obstacles by measuring distances with range sensors. The smart wheelchair consists of a gaze recognition and tracking module, a user interface module, an obstacle detector, a motor control module, and a range sensor module. The interface uses a camera with a built-in infrared filter and two LED light sources to determine which direction the pupils are turned and sends command codes to control the system, so no per-user calibration is needed. The experimental results show that the proposed interface can control the system accurately by recognizing the user's gaze direction.
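
The sketch below illustrates one way the gaze command and the range sensors might interact in the wheelchair's control loop, with obstacle readings overriding the gaze command; the sensor layout, safety threshold, and command names are assumptions.

```python
# One control cycle: the range sensors veto an unsafe gaze command and
# steer around the obstacle. Threshold and sensor names are assumptions.
SAFE_DISTANCE_M = 0.5

def drive_step(gaze_command: str, range_readings_m: dict) -> str:
    """Return the motor command for one control cycle."""
    if gaze_command == "FORWARD" and range_readings_m["front"] < SAFE_DISTANCE_M:
        # Obstacle ahead: steer toward the clearer side.
        if range_readings_m["left"] > range_readings_m["right"]:
            return "TURN_LEFT"
        return "TURN_RIGHT"
    return gaze_command

print(drive_step("FORWARD", {"front": 0.3, "left": 1.2, "right": 0.4}))  # TURN_LEFT
```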

Gaze Recognition System using Random Forests in Vehicular Environment based on Smart-Phone (스마트 폰 기반 차량 환경에서의 랜덤 포레스트를 이용한 시선 인식 시스템)

  • Oh, Byung-Hun; Chung, Kwang-Woo; Hong, Kwang-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.15 no.1, pp.191-197, 2015
  • In this paper, we propose a system that recognizes gaze direction using Random Forests in a smartphone-based vehicular environment. The proposed system consists mainly of face detection using AdaBoost, face component estimation using histograms, and gaze recognition based on Random Forests. We detect the driver in the image captured by a smartphone camera and estimate the driver's facial components. We then extract feature vectors from the estimated facial components and recognize the gaze direction with the Random Forest algorithm. For the experiments, we collected a gaze database covering a variety of gaze directions in real environments. In the results, face detection and gaze recognition achieved average accuracies of 82.02% and 84.77%, respectively.
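
A hedged sketch of this pipeline follows: AdaBoost-style face detection with an OpenCV Haar cascade, then Random Forest classification of gaze direction from features of the face region. The 64-bin intensity histogram is a stand-in for the paper's face-component features.

```python
# Pipeline sketch: Haar-cascade (AdaBoost) face detection + Random Forest
# gaze classification. Histogram features stand in for the paper's features.
import cv2
import numpy as np
from typing import Optional
from sklearn.ensemble import RandomForestClassifier

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_histogram_features(gray_frame: np.ndarray) -> Optional[np.ndarray]:
    """Detect the largest face and return a 64-bin intensity histogram."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    hist = cv2.calcHist([gray_frame[y:y + h, x:x + w]], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()

# Train on stand-in data (real use would extract features from labeled frames).
rng = np.random.default_rng(0)
X, y = rng.random((300, 64)), rng.integers(0, 5, size=300)  # 5 gaze directions
gaze_rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(gaze_rf.predict(X[:3]))
```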

An Implementation of Gaze Direction Recognition System using Difference Image Entropy (차영상 엔트로피를 이용한 시선 인식 시스템의 구현)

  • Lee, Kue-Bum; Chung, Dong-Keun; Hong, Kwang-Seok
    • The KIPS Transactions:PartB, v.16B no.2, pp.93-100, 2009
  • In this paper, we propose a gaze direction recognition system based on difference image entropy. The difference image entropy is computed from the histogram of the difference image between the current image and reference images or average images, with histogram levels ranging from -255 to +255 to prevent loss of information. There are two methods for difference-image-entropy-based gaze recognition: 1) computing the difference image entropy between an input image and the average image of the 45 images at each gaze location, and 2) computing the difference image entropy between an input image and each of the 45 reference images. The reference images are captured for 4 gaze directions, and the average image is created from the 45 images at each gaze location. To evaluate the performance of the proposed system, we conducted comparison experiments with a PCA-based gaze direction system. The recognized directions are left-top, right-top, left-bottom, and right-bottom, and we ran experiments varying whether the 45 reference images or the average image was used for recognition. The experimental results show a recognition rate of 97.00% for difference image entropy versus 95.50% for PCA, so the difference-image-entropy-based system is 1.50 percentage points higher than the PCA-based system.
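
The entropy computation itself can be written compactly; the sketch below builds the 511-level histogram (-255 to +255) of the difference image and computes its Shannon entropy, with a minimum-entropy decision rule added as an assumption.

```python
# Difference image entropy: histogram the pixel differences over the 511
# possible levels, then take the Shannon entropy of the normalized histogram.
import numpy as np

def difference_image_entropy(image: np.ndarray, reference: np.ndarray) -> float:
    diff = image.astype(np.int16) - reference.astype(np.int16)   # -255..+255
    hist, _ = np.histogram(diff, bins=511, range=(-255.5, 255.5))
    p = hist / hist.sum()                        # probability per level
    p = p[p > 0]                                 # drop empty bins
    return float(-(p * np.log2(p)).sum())        # Shannon entropy (bits)

# Assumed decision rule: a close match concentrates the difference histogram
# near zero, so the reference with minimum entropy wins.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
refs = {d: rng.integers(0, 256, (64, 64), dtype=np.uint8)
        for d in ("left-top", "right-top", "left-bottom", "right-bottom")}
print(min(refs, key=lambda d: difference_image_entropy(img, refs[d])))
```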

Human Activity Recognition using Model-based Gaze Direction Estimation (모델 기반의 시선 방향 추정을 이용한 사람 행동 인식)

  • Jung, Do-Joon; Yoon, Jeong-Oh
    • Journal of Korea Society of Industrial Information Systems, v.16 no.4, pp.9-18, 2011
  • In this paper, we propose a method that recognizes human activity using model-based gaze direction estimation in an indoor environment. The method consists of two steps. First, we detect the head region and estimate its gaze direction as prior information for human activity recognition. We use color and shape information to detect the head region, and a Bayesian network model representing the relationship between the head and the face to estimate the gaze direction. Second, we recognize the events and scenarios that describe the human activity. We use changes in the human state for event recognition, and a rule-based method that combines events with constraints for scenario recognition. We define 4 types of scenarios related to gaze direction, and we demonstrate the performance of the gaze direction estimation and the human activity recognition with experimental results.
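
A small sketch of the rule-based second step is given below: events are derived from changes in a tracked person's state, and a scenario fires when a defined sequence of events occurs in order. The states, events, and scenario rule are illustrative assumptions, not the paper's definitions.

```python
# Rule-based event/scenario recognition sketch: events are state changes,
# a scenario is an ordered event pattern. All names here are made up.
states = [  # (timestamp, position zone, gaze direction) per frame
    (0, "door", "front"), (1, "shelf", "shelf"), (2, "shelf", "shelf"),
    (3, "door", "door"),
]

def extract_events(states):
    """Emit an event whenever the zone or the gaze direction changes."""
    events = []
    for (t0, z0, g0), (t1, z1, g1) in zip(states, states[1:]):
        if z1 != z0:
            events.append((t1, f"enter:{z1}"))
        if g1 != g0:
            events.append((t1, f"gaze:{g1}"))
    return events

def matches_scenario(events, pattern):
    """Scenario rule: pattern events must occur in order (subsequence)."""
    it = iter(e for _, e in events)
    return all(p in it for p in pattern)

events = extract_events(states)
print(events)
print(matches_scenario(events, ["enter:shelf", "gaze:shelf", "enter:door"]))
```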

A Study on Fashion Design Cognition Using Eye Tracking (시선 추적을 활용한 패션 디자인 인지에 관한 연구)

  • Lee, Shin-Young
    • Fashion & Textile Research Journal, v.23 no.3, pp.323-336, 2021
  • This study investigated the cognitive processing of fashion design images by tracking eye activity, confirming differences in the cognitive process and gaze activity according to image elements. The results are as follows. First, the groups differed in gaze time per section depending on the model and design. Although model diversity is an important factor in attracting observers' interest, a simpler model proved more effective for observing the design itself. Second, examining the gaze weight of each image area by segment revealed differences between groups. When a similar type of model is repeated, the proportion of face recognition decreases and the proportion of time spent recognizing the design increases; conversely, when model diversity is high, the same amount of time is devoted to recognizing the model's face throughout. Gaze activity also differed when recognizing the same design depending on the type of model. These results confirm the importance of the model as an image recognition factor in fashion design. In the fashion industry, it is important to identify the cognitive factors that attract and hold consumers' attention; if design recognition can be maximized by finding such service points, a brand's sustainability can be enhanced even in the rapidly changing fashion industry.
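
For readers unfamiliar with this kind of analysis, the sketch below computes each area of interest's share of total gaze time from fixation records; the AOI names and durations are made-up example data, not the study's measurements.

```python
# Gaze-weight analysis sketch: sum fixation durations per area of interest
# (AOI) and report each AOI's share of total gaze time. Data is fabricated.
from collections import defaultdict

fixations = [  # (AOI, fixation duration in ms)
    ("face", 420), ("design", 910), ("design", 640), ("face", 200),
    ("background", 130),
]

totals = defaultdict(int)
for aoi, dur in fixations:
    totals[aoi] += dur
grand = sum(totals.values())
for aoi, dur in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{aoi:10s} {dur:5d} ms  {100 * dur / grand:5.1f}%")
```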

Development of a Non-contact Input System Based on User's Gaze-Tracking and Analysis of Input Factors

  • Jiyoung LIM; Seonjae LEE; Junbeom KIM; Yunseo KIM; Hae-Duck Joshua JEONG
    • Korean Journal of Artificial Intelligence, v.11 no.1, pp.9-15, 2023
  • As mobile devices such as smartphones, tablets, and kiosks become increasingly prevalent, there is growing interest in developing alternative input systems to complement traditional tools such as keyboards and mice. Many people use their own bodies as a pointer to enter simple information on a mobile device. However, body-based methods have limitations: psychological factors make contact-based input unreliable, especially during a pandemic, and they are vulnerable to shoulder-surfing attacks. To overcome these limitations, we propose a simple information input system that uses gaze-tracking technology to input passwords and control web surfing with non-contact gaze alone. The proposed system is designed to register an input when the user stares at a specific location on the screen in real time, using intelligent gaze-tracking technology. We analyze the relationship between the gaze input box, gaze time, and average input time, and report experimental results on how varying the size of the gaze input box and the required gaze time affects reaching 100% input accuracy. We demonstrate that the system mitigates the challenges of contact-based input methods and provides a non-contact alternative that is both secure and convenient.
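
A minimal sketch of the dwell-based input mechanism follows: an input box registers a press once the gaze point stays inside it for the required gaze time. The box geometry and the 1.0 s dwell threshold are assumptions for illustration.

```python
# Dwell-based gaze input sketch: a button fires when the gaze stays inside
# its rectangle for dwell_s seconds. Geometry and threshold are assumed.
import time

class GazeButton:
    def __init__(self, x, y, w, h, dwell_s=1.0):
        self.rect, self.dwell_s, self._enter_t = (x, y, w, h), dwell_s, None

    def update(self, gx: float, gy: float, now: float) -> bool:
        """Feed one gaze sample; return True once the dwell completes."""
        x, y, w, h = self.rect
        if x <= gx <= x + w and y <= gy <= y + h:
            if self._enter_t is None:
                self._enter_t = now              # gaze entered the box
            return now - self._enter_t >= self.dwell_s
        self._enter_t = None                     # gaze left: reset the timer
        return False

btn = GazeButton(100, 100, 80, 80)
t0 = time.monotonic()
print(btn.update(120, 130, t0))        # False: dwell just started
print(btn.update(125, 140, t0 + 1.2))  # True: stayed 1.2 s inside the box
```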

EOG-based User-independent Gaze Recognition using Wavelet Coefficients and Dynamic Positional Warping (웨이블릿 계수와 Dynamic Positional Warping을 통한 EOG기반의 사용자 독립적 시선인식)

  • Chang, Won-Du; Im, Chang-Hwan
    • Journal of Korea Multimedia Society, v.21 no.9, pp.1119-1130, 2018
  • Writing letters or patterns in a virtual space by moving one's gaze is called "eye writing," a promising tool for various human-computer interface applications. This paper investigates the use of conventional eye-writing recognition algorithms for user-independent recognition of eye-written characters. Two algorithms are presented to build the user-independent system: eye-written region extraction using wavelet coefficients, and template generation. With dynamic positional warping, the proposed system achieved an F1 score of 79.61% for 12 eye-written patterns, indicating the feasibility of user-independent eye writing.
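
Dynamic positional warping is a variant of dynamic time warping (DTW) adapted to gaze traces; as a hedged stand-in, the sketch below matches an eye-written 2-D trace against templates using plain DTW. The toy template shapes are assumptions.

```python
# Template matching of eye-written traces with plain DTW on (x, y) samples,
# standing in for the paper's dynamic positional warping. Templates are toys.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW between an (n, 2) and an (m, 2) point sequence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

templates = {
    "L": np.array([[0, 1], [0, 0], [1, 0]], dtype=float),
    "7": np.array([[0, 1], [1, 1], [0, 0]], dtype=float),
}
trace = np.array([[0, 1.1], [0, 0.5], [0.05, 0], [1, 0.1]])  # a noisy "L"
print(min(templates, key=lambda k: dtw_distance(trace, templates[k])))
```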

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam; Ko, Jong-Gook; SeungHo Choi; Kim, Jin-Young; Kim, Ki-Jung
    • Proceedings of the IEEK Conference, 2000.07a, pp.249-252, 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration for tackling the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel, with face tracking as a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results of multimodal integration and user testing on the prototype system are shown. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term goal is to build a user interface embedded in a commercial car navigation system (CNS).
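
As a rough illustration of parallel multimodal selection, the sketch below has each modality score the menu items independently and fuses them with a weighted sum; the weights and confidences are assumed, and the paper's actual integration scheme may differ.

```python
# Weighted late fusion of speech, lip reading, and gaze for menu selection.
# Menu items, weights, and per-modality confidences are illustrative.
MENU = ["zoom in", "zoom out", "route home"]

def fuse(speech, lips, gaze, weights=(0.5, 0.2, 0.3)):
    """Each argument: list of per-item confidences, one per MENU entry."""
    scores = [sum(w * s[i] for w, s in zip(weights, (speech, lips, gaze)))
              for i in range(len(MENU))]
    return MENU[max(range(len(MENU)), key=scores.__getitem__)]

# In a noisy car, speech confidence drops but lips + gaze still carry the choice.
print(fuse(speech=[0.3, 0.3, 0.4],
           lips=[0.1, 0.2, 0.7],
           gaze=[0.0, 0.1, 0.9]))   # route home
```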
