• Title/Summary/Keyword: Eye-gaze Interface


A Study on Controlling IPTV Interface Based on Tracking of Face and Eye Positions (얼굴 및 눈 위치 추적을 통한 IPTV 화면 인터페이스 제어에 관한 연구)

  • Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung;Lee, Hee-Kyung;Park, Min-Sik;Lee, Han-Kyu;Hong, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.6B / pp.930-939 / 2010
  • Recently, many studies on more convenient input devices based on gaze detection have been actively conducted in human-computer interaction. However, these previous methods are difficult to use in an IPTV environment because they require additional wearable devices or do not work at a distance. To overcome these problems, we propose a new way of controlling an IPTV interface by using face and eye positions detected with a single static camera. Even when the face or eyes are not detected successfully by the Adaboost algorithm, we can still control the IPTV interface by using motion vectors calculated with the pyramidal KLT (Kanade-Lucas-Tomasi) feature tracker. These are the two novelties of our research compared to previous works. This research has the following advantages. Unlike previous research, the proposed method can be used at a distance of about 2 m. Since the proposed method does not require the user to wear additional equipment, there is no limitation on face movement and it offers high convenience. Experimental results showed that the proposed method could operate at a real-time speed of 15 frames per second. We confirmed that the previous input device could be sufficiently replaced by the proposed method.
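The detect-then-track behavior described above can be pictured with a short sketch: an AdaBoost-based Haar cascade detects the face, and when detection fails, the last known facial feature points are followed with the pyramidal KLT optical-flow tracker. This is a minimal OpenCV sketch under assumed parameters, not the authors' implementation; the function name and thresholds are illustrative.

```python
# Hypothetical sketch of the detect-then-track fallback described in the abstract,
# using OpenCV's Haar cascade (an AdaBoost-based detector) and pyramidal
# Lucas-Kanade optical flow. Names and parameter values are illustrative.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

prev_gray = None
prev_points = None

def locate_face(frame):
    """Return a face center: detection when possible, KLT tracking otherwise."""
    global prev_gray, prev_points
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Re-seed KLT features inside the detected face region.
        mask = np.zeros_like(gray)
        mask[y:y + h, x:x + w] = 255
        prev_points = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                              qualityLevel=0.01, minDistance=5,
                                              mask=mask)
        prev_gray = gray
        return (x + w // 2, y + h // 2)

    # Detection failed: fall back to pyramidal KLT tracking of the last features.
    if prev_gray is not None and prev_points is not None:
        next_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                          prev_points, None)
        good = next_points[status.flatten() == 1]
        if len(good) > 0:
            prev_gray, prev_points = gray, good.reshape(-1, 1, 2)
            cx, cy = good.mean(axis=0).ravel()
            return (int(cx), int(cy))
    return None
```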

ROI Image Compression Method Using Eye Tracker for a Soldier (병사의 시선감지를 이용한 ROI 영상압축 방법)

  • Chang, HyeMin;Baek, JooHyun;Yang, DongWon;Choi, JoonSung
    • Journal of the Korea Institute of Military Science and Technology / v.23 no.3 / pp.257-266 / 2020
  • Sharing tactical information such as video, images, and text messages among soldiers is very important for situational awareness. In the wireless environment of the battlefield, the available bandwidth varies dynamically and is insufficient to transmit high-quality images, so it is necessary to minimize distortion in areas of interest such as targets. A natural operating method is also required for soldiers, considering the difficulty of handling devices while moving. In this paper, we propose a natural ROI (region of interest) setting and image compression method for effective image sharing among soldiers. We verify the proposed method through the design and implementation of a prototype system for eye-gaze detection and ROI-based image compression.
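As a rough illustration of gaze-driven ROI compression, the sketch below encodes the region around the reported gaze point at high JPEG quality and the rest of the frame at low quality, then recombines them. It is a minimal sketch, not the paper's codec; the function name, ROI size, and quality settings are assumptions.

```python
# A minimal sketch of gaze-driven ROI compression (not the paper's exact method):
# the region around the gaze point is JPEG-encoded at high quality, the rest of
# the frame at low quality, and the receiver pastes the ROI patch back in.
import cv2

def compress_with_roi(frame, gaze_xy, roi_size=200,
                      roi_quality=90, bg_quality=20):
    h, w = frame.shape[:2]
    gx, gy = gaze_xy  # integer pixel coordinates of the gaze point
    x0, y0 = max(0, gx - roi_size // 2), max(0, gy - roi_size // 2)
    x1, y1 = min(w, x0 + roi_size), min(h, y0 + roi_size)

    # Encode the whole frame at low quality and decode it as the background.
    _, bg_bytes = cv2.imencode(".jpg", frame,
                               [cv2.IMWRITE_JPEG_QUALITY, bg_quality])
    background = cv2.imdecode(bg_bytes, cv2.IMREAD_COLOR)

    # Encode only the gaze region at high quality.
    _, roi_bytes = cv2.imencode(".jpg", frame[y0:y1, x0:x1],
                                [cv2.IMWRITE_JPEG_QUALITY, roi_quality])

    # The transmitted payload is the low-quality frame plus the small ROI patch.
    background[y0:y1, x0:x1] = cv2.imdecode(roi_bytes, cv2.IMREAD_COLOR)
    return bg_bytes, roi_bytes, (x0, y0, x1, y1), background
```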

Efficient Way to Input Text through an Eye-gaze Method (시선입력 인터페이스 시스템의 효율적 문자입력 방법)

  • Kwon, O-Jae
    • Archives of design research / v.20 no.3 s.71 / pp.289-298 / 2007
  • The EGI system is a new communication method in the limelight for helping disabled users input and handle information on a computer more easily. However, because the EGI system generates "JEM" (jittery eye movements), it actually places heavy psychological and physiological stress on the user when inputting or perceiving target information on a machine. This study illustrates how to resolve the JEM issue and suggests a method that is easy and simple for anyone to control. A demo tool was built and tested to identify and verify the causes of JEM. The evaluation, which combined a psychological assessment with a physiological brain-wave test, showed that text input with the snap-up technique is less stressful than input without it. Whether their disability is congenital or acquired, it was found that disabled users can gain opportunities for smoother communication, and that a more efficient system can be developed for better communication.
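The "snap-up" idea reported above can be illustrated with a small sketch that snaps a jittery raw gaze point to the nearest on-screen key when it falls within a tolerance radius. The key layout and radius below are illustrative assumptions, not the study's design.

```python
# A hedged sketch of "snap-up"-style stabilization for jittery gaze input:
# the raw gaze point is snapped to the nearest key center when it falls within
# a tolerance radius, so small involuntary eye movements do not move the cursor.
import math

KEYS = {  # key label -> (center_x, center_y) on screen, in pixels (illustrative)
    "A": (100, 500), "B": (200, 500), "C": (300, 500),
}
SNAP_RADIUS = 60  # snap only when the gaze is reasonably close to a key

def snap_gaze(gaze_x, gaze_y):
    """Return the key the gaze snaps to, or None if nothing is close enough."""
    best_key, best_dist = None, SNAP_RADIUS
    for key, (cx, cy) in KEYS.items():
        dist = math.hypot(gaze_x - cx, gaze_y - cy)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key
```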


A Study on eye-tracking software design and development for e-sports viewing on the web (e 스포츠 웹 시청 연구를 위한 시선 분석도구 설계 및 개발)

  • Ko, Eunji;Choi, SunYoung
    • Journal of Korea Game Society / v.15 no.4 / pp.121-132 / 2015
  • This study suggests a design and method for an analytical software program for multitasking e-sports viewing on the web using an eye-tracking device. To fulfill this task, we designed a Window of Interest (WOI) to measure and record gaze on the screen regions where numerous multitasking activities occur. In addition, we developed an OBS (Open Broadcaster Software) plug-in that records and streams participants' viewing behavior patterns in real time. The purposes of this study are as follows. First, unlike existing tools that limit web interface recording to still images, the developed tool can record dynamic media such as video. Second, when several windows are displayed on a screen, the tool can accurately record the gaze positions of the participants. Lastly, the tool can enhance the objective validity of the data because it can be used in natural situations. Therefore, this study can trace natural viewing patterns and behavior, as we do not create artificial experimental environments and stimuli.
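A Window-of-Interest style mapping can be sketched as follows: each gaze sample is attributed to the window under it and logged with window-relative coordinates. The data structures and field names are assumptions for illustration, not the developed tool's API.

```python
# Illustrative sketch of the Window-of-Interest (WOI) idea: given the screen
# coordinates of each open window and a gaze sample, log which window the
# participant was looking at and the gaze position relative to that window.
import time
from dataclasses import dataclass

@dataclass
class Window:
    name: str
    x: int
    y: int
    width: int
    height: int

def log_gaze_sample(windows, gaze_x, gaze_y, log):
    """Append (timestamp, window name, window-relative gaze) to the log."""
    # Later windows in the list are assumed to be stacked on top.
    for win in reversed(windows):
        if (win.x <= gaze_x < win.x + win.width and
                win.y <= gaze_y < win.y + win.height):
            log.append((time.time(), win.name, gaze_x - win.x, gaze_y - win.y))
            return
    log.append((time.time(), None, gaze_x, gaze_y))  # gaze outside all windows
```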

A Study on Visibility Evaluation for Cabin Type Combine (캐빈형 콤바인의 시계성 평가에 관한 연구)

  • Choi, C.H.;Kim, J.D.;Kim, T.H.;Mun, J.H.;Kim, Y.J.
    • Journal of Biosystems Engineering / v.34 no.2 / pp.120-126 / 2009
  • The purpose of this study was to develop a visibility evaluation system for a cabin-type combine. The human field of view was classified into five levels (perceptive, effective, stable gaze, induced, and auxiliary) depending on the rotation of the head and eyes. The divider, reaper lever, gearshift, dashboard, and conveying part were considered the major viewpoints of the combine. The visibility of the combine was evaluated quantitatively using these viewpoints and the field-of-view levels. The visibility evaluation system consisted of a laser pointer, stepping motors to control the direction of view, gyro sensors to measure the horizontal and vertical angles, and an I/O interface to acquire the signals. Tests were conducted with different postures ('sitting straight', 'sitting with a 15° tilt', 'standing straight', and 'standing with a 15° tilt'). LSD (least significant difference) multiple comparison tests showed that the visibilities of the viewpoints differed significantly as the operator's posture changed. The results showed that standing with a 15° tilt provided the best visibility for operators. The divider of the combine was invisible in many postures because it was blocked by the cabin frame. The reaper lever showed good visibility in the sitting or standing postures with a 15° tilt. The gearshift, dashboard, and conveying part had reasonable visibility in the sitting posture with a 15° tilt. However, most viewpoints of the combine were outside the stable gaze field-of-view level. Modifications of the combine design will be required to enhance visibility during harvesting operations for farmers' safety and convenience.
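One way to picture the quantitative evaluation is a simple check of whether a viewpoint's measured angles fall inside a given field-of-view level, for example the stable gaze field; the angle limits in this sketch are placeholders, not the study's thresholds.

```python
# A schematic check of whether a combine viewpoint lies inside the stable gaze
# field of view, given its horizontal and vertical angles from the operator's
# line of sight. The angle limits below are illustrative placeholders.
def in_stable_gaze_field(h_angle_deg, v_angle_deg,
                         h_limit_deg=30.0, v_limit_deg=25.0):
    """Return True when the viewpoint falls within the stable gaze field."""
    return abs(h_angle_deg) <= h_limit_deg and abs(v_angle_deg) <= v_limit_deg

# Example: a viewpoint measured at 12 degrees horizontally but 35 degrees
# downward falls outside this placeholder stable gaze field.
print(in_stable_gaze_field(12.0, -35.0))  # False
```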

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea / v.25 no.2 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces are the technologies that constitute multimodal interaction processes, which occur consciously or unconsciously while a human communicates with a computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
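The granularity-based input synchronization mentioned above can be sketched as grouping time-ordered events from parallel modalities whenever they fall within one synchronization window. The event format and the 0.5 s granularity below are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch of fusing parallel modality inputs by time window: events
# from different channels (e.g. gaze and speech) that arrive within the same
# synchronization granularity are grouped into one multimodal command.
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str     # e.g. "gaze", "speech", "gesture"
    payload: str      # recognized content of the event
    timestamp: float  # seconds

def fuse_events(events, granularity=0.5):
    """Group time-ordered events whose timestamps fall within one granularity window."""
    events = sorted(events, key=lambda e: e.timestamp)
    groups, current = [], []
    for event in events:
        if current and event.timestamp - current[0].timestamp > granularity:
            groups.append(current)
            current = []
        current.append(event)
    if current:
        groups.append(current)
    return groups

# Example: a gaze fixation and a spoken command arriving 0.2 s apart fuse into
# one group, i.e. "select the object the user is looking at".
fused = fuse_events([
    ModalityEvent("gaze", "icon_3", 10.00),
    ModalityEvent("speech", "open", 10.20),
])
```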