• Title/Summary/Keyword: Head-Tracker

Real Time Gaze Discrimination for Human Computer Interaction (휴먼 컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Park Ho sik;Bae Cheol soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.125-132 / 2005
  • This paper describes a computer vision system based on active IR illumination for a real-time gaze discrimination system. Unlike most existing gaze discrimination techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze discrimination system can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified. This leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve a gaze-contingent interactive graphic display.
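To make the GRNN mapping above concrete: a generalized regression neural network is essentially Nadaraya-Watson kernel regression, so the pupil-parameters-to-screen-coordinates mapping can be sketched in a few lines of numpy. The feature set, bandwidth and toy data below are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def grnn_predict(X_train, Y_train, x_query, sigma=0.5):
        """GRNN prediction = Nadaraya-Watson kernel regression: map a pupil
        parameter vector to screen coordinates as an RBF-weighted average of
        the calibration targets.  sigma is the GRNN spread parameter."""
        d2 = np.sum((X_train - x_query) ** 2, axis=1)   # squared distances to stored patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # RBF kernel weights
        return (w[:, None] * Y_train).sum(axis=0) / (w.sum() + 1e-12)

    # Toy calibration set: 100 samples of 6 pupil/head parameters (illustrative),
    # each paired with a normalized screen coordinate.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))
    Y = rng.uniform(0.0, 1.0, size=(100, 2))
    print(grnn_predict(X, Y, X[0]))   # reproduces roughly Y[0]

In this formulation, head-pose measurements can simply be appended to the pupil parameter vector, which is one way a non-analytical mapping of this kind can account for head movement.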

Real Time Gaze Discrimination for Computer Interface (컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Hwang, Suen-Ki;Kim, Moon-Hwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.1 / pp.38-46 / 2010
  • This paper describes a computer vision system based on active IR illumination for a real-time gaze discrimination system. Unlike most existing gaze discrimination techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze discrimination system can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified. This leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve a gaze-contingent interactive graphic display.

A Study on Real Time Gaze Discrimination System using GRNN (GRNN을 이용한 실시간 시선 식별 시스템에 관한 연구)

  • Lee Young-Sik;Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.2 / pp.322-329 / 2005
  • This paper describes a computer vision system based on active IR illumination for a real-time gaze discrimination system. Unlike most existing gaze discrimination techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze discrimination system can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified. This leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve a gaze-contingent interactive graphic display.

Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.2C / pp.141-149 / 2007
  • Associating the shoulder line with the head location of the human body is useful in verifying, localizing and tracking persons in an image. Since the head line and the shoulder line, which together we call the Ω-shape, move in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to Ω-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the Ω-shape because of the significant differences among people's skin, hair and clothes, and because appearance does not remain the same throughout the entire video. Therefore, instead of learning appearance or updating appearance as it changes, we model the discriminative appearance, where each pixel is classified into head, torso and background classes, and update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we make use of two features in fitting the Ω-shape: the edge gradient, which is used for localization, and the discriminative appearance, which contributes to the stability of the tracker. The simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
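A minimal sketch of the discriminative-appearance idea described above: label each pixel as head, torso or background by comparing per-class color histograms, producing the class map that the tracker combines with the edge gradient. The histogram-based classifier and bin counts are assumptions made for illustration; the abstract does not specify the classifier.

    import numpy as np

    def fit_class_histograms(pixels_by_class, bins=8):
        """One normalized RGB histogram per class ('head', 'torso', 'background').
        pixels_by_class: dict mapping class name -> (N, 3) array of labelled pixels."""
        hists = {}
        for name, px in pixels_by_class.items():
            h, _ = np.histogramdd(px, bins=(bins,) * 3, range=((0, 256),) * 3)
            hists[name] = h / max(h.sum(), 1)
        return hists

    def classify_pixels(image, hists, bins=8):
        """Label every pixel with the class whose histogram assigns it the highest
        likelihood -- a simple 'discriminative appearance' map for the current frame."""
        idx = (image // (256 // bins)).astype(int)                   # (H, W, 3) bin indices
        scores = np.stack([h[idx[..., 0], idx[..., 1], idx[..., 2]]
                           for h in hists.values()], axis=-1)        # (H, W, n_classes)
        labels = np.asarray(list(hists.keys()))
        return labels[scores.argmax(axis=-1)]                        # (H, W) class labels

Per the abstract, such a per-pixel classifier would be updated each frame so the discriminative appearance stays valid as clothing, lighting and background change, while the ASM's edge term handles precise localization.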

Systemic Development of Tele-Robotic Interface for the Hot-Line Maintenance (활선 작업을 위한 원격 조종 인터페이스 개발)

  • Kim Min-Soeng;Lee Ju-Jang;Kim Chang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.10 no.12 / pp.1217-1222 / 2004
  • This paper describes the development of a tele-robotic interface for the hot-line maintenance robot system. One of the main issues in designing a human-robot interface for the hot-line maintenance robot system is planning the control procedure for each part of the robotic system. Another issue is that the actual number of degrees of freedom (DOF) in the hot-line maintenance robot system is much greater than that of the available control devices, such as joysticks and gloves, in the remote cabin. For this purpose, a virtual simulator, which includes the virtual hot-line maintenance robot system and its environment, was developed in a 3D environment using CAD data. It is assumed that the control operation is done in the remote cabin and the overall work process is observed using the main camera with 2 DOFs. For the input devices, two joysticks, one pedal, two data gloves, and a Head Mounted Display (HMD) with a tracker sensor were used. The interface was developed for each control mode. The designed human-interface system is operated using high-level control commands which are intuitive and easy to understand without any special training.
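One common way to bridge the gap between a low-DOF input device and a high-DOF robot, as the abstract describes, is to let the operator switch control modes so the same joystick axes drive different joint subsets. The sketch below is a generic illustration under that assumption; the mode and joint names are made up and are not the authors' interface.

    from dataclasses import dataclass, field

    # Illustrative control modes: the same two joystick axes drive a different
    # subset of joints depending on the selected mode (all names are made up).
    MODES = {
        "camera":    ("cam_pan", "cam_tilt"),
        "left_arm":  ("l_shoulder", "l_elbow"),
        "right_arm": ("r_shoulder", "r_elbow"),
    }

    @dataclass
    class TeleopState:
        mode: str = "camera"
        joints: dict = field(default_factory=lambda: {j: 0.0 for pair in MODES.values() for j in pair})

        def apply_joystick(self, x, y, gain=0.05):
            """Map a 2-DOF joystick deflection onto the joints of the active mode."""
            jx, jy = MODES[self.mode]
            self.joints[jx] += gain * x
            self.joints[jy] += gain * y

    state = TeleopState()
    state.mode = "left_arm"
    state.apply_joystick(1.0, -0.5)
    print(state.joints["l_shoulder"], state.joints["l_elbow"])   # 0.05 -0.025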

Autostereoscopic 3D display system with moving parallax barrier and eye-tracking (이동형 패럴랙스배리어와 시점 추적을 이용한 3D 디스플레이 시스템)

  • Chae, Ho-Byung;Ryu, Young-Roc;Lee, Gang-Sung;Lee, Seung-Hyun
    • Journal of Broadcast Engineering / v.14 no.4 / pp.419-427 / 2009
  • We present a novel head tracking system for stereoscopic displays that allows the viewer a high degree of movement. The tracker is capable of segmenting the viewer from background objects using their relative distance. A depth camera using TOF (Time-Of-Flight) sensing is used to generate a key signal for the eye tracking application. A moving parallax barrier method is also introduced to overcome the disadvantage of a fixed parallax barrier, which provides observation only at specific locations.
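The coupling between the tracked eye position and the moving parallax barrier can be illustrated with simple similar-triangle geometry: when the viewer moves laterally, the barrier must shift by a fraction of that displacement set by the barrier gap and viewing distance. The formula and parameter values below are a generic sketch, not the paper's calibration.

    def barrier_shift(eye_x_mm, gap_mm=1.0, view_dist_mm=600.0):
        """Lateral shift of the parallax barrier needed to keep the view columns
        aligned with a viewer whose eye midpoint is eye_x_mm off the display axis.
        Similar triangles between the pixel plane, the barrier plane (gap_mm in
        front of it) and the eye give: shift = eye_x * gap / (view_dist + gap)."""
        return eye_x_mm * gap_mm / (view_dist_mm + gap_mm)

    # Viewer moves 50 mm to the right: the barrier follows by roughly 0.083 mm.
    print(barrier_shift(50.0))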

The Effect of the Laptop Computer Stand to Maintain the Good Posture of Neck (랩톱 컴퓨터 스탠드의 목 자세 개선효과 분석)

  • Oh, Imsuk;Lee, Jaehyun;Chee, Youngjoon
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.291-294 / 2017
  • It is known that a laptop computer stand helps maintain good posture while using a laptop computer on a desk, but a quantitative validation of its effect has not been reported. Using a wearable neck posture tracker, the forward flexion angle of the neck can be measured in daily life. In this study, the forward flexion angles of the neck while using a laptop computer with and without a laptop computer stand were compared. From the posture data of 10 subjects over 6 hours, the average forward flexion angle was 0.9 degrees with the laptop computer stand and 16.3 degrees without it. In conclusion, a laptop computer stand can decrease the forward flexion angle, known as forward head posture, while using a laptop computer on a desk.
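A minimal sketch of how a wearable posture tracker could yield the forward flexion angle compared in the study, assuming an accelerometer-based device and a particular mounting orientation (both assumptions; the abstract does not give the tracker type or axis convention). The sample values are synthetic, chosen only to mimic the reported contrast between the two conditions.

    import numpy as np

    def forward_flexion_deg(acc):
        """Forward flexion (pitch) from one 3-axis accelerometer sample (ax, ay, az),
        assuming the tracker is worn with +y up and +z forward in neutral posture."""
        ax, ay, az = acc
        return np.degrees(np.arctan2(az, np.sqrt(ax ** 2 + ay ** 2)))

    def session_average_deg(samples):
        """Average flexion angle over a recording session (the quantity compared
        between the with-stand and without-stand conditions)."""
        return float(np.mean([forward_flexion_deg(s) for s in samples]))

    # Synthetic samples: near-neutral posture vs. a noticeable forward tilt.
    with_stand    = [(0.0, 1.00, 0.02), (0.0, 1.00, 0.01)]
    without_stand = [(0.0, 0.95, 0.30), (0.0, 0.94, 0.28)]
    print(session_average_deg(with_stand), session_average_deg(without_stand))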

ROI Image Compression Method Using Eye Tracker for a Soldier (병사의 시선감지를 이용한 ROI 영상압축 방법)

  • Chang, HyeMin;Baek, JooHyun;Yang, DongWon;Choi, JoonSung
    • Journal of the Korea Institute of Military Science and Technology / v.23 no.3 / pp.257-266 / 2020
  • It is very important to share tactical information such as video, images, and text messages among soldiers for situational awareness. In the wireless environment of the battlefield, the available bandwidth varies dynamically and is insufficient to transmit high-quality images, so it is necessary to minimize the distortion of areas of interest such as targets. A natural operating method is also required for soldiers, considering the difficulty of handling devices while moving. In this paper, we propose a natural ROI (region of interest) setting and image compression method for effective image sharing among soldiers. We verify the proposed method through the design and implementation of a prototype system for eye gaze detection and ROI-based image compression.
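A sketch of the ROI step: once the gaze point is detected, a per-pixel quality map can be built that assigns a high quality level inside a circle around the gaze point and a low level elsewhere, and this map then drives region-dependent quantization in the encoder. The radius, quality levels and circular ROI shape are illustrative assumptions; the abstract does not specify them.

    import numpy as np

    def roi_quality_map(shape, gaze_xy, roi_radius=96, q_roi=90, q_bg=30):
        """Per-pixel quality map for ROI-based compression: pixels within
        roi_radius of the gaze point get the high quality level, everything
        else the low background level (radius and levels are illustrative)."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        gx, gy = gaze_xy
        inside = (xx - gx) ** 2 + (yy - gy) ** 2 <= roi_radius ** 2
        return np.where(inside, q_roi, q_bg).astype(np.uint8)

    qmap = roi_quality_map((480, 640), gaze_xy=(320, 200))
    print(qmap[200, 320], qmap[0, 0])   # 90 inside the ROI, 30 in the background

The abstract does not name the codec, so how the quality map is consumed (for example, per-tile quantization) is left open here.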

A Study on Dynamic Characteristic Analysis for the Industrial Monorail Vehicle (산업용 단선 궤도 차량의 주행 동특성에 관한 연구)

  • Lee Soo-Ho;Jung Il-Ho;Lee Hyung;Park Joong-Kyung;Park Tae-Won
    • Transactions of the Korean Society of Mechanical Engineers A / v.29 no.7 s.238 / pp.1005-1012 / 2005
  • An OHT (Over Head Transportation) vehicle is an example of an industrial monorail vehicle, used in the automobile, semiconductor, and LCD manufacturing industries. The OHT vehicle is moved by main wheels and guide rollers. The major function of the main wheels is to support and drive the OHT vehicle, while the guide rollers inhibit derailment and steer the vehicle. Since the required vehicle velocity is becoming faster and the required load capacity is increasing, the durability of the wheel and roller, which are made of urethane, needs to be improved, so it is necessary to estimate the fatigue life cycle of the wheel and roller. In this study, an OHT dynamic model was developed using the multibody dynamics analysis program ADAMS. The wheel and roller are modeled with a 3-D surface contact module; in particular, motorcycle tire mechanics is used in the wheel contact model. The OHT dynamic model can analyze the dynamic characteristics of the OHT vehicle under various driving conditions, and the result was verified by a vehicle traveling test. As a result of this study, the developed model is expected to predict the wheel dynamic load time history and to contribute to the design of new monorail vehicles.
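The abstract's end goal, estimating fatigue life from the predicted wheel load time history, is commonly approached by cycle counting plus linear damage accumulation (Miner's rule). The sketch below uses a crude turning-point range count and placeholder material constants purely for illustration; it is not the method or data of the paper.

    import numpy as np

    def miner_damage(load, C=1e12, m=3.0):
        """Crude fatigue-damage estimate from a load time history: take the ranges
        between successive turning points and accumulate damage with Miner's rule,
        D = sum(1 / N_i) with N_i = C / range_i**m (C, m are placeholder constants)."""
        d = np.diff(load)
        turning = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1   # local extrema
        peaks = load[np.r_[0, turning, len(load) - 1]]
        ranges = np.abs(np.diff(peaks))
        n_to_failure = C / np.maximum(ranges, 1e-9) ** m
        return float(np.sum(1.0 / n_to_failure))

    # Synthetic load history standing in for the predicted wheel load (in N).
    t = np.linspace(0.0, 10.0, 2000)
    load = 1000.0 + 200.0 * np.sin(2.0 * np.pi * t)
    print(miner_damage(load))   # accumulated damage; estimated life ~ 1 / damage per pass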

Face Tracking Combining Active Contour Model and Color-Based Particle Filter (능동적 윤곽 모델과 색상 기반 파티클 필터를 결합한 얼굴 추적)

  • Kim, Jin-Yul;Jeong, Jae-Ki
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.10 / pp.2090-2101 / 2015
  • We propose a robust tracking method that effectively combines the merits of the ACM (active contour model) and the color-based PF (particle filter). In the proposed method, the PF and the ACM track the color distribution and the contour of the target, respectively, and a decision part merges the estimates from the two trackers to determine the position and scale of the target and to update the target model. By controlling the internal energy of the ACM based on the position and scale estimates from the PF tracker, we can prevent the snake points from falsely converging to background clutter. We applied the proposed method to track the head of a person in video and conducted computer experiments to analyze the errors of the estimated position and scale.
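A minimal sketch of the color-based particle-filter half of the method: particles are perturbed by a random-walk motion model, weighted by the Bhattacharyya similarity between the target histogram and the histogram at each particle, and the weighted mean gives the position estimate that, per the abstract, is used to regulate the ACM's internal energy. Grayscale histograms, the window size and the noise level are simplifying assumptions; the ACM half is omitted.

    import numpy as np

    def color_hist(patch, bins=16):
        """Normalized intensity histogram of a patch (grayscale for brevity;
        the paper's tracker uses a color model)."""
        h, _ = np.histogram(patch, bins=bins, range=(0, 256))
        return h / max(h.sum(), 1)

    def bhattacharyya(p, q):
        return float(np.sum(np.sqrt(p * q)))

    def particle_filter_step(frame, particles, target_hist, noise=5.0, win=20):
        """One predict / weight / resample step of a color-based particle filter.
        particles: (N, 2) array of (y, x) positions; returns the weighted-mean
        position estimate and the resampled particle set."""
        rng = np.random.default_rng()
        particles = particles + rng.normal(0.0, noise, particles.shape)   # random-walk motion
        H, W = frame.shape
        particles = np.clip(particles, win, [H - win - 1, W - win - 1])
        weights = np.empty(len(particles))
        for i, (y, x) in enumerate(particles.astype(int)):
            weights[i] = bhattacharyya(color_hist(frame[y - win:y + win, x - win:x + win]),
                                       target_hist)
        weights /= weights.sum()
        estimate = weights @ particles                                    # position estimate
        resampled = particles[rng.choice(len(particles), size=len(particles), p=weights)]
        return estimate, resampled

    # Toy usage on a random frame, with particles initialized near the target region.
    frame = np.random.default_rng(1).integers(0, 256, (240, 320)).astype(np.uint8)
    target = color_hist(frame[100:140, 140:180])
    particles = np.tile([120.0, 160.0], (50, 1))
    pos, particles = particle_filter_step(frame, particles, target)
    print(pos)

In the paper, the position and scale estimates from this filter constrain the ACM's internal energy so the snake does not latch onto background edges; that coupling and the contour model itself are not shown here.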