• Title/Summary/Keyword: User tracking

603 search results

Object Tracking System for Additional Service Providing under Interactive Broadcasting Environment (대화형 방송 환경에서 부가서비스 제공을 위한 객체 추적 시스템)

  • Ahn, Jun-Han;Byun, Hye-Ran
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.1
    • /
    • pp.97-107
    • /
    • 2002
  • In general, under an interactive broadcasting environment, the user finds additional services through a top-down menu. However, the user cannot know what information an additional service provides until retrieval has finished, and a top-down menu requires multi-level retrieval. This paper proposes a new method of providing additional services through object selection instead of a top-down menu. For this method, the MPEG video must be synchronized with the object information (position, size, shape), and an object tracking technique is required. Synchronization uses DirectShow, provided by Microsoft. Object tracking combines motion-based and model-based tracking. We divide each object into two parts: the face and the body. Face tracking uses model-based tracking, while the body is tracked with motion-based tracking built on a block matching algorithm. To improve tracking precision, the motion-based tracking applies a temporal prediction search algorithm, and the model-based tracking applies a face model that merges an ellipse model and a color model.
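The block matching step described above can be illustrated with a minimal sketch: for a block in the previous frame, exhaustively search a window in the current frame for the displacement that minimizes the sum of absolute differences (SAD). The function name, block size, and search range below are illustrative choices, not values from the paper (the paper additionally narrows the search with temporal prediction, which is omitted here).

```python
import numpy as np

def block_match(prev_frame, cur_frame, block_tl, block_size=16, search_range=8):
    """Find the (dy, dx) displacement of a block between two grayscale
    frames by minimizing the sum of absolute differences (SAD)."""
    y0, x0 = block_tl
    block = prev_frame[y0:y0 + block_size, x0:x0 + block_size].astype(np.int32)
    h, w = cur_frame.shape
    best = None  # (sad, dy, dx)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                continue  # candidate block falls outside the frame
            cand = cur_frame[y:y + block_size, x:x + block_size].astype(np.int32)
            sad = int(np.abs(block - cand).sum())
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    return best[1], best[2]
```

A temporal prediction search would simply center the `(dy, dx)` window on the motion vector found in the previous frame instead of on zero.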

Investigating Key User Experience Factors for Virtual Reality Interactions

  • Ahn, Junyoung;Choi, Seungho;Lee, Minjae;Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.4
    • /
    • pp.267-280
    • /
    • 2017
  • Objective: The aim of this study is to investigate the key user experience factors of interactions with Head Mounted Display (HMD) devices in the Virtual Reality Environment (VRE). Background: Virtual reality interaction research has progressed steadily as interaction methods and virtual reality devices have improved. Recently released virtual reality devices are all HMD-based, and HMD-based interaction types include Remote Controller, Head Tracking, and Hand Gesture. However, there are few studies on the usability of virtual reality, and the usability of HMD-based virtual reality in particular has not been investigated; such a study is therefore needed. Method: HMD-based VR devices released recently support only three interaction types: 'Remote Controller', 'Head Tracking', and 'Hand Gesture'. We surveyed 113 studies to identify the user experience factors and evaluation scales used for each interaction type, then summarized the key factors and relevant scales according to how frequently they appeared. Results: The key user experience factors differ by interaction type. For the remote controller they are 'Ease of learning', 'Ease of use', 'Satisfaction', 'Effectiveness', and 'Efficiency'. For head tracking they are 'Sickness', 'Immersion', 'Intuitiveness', 'Stress', 'Fatigue', and 'Ease of learning'. For hand gesture they are 'Ease of learning', 'Ease of use', 'Feedback', 'Consistent', 'Simple', 'Natural', 'Efficiency', 'Responsiveness', 'Usefulness', 'Intuitiveness', and 'Adaptability'. Conclusion: We identified the key user experience factors for each interaction type through a literature review, but did not consider objective measures because each study adopted different performance factors.
Application: The results of this study can be used when evaluating the usability of HMD-based interactions in virtual reality.

Development of 3-D viewer for indoor location tracking system using wireless sensor network

  • Yang, Chi-Shian;Chung, Wan-Young
    • Journal of Sensor Science and Technology
    • /
    • v.16 no.2
    • /
    • pp.110-114
    • /
    • 2007
  • In this paper we present 3-D Navigation View, a three-dimensional visualization of an indoor environment that serves as an intuitive, unified user interface for our indoor location tracking system, built with the Virtual Reality Modeling Language (VRML) in a web environment. The user's spatial information extracted from the indoor location tracking system is processed to indicate his or her location in the virtual 3-D indoor environment according to his or her position in the physical world. The External Authoring Interface (EAI) provided by VRML enables the integration of interactive 3-D graphics into the web and direct communication with the embedded Java applet, which periodically updates the user's position and viewpoint in the 3-D indoor environment. Since any web browser with a VRML viewer plug-in can run the platform-independent 3-D Navigation View, specialized and expensive hardware or software is unnecessary.

User-Calibration Free Gaze Tracking System Model (사용자 캘리브레이션이 필요 없는 시선 추적 모델 연구)

  • Ko, Eun-Ji;Kim, Myoung-Jun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.5
    • /
    • pp.1096-1102
    • /
    • 2014
  • In remote gaze tracking systems using infrared LEDs, calibrating the position of the reflected light (glint) is essential for computing the pupil position in captured images. However, there are limits to how far errors can be reduced, because the varying head location and the unknown radius of the cornea enter the calibration process as constants. This study proposes a pupil-corneal reflection gaze tracking method that does not require user calibration. Our goal is to eliminate the glint-position correction step, which requires prior calibration, so that the gaze calculation is simplified.
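The core of pupil-corneal reflection tracking is mapping the vector from the glint to the pupil centre onto screen coordinates. The sketch below is a deliberately simplified, calibration-free variant: it applies a fixed linear gain instead of per-user calibration coefficients. The function name, gain values, and screen centre are illustrative assumptions, not the paper's actual model.

```python
def gaze_from_pupil_glint(pupil, glint, gain=(90.0, 90.0),
                          screen_center=(960, 540)):
    """Map the glint-to-pupil-centre vector (in image pixels) to an
    on-screen gaze point with a fixed linear gain, i.e. without a
    per-user calibration step."""
    vx = pupil[0] - glint[0]   # horizontal pupil-glint offset
    vy = pupil[1] - glint[1]   # vertical pupil-glint offset
    return (screen_center[0] + gain[0] * vx,
            screen_center[1] + gain[1] * vy)
```

A real system would derive the gain from the known LED/camera/screen geometry rather than fix it as a constant.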

Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.5
    • /
    • pp.143-150
    • /
    • 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, rather than expensive equipment, to provide motion control. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which acts essentially as a motion controller in the user's hand. The interface also includes a color-coded communication method using RGB color combinations. Users can conveniently send or receive text data through this function, and data can be transferred continuously even while the user is performing gestures. We present an implementation of viable contents based on the proposed motion recognition platform.
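One simple way to realize color-coded communication of the kind described above is to pack the message bytes into RGB triples, one byte per channel, and display them as a sequence of colour frames. This is a hypothetical encoding sketch (the paper does not specify its coding scheme); the zero-padding convention means it is not suitable for messages containing trailing NUL bytes.

```python
def encode_text_to_colors(text):
    """Pack the UTF-8 bytes of a message into RGB triples,
    three bytes per colour frame (zero-padded at the end)."""
    data = text.encode("utf-8")
    data += b"\x00" * (-len(data) % 3)  # pad to a multiple of 3
    return [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]

def decode_colors_to_text(colors):
    """Reassemble the byte stream from received RGB triples."""
    data = bytes(channel for rgb in colors for channel in rgb)
    return data.rstrip(b"\x00").decode("utf-8")
```

A camera-based channel would in practice restrict itself to a small palette of easily distinguishable colours and add synchronization and error-detection frames.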

Design of a User-Friendly Control System using Least Control Parameters (최소 제어 인자 도출을 통한 사용편의성 높은 제어시스템 설계)

  • Heo, Youngjin;Park, Daegil;Kim, Jinhyun
    • The Journal of Korea Robotics Society
    • /
    • v.9 no.1
    • /
    • pp.67-77
    • /
    • 2014
  • An electric motor is one of the most important parts of a robot system; it typically drives the wheels of mobile robots or the joints of manipulators. The controller type and parameters vary with the required motor performance: wheel-driving motors use a speed tracking controller, while joint-driving motors require a position tracking controller. Moreover, if the mechanical parameters change or a different motor is used, the controller parameters may have to be retuned, which is hard to do properly for beginners unfamiliar with controller design. In this paper, we develop a nominal robust controller model for the velocity tracking of wheel-driving motors and the position tracking of joint-driving motors, based on a disturbance observer (DOB) that rejects disturbances, modeling errors, and dynamic parameter variations, and we propose a methodology for determining the least set of control parameters. The proposed control system enables beginners to easily construct a controller for a newly designed robot system. The purpose of this paper is not to develop a new controller theory but to increase user-friendliness. Finally, simulation and experimental verification were performed on actual wheel- and joint-driving motors.
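The DOB idea can be sketched in discrete time: run the measured acceleration backwards through the nominal inverse model to infer what input must have acted, subtract the applied input to get a raw disturbance estimate, low-pass filter it (the Q filter), and cancel it at the plant input. All numeric values below (inertia, gains, time constants, disturbance) are made-up illustration values, not parameters from the paper.

```python
def simulate_dob_speed_control(steps=2000, dt=0.001):
    """Velocity control of a nominal first-order motor model
    J * dv/dt = u + d, using a PI controller plus a disturbance
    observer that estimates d via the inverse nominal model."""
    J = 0.01                      # nominal inertia (assumed)
    tau = 0.02                    # Q-filter time constant (first-order low-pass)
    kp, ki = 0.5, 5.0             # PI gains (assumed)
    v_ref, d_true = 10.0, -0.5    # speed reference, constant load disturbance
    v = integ = d_hat = prev_v = 0.0
    for _ in range(steps):
        err = v_ref - v
        integ += err * dt
        u_c = kp * err + ki * integ          # PI command
        u = u_c - d_hat                      # cancel the estimated disturbance
        # inverse nominal model: input that would explain the observed accel,
        # minus the input actually applied = raw disturbance estimate
        d_raw = J * (v - prev_v) / dt - u
        d_hat += (dt / tau) * (d_raw - d_hat)  # Q-filter update
        prev_v = v
        v += dt * (u + d_true) / J           # plant update
    return v, d_hat
```

Despite the un-modelled constant load, the speed settles on the reference and the observer's estimate converges to the true disturbance.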

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam;Ko, Jong-Gook;SeungHo choi;Kim, Jin-Young;Kim, Ki-Jung
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.249-252
    • /
    • 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration for tackling the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel. Face tracking is a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results of multimodal integration and user testing on the prototype system are shown. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term interest is to build a user interface embedded in a commercial car navigation system (CNS).


Development of Cultural Contents using Augmented Reality Based Markerless Tracking

  • Kang, Hanbyeol;Park, DaeWon;Lee, SangHyun
    • International journal of advanced smart convergence
    • /
    • v.5 no.4
    • /
    • pp.57-65
    • /
    • 2016
  • This paper aims to improve the quality of the cultural experience by providing a three-dimensional guide service that lets users explore cultural heritage on their own, without additional guides or commentators, using the latest mobile IT technology. We propose a method of constructing cultural contents based on location information for both the user and the cultural heritage, using markerless-tracking-based augmented reality and GPS. We use marker detection and markerless tracking technology to recognize augmented reality objects accurately according to the state of the cultural heritage, and we use Android's Google Maps to locate the user. The purpose of this paper is to produce content introducing cultural heritage using GPS and augmented reality on Android, which can be combined with various objects beyond the limitations of existing augmented reality contents.
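A GPS-based guide of this kind ultimately reduces to a proximity test: compute the great-circle distance between the user's fix and each heritage site, and trigger the AR content for sites within range. The sketch below uses the standard haversine formula; the function names, site coordinates, and 50 m radius are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearby_heritage(user_fix, sites, radius_m=50.0):
    """Return names of heritage sites within radius_m of the user's fix."""
    return [name for name, (lat, lon) in sites.items()
            if haversine_m(user_fix[0], user_fix[1], lat, lon) <= radius_m]
```

In the app, a positive result would switch the view from the map to the markerless AR tracker for that site.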

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.834-848
    • /
    • 2013
  • Smartphones offer many kinds of human-phone interface, such as touch, voice, and gesture. However, the primary touch interface cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric relation between the user's face and the phone screen, and the low resolution of front cameras. In this paper, a new eye tracking method is proposed as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, appropriate camera specifications and image resolution are analyzed for smartphone-based gaze tracking. Second, facial movement is allowed as long as one eye region remains in the image. Third, the method operates in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, based on the 2D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module was built to confirm feasibility for an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800, and the average hit ratio on a 5×4 icon grid was 94.6%.
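The use of two LED reflections can be sketched as a normalization trick: the glint pair fixes an origin (their midpoint) and a scale (their separation) in the eye image, so the pupil position expressed in that frame is largely insensitive to head distance and can be mapped linearly onto the 480×800 screen. The function name and the `gain` factor below are illustrative assumptions, not the paper's actual geometric model.

```python
import math

def gaze_on_screen(pupil, glint_a, glint_b, screen_w=480, screen_h=800,
                   gain=2.0):
    """Normalise the pupil centre against the two LED glints: their
    midpoint gives the origin and their separation the scale, so the
    mapping tolerates changes in head distance."""
    mx = (glint_a[0] + glint_b[0]) / 2
    my = (glint_a[1] + glint_b[1]) / 2
    sep = math.hypot(glint_b[0] - glint_a[0], glint_b[1] - glint_a[1]) or 1.0
    nx, ny = (pupil[0] - mx) / sep, (pupil[1] - my) / sep
    x = screen_w / 2 + gain * nx * screen_w
    y = screen_h / 2 + gain * ny * screen_h
    # clamp to the physical screen
    return (min(max(x, 0.0), screen_w), min(max(y, 0.0), screen_h))
```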

Robot Driving System and Sensors Implementation for a Mobile Robot Capable of Tracking a Moving Target (이동물체 추적 가능한 이동형 로봇구동 시스템 설계 및 센서 구현)

  • Myeong, Ho Jun;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.22 no.3_1spc
    • /
    • pp.607-614
    • /
    • 2013
  • This paper proposes a robot driving system and sensor implementation for an educational robot. The robot has multiple functions and was designed so that children could use it with interest and ease. It recognizes the location of a user and follows that user at a specific distance while the robot and user communicate with each other. In this work, the robot was designed and manufactured, and its performance was evaluated. In addition, an embedded board was installed for communicating with a smartphone, and a camera mounted on the robot allows it to monitor the environment. To let the robot follow a moving user, a set of sensors combining an RF module and ultrasonic sensors was adopted to measure the distance between the user and the robot. With this ultrasonic sensor arrangement, the user's location could be identified in all directions, allowing the robot to follow the moving user at the desired distance. Experiments were carried out to see how well the user's location could be recognized and how accurately the robot tracked the user, and they eventually yielded satisfactory performance.
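An RF-plus-ultrasonic ranging scheme like the one above typically works by time-of-flight: the RF pulse arrives effectively instantly and starts a timer, and the delay until the slower ultrasonic pulse arrives gives the acoustic travel time. A minimal sketch, assuming hypothetical function names and a ring of direction-labelled receivers:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def distance_from_tof(t_rf, t_ultrasonic):
    """The RF pulse is treated as instantaneous, so the gap until the
    ultrasonic pulse arrives is the one-way acoustic travel time."""
    return SPEED_OF_SOUND * (t_ultrasonic - t_rf)

def nearest_direction(readings):
    """Given distance readings (metres) from ultrasonic receivers
    mounted around the robot, return the direction whose receiver
    reads the shortest range, i.e. the side facing the user."""
    return min(readings, key=readings.get)
```

The follow behaviour then reduces to steering toward `nearest_direction(...)` while regulating `distance_from_tof(...)` to the desired standoff distance.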