• Title/Summary/Keyword: Head-tracking


Survey on Mixed Reality R&D (혼합현실 기술 연구개발 동향 및 전망)

  • Lee, Sang-Goog
    • Journal of the Korea Computer Graphics Society
    • /
    • v.13 no.2
    • /
    • pp.1-15
    • /
    • 2007
  • In this paper, we review the relevant technologies of MR (Mixed Reality) and identify the key components needed to overcome the technical limitations of current MR. MR technology combines real and virtual objects in a real environment, runs interactively in real time, and is regarded as an emerging technology in a large part of the future of IT (Information Technology). We group the major obstacles limiting wider use of MR technologies into three themes: technological limitations (i.e., tracking, rendering, authoring, and registration), user interface limitations (i.e., UI metaphors for MR interaction), and social acceptance issues.


Unconstrained e-Book Control Program by Detecting Facial Characteristic Point and Tracking in Real-time (얼굴의 특이점 검출 및 실시간 추적을 이용한 e-Book 제어)

  • Kim, Hyun-Woo;Park, Joo-Yong;Lee, Jeong-Jick;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research
    • /
    • v.35 no.2
    • /
    • pp.14-18
    • /
    • 2014
  • This study presents an e-Book control program based on a human-computer interaction (HCI) system for physically handicapped persons. Background work on HCI shows that a vision-based interface can replace current computer input devices by extracting a characteristic point and tracking it. By analyzing facial images captured with a webcam, we chose the between-eyes region as the characteristic point. However, because of the three-dimensional structure of glasses, the between-eyes point was unsuitable for users wearing glasses, so we switched the characteristic point to the bridge of the nose after first detecting the between-eyes region. With this technique, we could trace head rotation in real time regardless of glasses. To test the program's usefulness, we conducted an experiment with an actual application. Analyzing the results from 20 subjects, we obtained a 96.5% success rate for controlling the e-Book under proper conditions.

Energy-Efficient Adaptive Dynamic Sensor Scheduling for Target Monitoring in Wireless Sensor Networks

  • Zhang, Jian;Wu, Cheng-Dong;Zhang, Yun-Zhou;Ji, Peng
    • ETRI Journal
    • /
    • v.33 no.6
    • /
    • pp.857-863
    • /
    • 2011
  • Due to uncertainties in target motion and the randomness of deployed sensor nodes, sensor scheduling suffers from an imbalance of energy consumption. This paper presents an energy-efficient adaptive sensor-scheduling algorithm for target monitoring in a local monitoring region of wireless sensor networks. To avoid excessive scheduling of any individual node, nodes with high values of a decision function are preferentially selected as tasking nodes to balance the local energy consumption of a dynamic cluster; the node with the highest value is chosen as the cluster head, and the others are held in reserve. In addition, an optimization problem is derived to satisfy the sensor-scheduling constraint given by the joint detection probability of the tasking sensors. Particles of the target in the particle filter algorithm are resampled for higher tracking accuracy. Simulation results show that this algorithm meets the required tracking accuracy and schedules nodes efficiently, yielding a 41.67% savings in energy consumption.
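The particle resampling step mentioned above can be illustrated with the standard systematic resampling scheme; the particles and weights below are made up for illustration and are not from the paper:

```python
import random

def systematic_resample(particles, weights):
    """Resample particles in proportion to their weights (systematic scheme)."""
    n = len(particles)
    total = sum(weights)
    # Cumulative distribution of the normalized weights.
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # One random offset, then n evenly spaced pointers through the CDF.
    start = random.random() / n
    resampled, i = [], 0
    for j in range(n):
        u = start + j / n
        while cdf[i] < u:
            i += 1
        resampled.append(particles[i])
    return resampled

# Particles with very unequal weights: the heavy particle should dominate.
particles = ["a", "b", "c", "d"]
weights = [0.7, 0.1, 0.1, 0.1]
new = systematic_resample(particles, weights)
print(new)
```

Because the pointers are evenly spaced, heavily weighted particles are duplicated and lightly weighted ones are dropped, which concentrates the particle set near the likely target state.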

A Design on Sub-Motion System for Full Body Tracking (풀 바디 트래킹을 위한 서브 모션 시스템 설계)

  • Kim, Hoyong;Wu, Guoqing;Sung, Yunsick
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.889-891
    • /
    • 2018
  • As Virtual Reality (VR) content has diversified, user interest has grown. Early VR content used only a head-mounted display (HMD) and controllers. As user demands have increased, full body tracking technology, which controls content through the user's body movements, has been introduced to realize more realistic content. In addition to the head-worn HMD and the two hand-held controllers, motion-capture devices and trackers worn at various positions on the user's body now enable fine-grained motion tracking. This study proposes a sub-motion-based movement tracking method and a sub-motion system built on it. Rather than mapping the positions of the sensors used in the VR content directly to the corresponding positions of the VR character, the sub-motion system recognizes changes in the sensor positions as the user moves and, based on them, recognizes and outputs motions predefined in VR. The user's movement is subdivided and recognized as a sequence of sub-motions, and by defining, through branching, which sub-motions can follow each sub-motion, diverse motions with a high degree of freedom can be processed. This resolves the fixed-damage schemes and unnatural motions of prior techniques and increases immersion by encouraging users to perform realistic movements. Through a system that automatically generates sub-motions, we aim to research and develop an engine applicable to full-body-tracking VR content and thereby contribute to the development of the industry.
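The branching of sub-motions described above behaves like a small state machine: each recognized sub-motion constrains which sub-motions may legally follow it. The sketch below is a toy illustration with hypothetical motion names, not the proposed engine:

```python
# Each recognized sub-motion defines which sub-motions may legally follow it,
# so a continuous user movement is parsed into a sequence of predefined motions.
# All motion names here are hypothetical.
TRANSITIONS = {
    "idle":         {"raise_arm", "step_forward"},
    "raise_arm":    {"swing_down", "hold"},
    "swing_down":   {"idle"},
    "step_forward": {"idle"},
    "hold":         {"swing_down"},
}

def parse_motion(sub_motions, start="idle"):
    """Return the sub-motions accepted from `start`; stop at the first illegal branch."""
    state, accepted = start, []
    for m in sub_motions:
        if m not in TRANSITIONS.get(state, set()):
            break
        accepted.append(m)
        state = m
    return accepted

print(parse_motion(["raise_arm", "hold", "swing_down", "idle"]))
```

Because only predefined branches are accepted, the engine can output a clean, natural motion for the VR character even when the raw sensor trajectory is noisy.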

Low-Cost Hologram Module for Optical Pickup by Adjusting Photodiode Package (포토 다이오드 조정방식을 이용한 광 픽업용 저가 홀로그램 모듈)

  • Jeong, Ho-Seop;Kyong, Chon-Su
    • Korean Journal of Optics and Photonics
    • /
    • v.16 no.4
    • /
    • pp.345-353
    • /
    • 2005
  • We propose a new and cost-effective method for assembling holographic pickup modules without any high-resolution vision system. Assembly is accomplished by adjusting only the photodiode package, leading to a low-cost holographic pickup module. Focus and tracking error signals are determined simply by comparing spot sizes and by using the 3-beam method, respectively, based on four-sectional holographic optical elements. In experiments, we assembled a hologram module and evaluated the performance of the proposed method for a holographic pickup module used in a compact disc system.
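Comparing spot sizes for the focus error signal can be expressed as a normalized difference; the toy sketch below shows only this idea and does not model the actual four-sectional optics or photodiode geometry:

```python
def focus_error(spot_a, spot_b):
    """Normalized difference of two measured spot sizes: zero at best focus,
    and the sign indicates which way the objective lens should move."""
    return (spot_a - spot_b) / (spot_a + spot_b)

print(focus_error(3.0, 3.0))  # equal spots: in focus
print(focus_error(4.0, 2.0))  # unequal spots: defocused in one direction
```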

A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.355-372
    • /
    • 2021
  • Recently, high-performance HMDs (Head-Mounted Displays) are becoming wireless owing to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality in a single space simultaneously. Existing multi-user virtual reality platforms use location tracking and motion sensing based on vision sensors and active markers, but immersion decreases because of overlapping markers and frequent matching errors caused by reflected light. The goal of this study is to develop a multi-user virtual reality moving platform for a single space that resolves these sensing errors and the loss of user immersion. To achieve this, a hybrid sensing technology was developed that converges vision-sensor position tracking, IMU (Inertial Measurement Unit) motion capture, and smart-glove-based gesture recognition. In addition, an integrated safety operation system was developed that ensures user safety and supports multimodal feedback without reducing immersion. A 6 m×6 m×2.4 m test bed was configured to verify the effectiveness of the multi-user virtual reality moving platform with four users.
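One common way to fuse vision-based position fixes with IMU motion data, as in the hybrid sensing above, is a complementary filter. The 1-D sketch below uses an illustrative gain and made-up data; it is not the platform's actual fusion method:

```python
def fuse(vision_pos, imu_vel, dt=0.02, alpha=0.98):
    """1-D complementary filter: trust integrated IMU velocity at high
    frequency and slowly correct drift toward the vision position fix."""
    est = vision_pos[0]
    track = [est]
    for z, v in zip(vision_pos[1:], imu_vel[1:]):
        predicted = est + v * dt                   # dead-reckon with the IMU
        est = alpha * predicted + (1 - alpha) * z  # pull toward the vision fix
        track.append(est)
    return track

# Constant true velocity of 1 m/s; vision fixes are noiseless in this toy case.
vision = [0.0, 0.02, 0.04, 0.06]
imu_v = [1.0, 1.0, 1.0, 1.0]
print(fuse(vision, imu_v))
```

The IMU term keeps the estimate smooth between camera frames, while the vision term prevents the integrated velocity from drifting away over time.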

Tracking of ARPA Radar Signals Based on UK-PDAF and Fusion with AIS Data

  • Chan Woo Han;Sung Wook Lee;Eun Seok Jin
    • Journal of Ocean Engineering and Technology
    • /
    • v.37 no.1
    • /
    • pp.38-48
    • /
    • 2023
  • To maintain the existing systems of ships while introducing autonomous operation technology, it is necessary to improve situational awareness through sensor fusion of the automatic identification system (AIS) and the automatic radar plotting aid (ARPA), which are already installed on ships. This study proposes an algorithm for determining in real time whether AIS and ARPA signals originate from the same ship. To minimize the number of errors caused by the time-series and abnormal phenomena of heterogeneous signals, a tracking method combining the unscented Kalman filter and the probabilistic data association filter is applied to ARPA radar signals, and a position prediction method is applied to AIS signals. In particular, the proposed algorithm determines whether two signals belong to the same vessel by comparing motion-related components of the heterogeneous signal data to which these methods have been applied. Finally, a measurement test is conducted on a training ship, in which the proposed algorithm is validated using the AIS and ARPA signal data received by the voyage data recorder for the same ship and verified by comparing the test results with those obtained from raw data. A sensor fusion algorithm that considers the characteristics of the sensors is therefore recommended to improve the situational-awareness accuracy of existing ship systems.
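The comparison of motion-related components can be illustrated as a simple gating test on position, course, and speed; the gate values and field names below are assumptions for illustration, not the paper's thresholds:

```python
import math

def same_ship(ais, arpa, pos_gate=200.0, cog_gate=15.0, sog_gate=2.0):
    """Crude gating test: treat the AIS and ARPA tracks as the same ship if
    their position (m), course over ground (deg), and speed over ground (kn)
    all fall within the gates. Gate values are illustrative only."""
    dist = math.hypot(ais["x"] - arpa["x"], ais["y"] - arpa["y"])
    # Wrap the course difference into [-180, 180) before taking its magnitude.
    dcog = abs((ais["cog"] - arpa["cog"] + 180.0) % 360.0 - 180.0)
    dsog = abs(ais["sog"] - arpa["sog"])
    return dist <= pos_gate and dcog <= cog_gate and dsog <= sog_gate

ais = {"x": 100.0, "y": 50.0, "cog": 92.0, "sog": 11.5}
arpa = {"x": 180.0, "y": 20.0, "cog": 88.0, "sog": 12.3}
print(same_ship(ais, arpa))
```

In practice, the filtered ARPA track and the predicted AIS position would feed this test, so that time-series lag between the two data streams does not cause false mismatches.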

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.80-90
    • /
    • 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. This method employs a single camera with an infrared filter to capture one eye, and two infrared light sources to make reflection points for estimating the corresponding gaze point on the screen from the user's eyes. The camera, the infrared light sources, and the user's head can be slightly moved, rendering a simple and flexible system without any inconvenient fixed equipment or the assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user only has to stare at two target points for a few seconds so that the system can initialize the user's individual factors for the estimation algorithm. The proposed system runs in real time at over 10 frames per second with XGA $(1024{\times}768)$ resolution. The test results of nine objects from three subjects show that the system achieves an average estimation error of less than 1 degree.
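The pupil-glint approach maps the 2D vector from the glint center to the pupil center onto screen coordinates, and two calibration targets suffice to fit one linear map per axis. The sketch below assumes an axis-separable linear model for illustration; the paper's actual estimation algorithm is not reproduced here:

```python
def calibrate(vec1, scr1, vec2, scr2):
    """Fit an independent linear map per axis from two calibration fixations:
    screen = a + b * (pupil_center - glint_center). The axis-separable model
    is an illustrative simplification."""
    params = []
    for axis in (0, 1):
        b = (scr2[axis] - scr1[axis]) / (vec2[axis] - vec1[axis])
        a = scr1[axis] - b * vec1[axis]
        params.append((a, b))
    return params

def gaze_point(params, vec):
    """Map a measured pupil-glint vector to a screen coordinate."""
    return tuple(a + b * vec[axis] for axis, (a, b) in enumerate(params))

# Two targets (e.g. opposite screen corners) and the pupil-glint vectors
# measured while the user fixated on each; values are made up.
p = calibrate((-10.0, -6.0), (0.0, 0.0), (10.0, 6.0), (1024.0, 768.0))
print(gaze_point(p, (0.0, 0.0)))
```

With this model, a centered pupil-glint vector maps to the middle of a 1024×768 screen, which is why only two fixation targets are needed for the personal calibration.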

A New Face Tracking Method Using Block Difference Image and Kalman Filter in Moving Picture (동영상에서 칼만 예측기와 블록 차영상을 이용한 얼굴영역 검출기법)

  • Jang, Hee-Jun;Ko, Hye-Sun;Choi, Young-Woo;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.163-172
    • /
    • 2005
  • When tracking a human face in moving pictures with complex backgrounds under irregular lighting conditions, the detected face can be too large, including background, or too small, including only part of the face; background itself may even be detected as a face area. To solve these problems, this paper proposes a new face tracking method using a block difference image and a Kalman estimator. The block difference image detects even a small motion of a person, and the face area is selected using skin color inside the detected motion area. If pixels with skin color are found inside the detected motion area, the boundary of the area is represented by a code sequence using the 8-neighbor window, and the head area is detected by analyzing this code. The pixels in the head area are segmented by color, and the region most similar to skin color is taken as the face area. The detected face area is represented by its bounding rectangle, whose four vertices are used as the states of the Kalman estimator to trace the motion of the face area. Experiments show that the proposed method increases the accuracy of face detection and significantly reduces the face detection time.
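A block difference image of the kind described above can be sketched by summing absolute intensity changes per block between two frames; the block size, threshold, and frames below are illustrative choices, not the paper's parameters:

```python
def block_difference(prev, curr, block=2, thresh=10):
    """Mark each block x block tile whose summed absolute intensity change
    between two frames exceeds `thresh`. Returns a coarse binary motion map."""
    h, w = len(prev), len(prev[0])
    motion = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            diff = sum(abs(curr[y][x] - prev[y][x])
                       for y in range(by, min(by + block, h))
                       for x in range(bx, min(bx + block, w)))
            row.append(1 if diff > thresh else 0)
        motion.append(row)
    return motion

# Toy 4x4 grayscale frames: only the right half changes between frames.
prev = [[0, 0, 0, 0] for _ in range(4)]
curr = [[0, 0, 30, 30] for _ in range(4)]
print(block_difference(prev, curr))
```

Summing per block rather than per pixel makes small motions visible while suppressing isolated pixel noise, which is what lets the skin-color test then run only inside the flagged region.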

Immersive user interfaces for visual telepresence in human-robot interaction (사람과 로봇간 원격작동을 위한 몰입형 사용자 인터페이스)

  • Jang, Su-Hyeong
    • Proceedings of the Korean HCI Society Conference
    • /
    • 2009.02a
    • /
    • pp.406-410
    • /
    • 2009
  • As studies on more realistic human-robot interfaces are being actively carried out, interest in telepresence, which remotely controls a robot and obtains environmental information through a video display, is increasing. To provide natural telepresence services by moving a remote robot, the user's behavior must be recognized. The recognition of user movements in previous telepresence systems was difficult and costly to implement, which limited its application to human-robot interaction. In this paper, using the Nintendo Wii controller, which is attracting much attention these days, and infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD.
