• Title/Summary/Keyword: Feature Point Tracking


Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • It is very important to capture a face image from video and extract expression data for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expressions of an actor in a video and apply them to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. The procedure consists of three steps: face detection, facial feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probabilistic distribution model. Experiments showed that our system can trace expression data at about 8 fps.
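
The skin-detection step this abstract describes — thresholding skin pixels in YCbCr space — can be sketched as below. The BT.601 conversion is standard, but the Cb/Cr ranges are common literature defaults rather than the paper's values, and the Haar-based verification and feature-tracing stages are omitted:

```python
import numpy as np

def skin_mask_ycbcr(image_bgr, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify skin pixels by thresholding Cb/Cr; thresholds are a
    common rule of thumb, not taken from the paper."""
    b, g, r = [image_bgr[..., i].astype(np.float32) for i in range(3)]
    # ITU-R BT.601 RGB -> YCbCr conversion
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))
```

Thresholding only the chrominance channels makes the rule fairly robust to changes in luminance, which is the usual motivation for working in YCbCr.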

Real-time Control of Biological Animal Wastewater Treatment Process and Stability of Control Parameters (생물학적 축산폐수 처리공정의 자동제어 방법 및 제어 인자의 안정성)

  • Kim, W.Y.;Jung, J.H.;Ra, C.S.
    • Journal of Animal Science and Technology / v.46 no.2 / pp.251-260 / 2004
  • The feasibility and stability of ORP, pH(mV), and DO as real-time control parameters for an SBR process were evaluated in this study. During operation, the NBP (nitrogen break point) and NKP (nitrate knee point), which reveal the biological and chemical changes of pollutants, were clearly observed on the ORP- and pH(mV)-time profiles, and these control points were easily detected by tracking the moving slope changes (MSC). However, when the balance of aeration rate to loading rate, or to OUR (oxygen uptake rate), was not optimally maintained, either a false NBP occurred on the ORP and DO curves before the appearance of the real NBP, or the characteristic NBP feature disappeared from the ORP curve. Even under those conditions, a very distinct NBP was found on the pH(mV)-time profile, and stable detection of that point was feasible by tracking the MSC. These results suggest that pH(mV) is a better real-time control parameter for the aerobic process than ORP or DO. Meanwhile, as a real-time control parameter for the anoxic process, ORP was more stable and useful than the others. Based on these results, stable real-time control of the process can be achieved by using the ORP and pH(mV) parameters in combination rather than separately. With this real-time control technology, complete removal of pollutants was always ensured despite variations in wastewater and operating conditions, and optimization of treatment time and capacity was feasible.
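
The moving-slope-change (MSC) idea — detecting control points such as the NBP from slope behavior on the ORP or pH(mV) profile rather than from absolute values — can be illustrated with a toy sketch; the window size and slope threshold here are illustrative, not the paper's:

```python
def moving_slope(values, window=3):
    """First differences of the profile, smoothed over a sliding window
    (the MSC idea: watch the slope, not the raw sensor value)."""
    diffs = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    return [sum(diffs[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(diffs))]

def detect_break_point(values, window=3, drop=-1.0):
    """Return the first index where the smoothed slope falls below
    `drop`, a stand-in for a break point such as the NBP."""
    for i, s in enumerate(moving_slope(values, window)):
        if s < drop:
            return i + 1
    return None
```

Smoothing the slope before thresholding is what suppresses the false break points that single-sample noise would otherwise trigger.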

Vision-Based Robust Control of Robot Manipulators with Jacobian Uncertainty (자코비안 불확실성을 포함하는 로봇 매니퓰레이터의 영상기반 강인제어)

  • Kim, Chin-Su;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.10 no.2 / pp.113-120 / 2006
  • In this paper, a vision-based robust controller for tracking the desired trajectory of a robot manipulator is proposed. The trajectory is generated so as to move the feature point to the desired position, which the robot follows to reach the desired configuration. A robust controller is proposed to compensate for the parametric uncertainties of the robot manipulator contained in the control input. In addition, a vision-based robust control input is proposed to compensate for uncertainties in the Jacobian. The stability of the closed-loop system is shown by the Lyapunov method. The performance of the proposed method is demonstrated by simulations and experiments on a two-degree-of-freedom 5-link robot manipulator.
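
For background, the nominal image-based control law that such a vision-based scheme builds on can be sketched for a single point feature using the classic interaction matrix; the paper's actual contribution, the robust terms compensating manipulator and Jacobian uncertainty, is not reproduced here:

```python
import numpy as np

def ibvs_velocity(feature, target, depth, focal=1.0, gain=0.5):
    """One-point image-based visual servoing step: camera velocity
    (vx, vy, vz, wx, wy, wz) driving the image feature to the target."""
    x, y = feature
    Z = depth
    # Standard interaction (image Jacobian) matrix for a point feature
    L = np.array([
        [-focal / Z, 0, x / Z, x * y / focal, -(focal + x * x / focal), y],
        [0, -focal / Z, y / Z, focal + y * y / focal, -x * y / focal, -x],
    ])
    e = np.array(feature) - np.array(target)
    # Exponential decrease of the image error: v = -gain * L^+ e
    return -gain * np.linalg.pinv(L) @ e
```

Uncertainty in `depth` (and hence in `L`) is exactly the Jacobian uncertainty the paper's robust controller is designed to tolerate.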


Augmented Reality Game Interface Using Hand Gestures Tracking (사용자 손동작 추적에 기반한 증강현실 게임 인터페이스)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society / v.6 no.2 / pp.3-12 / 2006
  • Recently, many 3D augmented reality games that provide a stronger sense of immersion have appeared in the 3D game environment. In this article, we describe a barehanded interaction method based on human hand gestures for augmented reality games. First, feature points are extracted from the input video streams. The point features are tracked and the motion of moving objects is computed. The shapes of the motion trajectories are used to determine whether a motion is an intended gesture: a long, smooth trajectory toward one of the virtual objects or menus is classified as an intended gesture, and the corresponding action is invoked. To prove the validity of the proposed method, we implemented two simple augmented reality applications: a gesture-based music player and a virtual basketball game. In the music player, several menu icons are displayed at the top of the screen, and a user can activate a menu with hand gestures. In the virtual basketball game, a virtual ball bounces in a virtual cube space while the real video stream is shown in the background, and a user can hit the virtual ball with hand gestures. Experiments with three untrained users show that the accuracy of menu activation for intended gestures is 94% for normal-speed gestures and 84% for fast and abrupt gestures.
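
The trajectory test described — a sufficiently long, fairly straight motion heading toward a virtual object or menu counts as an intended gesture — can be sketched as follows, with illustrative thresholds rather than the paper's:

```python
import math

def is_intended_gesture(trajectory, target, min_length=50.0, max_angle_dev=0.5):
    """Judge whether a tracked-point trajectory is an intended gesture:
    long enough and heading toward `target` (angle deviation in radians).
    Thresholds are illustrative, not the paper's values."""
    if len(trajectory) < 2:
        return False
    # total path length of the trajectory
    length = sum(math.dist(trajectory[i], trajectory[i + 1])
                 for i in range(len(trajectory) - 1))
    if length < min_length:
        return False
    # heading of the net displacement vs. direction to the target
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    move = math.atan2(y1 - y0, x1 - x0)
    to_target = math.atan2(target[1] - y0, target[0] - x0)
    dev = abs(math.atan2(math.sin(move - to_target), math.cos(move - to_target)))
    return dev <= max_angle_dev
```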


On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.101-110 / 2004
  • The robots that will be needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions should occur naturally. It is desirable for a robot to carry out human following as one of these human-affinitive movements. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing services to humans. A mobile robot is controlled to follow a walking human as stably and precisely as possible using the distributed intelligent sensors. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimate caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
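
The Kalman filtering step can be sketched with a standard constant-velocity model over 2-D position measurements; the intelligent space's geometric constraint equation is not reproduced, and the noise levels below are illustrative assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over 2-D position measurements,
    smoothing the point-object position estimate. State: (x, y, vx, vy)."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # x += vx*dt, y += vy*dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q = q * np.eye(4); R = r * np.eye(2)
    x = np.zeros(4); P = np.eye(4)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return estimates
```

The filtered velocity components of the state are exactly what the trajectory generator needs to anticipate where the walking human will be next.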

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.80-90 / 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. The method employs a single camera with an infrared filter to capture one eye, and two infrared light sources to create reflection points for estimating the corresponding gaze point on the screen. The camera, the infrared light sources, and the user's head may all move slightly, which yields a simple and flexible system that requires neither inconvenient fixed equipment nor the assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user only has to stare at two target points for a few seconds so that the system can initialize the user's individual factors for the estimation algorithm. The proposed system runs in real time at over 10 frames per second at XGA (1024×768) resolution. The test results for nine objects from three subjects show that the system achieves an average estimation error of less than 1 degree.
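
Since the calibration uses only two target points, a per-axis linear map from the pupil-glint vector to screen coordinates is the simplest model consistent with it; the paper's actual estimation algorithm is not specified here, so treat this as an assumption-laden sketch:

```python
def calibrate_two_points(pg_vectors, screen_points):
    """Fit a per-axis linear map screen = a * pg + b from the two
    calibration targets; returns a gaze-estimation function.
    (A hypothetical stand-in for the paper's estimation algorithm.)"""
    (u0, v0), (u1, v1) = pg_vectors     # pupil-glint vectors at the targets
    (x0, y0), (x1, y1) = screen_points  # known screen positions
    ax = (x1 - x0) / (u1 - u0); bx = x0 - ax * u0
    ay = (y1 - y0) / (v1 - v0); by = y0 - ay * v0
    return lambda u, v: (ax * u + bx, ay * v + by)
```

Because both the pupil center and the glints move together under small head motion, their difference vector is far less head-pose sensitive than the pupil position alone, which is why pupil-glint methods tolerate slight movement.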

Histogram Based Hand Recognition System for Augmented Reality (증강현실을 위한 히스토그램 기반의 손 인식 시스템)

  • Ko, Min-Su;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.7 / pp.1564-1572 / 2011
  • In this paper, we propose a new histogram-based hand recognition algorithm for augmented reality. A hand recognition system enables useful interaction between a user and a computer; however, vision-based hand gesture recognition is difficult because the complex shape of the human hand makes recognition dependent on the viewing angle. The hand recognition system proposed in this paper is based on features of hand geometry and consists of two steps: the hand region is first extracted from the image captured by a camera, and hand gestures are then recognized. We extract the hand region by removing the background using skin color information, then recognize the hand shape by determining hand feature points from a histogram of the extracted hand region. Finally, we build an augmented reality system that controls a 3D object with the recognized hand gestures. Experimental results show that the proposed algorithm achieves more than 91% accuracy for hand recognition at low computational cost.
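
One minimal reading of the histogram step — counting foreground pixels per column of the binary hand mask and taking local maxima as candidate feature columns — can be sketched as follows; the paper's exact histogram and feature-point rule are not given, so this is illustrative only:

```python
def column_histogram_peaks(mask):
    """Column-wise foreground counts of a binary hand mask, plus the
    local maxima that can serve as rough feature columns (e.g. extended
    fingers). `mask` is a list of rows of 0/1 values."""
    hist = [sum(col) for col in zip(*mask)]      # pixels per column
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    return hist, peaks
```

A projection histogram like this is cheap to compute, which is consistent with the low computational cost the abstract reports.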

Construction of the Position Control System by a Neural Network 2-DOF PID Controller (신경망 2자유도 PID제어기에 의한 위치제어시스템 구성)

  • 이정민;허진영;하홍곤;고태언
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2000.05a / pp.378-385 / 2000
  • In this paper, we consider the application of a 2-DOF (two-degree-of-freedom) PID controller to a DC servo motor system. Many control systems use I-PD or PID control, but such position control systems have difficulty handling variable loads and changing parameters. We propose a neural-network 2-DOF PID control system that rejects disturbances and tracks the target value. The back-propagation algorithm of the neural network is used for tuning the 2-DOF parameters (α, β, γ, η). We investigate the 2-DOF PID controller in a position control system and verify the effectiveness of the proposed method through computer simulation.
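
A generic discrete 2-DOF PID law with set-point weighting can be sketched as below; note that the paper instead tunes its (α, β, γ, η) parameters online with a back-propagation neural network, which this sketch does not reproduce:

```python
def make_2dof_pid(kp, ki, kd, b=1.0, c=0.0, dt=0.01):
    """Discrete 2-DOF PID with set-point weights b (proportional) and
    c (derivative): the reference and the feedback signal are weighted
    differently, while the integral acts on the true error. A textbook
    form, not the paper's neural-network-tuned controller."""
    state = {"i": 0.0, "ed_prev": 0.0}
    def step(r, y):
        state["i"] += ki * (r - y) * dt   # integral on the true error
        ed = c * r - y                    # weighted derivative error
        d = kd * (ed - state["ed_prev"]) / dt
        state["ed_prev"] = ed
        return kp * (b * r - y) + state["i"] + d
    return step
```

The point of the second degree of freedom is that set-point response and disturbance rejection can be shaped independently, which is exactly the combination the abstract asks for.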


Moving Mass Actuated Reentry Vehicle Control Based on Trajectory Linearization

  • Su, Xiao-Long;Yu, Jian-Qiao;Wang, Ya-Fei;Wang, Lin-lin
    • International Journal of Aeronautical and Space Sciences / v.14 no.3 / pp.247-255 / 2013
  • The flight control of re-entry vehicles poses a challenge to conventional gain-scheduled flight controllers due to their widely varying aerodynamic coefficients. In addition, the control system must accommodate a wide range of uncertainties and disturbances. This paper presents the design of a roll-channel controller for a non-axisymmetric re-entry vehicle model using the trajectory linearization control (TLC) method. The dynamic equations of the moving-mass system and the roll control model are established using the Lagrange method. Nonlinear tracking and decoupling control by trajectory linearization can be viewed as the ideal gain-scheduled controller designed at every point along the flight trajectory. It provides robust stability and performance at all stages of flight without adjusting controller gains; it is this "plug-and-play" feature that makes it highly attractive for developing, testing, and routinely operating re-entry vehicles. Although the controller is designed only for nominal aerodynamic coefficients, excellent performance is verified by simulation under wind disturbances and variations of the aerodynamic coefficients from -30% to +30%.
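
The core TLC idea — feedforward from inverting the nominal dynamics, plus time-varying feedback that stabilizes the linearized error dynamics along the trajectory — can be shown on a one-state toy plant (not the paper's roll-channel model):

```python
def tlc_rollout(x_nom, dx_nom, f, dfdx, x0, lam=5.0, dt=0.001):
    """Trajectory linearization control for a scalar plant x' = f(x) + u.
    The feedforward inverts the nominal dynamics along the trajectory,
    and a time-varying gain places the linearized error dynamics at -lam.
    A one-state illustration of the TLC idea only."""
    x = x0
    xs = []
    for k in range(len(x_nom)):
        u_ff = dx_nom[k] - f(x_nom[k])      # nominal dynamic inversion
        gain = dfdx(x_nom[k]) + lam          # gain scheduled along trajectory
        u = u_ff - gain * (x - x_nom[k])     # feedforward + error feedback
        x = x + (f(x) + u) * dt              # Euler step of the true plant
        xs.append(x)
    return xs
```

Because the gain is recomputed from the linearization at every trajectory point, no discrete gain-scheduling table is needed, which is the "plug-and-play" property the abstract highlights.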

Spatial Information Search Features Shown in Eye Fixations and Saccades (시선의 고정과 도약에 나타난 공간정보 탐색 특성)

  • Kim, Jong-Ha
    • Korean Institute of Interior Design Journal / v.26 no.2 / pp.22-32 / 2017
  • This research analyzes the spatial-information search behavior revealed by eye fixations and saccades, using an eye-tracking experiment on images of the same sports-shop space presented in original and left-right mirrored versions. From the visitors' eye movements and gaze travel distances, the characteristics of search behavior with respect to the placement of goods can be summarized as follows. First, for both the original and mirrored images, the [IN] area received more observations than the [OUT] area: after fixating heavily on the LA area within [IN], the gaze jumped over to [OUT] and searched there with low rather than high fixation. Second, the observation frequencies differed as the composition of the images changed. The gaze fixated often on the LA area in the original image, whereas the mirrored image was observed for longer, and more fixation points appeared in the mirrored image as the gaze jumped from the [OUT] area to the [IN] area. In the mirrored image the LA area lies on the right-hand side, and when the dominant area is on the right, fixations tend to last longer. This suggests that, in commercial spaces, arranging products to attract customers' attention may be more effective on the right side. Third, the travel distance of the gaze leaving the LA area (IN → OUT) was long in both the original and mirrored images, regardless of the search direction ([IN → OUT], [OUT → IN]). In other words, when the gaze that had entered the LA area moved outward to the [OUT] area after searching for more than a certain time, this can be interpreted as visual-perception activity that re-searches after a large jump. The relatively short gaze travel distance in the [OUT → IN] process suggests that a gaze resting outside the LA area naturally drifts into it.
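
The fixation/saccade split underlying such analyses is commonly computed with a velocity threshold (the I-VT approach); the sampling rate and threshold below are illustrative and not taken from this study:

```python
def classify_fixations(gaze, dt=1 / 60, vel_threshold=100.0):
    """Velocity-threshold (I-VT) labelling of gaze samples as 'fixation'
    or 'saccade'. `gaze` is a list of (x, y) samples at interval dt;
    the threshold is in screen units per second (illustrative value)."""
    labels = ["fixation"]  # first sample has no velocity; default label
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        labels.append("saccade" if speed > vel_threshold else "fixation")
    return labels
```

Runs of consecutive "fixation" labels can then be merged into fixation events, whose counts, durations, and inter-fixation jump distances are the quantities this study compares between the [IN] and [OUT] areas.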