• Title/Summary/Keyword: Visual tracking


Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석자 SVM을 이용한 입술 위치 검출)

  • 정지년;양현승
    • Journal of KIISE: Software and Applications, v.31 no.4, pp.403-410, 2004
  • Bimodal speech recognition systems have been proposed to enhance the recognition rate of ASR in noisy environments. Visual feature extraction is crucial to developing such systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned and used to find an initial lip position; the exact position is then detected by scanning the neighboring area with an SVM. Experiments show that this method detects the lip position accurately and quickly.
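
The color-distribution step of such a method can be sketched roughly as follows. This is an illustrative numpy sketch, not the paper's implementation: the lip-color mean and covariance would in practice be learned from labeled face/lip pixels, and the SVM refinement of the neighborhood scan is omitted.

```python
import numpy as np

def lip_likelihood(chroma, mu, cov_inv):
    """Per-pixel likelihood under a Gaussian lip-color model.
    chroma: (H, W, 2) chromaticity image, e.g. (r, g) = (R, G) / (R + G + B).
    mu, cov_inv: learned lip-color mean (2,) and inverse covariance (2, 2)."""
    d = chroma - mu
    # squared Mahalanobis distance per pixel -> Gaussian likelihood
    m = np.einsum('hwi,ij,hwj->hw', d, cov_inv, d)
    return np.exp(-0.5 * m)

def initial_lip_position(chroma, mu, cov_inv):
    """Coarse lip location: the pixel of maximal lip-color likelihood.
    A classifier (the SVM in the paper) would then scan this neighborhood."""
    like = lip_likelihood(chroma, mu, cov_inv)
    return np.unravel_index(np.argmax(like), like.shape)
```

A detector along these lines would pass the coarse position to the classifier stage for exact localization.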

Face Tracking and Recognition in Video with PCA-based Pose-Classification and (2D)2PCA recognition algorithm (비디오속의 얼굴추적 및 PCA기반 얼굴포즈분류와 (2D)2PCA를 이용한 얼굴인식)

  • Kim, Jin-Yul;Kim, Yong-Seok
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.5, pp.423-430, 2013
  • In typical face recognition systems, the frontal view of the face is preferred to reduce the complexity of recognition. Thus individuals may be required to stare into the camera, or the camera must be positioned so that frontal images are easily acquired. However, these constraints severely restrict the adoption of face recognition in wide applications. To alleviate this problem, this paper addresses the problem of tracking and recognizing faces in video captured with no environmental control. The face tracker extracts a sequence of angle- and size-normalized face images using the IVT (Incremental Visual Tracking) algorithm, which is known to be robust to changes in appearance. Since no constraints are imposed between the face direction and the video camera, the face images contain various poses. The pose is therefore identified using a PCA (Principal Component Analysis)-based pose classifier, and only the pose-matched face images are used to identify the person against a pre-built face DB with five poses. For face recognition, the PCA, (2D)PCA, and $(2D)^2PCA$ algorithms were tested to compare recognition rate and execution time.
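
The $(2D)^2PCA$ projection compared above can be sketched with numpy. This is a generic illustration of the algorithm, not the authors' code; the image and feature dimensions are arbitrary.

```python
import numpy as np

def two_d_squared_pca(images, d_rows, d_cols):
    """(2D)^2 PCA: learn projections that map m x n images to
    d_rows x d_cols feature matrices. images: (N, m, n) array."""
    A = images - images.mean(axis=0)          # center the training set
    # column-direction scatter (n x n) -> right projection W (as in 2DPCA)
    G_col = np.einsum('kmi,kmj->ij', A, A) / len(A)
    # row-direction scatter (m x m) -> left projection Z
    G_row = np.einsum('kim,kjm->ij', A, A) / len(A)
    # eigh returns ascending eigenvalues; reverse to take leading eigenvectors
    W = np.linalg.eigh(G_col)[1][:, ::-1][:, :d_cols]
    Z = np.linalg.eigh(G_row)[1][:, ::-1][:, :d_rows]
    return Z, W   # feature of an image X is Z.T @ X @ W
```

Recognition would then compare these small feature matrices (e.g. by nearest neighbor) instead of full vectorized images, which is what makes the 2D variants faster than classical PCA.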

Real-time Monocular Camera Pose Estimation using a Particle Filter Integrated with UKF (UKF와 연동된 입자필터를 이용한 실시간 단안시 카메라 추적 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.5, pp.315-324, 2023
  • In this paper, we propose a real-time pose estimation method for a monocular camera using a particle filter integrated with a UKF (unscented Kalman filter). While conventional camera tracking techniques combine camera images with data from additional devices such as gyroscopes and accelerometers, the proposed method uses only two-dimensional visual information from the camera without additional sensors, which significantly simplifies the hardware configuration. The pose of the camera is estimated using a UKF defined individually for each particle. Statistics regarding the camera state are derived from all particles of the particle filter, from which the real-time camera pose is computed. The proposed method demonstrates robust tracking even under rapid camera shakes and severe scene occlusions, and the experiments show that it remains robust even when most of the feature points in the image are obscured. In addition, we verify that with 35 particles the processing time per frame is approximately 25 ms, confirming that real-time processing is feasible.
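
The particle-filter backbone of such a tracker can be sketched as below. This is a deliberately simplified 1-D bootstrap filter for illustration only: the paper's state is a 6-DoF camera pose and each particle is refined by its own UKF, both of which are omitted here, and all noise parameters are made up.

```python
import numpy as np

def particle_filter(observations, n_particles=35, proc_std=0.1,
                    obs_std=0.5, rng=None):
    """Bootstrap particle filter for a scalar state (a stand-in for the
    camera pose). Returns the posterior-mean estimate at each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial belief
    estimates = []
    for z in observations:
        # propagate each particle through the motion model (random walk here)
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # weight particles by the observation likelihood
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # state statistics over all particles -> pose estimate
        estimates.append(float(np.dot(w, particles)))
        # resample to concentrate particles in high-likelihood regions
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return estimates
```

In the paper's formulation the predict/update inside the loop is carried out by a per-particle UKF rather than by pure random-walk diffusion, which is what keeps the filter stable with as few as 35 particles.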

Elementary Teacher's Science Class Analysis using Mobile Eye Tracker (이동형 시선추적기를 활용한 초등교사의 과학 수업 분석)

  • Shin, Won-Sub;Kim, Jang-Hwan;Shin, Dong-Hoon
    • Journal of Korean Elementary Science Education, v.36 no.4, pp.303-315, 2017
  • The purpose of this study is to analyze elementary teachers' science classes objectively and quantitatively using a mobile eye tracker. The mobile eye tracker is worn like eyeglasses, and the experiments are recorded as video, so it is very useful for capturing objective data on the teacher's classroom situation in real time. The participants were two elementary teachers teaching sixth-grade science in Seoul. Each participant taught a 40-minute class while wearing the mobile eye tracker. Eye movements were collected at 60 Hz, and the collected data were analyzed using SMI BeGaze 3.7. Areas related to the class were set as areas of interest (AOI), and the teachers' visual occupancy and the linguistic interaction between teacher and students were analyzed. The results are as follows. First, the visual occupancy of meaningful areas in teaching-learning activities was analyzed by class stage. Second, analysis of eye movements during teacher-student interaction showed that teacher A directed a high percentage of gaze to students' faces, while teacher B had high visual occupancy in areas unrelated to the class. Third, the participants' linguistic interaction was analyzed in the areas of questions, attention-focusing language, elementary science teaching terminology, daily interaction, humor, and unnecessary words. This study shows that elementary science classes can be analyzed objectively and quantitatively through visual occupancy obtained from mobile eye tracking, and suggests that teachers' visual attention in teaching activities can serve as an index for analyzing forms of language interaction.
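
Visual occupancy of an AOI, as used above, is essentially the fraction of gaze samples that fall inside that area. A minimal sketch (the AOI names and rectangle format are assumptions for illustration):

```python
def visual_occupancy(gaze, aois):
    """Fraction of gaze samples (e.g. collected at 60 Hz) falling in
    each area of interest.
    gaze: list of (x, y) points; aois: dict name -> (x0, y0, x1, y1)."""
    counts = {name: 0 for name in aois}
    for x, y in gaze:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    n = len(gaze)
    return {name: c / n for name, c in counts.items()}
```

At a fixed sampling rate, the sample fraction is proportional to dwell time, so comparing these ratios across AOIs and class stages gives the occupancy analysis described above.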

A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development (한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구)

  • Kim, Jung-Hwan;Hong, Seung-Kweon;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea, v.26 no.2, pp.149-155, 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, have emerged, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models have been proposed to evaluate text entry interfaces, most based on the human cognitive process for text input. This process is divided into two components: visual scanning and finger movement. The time spent on visual scanning is modeled by the Hick-Hyman law, while the finger movement time follows Fitts' law. Three questions arise in model-based evaluation of text entry interfaces. First, do the cognitive processes (visual scanning and finger movement) occur sequentially during text entry, as the models assume? Second, can the previous models predict real text input time? Third, does the cognitive process for text input vary with users' text entry speed? A gap was found between the measured and predicted text input times, and the gap was larger for participants who enter text quickly. The reason was found by investigating eye-hand coordination during the text input process. Contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time, the interval between the eye fixation on the target button and the button click, was measured to quantify the overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. These results will contribute to improving evaluation models for text entry interfaces.
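
The serial model the study tests can be written down directly: predicted entry time per keystroke is a Hick-Hyman scan term plus a Fitts movement term, and arrival lead time measures how much the two actually overlap. A sketch with illustrative, not fitted, coefficients:

```python
import math

def predicted_keystroke_time(n_keys, distance, width,
                             a_hick=0.0, b_hick=0.2,
                             a_fitts=0.1, b_fitts=0.12):
    """Serial-model prediction: visual scan (Hick-Hyman) followed by
    finger movement (Fitts). All coefficients here are made-up examples;
    real ones would be fitted from experimental data."""
    scan = a_hick + b_hick * math.log2(n_keys)                  # Hick-Hyman law
    move = a_fitts + b_fitts * math.log2(distance / width + 1)  # Fitts' law
    return scan + move

def arrival_lead_time(fixation_t, click_t):
    """Interval between fixating the target button and clicking it.
    Longer lead times indicate scanning and movement overlap in time."""
    return click_t - fixation_t
```

The study's finding is precisely that the additive model above over-predicts for fast typists, because their scan and movement phases run concurrently rather than in sequence.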

Analysis of the User's Visual Attention Frequency for UX Design of Online Markets (온라인마켓의 UX설계를 위한 사용자의 주시빈도 분석)

  • Ha, JongSoo;Ban, ChaeHoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.11, pp.2079-2084, 2016
  • When designing the interface of a web site, we must consider what users want and need, because design trends have recently shifted from user interface design to user experience design. This paper presents a method for applying user experience design to the main page of a web site. We select representative online markets and analyze the information elements that compose their main pages. Using an eye tracking method that measures users' visual attention frequency, we compare the main page with its wire-frame and analyze changes in visual attention frequency over time. The experiments show that, to attract users' visual attention when designing the main page of an online market, it is efficient to place important information as images or moving images at the center of the page.

Implementation of an Intelligent Visual Surveillance System Based on Embedded System (임베디드 시스템 기반 지능형 영상 감시 시스템 구현)

  • Song, Jae-Min;Kim, Dong-Jin;Jung, Yong-Bae;Park, Young-Seak;Kim, Tae-Hyo
    • Journal of the Institute of Convergence Signal Processing, v.13 no.2, pp.83-90, 2012
  • In this paper, an intelligent visual surveillance system based on a NIOS II embedded platform is implemented. Until now, embedded visual surveillance systems have been restricted to special purposes because of their high dependence on hardware. To remove this restriction, we implement a flexible embedded platform that is usable for various applications. For high-speed processing of software-based programming, we improved system performance by integrating the SOPC-type NIOS II embedded processor with image processing algorithms, using software programming and the C2H (Altera NIOS II C-To-Hardware Acceleration) compiler at the core of the hardware platform. We then constructed a server system that globally manages the devices through the NIOS II embedded processor platform, and included network control functions to increase efficiency for users. The system was tested and evaluated in a designated surveillance region.

A Study on the Priorities of Urban Street Environment Components - Focusing on An Analysis of AOI (Area of Interest) Setup through An Eye-tracking Experiment - (도시가로환경 구성요소의 우선순위에 관한 연구 - 아이트래킹 실험을 통한 관심영역설정 분석을 중심으로 -)

  • Lee, Sun Hwa;Lee, Chang No
    • Korean Institute of Interior Design Journal, v.25 no.1, pp.73-80, 2016
  • The street is the most fundamental component of a city and a place that promotes diverse human activities. Pedestrians gaze at various street environments; a visual gaze indicates interesting elements, and such elements should be improved first in street environment improvement projects. Therefore, this study aims to set priorities among street environment components by analyzing eye movements from a pedestrian's perspective. The components were classified into road, street facility, building (facade), and sky, and three "Streets of Youth" situated in Gwangbok-ro, Seomyeon, and Busan University in Busan were selected as street environment images. The experiment targeted 30 males and females in their twenties to forties. After setting the angle of sight through a calibration test, an eye-tracking experiment on the three images was conducted, and the subjects then filled in questionnaires. Three conclusions were obtained from the eye-tracking experiment and the survey. First, building was the top priority among street environment components, followed by street facility, road, and sky. Second, components regarded as important showed a fast 'Sequence', many 'Fixation Counts' and 'Visit Counts', a short 'Time to First Fixation', and long 'Fixation Duration' and 'Visit Duration'. Third, after voluntary eye movements, the subjects recognized the objects with the highest and lowest gaze frequencies.
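
The AOI metrics named above (time to first fixation, fixation counts, fixation duration) can be computed from a fixation log in a few lines; the record format used here is an assumption for illustration.

```python
def aoi_metrics(fixations, aois):
    """Per-AOI time to first fixation (TTFF), fixation count, and total
    fixation duration.
    fixations: list of (t, x, y, duration) in temporal order.
    aois: dict name -> (x0, y0, x1, y1) rectangle."""
    metrics = {name: {"ttff": None, "count": 0, "duration": 0.0}
               for name in aois}
    for t, x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                m = metrics[name]
                if m["ttff"] is None:   # first fixation to enter this AOI
                    m["ttff"] = t
                m["count"] += 1
                m["duration"] += dur
    return metrics
```

Ranking AOIs by these metrics (short TTFF, many fixations, long duration first) yields a priority ordering like the one the study reports for building, street facility, road, and sky.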

Features of Attention to Space Structure of Spacial Composition in Women's Shop - Targeting the Circulation Line of Department Store - (여성의류 매장 공간의 구도에 나타난 공간구성의 주의집중 특성 - 백화점 매장의 순회동선을 대상으로 -)

  • Choi, Gae-Young;Son, Kwang-Ho
    • Korean Institute of Interior Design Journal, v.26 no.2, pp.3-12, 2017
  • This study analyzed the features of attention to spatial composition seen in the "Seeing ↔ Seen" correlation of continuous movement through space. Eye tracking was employed to collect data on attention to the space, so that the correlation between visual perception and space could be estimated from attention to differences in spatial composition and display. First, it was confirmed that attention features varied with the structure of shops and the degree of exposure of the selling space: the vanishing-point structure characteristically drew customers' eyes to the central part while reducing attention to both sides of the shops. Second, initial observation activity was found to be active at eye height. Third, 10 images were selected for the continuous experiment. The central part of each image was expected to draw intense attention during initial observation, but only two of the images showed this. Fourth, a previous eye-tracking study reported that attention concentrates on the central part of the first image seen; this study, however, revealed that the phenomenon is limited to the first image. Accordingly, to ensure the reliability of data from eye-tracking experiments, methods such as excluding the initial attention time on the first image, or excluding the first-image experiment from analysis, should be devised.

Real-Time Moving Object Tracking System using Advanced Block Based Image Processing (개선된 블록기반 영상처리기법에 의한 실시간 이동물체 추적시스템)

  • Kim, Dohwan;Cheoi, Kyung-Joo;Lee, Yillbyung
    • Korean Journal of Cognitive Science, v.16 no.4, pp.333-349, 2005
  • In this paper, we propose a real-time moving object tracking system based on a block-based image processing technique and human visual processing. The system has two main features. First, to exploit the biological mechanism of the human retina, the system uses two cameras: a CCD (Charge-Coupled Device) camera equipped with a wide-angle lens for a wider field of view, and a Pan-Tilt-Zoom camera. Second, the system divides the input image into a number of blocks and processes them coarsely to reduce the tracking error rate and the processing time. In an experiment, the system showed satisfactory performance, coping with almost every noisy image, detecting moving objects very fast, and controlling the Pan-Tilt-Zoom camera precisely.
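
The coarse block-based processing described above amounts to thresholding the mean absolute frame difference per block; a minimal numpy sketch, with block size and threshold chosen purely for illustration:

```python
import numpy as np

def moving_blocks(prev, curr, block=8, thresh=10.0):
    """Coarse block-based change detection between two grayscale frames.
    Returns a boolean grid marking blocks whose mean absolute difference
    exceeds the threshold (i.e. candidate moving-object blocks)."""
    h, w = prev.shape
    diff = np.abs(curr.astype(float) - prev.astype(float))
    gh, gw = h // block, w // block
    # crop to a whole number of blocks, then group pixels block-wise
    grid = diff[:gh * block, :gw * block].reshape(gh, block, gw, block)
    return grid.mean(axis=(1, 3)) > thresh
```

Working per block rather than per pixel both suppresses pixel-level noise and cuts the processing time, which matches the motivation given in the abstract; the flagged blocks could then steer a pan-tilt-zoom camera toward the object.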
