• Title/Summary/Keyword: vision-based tracking


이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가 (Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot)

  • 박재홍;반욱;최태영;권현일;조동일;김광수
    • 제어로봇시스템학회논문지 / Vol.15 No.5 / pp.496-504 / 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for mobile robots. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of the object of interest centered on the retina. We applied this physiological concept to a vision tracking system to achieve high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and a rate table for linear and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times higher than those of a conventional non-VOR vision system, mainly because the VOR-based system keeps the line of sight fixed on the object, reducing image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
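
In its simplest form, the reflex described above reduces to commanding the camera pan opposite to the measured body rotation so the line of sight stays on the target. A minimal sketch (function, gain, and limit names are illustrative, not from the paper):

```python
def vor_camera_command(body_yaw_rate_dps, dt, current_pan_deg,
                       pan_limit_deg=90.0):
    """Counter-rotate the camera pan axis against the measured body
    rotation so the line of sight stays fixed on the target."""
    # Reflexive compensation: the camera moves opposite to the body.
    new_pan = current_pan_deg - body_yaw_rate_dps * dt
    # Respect the actuator's mechanical range.
    return max(-pan_limit_deg, min(pan_limit_deg, new_pan))

# The body turns right at 30 deg/s for 0.1 s; the camera pans 3 deg left.
pan = vor_camera_command(30.0, 0.1, 0.0)
```

In the paper's system the yaw rate would come from the motion sensor module and the pan command would drive the actuation module carrying the vision sensor.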

가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹 (Hybrid Inertial and Vision-Based Tracking for VR applications)

  • 구재필;안상철;김형곤;김익재;구열회
    • 대한전기학회:학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 A / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. As a result, accurate, real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. To overcome the individual drawbacks and build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide a natural interface.
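
The predictor-corrector fusion described above can be sketched for a single axis: the inertial reading drives the high-rate prediction and the (possibly intermittent) vision fix corrects the drift. The state model and noise values here are illustrative assumptions, not the paper's:

```python
import numpy as np

def kalman_fuse(x, P, accel, z_vision, dt, q=0.01, r=0.05):
    """One predictor-corrector cycle for a 1-D position/velocity state.
    The inertial sensor drives the prediction; the vision measurement,
    when available, corrects the accumulated drift."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # vision measures position
    # Predict with the inertial reading (high rate, drifts over time).
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)
    # Correct with the vision fix (accurate but not always available).
    if z_vision is not None:
        y = z_vision - H @ x                # innovation
        S = H @ P @ H.T + r
        K = P @ H.T / S                     # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = kalman_fuse(x, P, accel=1.0, z_vision=0.004, dt=0.1)
```

Passing `z_vision=None` on frames where pose recovery fails leaves the filter running on inertial prediction alone, which is the behavior that motivates the hybrid design.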

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology / Vol.4 No.4 / pp.81-86 / 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking using a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot locate the object's position exactly, so when the object moves suddenly the system fails to follow it in the direction of its movement and instead searches for the object again from its home position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable than before, as can be seen from the tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, based on a series of experimental results, we conclude that the PID controller makes the tracking system more stable in real time.
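
A discrete PID loop of the kind described above can be sketched as follows; here the error is the pixel offset of the tracked object from the image centre and the output is a yaw-rate command, with gains that are illustrative rather than the paper's tuned values:

```python
class PID:
    """Discrete PID controller for one multicopter axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and estimate the derivative.
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.1, kd=0.05)
cmd = pid.update(error=20.0, dt=0.05)   # object 20 px right of centre
```

One such controller per axis (yaw, throttle, pitch) is the structure the abstract describes; the derivative term is what damps the overshoot that makes the uncontrolled tracker lose the object.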

날씨인식 결과를 이용한 GPS 와 비전센서기반 하이브리드 방식의 태양추적 시스템 개발 (A Hybrid Solar Tracking System using Weather Condition Estimates with a Vision Camera and GPS)

  • 유정재;강연식
    • 제어로봇시스템학회논문지 / Vol.20 No.5 / pp.557-562 / 2014
  • It is well known that solar tracking systems can significantly increase the efficiency of existing solar panels. In this paper, a hybrid solar tracking system has been developed using both astronomical estimates from a GPS and the image processing results of a camera vision system. A decision-making process is also proposed to distinguish current weather conditions using camera images. Based on the decision results, the proposed hybrid tracking system switches between two tracking control methods: one based on astronomical estimates of the current solar position, and the other based on the solar image processing result. The developed hybrid solar tracking system is implemented on an experimental platform, and the performance of the developed control methods is verified.
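
The switching logic described above can be sketched in a few lines; the function and argument names are illustrative, and the real system's weather decision comes from image processing rather than a string label:

```python
def solar_tracking_command(weather, astro_angles, vision_angles):
    """Pick the tracking mode from the weather decision.

    astro_angles:  (azimuth, elevation) from the GPS time/position
                   and an astronomical solar-position model.
    vision_angles: (azimuth, elevation) from solar image processing,
                   or None when the sun is not visible in the image.
    """
    if weather == "clear" and vision_angles is not None:
        # Direct sunlight: the image-based estimate is most accurate.
        return vision_angles
    # Cloudy sky or sun not detected: fall back to the astronomical model.
    return astro_angles

# Overcast: the camera cannot see the sun, so GPS-based angles are used.
cmd = solar_tracking_command("cloudy", (181.2, 45.7), None)
```

The value of the hybrid design is exactly this fallback: the astronomical estimate never fails, while the vision estimate, when available, removes installation and calibration errors.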

Vision-based Line Tracking and steering control of AGVs

  • Lee, Hyeon-Ho;Lee, Chang-Goo
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.180.4-180 / 2001
  • This paper describes a vision-based line-tracking system for an AGV and a steering control scheme. To detect the guideline quickly and exactly, we use four line-points which complement and predict each other. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is realized through a steering controller driven by the guide-line angle and the line-point error. The method is tested on a typical AGV with a single camera in a laboratory environment.
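
The four line-points mentioned above can be turned into the two steering inputs (guide-line angle and line-point error) by a least-squares line fit. This sketch assumes image coordinates with rows increasing downward and is not the paper's implementation:

```python
import math

def line_from_points(points, image_center_x):
    """Fit the guide line through the detected line-points and return
    (angle_deg, lateral_error_px) for the steering controller.

    points: [(x, y), ...] line-points sampled at four scan rows."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Least-squares slope of x as a function of row y.
    num = sum((p[1] - my) * (p[0] - mx) for p in points)
    den = sum((p[1] - my) ** 2 for p in points)
    slope = num / den
    angle = math.degrees(math.atan(slope))   # deviation from vertical
    lateral_error = mx - image_center_x      # offset of line from centre
    return angle, lateral_error

# A line drifting slightly right across four scan rows.
angle, err = line_from_points(
    [(160, 40), (162, 80), (164, 120), (166, 160)], 160)
```

Because each point constrains the fit, a single mis-detected line-point perturbs the angle and offset only slightly, which is the sense in which the four points "complement" each other.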

영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법 (Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry)

  • 이용주;김도완;박만우
    • 한국BIM학회 논문집 / Vol.5 No.2 / pp.46-54 / 2015
  • Vision-based tracking has been proposed as a means to efficiently track a large number of construction resources operating on a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient way of entity matching for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair for a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
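
The matching step described above can be sketched directly: a point x in the left view maps to the epipolar line l = F·x in the right view, and the match is the candidate detection nearest that line. The fundamental matrix F below is a toy example for a rectified pair (horizontal epipolar lines), not a calibrated one:

```python
import numpy as np

def match_by_epipolar_distance(x_left, candidates_right, F, max_dist=5.0):
    """Return the index of the right-view candidate nearest the
    epipolar line of x_left, or None if all are farther than max_dist."""
    x = np.array([x_left[0], x_left[1], 1.0])
    a, b, c = F @ x                          # line a*u + b*v + c = 0
    norm = np.hypot(a, b)
    best, best_d = None, max_dist
    for i, (u, v) in enumerate(candidates_right):
        d = abs(a * u + b * v + c) / norm    # point-to-line distance
        if d < best_d:
            best, best_d = i, d
    return best

# Rectified-pair F: epipolar lines are horizontal, so a valid match
# must lie on (nearly) the same image row as the left detection.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
idx = match_by_epipolar_distance(
    (40.0, 100.0), [(50.0, 101.0), (80.0, 140.0)], F)
```

With fixed cameras, F is estimated once at setup, after which each match costs only one matrix-vector product and a distance test per candidate.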

영상기반항법을 위한 파티클 필터 기반의 특징점 추적 필터 설계 (Particle Filter Based Feature Points Tracking for Vision Based Navigation System)

  • 원대희;성상경;이영재
    • 한국항공우주학회지 / Vol.40 No.1 / pp.35-42 / 2012
  • This paper designs a particle filter based feature point tracking filter for vision-based navigation that maintains tracking performance even when the displacement of feature points is large. A dynamic model of the feature points is applied to improve the tracking performance of the conventional KLT (Kanade-Lucas-Tomasi) algorithm for large displacements, and a particle filter is used to reflect the irregular characteristics of image information. Comparing feature point tracking performance against the KLT algorithm on stored images confirms that the proposed algorithm maintains tracking even for large displacements.
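
One predict-reweight-resample cycle of such a particle filter can be sketched in 1-D for clarity; the dynamic model (the `control` term) is what replaces KLT's small-motion assumption so large displacements survive. The noise model and dimensions are illustrative assumptions:

```python
import math
import random

def particle_filter_step(particles, control, measure, noise=2.0):
    """Track a feature point's x-coordinate with a particle filter."""
    # Predict: propagate each particle through the dynamic model.
    particles = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the observed feature position.
    weights = [math.exp(-((p - measure) ** 2) / (2 * noise ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportionally to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    estimate = sum(particles) / len(particles)
    return particles, estimate

random.seed(0)
particles = [100.0] * 200                 # all particles at the last fix
particles, est = particle_filter_step(particles, control=15.0, measure=116.0)
```

A feature jumping 15 px between frames would defeat a plain gradient-based tracker, but the predicted particle cloud straddles the new measurement, so the estimate lands close to it.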

Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems / Vol.17 No.6 / pp.935-956 / 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms have been developed to overcome the problems of conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are measured simultaneously by the vision-based system and a magnetostrictive displacement sensor (MDS) during laboratory shaking table tests of a three-story steel frame model. The comparative analysis indicates that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement with all three image processing algorithms. Field experiments are also carried out on an arch bridge, measuring displacement influence lines during loading tests to validate the effectiveness of the vision-based system.
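
The grayscale pattern matching (GPM) step can be sketched as an exhaustive normalized cross-correlation search: the template's best-match location in each frame, relative to its reference location, is the displacement in pixels. A real system would use an optimized matcher, and the frame and template here are synthetic:

```python
import numpy as np

def template_match(frame, template):
    """Return the (row, col) of the window in `frame` with the highest
    normalized cross-correlation against `template`."""
    th, tw = template.shape
    t = template - template.mean()           # zero-mean template
    best, best_score = (0, 0), -np.inf
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best, best_score = (r, c), score
    return best

# Synthetic frame with a distinctive 4x4 target patch at (8, 10).
frame = np.zeros((20, 20))
frame[8:12, 10:14] = np.arange(16.0).reshape(4, 4)
template = frame[8:12, 10:14].copy()
```

Because the correlation is normalized by both windows' energy, uniform brightness changes between frames do not shift the peak, which is what makes the method usable outdoors on the bridge tests.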

시각을 이용한 이동 로봇의 강건한 경로선 추종 주행 (Vision-Based Mobile Robot Navigation by Robust Path Line Tracking)

  • 손민혁;도용태
    • 센서학회지 / Vol.20 No.3 / pp.178-186 / 2011
  • Line tracking is a well-defined method of mobile robot navigation. It is simple in concept, technically easy to implement, and already employed in many industrial sites. Among several different line tracking methods, magnetic sensing is widely used in practice. In comparison, vision-based tracking is less popular, mainly due to its sensitivity to surrounding conditions such as brightness and floor characteristics, although vision is the most powerful robotic sensing capability. In this paper, a robust vision-based path line detection technique is proposed for the navigation of a mobile robot under uncontrollable surrounding conditions. The technique has four processing steps: color space transformation, pixel-level line sensing, block-level line sensing, and robot navigation control. It uses the hue and saturation color values in line sensing so as to be insensitive to brightness variation. Line finding at the block level not only makes the technique immune to errors in line-pixel detection but also simplifies robot control. The proposed technique was tested with a real mobile robot and proved effective.
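
The hue/saturation idea above can be sketched per pixel: after converting RGB to HSV, the line decision uses only hue and saturation, so the value (brightness) channel, which lighting changes affect most, is ignored. The thresholds and target hue below are illustrative, not the paper's values:

```python
import colorsys

def is_line_pixel(r, g, b, target_hue=0.33, hue_tol=0.08, min_sat=0.4):
    """Classify a pixel as path-line or background using hue and
    saturation only (target_hue 0.33 is green in colorsys's 0..1 scale)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # The value channel v is deliberately unused: brightness changes
    # from lighting should not flip the decision.
    return abs(h - target_hue) < hue_tol and s > min_sat

# The same green line pixel under bright and dim lighting: both match.
bright = is_line_pixel(40, 200, 50)
dim = is_line_pixel(10, 60, 14)
```

Grouping such pixels into blocks, as the block-level step does, then tolerates individual misclassified pixels when the line position is finally extracted.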

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • 대한인간공학회지 / Vol.31 No.4 / pp.601-607 / 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. By implementing this classification method, we expect an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Eye tracking is currently widely used to increase users' immersion and interest by supporting a natural user interface. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there has been no breakthrough selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images, including intentional and natural blinks, from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye reopening. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy of intentional blinks was 97.4% in a wearable eye tracking environment. The detection accuracy in a non-wearable camera environment was 92.9% using the same SVM classifier. Conclusion: By combining the two features using an SVM, we could implement an accurate selection interaction method in a vision-based eye tracking system. Application: The results of this research may help to improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
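
At inference time, a trained linear SVM on the two features above reduces to a signed linear decision function. The weights and bias below stand in for an offline-trained hyperplane and are illustrative, not the paper's values:

```python
def classify_blink(closed_duration_s, pupil_size_variation,
                   w=(8.0, 3.0), bias=-2.0):
    """Apply a linear decision function to the two blink features:
    eye-closed duration (s) and pupil-size variation after reopening.
    The sign of the score picks the class, as in a linear SVM."""
    score = w[0] * closed_duration_s + w[1] * pupil_size_variation + bias
    return "intentional" if score > 0 else "natural"

# A long, deliberate blink vs. a quick natural one.
a = classify_blink(0.5, 0.3)
b = classify_blink(0.1, 0.05)
```

Using two features instead of duration alone is what separates a slow natural blink from a deliberate one: the pupil-size transient after reopening differs between the two cases.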