• Title/Summary/Keyword: KLT tracker

Online Multi-view Range Image Registration using Geometric and Photometric Feature Tracking (3차원 기하정보 및 특징점 추적을 이용한 다시점 거리영상의 온라인 정합)

  • Baek, Jae-Won;Moon, Jae-Kyoung;Park, Soon-Yong
    • The KIPS Transactions:PartB / v.14B no.7 / pp.493-502 / 2007
  • An online registration technique is presented for registering multi-view range images for the 3D reconstruction of real objects. Using a range camera, we first acquire range images and photometric images continuously. In the range images, we separate object and background regions using a predefined threshold value. For the coarse registration of the range images, the centroids of the images are used. After refining the registration of the range images with a projection-based technique, we use a modified KLT (Kanade-Lucas-Tomasi) tracker to match photometric features in the object images; using the modified KLT tracker, we can track image features quickly and accurately, as sketched below. If a range image fails to register, we acquire new range images and keep trying to register them until the registration process resumes. After enough range images are registered, they are integrated into a 3D model in an offline step. Experimental results and error analysis show that the proposed method can be used to reconstruct a 3D model very quickly and accurately.
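
The KLT step in this abstract can be illustrated with standard building blocks. Below is a minimal sketch, assuming grayscale OpenCV images and using the stock pyramidal Lucas-Kanade tracker as a stand-in for the authors' modified KLT tracker; the function name and parameter values are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def track_klt_features(prev_gray, next_gray, max_corners=200):
    """Detect corners in prev_gray and track them into next_gray (KLT)."""
    # Shi-Tomasi corner selection: the feature-selection step of KLT.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Pyramidal Lucas-Kanade optical flow to track the corners between frames.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)
```

In a pipeline like the one described, the returned point correspondences would feed the photometric refinement of the range-image registration; here they are simply returned to the caller.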

CG and Photo-Realistic Image Composition in Ocean Scenes (바다영상에서의 CG/실사 합성)

  • Yu, Jung-Jae;Kim, Jae-Hean;Park, Chang-Jun;Lee, In-ho
    • Proceedings of the IEEK Conference / 2006.06a / pp.287-288 / 2006
  • CG and photo-realistic image composition in ocean scenes is frequently used in movies and TV advertisements, but it is a very difficult task because it is impossible to use a calibration tool in an outdoor environment or to apply an auto-calibration algorithm based on natural features, such as the KLT (Kanade-Lucas-Tomasi) feature tracker, to an ocean scene. We propose a simple, effective method for solving camera motion using prior knowledge about the background structure. We applied our method to the production of a commercial movie, 'Hanbando', and the result was satisfactory.

A Study on Controlling IPTV Interface Based on Tracking of Face and Eye Positions (얼굴 및 눈 위치 추적을 통한 IPTV 화면 인터페이스 제어에 관한 연구)

  • Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung;Lee, Hee-Kyung;Park, Min-Sik;Lee, Han-Kyu;Hong, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.6B / pp.930-939 / 2010
  • Recently, many studies on more convenient input devices based on gaze detection have been performed vigorously in human-computer interaction. However, these previous approaches are difficult to use in an IPTV environment because they require additional wearable devices or do not work at a distance. To overcome these problems, we propose a new way of controlling an IPTV interface using face and eye positions detected with a single static camera. Moreover, even when the face or eyes are not detected successfully by the Adaboost algorithm, we can still control the IPTV interface using motion vectors calculated by a pyramidal KLT (Kanade-Lucas-Tomasi) feature tracker, as sketched below. These are the two novelties of our research compared to previous works. This research has the following advantages. Unlike previous research, the proposed method can be used at a distance of about 2 m. Since the proposed method does not require the user to wear additional equipment, there is no limitation on face movement, and it offers high convenience. Experimental results showed that the proposed method operates at a real-time speed of 15 frames per second. We confirmed that the previous input device could be sufficiently replaced by the proposed method.
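
A minimal sketch of the detection-with-tracking-fallback idea described in this abstract, assuming OpenCV: a Haar cascade (used here as a generic Adaboost-style face detector) is tried first, and when it fails, pyramidal KLT motion vectors of previously selected points are averaged to estimate motion. The cascade file, thresholds, and return convention are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Generic Adaboost-style face detector shipped with OpenCV (illustrative choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_or_motion(prev_gray, cur_gray, prev_pts):
    """Return detected face boxes, or an average KLT motion vector as a fallback.

    prev_pts: float32 array of shape (N, 1, 2) with features from the previous frame.
    """
    faces = face_cascade.detectMultiScale(cur_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        return "detected", faces              # use the detected face position directly
    # Detection failed: estimate interface motion from tracked feature points.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    good = status.ravel() == 1
    if not good.any():
        return "lost", None                   # nothing could be tracked either
    motion = (cur_pts[good] - prev_pts[good]).reshape(-1, 2).mean(axis=0)
    return "tracked", motion                  # mean KLT motion vector (dx, dy)
```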

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and the position and attitude are measured from the image using the KLT feature point tracker and the POSIT algorithm, as sketched below. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies according to user commands, the follower aircraft tracks the leader using the designed guidance and a PI control law, with all information about the leader measured using monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without attitude information; the absolute average errors for the relative position are 2.88 m on the X-axis, 2.09 m on the Y-axis, and 0.44 m on the Z-axis.
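
The pose-measurement step in this abstract (KLT-tracked image points of the leader combined with a model-based pose algorithm) can be sketched as follows. OpenCV's solvePnP is used here as a stand-in for the POSIT algorithm named in the paper, and the leader feature coordinates and camera intrinsics are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical, roughly coplanar 3D coordinates (meters) of leader-aircraft
# feature points that the KLT tracker is assumed to follow in the image.
MODEL_PTS = np.array([[ 0.0,  0.0, 0.0],   # nose
                      [-3.0,  4.0, 0.0],   # left wingtip
                      [-3.0, -4.0, 0.0],   # right wingtip
                      [-6.0,  0.0, 0.0]],  # tail
                     dtype=np.float64)

def leader_relative_pose(image_pts, camera_matrix, dist_coeffs=None):
    """Estimate the leader's attitude (rotation matrix) and relative position."""
    image_pts = np.asarray(image_pts, dtype=np.float64).reshape(-1, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(MODEL_PTS, image_pts, camera_matrix,
                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None, None
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix (attitude)
    return R, tvec               # tvec: leader position in the camera frame (m)
```

The recovered rotation and translation would then drive the guidance and PI control laws described in the abstract.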