• Title/Summary/Keyword: feature point tracking


Detection of the co-planar feature points in the three dimensional space (3차원 공간에서 동일 평면 상에 존재하는 특징점 검출 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.499-508
    • /
    • 2023
  • In this paper, we propose a technique to estimate the coordinates of feature points lying on a 2D planar object in three-dimensional space. The proposed method detects multiple 3D features from the image and excludes those that are not located on the plane. Specifically, it estimates the planar homography between the planar object in 3D space and the camera image plane, and computes the back-projection error of each feature point on the planar object. Any feature point with a large error is considered an off-plane point and is excluded from the feature estimation phase. The proposed method is achieved on the basis of the planar homography alone, without any additional sensors or optimization algorithms. In the experiments, the proposed method was confirmed to run at more than 40 frames per second. In addition, there was no significant difference in processing speed compared to an RGB-D camera, and the frame rate was verified to remain unaffected even as the number of detected feature points continuously increased.
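
As a rough illustration of the filtering step this abstract describes, the sketch below estimates a homography from matched points and drops correspondences with a large back-projection error. It assumes OpenCV-style (N, 2) float32 point arrays; the function name and the pixel threshold are illustrative, not taken from the paper.

```python
import numpy as np
import cv2

def filter_offplane_points(obj_pts, img_pts, err_thresh=3.0):
    """Keep only correspondences consistent with a single planar homography.

    obj_pts, img_pts: (N, 2) float32 arrays of matched points
    (plane/template coordinates vs. camera image coordinates).
    err_thresh: back-projection error (pixels) above which a point is
    treated as lying off the plane (illustrative value).
    """
    H, _ = cv2.findHomography(obj_pts, img_pts, cv2.RANSAC, 5.0)
    if H is None:
        return np.zeros(len(obj_pts), dtype=bool)

    # Project the planar points into the image with the estimated homography.
    proj = cv2.perspectiveTransform(obj_pts.reshape(-1, 1, 2), H).reshape(-1, 2)

    # Back-projection (reprojection) error per correspondence.
    err = np.linalg.norm(proj - img_pts, axis=1)

    # Points with a large error are considered off-plane and excluded.
    return err < err_thresh
```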

An Efficient Vision-based Object Detection and Tracking using Online Learning

  • Kim, Byung-Gyu;Hong, Gwang-Soo;Kim, Ji-Hae;Choi, Young-Ju
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.285-288
    • /
    • 2017
  • In this paper, we propose a vision-based object detection and tracking system using online learning. The proposed system adopts a feature point-based method for tracking the inter-frame movement of a newly detected object, which allows rapid and robust motion estimation. At the same time, it trains a detector for the tracked object online. When tracking fails, the detector's result is temporarily used to re-initialize the tracker, enabling robust tracking. In particular, the processing time is reduced by improving the way the objects' appearance models are updated, which increases the tracking performance of the system. Using a data set obtained in a variety of settings, we evaluate the performance of the proposed system in terms of processing time.
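
A minimal sketch of the feature point-based inter-frame tracking step is given below, assuming the object is represented by a bounding box and tracked with LK optical flow; the paper's online detector training and failure-recovery logic are only indicated by the `None` return value.

```python
import numpy as np
import cv2

def track_box_lk(prev_gray, curr_gray, box):
    """One inter-frame tracking step: sample corner points inside the
    current bounding box, track them with LK optical flow, and shift the
    box by the median displacement. Returning None signals a tracking
    failure, after which the online detector's result would be used to
    re-initialize the tracker (sketch, not the authors' code)."""
    x, y, w, h = box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255

    p0 = cv2.goodFeaturesToTrack(prev_gray, 100, 0.01, 5, mask=mask)
    if p0 is None:
        return None

    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = st.ravel() == 1
    if ok.sum() < 5:                      # too few tracked points -> failure
        return None

    # Median displacement of the surviving feature points moves the box.
    d = np.median(p1.reshape(-1, 2)[ok] - p0.reshape(-1, 2)[ok], axis=0)
    return (int(x + d[0]), int(y + d[1]), w, h)
```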

Moving Object Detection and Tracking Techniques for Error Reduction (오인식률 감소를 위한 이동 물체 검출 및 추적 기법)

  • Hwang, Seung-Jun;Ko, Ha-Yoon;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.22 no.1
    • /
    • pp.20-26
    • /
    • 2018
  • In this paper, we propose a moving object detection and tracking algorithm based on multi-frame feature point tracking information to reduce false positives. Existing studies suffer from detection errors and limited tracking speed. To compensate for this, we first calculate corner feature points and the optical flow over multiple frames for camera movement compensation and object tracking. Next, the tracking error of the optical flow is reduced by multi-frame forward-backward tracking, and the tracked feature points are divided into the background and moving object candidates based on the homography and the RANSAC algorithm used for camera movement compensation. Among the transformed corner feature points, the outlier points rejected by RANSAC are clustered, and outlier clusters above a certain size are classified as moving object candidates. Objects classified as moving object candidates are then tracked by data association analysis based on label tracking. Detection and tracking experiments on quadrotor images show that the proposed algorithm improves both precision and recall compared with existing algorithms.
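
The sketch below illustrates the forward-backward optical flow check and the homography/RANSAC split between background points and moving-object candidates that this abstract outlines; the thresholds are illustrative, and the subsequent clustering of the outliers is omitted.

```python
import numpy as np
import cv2

def classify_moving_points(prev_gray, curr_gray, fb_thresh=1.0, ransac_thresh=3.0):
    """Forward-backward LK tracking, then homography/RANSAC to split
    background points (inliers) from moving-object candidates (outliers)."""
    # Corner feature points in the previous frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return None

    # Forward and backward optical flow (forward-backward error check).
    p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    p0r, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, p1, None)
    fb_err = np.linalg.norm(p0.reshape(-1, 2) - p0r.reshape(-1, 2), axis=1)
    good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)

    src = p0.reshape(-1, 2)[good]
    dst = p1.reshape(-1, 2)[good]
    if len(src) < 4:
        return None

    # The homography models the camera (background) motion; RANSAC outliers
    # are kept as moving-object candidate points for later clustering.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if inliers is None:
        return None
    return dst[~inliers.ravel().astype(bool)]
```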

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.5
    • /
    • pp.1856-1869
    • /
    • 2015
  • This paper addresses the issues of 3D human activity detection, tracking, and recognition from RGB-D video sequences using a feature-structured framework. Initially, dense depth images are captured using a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion information, and compute the centroid of each activity using a chain coding mechanism and centroid point extraction. For the body skin joints features, we estimate human body skin color to identify body parts (i.e., head, hands, and feet) and extract joint point information from them. These joint points are further processed in a feature extraction step that includes distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to various consumer systems, such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
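
As a loose illustration of the final recognition stage only, the following is a minimal self-organizing map trained on generic feature vectors (e.g., the joint distance features mentioned above); the grid size, learning rate, and iteration count are assumptions, not values from the paper.

```python
import numpy as np

def train_som(features, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: maps feature vectors onto a 2-D grid of
    prototype vectors (illustrative parameters, not the authors' setup)."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    dim = features.shape[1]
    weights = rng.normal(size=(n_rows, n_cols, dim))

    # Grid coordinates used for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1).astype(float)

    for t in range(iters):
        x = features[rng.integers(len(features))]

        # Best-matching unit (BMU): prototype closest to the sample.
        dist = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)

        # Decaying learning rate and neighbourhood radius.
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        d2 = np.sum((coords - np.array(bmu, dtype=float)) ** 2, axis=2)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]

        # Pull the BMU and its neighbours toward the sample.
        weights += lr * h * (x - weights)

    return weights
```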

A Moving Camera Localization using Perspective Transform and Klt Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.163-170
    • /
    • 2007
  • In the autonomous navigation of a mobile vehicle or mobile robot, localization computed by recognizing the environment is the most important factor. Generally, the position and pose of a camera-equipped mobile vehicle or robot can be determined using INS and GPS, but in this case enough known ground landmarks are required for accurate localization. In contrast with homography methods, which calculate the position and pose of a camera using only the relation between two-dimensional feature points in two frames, in this paper we propose a method that calculates the position and pose of the camera from the relation between two sets of points: the image locations predicted by perspective transform of 3D feature points, obtained by overlaying a 3D model on the previous frame using GPS and INS input, and the locations of the corresponding feature points in the current frame computed with the KLT tracking method. For the performance evaluation, we use a wireless-controlled vehicle with a mounted CCD camera, GPS, and INS, and test the calculation of the location and rotation angle of the camera on a video sequence captured at a 15 Hz frame rate.
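
A hedged sketch of the predict-track-refine loop described above: known 3D model points are projected with the GPS/INS pose prediction, tracked into the current frame with KLT, and the pose is then re-estimated. Here `cv2.solvePnP` stands in for the paper's own position/orientation computation and is an assumption, as are the argument conventions.

```python
import numpy as np
import cv2

def refine_camera_pose(prev_gray, curr_gray, pts3d, rvec_pred, tvec_pred, K, dist):
    """Predict image locations of known 3-D feature points from the GPS/INS
    pose estimate, track the corresponding image points with KLT, and refine
    the camera pose from the tracked correspondences.

    pts3d: (N, 3) float32 model points; rvec_pred, tvec_pred: (3, 1) float64
    predicted rotation/translation; K, dist: camera intrinsics/distortion.
    """
    # Predicted image positions of the 3-D model points under the prior pose.
    proj, _ = cv2.projectPoints(pts3d, rvec_pred, tvec_pred, K, dist)
    p0 = proj.astype(np.float32)

    # KLT tracking of those points into the current frame.
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = st.ravel() == 1

    # Refine position and orientation from the 3-D / 2-D correspondences,
    # starting from the GPS/INS prediction.
    _, rvec, tvec = cv2.solvePnP(pts3d[ok], p1.reshape(-1, 2)[ok], K, dist,
                                 rvec_pred, tvec_pred, useExtrinsicGuess=True)
    return rvec, tvec
```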

A Multiple Vehicle Object Detection Algorithm Using Feature Point Matching (특징점 매칭을 이용한 다중 차량 객체 검출 알고리즘)

  • Lee, Kyung-Min;Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.17 no.1
    • /
    • pp.123-128
    • /
    • 2018
  • In this paper, we propose a multi-vehicle object detection algorithm that uses feature point matching for efficient vehicle object tracking. The proposed algorithm extracts the feature points of each vehicle using the FAST algorithm. The image is divided into a 5x5 grid of regions; a region that contains feature points is marked True, while a region without feature points is marked False and blacked out, removing unnecessary object information other than the vehicle objects. The post-processed area is then set as the maximum search window for the vehicle, and a minimum search window is set from the vehicle's outermost feature points. Using these search windows, we compensate for the drawbacks of the fixed search window size in the mean-shift algorithm and track the vehicle objects. To evaluate the performance of the proposed method, we compare it with the SIFT and SURF algorithms. The result is about four times faster than the SIFT algorithm, and the method detects vehicles more efficiently than the SURF algorithm.
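
The following sketch shows one plausible reading of the FAST-plus-5x5-grid masking step and of the minimum search window derived from the outermost feature points; it is not the authors' code, and the maximum search window handling is simplified.

```python
import numpy as np
import cv2

def grid_mask_and_window(gray, grid=(5, 5)):
    """Detect FAST feature points, black out 5x5 grid cells that contain no
    feature points, and return the bounding box of the outermost points as
    a minimum search window (illustrative sketch)."""
    fast = cv2.FastFeatureDetector_create()
    kps = fast.detect(gray, None)
    if not kps:
        return gray, None
    pts = np.array([kp.pt for kp in kps], dtype=np.float32)

    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    masked = gray.copy()
    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * ch, c * cw
            in_cell = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + cw) &
                       (pts[:, 1] >= y0) & (pts[:, 1] < y0 + ch))
            if not in_cell.any():                 # cell marked False
                masked[y0:y0 + ch, x0:x0 + cw] = 0  # black out the region

    # Minimum search window: bounding box of the outermost feature points.
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return masked, (int(x_min), int(y_min), int(x_max), int(y_max))
```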

Real-time Face Tracking Method using Improved CamShift (향상된 캠쉬프트를 사용한 실시간 얼굴추적 방법)

  • Lee, Jun-Hwan;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.861-877
    • /
    • 2016
  • This paper first discusses the disadvantages of the existing CamShift algorithm for real-time face tracking and then proposes a new CamShift algorithm that performs better. The existing CamShift algorithm tracks unstably when the background contains colors similar to the object. This drawback is resolved by using the Kinect's per-pixel depth information together with a skin detection algorithm that extracts candidate skin regions in the HSV color space. Additionally, even when the tracked object is lost or occlusion occurs, a feature point-based matching algorithm keeps the tracking robust to occlusion. By applying the improved CamShift algorithm to face tracking, the proposed real-time face tracking method can be used in various fields. The experimental results show that the proposed algorithm outperforms the existing TLD tracking algorithm in tracking performance and offers a faster processing speed. Although it is slower than the original CamShift, it overcomes all of the existing CamShift's shortcomings.
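
A minimal sketch of one CamShift step gated by an HSV skin range and a depth band, in the spirit of the improvement described above; the skin-color bounds, the depth range, and the way the masks are combined are illustrative assumptions, and `face_hist` is assumed to be a hue histogram learned from an initial face region.

```python
import numpy as np
import cv2

def camshift_face_step(frame_bgr, depth, track_window, face_hist,
                       depth_range=(400, 1500)):
    """One CamShift iteration on an HSV back-projection restricted to a
    rough skin-colour range and a plausible depth band (values in mm)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Candidate skin region in HSV (rough range; tune per camera).
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    # Gate by per-pixel depth so background pixels of similar colour drop out.
    near = cv2.inRange(depth, depth_range[0], depth_range[1])

    # Hue back-projection of the learned face histogram, masked by skin & depth.
    backproj = cv2.calcBackProject([hsv], [0], face_hist, [0, 180], 1)
    backproj &= skin & near

    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rotated_box, track_window = cv2.CamShift(backproj, track_window, crit)
    return rotated_box, track_window
```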

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam;Kim, Hyung-O;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International journal of advanced smart convergence
    • /
    • v.4 no.2
    • /
    • pp.20-28
    • /
    • 2015
  • In this paper, we propose an effective tracking algorithm whose appearance model is built from features extracted from video frames, adapting to posture variation and camera viewpoint changes by employing non-adaptive random projections that preserve the structure of the objects' image feature space. Existing online tracking algorithms update their models with features from recent video frames, yet numerous issues remain to be addressed despite the resulting improvements in tracking. Data-dependent adaptive appearance models often encounter drift problems because the online algorithms do not receive the amount of data required for online learning. We therefore propose a tracking algorithm whose appearance model is built from non-adaptive random projections of the features extracted from each video frame.
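
As a small illustration of the non-adaptive random projection idea, the sketch below builds a sparse projection matrix once and reuses it to compress high-dimensional appearance features; the dimensions and sparsity are illustrative, not values from the paper.

```python
import numpy as np

def make_random_projection(n_features, n_compressed=50, density=0.25, seed=0):
    """Non-adaptive sparse random projection matrix, generated once and
    reused for every frame so the structure of the feature space is
    preserved (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    # Sparse +1/-1 entries; most entries are zero.
    signs = rng.choice([-1.0, 0.0, 1.0], size=(n_compressed, n_features),
                       p=[density / 2, 1 - density, density / 2])
    return signs / np.sqrt(density * n_features)

def compress_features(patch_features, R):
    """Project a candidate patch's high-dimensional appearance features
    down to a compact descriptor for the online classifier."""
    return R @ patch_features
```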

Feature Point Filtering Method Based on CS-RANSAC for Efficient Planar Homography Estimating (효과적인 평면 호모그래피 추정을 위한 CS-RANSAC 기반의 특징점 필터링 방법)

  • Kim, Dae-Woo;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.6
    • /
    • pp.307-312
    • /
    • 2016
  • Markerless tracking for augmented reality based on a homography can augment virtual objects correctly and naturally on a live view of the real-world environment, given the correct pose and direction of the camera. The RANSAC algorithm is widely used for estimating the homography. The CS-RANSAC algorithm is a novel variant that incorporates a constraint satisfaction problem (CSP) into RANSAC to increase accuracy and decrease processing time. However, the performance of CS-RANSAC in computing the homography can degrade when feature points that yield a low-accuracy homography are selected in the sampling step. In this paper, we propose a feature point filtering method based on CS-RANSAC for efficient planar homography estimation. The proposed algorithm uses the Symmetric Transfer Error to evaluate which feature points lead to a high-accuracy homography and removes unnecessary feature points from the next sampling step, increasing accuracy and decreasing processing time. To evaluate the proposed method, we compared it with the basic CS-RANSAC algorithm and the basic RANSAC algorithm in terms of processing time, error rate (Symmetric Transfer Error), and inlier rate. The experiments show that the proposed method yields a 5% decrease in processing time, a 14% decrease in Symmetric Transfer Error, and a more accurate homography compared with the basic CS-RANSAC algorithm.
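
The Symmetric Transfer Error used by the proposed filtering step can be computed as in the sketch below, given an estimated homography and matched point sets; how the per-point scores feed back into the CS-RANSAC sampling step is not shown.

```python
import numpy as np
import cv2

def symmetric_transfer_error(H, src_pts, dst_pts):
    """Symmetric transfer error of each correspondence under homography H:
    forward error (src -> dst via H) plus backward error (dst -> src via
    H^-1). Points with a large error can be removed before the next
    sampling step.

    src_pts, dst_pts: (N, 2) float32 matched points.
    """
    H_inv = np.linalg.inv(H)
    fwd = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    bwd = cv2.perspectiveTransform(dst_pts.reshape(-1, 1, 2), H_inv).reshape(-1, 2)
    return (np.linalg.norm(fwd - dst_pts, axis=1) ** 2 +
            np.linalg.norm(bwd - src_pts, axis=1) ** 2)
```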

Trajectory Generation of a Moving Object for a Mobile Robot in Predictable Environment

  • Jin, Tae-Seok;Lee, Jang-Myung
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.5 no.1
    • /
    • pp.27-35
    • /
    • 2004
  • In machine vision with a single camera mounted on a mobile robot, the detection and tracking of moving objects from a moving observer is a complex and computationally demanding task. In this paper, we propose a new scheme for a mobile robot to track and capture a moving object using camera images. The system consists of the following modules: data acquisition, feature extraction and visual tracking, and trajectory generation. A single camera is used as the visual sensor to capture image sequences of the moving object. The moving object is assumed to be a point object and is projected onto the image plane to form a geometric constraint equation that provides the position of the object based on the kinematics of the active camera. Uncertainties in the position estimate caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for capturing the moving object, its linear and angular velocities are estimated and used. Experimental results of tracking and capturing the target object with the mobile robot are presented.
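
A minimal constant-velocity Kalman filter of the kind that could compensate the point-object position estimate is sketched below; the state layout, time step, and noise levels are assumptions, not the paper's values.

```python
import numpy as np

def make_cv_kalman(dt=0.1, q=1e-2, r=1e-1):
    """Constant-velocity Kalman filter over (x, y, vx, vy) that smooths a
    noisy image-derived position estimate (illustrative noise values)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state transition
    Hm = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0]], dtype=float)   # only position is observed
    Q = q * np.eye(4)                            # process noise
    R = r * np.eye(2)                            # measurement noise
    x = np.zeros(4)
    P = np.eye(4)

    def step(z):
        nonlocal x, P
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured position z = (x, y).
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)
        x = x + K @ (z - Hm @ x)
        P = (np.eye(4) - K @ Hm) @ P
        return x.copy()                          # filtered (x, y, vx, vy)

    return step
```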