• Title/Summary/Keyword: Lucas-Kanade


Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented for partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and long registration times. Moreover, these methods are not robust for 3D point clouds with comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, a final color is calculated with reference to the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
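The KLT-style correspondence search used above reduces, per feature window, to a small least-squares problem on image gradients. A minimal sketch in plain Python, not the authors' code; image I/O and window extraction are left out as simplifying assumptions:

```python
def lucas_kanade_step(Ix, Iy, It):
    """Solve the 2x2 Lucas-Kanade normal equations for one window.

    Ix, Iy: lists of spatial gradients over the window.
    It:     list of temporal gradients (frame2 - frame1) over the window.
    Returns the flow (u, v) minimizing sum((Ix*u + Iy*v + It)^2),
    or None when the window is untrackable (flat or edge-only).
    """
    # Accumulate the entries of A^T A and A^T b.
    sxx = sum(x * x for x in Ix)
    sxy = sum(x * y for x, y in zip(Ix, Iy))
    syy = sum(y * y for y in Iy)
    sxt = sum(x * t for x, t in zip(Ix, It))
    syt = sum(y * t for y, t in zip(Iy, It))

    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:  # rank-deficient: aperture problem
        return None
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

In a full tracker this solve is iterated per feature, usually over an image pyramid, which is what makes the 2D correspondence search cheap compared to matching in 3D.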

Comparison of Multi-angle TerraSAR-X Staring Mode Image Registration Method through Coarse to Fine Step (Coarse to Fine 단계를 통한 TerraSAR-X Staring Mode 다중 관측각 영상 정합기법 비교 분석)

  • Lee, Dongjun;Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.475-491
    • /
    • 2021
  • With the recent increase in available high-resolution (< ~1 m) satellite SAR images, the demand for precise registration of SAR images is increasing in various fields, including change detection. Registration between high-resolution SAR images acquired at different look angles is difficult because of speckle noise and the geometric distortion characteristic of SAR imagery. In this study, registration is performed in two stages, coarse and fine, using X-band SAR data imaged in the staring spotlight mode of TerraSAR-X. For coarse registration, a method combining adaptive sampling with SAR-SIFT (Scale Invariant Feature Transform) is applied. For the fine registration stage, three rigid methods (NCC: Normalized Cross Correlation, Phase Congruency-NCC, and MI: Mutual Information) and one non-rigid method (GeFolki: Geoscience extended Flow Optical Flow Lucas-Kanade Iterative) were compared. The results were evaluated using the RMSE (Root Mean Square Error) and FSIM (Feature Similarity) indices; all rigid models performed poorly on every image combination, and it was confirmed that the rigid models show large registration errors over rugged terrain. The GeFolki algorithm achieved the best RMSE, at 1-3 pixels, and its FSIM index was 0.02-0.03 higher than those of the rigid methods. These results confirm that mis-registration due to terrain effects can be sufficiently reduced by the GeFolki algorithm.
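NCC, the first of the rigid similarity measures compared above, scores a candidate alignment by the normalized correlation of two image patches. A minimal sketch of the measure itself, not the study's implementation:

```python
def ncc(p, q):
    """Normalized cross-correlation of two equal-length patches.

    Returns a value in [-1, 1]; 1 means the patches match up to an
    affine brightness change (gain and offset), which is why NCC is
    popular for registration under illumination differences.
    """
    n = len(p)
    mp = sum(p) / n
    mq = sum(q) / n
    num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    dp = sum((a - mp) ** 2 for a in p) ** 0.5
    dq = sum((b - mq) ** 2 for b in q) ** 0.5
    if dp == 0 or dq == 0:  # constant patch: correlation undefined
        return 0.0
    return num / (dp * dq)
```

A rigid registration would evaluate this score over a grid of candidate offsets and keep the peak; the non-rigid GeFolki approach instead estimates a dense per-pixel flow, which is what lets it follow terrain-dependent distortion.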

Panoramic 3D Reconstruction of an Indoor Scene Using Depth and Color Images Acquired from A Multi-view Camera (다시점 카메라로부터 획득된 깊이 및 컬러 영상을 이용한 실내환경의 파노라믹 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.24-32
    • /
    • 2006
  • In this paper, we propose a new method for 3D reconstruction of an indoor environment using partial 3D point clouds acquired from a multi-view camera. Various disparity estimation algorithms have been proposed to date, which means a variety of depth images are available. Accordingly, this paper addresses reconstruction of an indoor environment using a generalized multi-view camera. First, the depth images are refined by removing 3D points with large variation, based on the temporal property of the 3D point clouds, and by filling empty regions with reference to neighboring 3D points, based on the spatial property. Second, the 3D point clouds from two consecutive viewpoints are projected onto the same image plane, and correspondences are found using a modified KLT (Kanade-Lucas-Tomasi) feature tracker; fine registration is then performed by minimizing the distance errors between corresponding points. Finally, the positions of the 3D points are finely adjusted using the 3D point clouds acquired from several viewpoints together with a pair of 2D images, producing the final 3D model. The proposed method reduces computational complexity by finding correspondences on the 2D image plane, and works effectively even when the precision of the 3D data is low. Moreover, by using a multi-view camera, an indoor environment can be reconstructed from only the depth and color images of a few viewpoints. The proposed method can be applied not only to navigation but also to generating 3D models for interaction.


Tracking and Face Recognition of Multiple People Based on GMM, LKT and PCA

  • Lee, Won-Oh;Park, Young-Ho;Lee, Eui-Chul;Lee, Hee-Kyung;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.449-471
    • /
    • 2012
  • Intelligent surveillance systems are required to robustly track multiple people. Most previous studies adopted a Gaussian mixture model (GMM) to discriminate objects from the background. However, the GMM's performance is affected by illumination variations, shadow regions can be merged with the object, and when two foreground objects overlap, the GMM cannot correctly discriminate the occluded regions. To overcome these problems, we propose a new method for tracking and identifying multiple people. The proposed research is novel in three ways compared to previous work. First, illumination variations and shadow regions are reduced by an illumination normalization based on the median and inverse filtering of the L*a*b* image. Second, multiple occluded and overlapping people are tracked by combining the GMM in the still image with the Lucas-Kanade-Tomasi (LKT) method across successive images. Third, by combining the proposed human tracking with existing face detection and recognition methods, the tracked people are successfully identified. Experimental results show that the proposed method can track and recognize multiple people accurately.
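The GMM stage above maintains per-pixel statistics and flags pixels that deviate from them as foreground. A deliberately simplified sketch with a single running Gaussian per pixel (the paper uses a full mixture; the learning rate and threshold values here are assumptions):

```python
class RunningGaussianBG:
    """Single running Gaussian per pixel: a simplified stand-in for a GMM.

    A pixel is foreground when it deviates from the mean by more than
    k standard deviations; background pixels update the model, so slow
    illumination drift is absorbed while fast changes are flagged.
    """

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=100.0):
        self.mean = [float(v) for v in first_frame]  # per-pixel mean
        self.var = [init_var] * len(first_frame)     # per-pixel variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        mask = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            if d * d > self.k ** 2 * self.var[i]:
                mask.append(1)  # foreground: do not adapt the model
            else:
                mask.append(0)  # background: adapt mean and variance
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask
```

A real GMM keeps several weighted Gaussians per pixel so that bimodal backgrounds (swaying trees, flickering lights) are also modeled, but the match-then-update loop is the same.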

Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.3
    • /
    • pp.171-177
    • /
    • 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication which can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, known as one of the key non-verbal signals for interaction, can be estimated from his or her head pose. For efficient head tracking, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with Lucas-Kanade optical flow, which is well suited to extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance under varying head directions while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
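The Harris detector used above scores each pixel from the windowed covariance of the image gradients; corners are where both eigenvalues of that matrix are large. A minimal sketch of the response (k = 0.04 is the customary empirical constant, assumed here, not taken from the paper):

```python
def harris_response(Ix, Iy, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 for one window.

    Ix, Iy: spatial gradients over the window around a pixel, so that
    M = [[sum(Ix^2), sum(Ix*Iy)], [sum(Ix*Iy), sum(Iy^2)]].
    R >> 0 at corners, R < 0 on edges, R ~ 0 in flat regions.
    """
    sxx = sum(x * x for x in Ix)
    syy = sum(y * y for y in Iy)
    sxy = sum(x * y for x, y in zip(Ix, Iy))
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```

The same matrix M appears in the Lucas-Kanade normal equations, which is why Harris corners pair naturally with LK flow: windows that score well here are exactly the ones whose flow solve is well conditioned.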

Simple Online Multiple Human Tracking based on LK Feature Tracker and Detection for Embedded Surveillance

  • Vu, Quang Dao;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.6
    • /
    • pp.893-910
    • /
    • 2017
  • In this paper, we propose a simple online multiple object (human) tracking method, LKDeep (Lucas-Kanade feature and Detection based Simple Online Multiple Object Tracker), which runs fast enough online on a CPU core alone, with acceptable tracking performance for embedded surveillance purposes. LKDeep is a pragmatic hybrid approach that tracks multiple objects (humans) mainly with LK features, compensated by detection at periodic intervals or when necessary. Compared to other state-of-the-art multiple object tracking methods based on the 'Tracking-By-Detection (TBD)' approach, LKDeep is faster, since it does not have to detect objects in every frame and it uses a simple association rule, yet it shows good tracking performance. Experiments comparing LKDeep against online state-of-the-art MOT methods reported in the MOT challenge [1], all using the public DPM detector, show that the proposed method runs faster while maintaining good tracking performance for surveillance purposes. The single object tracking (SOT) visual tracker benchmark [2] further shows that LKDeep with an optimized deep learning detector can run online quickly, with tracking performance comparable to other state-of-the-art SOT methods.
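A "simple association rule" for matching fresh detections to existing tracks can be as plain as greedy nearest-neighbor assignment under a gating distance. This hypothetical sketch illustrates the idea only; the abstract does not publish LKDeep's exact rule, and the gate value is an assumption:

```python
def greedy_associate(tracks, detections, gate=50.0):
    """Greedily match track positions to detections by center distance.

    tracks, detections: lists of (x, y) centers.
    Returns (track_index, detection_index) pairs; candidate pairs
    farther apart than `gate` are rejected, leaving those tracks
    (or detections) unmatched for separate handling.
    """
    pairs = []
    for ti, t in enumerate(tracks):
        for di, d in enumerate(detections):
            dist = ((t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2) ** 0.5
            if dist <= gate:
                pairs.append((dist, ti, di))
    pairs.sort()  # closest pairs claim their match first
    used_t, used_d, matches = set(), set(), []
    for _, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            used_t.add(ti)
            used_d.add(di)
            matches.append((ti, di))
    return matches
```

Between detection frames, each track's position would be advanced by LK feature tracking alone, which is where the speed advantage over detect-every-frame TBD methods comes from.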

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.547-554
    • /
    • 2013
  • This paper proposes a novel formation algorithm for identical robots, based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multi-robots often use a stereo vision system, or a vision system with a reflector instead of a general-purpose camera with its small angle of view, in order to enlarge the camera's view angle. In addition, to make up for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The system controls the formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.

Error Correction of Interested Points Tracking for Improving Registration Accuracy of Aerial Image Sequences (항공연속영상 등록 정확도 향상을 위한 특징점추적 오류검정)

  • Sukhee, Ochirbat;Yoo, Hwan-Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.93-97
    • /
    • 2010
  • This paper presents an improved KLT (Kanade-Lucas-Tomasi) method for the registration of image sequences captured by a camera mounted on an unmanned helicopter, assuming no camera attitude information is available. The proposed image registration consists of the following procedures. Initial interest points are detected by characteristic-curve matching via dynamic programming, which has been used for detecting and tracking corner points through image sequences. Outliers among the tracked points are then removed using RANSAC (Random Sample Consensus) robust estimation, and the remaining corner points are classified as inliers by a homography algorithm. The rectified images are then resampled by bilinear interpolation. Experiments show that our method can produce suitable registration of image sequences with large motion.
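The RANSAC step above repeatedly fits a motion model to a random minimal sample of tracked points and keeps the largest consensus set. A sketch using a pure-translation model for brevity (the paper fits a homography, which needs four-point samples; the threshold and iteration count here are assumptions):

```python
import random

def ransac_translation(src, dst, thresh=2.0, iters=100, seed=0):
    """Classify point matches as inliers under a translation model.

    src, dst: equal-length lists of (x, y) correspondences.
    Returns (best_translation, inlier_index_list). A fixed seed keeps
    the sketch deterministic.
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(src))  # minimal sample: one match
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        # Consensus set: matches whose residual under (tx, ty) is small.
        inliers = [
            j for j, (s, d) in enumerate(zip(src, dst))
            if ((d[0] - s[0] - tx) ** 2 + (d[1] - s[1] - ty) ** 2) ** 0.5 <= thresh
        ]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

The homography case follows the same loop, only with a 4-point direct linear transform in place of the one-point translation fit, and with the final model re-estimated from all inliers.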

Moving Object Tracking using Color and Optical Flow Information (컬러 및 광류정보를 이용한 이동물체 추적)

  • Kim, Ju-Hyeon;Choi, Han-Go
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.4
    • /
    • pp.112-118
    • /
    • 2014
  • This paper deals with color-based tracking of a moving object. First, the existing Camshift algorithm is complemented to improve its tracking weakness under the frame-to-frame brightness changes of an image. The complemented Camshift still tracks unstably when objects with the same color as the tracked object exist in the background. To overcome this drawback, this paper proposes Camshift combined with the KLT algorithm, which is based on optical flow. The KLT algorithm, performing pixel-based feature tracking, complements this shortcoming of Camshift. Experimental results show that the merged tracking method makes up for the drawback of the Camshift algorithm and also improves tracking performance.
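Camshift is built on mean-shift: within a search window, move to the centroid of the color-probability (backprojection) weights and repeat until the shift is negligible; Camshift then additionally adapts the window size and orientation. A minimal mean-shift sketch on a 2D weight map, with toy data rather than the paper's implementation:

```python
def mean_shift(weights, window, max_iter=20):
    """Iterate a window toward the mode of a 2D weight map (mean-shift).

    weights: 2D list (rows x cols) of non-negative scores, e.g. a
             color-histogram backprojection.
    window:  (x, y, w, h) with (x, y) the top-left corner.
    Returns the converged window.
    """
    x, y, w, h = window
    rows, cols = len(weights), len(weights[0])
    for _ in range(max_iter):
        m = mx = my = 0.0
        for j in range(max(0, y), min(rows, y + h)):
            for i in range(max(0, x), min(cols, x + w)):
                m += weights[j][i]
                mx += i * weights[j][i]
                my += j * weights[j][i]
        if m == 0:
            break  # empty window: nothing to follow
        # Re-center the window on the weighted centroid (round half up).
        nx = int(mx / m - w / 2 + 0.5)
        ny = int(my / m - h / 2 + 0.5)
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y, w, h
```

The instability the paper addresses is visible here: if a same-colored background region overlaps the window, its weights pull the centroid away from the target, which is exactly what the added pixel-level KLT features guard against.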

Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won;Kwon, Kee-Koo;Kim, Jong-Hyo;Na, Kyung-Jin;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.8
    • /
    • pp.766-772
    • /
    • 2015
  • This paper proposes an object detection method based on motion estimation, using background subtraction on the fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on camera images, the embedded system installed in a vehicle has inherently low processing performance, which makes it difficult to apply complicated algorithms; in general, such a system needs algorithms tailored to its lower processing performance. In this paper, the location of an object is estimated from its motion information, obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board, by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
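The background subtraction step above compares earlier frames with the current one and thresholds the difference to obtain a motion mask, from which an object location can be estimated. A minimal frame-differencing sketch (the threshold value is an assumption, and the paper operates on fisheye images rather than the flat grids used here):

```python
def motion_mask(prev_frame, cur_frame, thresh=25):
    """Binary motion mask by absolute frame differencing.

    prev_frame, cur_frame: 2D lists of grayscale values.
    A pixel is marked moving (1) when |cur - prev| > thresh.
    """
    return [
        [1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, cur_frame)
    ]

def bounding_box(mask):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of the moving
    pixels, or None when nothing moved: a crude location estimate."""
    pts = [(i, j) for j, row in enumerate(mask)
           for i, v in enumerate(row) if v]
    if not pts:
        return None
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)
```

Per frame this costs one subtraction and one comparison per pixel, versus a windowed least-squares solve per tracked point for LK optical flow, which is the trade-off that makes differencing attractive on low-power embedded boards.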