• Title/Abstract/Keywords: SIFT features

Search results: 115

Adaptive Bayesian Object Tracking with Histograms of Dense Local Image Descriptors

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.2 / pp.104-110 / 2016
  • Dense local image descriptors such as SIFT are fruitful for capturing salient information about an image, and have been shown successful in various image-related tasks when formed into a bag-of-words representation (i.e., histograms). In this paper we consider utilizing these dense local descriptors in the object tracking problem. A notable aspect of our tracker is that instead of adopting a point estimate for the target model, we account for uncertainty from data noise and model incompleteness by maintaining a distribution over plausible candidate models within the Bayesian framework. The target model is also updated adaptively by principled Bayesian posterior inference, which admits a closed form under our Dirichlet prior modeling. With empirical evaluations on several video datasets, the proposed method is shown to yield more accurate tracking than baseline histogram-based trackers using the same types of features, and is often superior to appearance-based (visual) trackers.
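
The closed-form Bayesian update the abstract refers to is the standard Dirichlet-multinomial conjugacy. Below is a minimal, hypothetical sketch of such an update for a bag-of-words target histogram; the codebook size, the forgetting factor, and the likelihood surrogate are illustrative assumptions, not details from the paper.

```python
import numpy as np

K = 256                      # assumed codebook size (number of visual words)
alpha = np.ones(K)           # Dirichlet prior over the target's word distribution

def update_target_model(alpha, observed_hist, forget=0.95):
    """Conjugate update: posterior Dirichlet parameters = prior + observed counts.
    A forgetting factor (an assumption here) keeps the model adaptive."""
    return forget * alpha + observed_hist

def candidate_score(alpha, candidate_hist):
    """Score a candidate region under the posterior-mean multinomial,
    a simple surrogate for the predictive likelihood."""
    p = alpha / alpha.sum()
    return np.sum(candidate_hist * np.log(p + 1e-12))

# usage: score candidate windows, pick the best, then update the model
obs = np.random.multinomial(500, np.ones(K) / K)   # stand-in observed histogram
print(candidate_score(alpha, obs))
alpha = update_target_model(alpha, obs)
```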

View invariant image matching using SURF

  • Son, Jong-In;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.222-225 / 2011
  • Image matching is one of the fundamental techniques in computer vision. However, finding correspondences that are robust to scale, rotation, illumination, and viewpoint changes is not an easy task. To address this, algorithms such as the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) have been proposed, but they remain weak under viewpoint changes. In this paper, we propose an algorithm that is robust to viewpoint change. For view-invariant image matching, a homography is estimated from highly similar feature points between the reference image and the query image, the query image is rectified to resemble the reference image, and matching is then performed on the rectified image. By comparing performance and running time against the original SIFT and SURF on several images with changed viewpoints, we demonstrate the superiority of the proposed algorithm.
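
The rectify-then-rematch idea described above can be sketched with OpenCV. This is an illustrative approximation, not the authors' code: SIFT is used in place of SURF (SURF requires an opencv-contrib build with non-free modules enabled), and the file names, ratio threshold, and RANSAC reprojection error are placeholders.

```python
import cv2
import numpy as np

def ratio_matches(des_a, des_b, ratio=0.75):
    """Lowe-style ratio test on L2 brute-force matches."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    return [m for m, n in bf.knnMatch(des_a, des_b, k=2) if m.distance < ratio * n.distance]

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
qry = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_qry, des_qry = sift.detectAndCompute(qry, None)

# Initial matches between reference and query give a homography estimate.
good = ratio_matches(des_ref, des_qry)
src = np.float32([kp_qry[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)      # maps query -> reference

# Warp the query toward the reference viewpoint, then match once more.
rectified = cv2.warpPerspective(qry, H, (ref.shape[1], ref.shape[0]))
kp_rect, des_rect = sift.detectAndCompute(rectified, None)
final = ratio_matches(des_ref, des_rect)
print(len(final), "matches after viewpoint rectification")
```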

A study on localization and compensation of a mobile robot using fusion of vision and ultrasound

  • Jang, Cheol-Woong;Jung, Ki-Ho;Jung, Dae-Sub;Ryu, Je-Goon;Shim, Jae-Hong;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2006.10c / pp.554-556 / 2006
  • A key capability of an autonomous mobile robot is to localize itself. In this paper we suggest vision-based localization combined with compensation of the robot's position using ultrasound. The mobile robot travels along a wall, detects features in the indoor environment, transforms these points into absolute coordinates of the real environment, and builds a map; information about the environment is thus obtained as the robot follows the wall. For localization, candidate positions are first found with ultrasound, and the final position is then decided among the candidates by feature matching.
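
A very loose sketch of the final decision step described above, under the assumption that each ultrasound candidate position has a set of image descriptors stored in the map; all names and data structures here are hypothetical, not the paper's implementation.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def match_score(des_current, des_candidate):
    """Number of cross-checked descriptor matches as a crude similarity score."""
    if des_current is None or des_candidate is None:
        return 0
    return len(bf.match(des_current, des_candidate))

def choose_position(current_frame, candidates):
    """candidates: list of (position, stored_descriptors) pairs suggested by ultrasound."""
    _, des_cur = sift.detectAndCompute(current_frame, None)
    scores = [match_score(des_cur, des) for _, des in candidates]
    return candidates[int(np.argmax(scores))][0]
```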

Object Recognition by Pyramid Matching of Color Cooccurrence Histogram

  • Bang, H.B.;Lee, S.H.;Suh, I.H.;Park, M.K.;Kim, S.H.;Hong, S.K.
    • Proceedings of the KIEE Conference / 2007.04a / pp.304-306 / 2007
  • Object recognition from camera images generally compares color, edge, or pattern features against a model. SIFT (scale-invariant feature transform) performs well but has high computational complexity, while a simple color histogram has low complexity but low performance. In this paper we represent a model as a color cooccurrence histogram and improve performance using pyramid matching. The color cooccurrence histogram keeps track of the number of pairs of certain colored pixels that occur at certain separation distances in image space, adding geometric information to the ordinary color histogram. We propose object recognition by pyramid matching of the color cooccurrence histogram.
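
The two ingredients named in the abstract can be sketched as follows: a color cooccurrence histogram over quantized color pairs at fixed pixel offsets, and a pyramid-style match score that sums histogram intersections over several quantization levels. The quantization scheme, offsets, and level weights are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def quantize(img, bins):
    """Map an RGB image (H, W, 3) to one index per pixel with bins**3 colors."""
    q = (img.astype(np.int32) * bins) // 256
    return q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

def cooccurrence_histogram(img, bins=4, offsets=((0, 4), (4, 0))):
    """Count pairs (color at p, color at p + offset) for each separation offset."""
    q = quantize(img, bins)
    n = bins ** 3
    hist = np.zeros((len(offsets), n, n))
    for k, (dy, dx) in enumerate(offsets):
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]
        b = q[dy:, dx:]
        np.add.at(hist[k], (a.ravel(), b.ravel()), 1)
    return hist / hist.sum()

def pyramid_match(img1, img2, levels=(2, 4, 8)):
    """Weighted histogram intersection; finer color quantizations weighted more."""
    score = 0.0
    for i, bins in enumerate(levels):
        h1 = cooccurrence_histogram(img1, bins)
        h2 = cooccurrence_histogram(img2, bins)
        score += (2.0 ** i) * np.minimum(h1, h2).sum()
    return score
```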

Illumination invariant image matching using histogram equalization

  • Oh, Changbeom;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.11a / pp.161-164 / 2011
  • Image matching is a fundamental technique in computer vision and is widely used in areas such as image tracking and object recognition. However, it is difficult to find matching points that are robust to scale, viewpoint, and illumination changes. Algorithms such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features) have been proposed to mitigate these problems, but they still behave unstably and inaccurately under illumination changes. In this paper, to address the illumination problem, images are first corrected with histogram equalization and then matched using SURF. Histogram equalization is applied to overcome the problem that few keypoints are extracted when SURF descriptors are computed on images captured under poor lighting, and we show that the number of keypoints increases considerably after correction. By comparing the matching performance of the original SURF and the improved SURF on image pairs taken under different illumination, we confirm the superiority of the proposed algorithm.
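
The preprocessing step described above is easy to illustrate with OpenCV. The sketch below equalizes a poorly lit image before feature extraction and compares keypoint counts; SIFT stands in for SURF (which needs an opencv-contrib, non-free build), and the file name is a placeholder.

```python
import cv2

img = cv2.imread("dark_scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
equalized = cv2.equalizeHist(img)                           # spread the intensity histogram

detector = cv2.SIFT_create()
kp_raw, des_raw = detector.detectAndCompute(img, None)
kp_eq, des_eq = detector.detectAndCompute(equalized, None)

print("keypoints before equalization:", len(kp_raw))
print("keypoints after equalization: ", len(kp_eq))
# Matching then proceeds on the equalized images with the usual
# descriptor-distance + ratio test, as in the other entries above.
```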

SIFT-Like Pose Tracking with LIDAR using Zero Odometry (a localization algorithm without odometry information)

  • Kim, Jee-Soo;Kwak, Nojun
    • Journal of Institute of Control, Robotics and Systems / v.22 no.11 / pp.883-887 / 2016
  • Navigating an unknown environment is a challenging task for a robot, especially when a large number of obstacles exist and the odometry lacks reliability. Pose tracking allows the robot to determine its location relative to its previous location. The ICP (iterative closest point) has been a powerful method for matching two point clouds and determining the transformation matrix between the maps. However, in a situation where odometry is not available and the robot moves far from its original location, the ICP fails to calculate the exact displacement. In this paper, we suggest a method that is able to match two different point clouds taken a long distance apart. Without using any odometry information, it only exploits the features of corner points containing information on the surroundings. The algorithm is fast enough to run in real time.
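
Once corner features from two scans have been put into correspondence, the displacement can be recovered in closed form. The sketch below shows the standard SVD-based (Kabsch) estimate of a 2D rigid transform from matched points; the paper's own feature extraction and matching stages are not reproduced here.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """src, dst: (N, 2) arrays of matched corner points. Returns R (2x2), t (2,)."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# usage with made-up correspondences: recover a 30-degree rotation and a shift
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R_true.T + np.array([0.5, -1.0])
R, t = rigid_transform_2d(src, dst)
```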

A Study on the Sensor Fusion Method to Improve Localization of a Mobile Robot

  • Jang, Chul-Woong;Jung, Ki-Ho;Kong, Jung-Shik;Jang, Mun-Suk;Kwon, Oh-Sang;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2007.10a / pp.317-318 / 2007
  • One of the important capabilities of an autonomous mobile robot is to build a map of its surrounding environment and estimate its own location. This paper suggests a sensor fusion method combining a laser range finder and a monocular vision sensor for simultaneous localization and map building. The robot observes corner points in the environment as features using the laser range finder and extracts SIFT features with the monocular vision sensor. We verify the improved localization performance of the mobile robot through experiments.
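
Not the paper's algorithm, but a minimal illustration of why fusing two independent position estimates (say, one from laser corner features and one from camera SIFT landmarks) improves localization: an inverse-covariance weighted combination is never less certain than either sensor alone. The numbers below are made up.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """x1, x2: position estimates (2,); P1, P2: their covariance matrices (2x2)."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)           # fused covariance (smaller than either)
    x = P @ (P1i @ x1 + P2i @ x2)          # fused estimate
    return x, P

laser_est, laser_cov = np.array([1.02, 0.48]), np.diag([0.04, 0.09])
vision_est, vision_cov = np.array([0.95, 0.52]), np.diag([0.10, 0.02])
fused_est, fused_cov = fuse(laser_est, laser_cov, vision_est, vision_cov)
print(fused_est, np.diag(fused_cov))
```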

Panoramic Image Stitching using SURF

  • You, Meng;Lim, Jong-Seok;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.12 no.1 / pp.26-32 / 2011
  • This paper proposes a new method for panoramic image stitching using SURF (Speeded Up Robust Features). Panoramic image stitching is treated as a correspondence matching problem. In computer vision, it is difficult to find corresponding points in variable environments where scale, rotation, viewpoint, and illumination change. The SURF algorithm has been widely used to solve the correspondence matching problem because it is faster than SIFT (Scale Invariant Feature Transform). In this work, we also describe an efficient approach to reducing computation time through homography estimation using RANSAC (random sample consensus). RANSAC is a robust estimation procedure that uses a minimal set of randomly sampled correspondences to estimate the image transformation parameters. Experimental results show that our method is robust to rotation, zoom, Gaussian noise, and illumination changes in the input images, and that computation time is greatly reduced.
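
A bare-bones version of the pipeline described above, written with OpenCV: detect features, match them, estimate a homography with RANSAC, and warp one image onto the other's plane. SIFT stands in for SURF (contrib, non-free build), file names are placeholders, and blending/seam handling is omitted.

```python
import cv2
import numpy as np

left = cv2.imread("left.jpg")     # placeholder paths
right = cv2.imread("right.jpg")
gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

det = cv2.SIFT_create()
kp1, des1 = det.detectAndCompute(gray_l, None)
kp2, des2 = det.detectAndCompute(gray_r, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des2, des1, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)   # maps right -> left plane

# Paste the warped right image onto a canvas wide enough for both views.
h, w = left.shape[:2]
canvas = cv2.warpPerspective(right, H, (w * 2, h))
canvas[0:h, 0:w] = left
cv2.imwrite("panorama.jpg", canvas)
```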

Mean-Shift Blob Clustering and Tracking for Traffic Monitoring System

  • Choi, Jae-Young;Yang, Young-Kyu
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.235-243 / 2008
  • Object tracking is a common vision task that detects and traces objects across consecutive frames. It is important for a variety of applications such as surveillance and video-based traffic monitoring systems. An efficient moving-vehicle clustering and tracking algorithm suitable for traffic monitoring systems is proposed in this paper. First, an automatic background extraction method is used to obtain a reliable background as a reference, and the moving blob (object) is then separated from the background by the mean shift method. Second, a scale-invariant-feature-based method extracts salient features from the clustered foreground blob; these features are robust to changes in illumination, scale, and affine shape. Simulation results on various road situations demonstrate the good performance achieved by the proposed method.
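
A loose illustration of the first two stages (background modelling and blob tracking), not the paper's exact background extraction or mean-shift formulation: OpenCV's MOG2 background subtractor provides the foreground mask, and the built-in mean shift follows a blob window from frame to frame. The video path and initial window are placeholders.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")                       # placeholder video
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
track_window = (200, 150, 60, 40)                           # x, y, w, h of an initial blob
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)                               # foreground mask
    fg = cv2.medianBlur(fg, 5)                              # suppress isolated noise pixels
    _, track_window = cv2.meanShift(fg, track_window, term) # shift window toward blob mass
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```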

Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society / v.8 no.2 / pp.85-96 / 2008
  • Recently, research on augmented games as a new game genre has attracted a lot of attention. An augmented game overlays virtual objects on an augmented reality (AR) environment, allowing players to interact with that environment by manipulating real and virtual objects. However, it is difficult to bring existing augmented games to ordinary players, as they generally rely on very expensive and inconvenient 'backpack' systems. Several augmented games running on camera-equipped mobile devices have been proposed to solve this problem, but they can only be enjoyed at a previously prepared location, since a 'color marker' or 'pattern marker' is used to register the virtual objects with the real environment. Accordingly, this paper introduces an augmented game, called Augmented Galaga, based on the traditional, well-known Galaga and executed on mobile devices, so that players can experience the game without any economic burden. Augmented Galaga uses real objects in real environments and recognizes them with scale-invariant features (SIFT) and Euclidean distance. Virtual aliens appear randomly around specific objects, several such objects are used to keep the game interesting, and players attack the virtual aliens by pointing the mobile device at a specific object and clicking a button. As a result, we expect Augmented Galaga to provide an exciting experience for players without any economic burden, based on a game paradigm in which the user interacts both with the physical world captured by the mobile camera and with the virtual aliens generated by the mobile device.
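
The recognition step named in the abstract, SIFT descriptors compared by Euclidean (L2) distance, can be sketched as follows; the ratio test, match-count threshold, and file names are assumptions for illustration, not the paper's settings.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)           # Euclidean distance between descriptors

def load_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return sift.detectAndCompute(img, None)[1]

def recognize(frame_des, object_des, ratio=0.75, min_matches=15):
    """Declare the registered object 'seen' if enough good matches survive."""
    pairs = matcher.knnMatch(object_des, frame_des, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return len(good) >= min_matches

object_des = load_descriptors("registered_object.png")   # placeholder paths
frame_des = load_descriptors("camera_frame.png")
if recognize(frame_des, object_des):
    print("object found: spawn virtual aliens around it")
```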
