• Title/Summary/Keyword: SIFT Features

Depth tracking of occluded ships based on SIFT feature matching

  • Yadong Liu; Yuesheng Liu; Ziyang Zhong; Yang Chen; Jinfeng Xia; Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1066-1079, 2023
  • Detection-based multi-target tracking is an active and important research topic in target tracking. It comprises two closely related processes: target detection, which locates the exact position of each target, and target tracking, which monitors the targets' temporal and spatial changes. As detectors have improved, tracking performance has reached a new level. A persistent problem in target tracking research is re-identifying a target after it has been occluded during tracking. To address this, this paper proposes a DeepSORT model based on SIFT features to improve ship tracking. Unlike previous feature extraction networks, the SIFT algorithm requires no pre-training on target characteristics and can be applied to ship tracking immediately. We also refine and test the model's matching method to find a balance between tracking accuracy and tracking speed. Experiments show that the model achieves favorable results.
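
As an illustration of how hand-crafted SIFT descriptors can stand in for a learned appearance network in a DeepSORT-style tracker, the following is a minimal sketch: it pools SIFT descriptors inside each detection box into one appearance vector and associates tracks to detections with the Hungarian algorithm. The function names, the mean-pooling, and the cost threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: SIFT descriptors as the appearance term in a
# DeepSORT-style association step (illustrative, not the paper's code).
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

sift = cv2.SIFT_create()

def appearance_vector(frame, box):
    """Pool SIFT descriptors inside a detection box (x, y, w, h)."""
    x, y, w, h = box
    _, desc = sift.detectAndCompute(frame[y:y + h, x:x + w], None)
    if desc is None:                       # no keypoints in the crop
        return np.zeros(128, dtype=np.float32)
    v = desc.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-9)  # unit-normalize

def associate(track_vecs, det_vecs, max_cost=0.4):
    """Hungarian assignment on cosine distance between appearance vectors."""
    cost = 1.0 - np.array([[t @ d for d in det_vecs] for t in track_vecs])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```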

3D Object Recognition Using Appearance Model Space of Feature Point (특징점 Appearance Model Space를 이용한 3차원 물체 인식)

  • Joo, Seong Moon; Lee, Chil Woo
    • KIPS Transactions on Software and Data Engineering, v.3 no.2, pp.93-100, 2014
  • 3D object recognition using only 2D images is difficult because each image differs according to the camera's viewing direction. Because the SIFT algorithm defines local features on the projected images, recognition is particularly limited for input images with strong perspective transformation. In this paper, we propose an object recognition method that improves on the SIFT algorithm by using several sequential images captured while rotating the 3D object around a rotation axis. We exploit the geometric relationship between adjacent images and merge the images into a single generated feature space used during recognition. To isolate the effectiveness of the proposed algorithm, the camera position and illumination conditions are held constant. This method can recognize appearances of 3D objects that the previous approach, using the standard SIFT algorithm, cannot.
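
The idea of merging several rotated views into one feature space can be sketched roughly as below: SIFT descriptors from all views are stacked into a single model matrix, and a query image is matched against it with Lowe's ratio test. This is a simplification of the paper's method (it omits the geometric relationship between adjacent views); names are illustrative.

```python
# Minimal sketch: stack SIFT descriptors from all rotated views into one
# model matrix ("feature space") and match a query with Lowe's ratio test.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def build_model(view_images):
    """Pool descriptors from sequential views of the rotating object."""
    pools = [sift.detectAndCompute(img, None)[1] for img in view_images]
    return np.vstack([p for p in pools if p is not None])

def match_query(query_img, model_desc, ratio=0.75):
    _, qdesc = sift.detectAndCompute(query_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(qdesc, model_desc, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]
```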

Extended SURF Algorithm with Color Invariant Feature and Global Feature (컬러 불변 특징과 광역 특징을 갖는 확장 SURF(Speeded Up Robust Features) 알고리즘)

  • Yoon, Hyun-Sup; Han, Young-Joon; Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.46 no.6, pp.58-67, 2009
  • Correspondence matching is one of the important tasks in computer vision, and it is not easy to find corresponding points in a variable environment where scale, rotation, viewpoint, and illumination change. The SURF (Speeded Up Robust Features) algorithm has been widely used for correspondence matching because it is faster than SIFT (Scale Invariant Feature Transform) while closely maintaining its matching performance. However, because SURF considers only the gray image and local geometric information, it is difficult to match corresponding points in images where similar local patterns are scattered. To solve this problem, this paper proposes an extended SURF algorithm that uses invariant color and global geometric information. The proposed algorithm improves matching performance because the color and global geometric information discriminate between similar patterns. The superiority of the proposed algorithm is demonstrated by experiments comparing it with conventional methods on images where illumination and viewpoint change and similar patterns recur.
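
The general idea of augmenting a gray-level local descriptor with a color-invariant term can be sketched as follows. Note that SURF itself lives in cv2.xfeatures2d and requires an opencv-contrib build with non-free algorithms enabled, so SIFT is used as a stand-in here; the color-ratio model and window size are assumptions, not the paper's exact formulation.

```python
# Minimal sketch: append an illumination-robust color-ratio term to each
# local descriptor so similar gray-level patterns in different colors
# stop matching. SIFT stands in for SURF (which needs opencv-contrib).
import cv2
import numpy as np

def extended_descriptors(bgr_img, patch=16):
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    kps, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    b, g, r = cv2.split(bgr_img.astype(np.float32) + 1.0)
    ratios = np.dstack([r, g]) / (b + g + r)[..., None]  # color ratios
    ext = []
    for kp, d in zip(kps, desc):
        x, y = map(int, kp.pt)
        win = ratios[max(0, y - patch):y + patch, max(0, x - patch):x + patch]
        color_term = win.reshape(-1, 2).mean(axis=0)     # local color summary
        ext.append(np.hstack([d / np.linalg.norm(d), color_term]))
    return kps, np.float32(ext)
```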

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin; Rhee, Sooahm
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1057-1068, 2022
  • This paper compares the performance of feature point-based matching algorithm combinations, as a study to confirm whether images taken by a user can be matched with virtual texture images, toward the goal of developing mobile-based real-time image positioning technology. Feature-based matching comprises extracting features, calculating descriptors, matching features between the two images, and finally eliminating mismatched features. To form algorithm combinations, we paired the feature extraction step and the descriptor calculation step from the same or from different matching algorithms. V-World 3D desktop was used for the virtual indoor texture images. V-World 3D desktop has recently been reinforced with details such as vertical and horizontal protrusions and dents, and it additionally provides levels textured with real images. Using this, we constructed a dataset with virtual indoor texture data as reference images and real images shot at the same locations as target images. After constructing the dataset, the matching success rate and matching processing time were measured, and based on these, a matching algorithm combination was determined for matching real images with virtual images. Based on the characteristics of each matching technique, the algorithm combinations were applied to the constructed dataset to confirm their applicability, and performance was also compared when rotation was additionally considered. The study confirmed that combining the Scale Invariant Feature Transform (SIFT) feature detector and descriptor gave the highest matching success rate but the longest processing time, whereas the Features from Accelerated Segment Test (FAST) feature detector combined with the Oriented FAST and Rotated BRIEF (ORB) descriptor achieved a success rate similar to the SIFT-SIFT combination with a short processing time. Furthermore, FAST-ORB remained superior even when a 10° rotation was applied to the dataset. Therefore, the FAST-ORB combination appears suitable for matching virtual texture images with real images.
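
The FAST-ORB pairing the study recommends is straightforward to reproduce in OpenCV, since detectors and descriptors are separate objects: detect keypoints with FAST, compute descriptors with ORB, then match with a Hamming-distance brute-force matcher. A minimal sketch (thresholds and settings are illustrative, not the paper's):

```python
# Minimal sketch of the FAST-ORB combination: FAST supplies keypoints,
# ORB computes descriptors on them, Hamming brute-force links the images.
import cv2

def fast_orb_match(ref_gray, tgt_gray):
    fast = cv2.FastFeatureDetector_create(threshold=20)
    orb = cv2.ORB_create()
    kp1, des1 = orb.compute(ref_gray, fast.detect(ref_gray, None))
    kp2, des2 = orb.compute(tgt_gray, fast.detect(tgt_gray, None))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp1, kp2, sorted(matcher.match(des1, des2),
                            key=lambda m: m.distance)
```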

An Image Retrieving Scheme Using Salient Features and Annotation Watermarking

  • Wang, Jenq-Haur; Liu, Chuan-Ming; Syu, Jhih-Siang; Chen, Yen-Lin
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.1, pp.213-231, 2014
  • Existing image search systems allow users to search for images by keywords, or by example images through content-based image retrieval (CBIR). On the other hand, users might learn more relevant textual information about an image from its text captions or from the surrounding context within documents or Web pages. Without such context, it is difficult to extract a semantic description directly from the image content. In this paper, we propose an annotation watermarking system that lets users embed text descriptions and retrieve relevant textual information from similar images. First, the tags associated with an image are converted into a two-dimensional code and embedded into the image by discrete wavelet transform (DWT). Next, for images without annotations, similar images can be obtained by CBIR techniques and their embedded annotations extracted. Specifically, we use global features such as color ratios and dominant sub-image colors for preliminary filtering; then, local features such as Scale-Invariant Feature Transform (SIFT) descriptors are extracted for similarity matching. This design achieves good effectiveness with reasonable processing time in practical systems. Our experimental results show good accuracy in retrieving similar images and extracting relevant tags from them.
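
The DWT embedding step might look roughly like the following sketch, which hides a small binary payload (for example, a 2-D barcode of the tags) in one detail subband of the image luminance. The Haar wavelet, the subband choice, and the strength constant alpha are assumptions; the paper's actual embedding scheme may differ.

```python
# Minimal sketch of DWT annotation embedding, assuming the tag payload
# has already been rendered to a flat array of 0/1 bits (e.g. a 2-D code).
import numpy as np
import pywt

def embed_bits(gray_img, bits, alpha=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(gray_img.astype(np.float32), "haar")
    flat = cH.reshape(-1)                 # view into the detail subband
    assert len(bits) <= flat.size, "payload too large for this image"
    flat[:len(bits)] += alpha * (2.0 * np.asarray(bits) - 1.0)  # +/- alpha
    out = pywt.idwt2((cA, (cH, cV, cD)), "haar")
    return np.clip(out, 0, 255).astype(np.uint8)
```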

Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery (구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록)

  • Han, You Kyung; Kim, Yong Il; Lee, Won Hee
    • Journal of Korean Society for Geospatial Information Science, v.21 no.4, pp.101-107, 2013
  • Commercial high-resolution images generally come with coordinates, but the locations differ locally according to the sensor pose at acquisition time and the relief displacement of the terrain. Therefore, image co-registration must be applied before multi-temporal images can be used together. However, co-registration is hindered when images include cloud-covered regions, because extracting matching points is difficult and many false matches occur. This paper proposes an automatic co-registration method for cloud-covered high-resolution images. A scale-invariant feature transform (SIFT), one of the representative feature-based matching methods, is used, and only features of the target (cloud-covered) image that fall within a circular buffer around each reference-image feature are taken as candidates in the matching process. The proposed algorithm was applied to study sites composed of multi-temporal KOMPSAT-2 images including cloud-covered regions. The results showed that the proposed method achieves a higher correct-match rate than the original SIFT method and acceptable registration accuracy at all sites.
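
The circular-buffer constraint can be sketched as below: because the multi-temporal images already share rough georeferencing, a target feature is considered a match candidate only if it lies within a fixed radius of the reference feature's position, which suppresses false matches from cloud regions elsewhere in the scene. The radius and ratio values are illustrative assumptions.

```python
# Minimal sketch: SIFT matching restricted to a circular buffer around
# each reference feature's position.
import cv2
import numpy as np

def buffered_matches(ref_img, tgt_img, radius=50.0, ratio=0.8):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_img, None)
    kp2, des2 = sift.detectAndCompute(tgt_img, None)
    pts2 = np.array([k.pt for k in kp2])
    pairs = []
    for i, (kp, d) in enumerate(zip(kp1, des1)):
        near = np.where(np.linalg.norm(pts2 - kp.pt, axis=1) <= radius)[0]
        if len(near) < 2:
            continue                                 # too few candidates
        dists = np.linalg.norm(des2[near] - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:  # Lowe ratio test
            pairs.append((i, near[order[0]]))
    return kp1, kp2, pairs
```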

Marker Detection by Using Affine-SIFT Matching Points for Marker Occlusion of Augmented Reality (증강현실에서 가려진 마커를 위한 Affine-SIFT 정합 점들을 이용한 마커 검출 기법)

  • Kim, Yong-Min; Park, Chan-Woo; Park, Ki-Tae; Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.2, pp.55-65, 2011
  • In this paper, a novel method of marker detection robust against marker occlusion in augmented reality is proposed. The proposed method consists of four steps. In the first step, to effectively detect an occluded marker, we use Affine-SIFT (ASIFT, Affine Scale-Invariant Feature Transform) to find matching points between an enrolled marker and an input image containing the occluded marker. In the second step, we apply Principal Component Analysis (PCA) to eliminate outliers among the matching points of the enrolled marker: the points are projected onto the first and second principal axes, and the major and minor axis lengths of an ellipse are determined from the average distance between the projected points and their center. In the third step, the convex-hull vertices enclosing the matching points are taken as polygon vertices for estimating a geometric affine transformation. In the final step, by estimating the geometric affine transformation of those points, the marker is detected despite occlusion. Experimental results show that the proposed method effectively detects occluded markers.
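
The later steps (convex hull and affine estimation) can be sketched as follows. Here the paper's PCA-based outlier elimination is replaced by RANSAC inside cv2.estimateAffine2D, which is a simplification; the sketch assumes the two matched point arrays are index-aligned ASIFT correspondences.

```python
# Minimal sketch of steps 3-4: convex-hull vertices of the surviving
# matches feed an affine estimate from marker to image coordinates.
import cv2
import numpy as np

def detect_marker(marker_pts, image_pts):
    """marker_pts, image_pts: index-aligned Nx2 arrays of matched points."""
    hull = cv2.convexHull(np.float32(image_pts), returnPoints=False).ravel()
    A, inliers = cv2.estimateAffine2D(np.float32(marker_pts)[hull],
                                      np.float32(image_pts)[hull],
                                      method=cv2.RANSAC)
    return A, np.float32(image_pts)[hull]   # 2x3 affine + hull vertices
```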

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee; Lee, Jong-Shill; Ryu, Je-Goon; Lee, Eung-Hyuk; Hong, Seung-Hong; Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D, v.55 no.4, pp.173-180, 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment. To validate this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (Scale Invariant Feature Transform). We then match the feature points of the two images and obtain the relative location through 3D reconstruction of the matched points. A stereo camera requires highly precise extrinsic calibration between the two cameras and pixel-level matching between the two images; because we use two ordinary cameras together with scale-invariant feature points, rather than a stereo camera, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no other sensor, and its results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges under 2 m and a maximum error of ±15 cm at ranges between 2 m and 4 m.
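
For parallel cameras, the 3D reconstruction of a matched feature reduces to the standard disparity relation Z = f·B/d. A minimal sketch, assuming an illustrative focal length in pixels and the paper's 20 cm baseline:

```python
# Minimal sketch: match SIFT features between the left and right images
# and recover metric depth from disparity as Z = f * B / d.
import cv2
import numpy as np

def sift_depths(left, right, f_px=700.0, baseline_m=0.20, ratio=0.75):
    sift = cv2.SIFT_create()
    kpL, desL = sift.detectAndCompute(left, None)
    kpR, desR = sift.detectAndCompute(right, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desL, desR, k=2)
    points = []
    for m, n in matches:
        if m.distance > ratio * n.distance:
            continue                          # fails the ratio test
        (xL, yL), (xR, _) = kpL[m.queryIdx].pt, kpR[m.trainIdx].pt
        if xL - xR > 1e-3:                    # positive disparity only
            points.append((xL, yL, f_px * baseline_m / (xL - xR)))
    return points                             # (u, v, metric depth)
```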

A Study on Real-Time Localization and Map Building of Mobile Robot using Monocular Camera (단일 카메라를 이용한 이동 로봇의 실시간 위치 추정 및 지도 작성에 관한 연구)

  • Jung, Dae-Seop; Choi, Jong-Hoon; Jang, Chul-Woong; Jang, Mun-Suk; Kong, Jung-Shik; Lee, Eung-Hyuk; Shim, Jae-Hong
    • Proceedings of the KIEE Conference, 2006.10c, pp.536-538, 2006
  • The most important tasks for a mobile robot are building a map of the surrounding environment and estimating its own location. This paper proposes a real-time localization and map-building method based on 3D reconstruction of scale-invariant features from a monocular camera. A mobile robot with a monocular camera facing a wall extracts scale-invariant features from each image using SIFT (Scale Invariant Feature Transform) as it follows the wall. Matching is carried out on the extracted features, and a feature map is built by transforming the matched features into absolute coordinates through 3D point reconstruction and geometric analysis of the surrounding environment, then stored in a map database. After the feature map is built, the robot finds points that match the previous feature map and computes its pose from affine parameters in real time. The position error of the proposed method was at most 8 cm, and the angle error was within 10°.
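
Recovering the pose from affine parameters, as described above, can be sketched with OpenCV's partial (similarity) affine estimate between the current features and the stored feature map: the heading comes from the rotation part and the offset from the translation part. The map representation and function names are illustrative assumptions.

```python
# Minimal sketch: pose from a partial affine fit between current feature
# positions and the stored feature map.
import cv2
import numpy as np

def pose_from_map(cur_pts, map_pts):
    """cur_pts, map_pts: index-aligned Nx2 arrays of matched features."""
    A, _ = cv2.estimateAffinePartial2D(np.float32(cur_pts),
                                       np.float32(map_pts))
    theta = np.degrees(np.arctan2(A[1, 0], A[0, 0]))  # rotation (deg)
    return theta, (A[0, 2], A[1, 2])                  # heading, translation
```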

Pan-sharpening Effect in Spatial Feature Extraction

  • Han, Dong-Yeob; Lee, Hyo-Seong
    • Korean Journal of Remote Sensing, v.27 no.3, pp.359-367,
    • 2011
  • A suitable pan-sharpening method has to be chosen with respect to the used spectral characteristic of the multispectral bands and the intended application. The research on pan-sharpening algorithm in improving the accuracy of image classification has been reported. For a classification, preserving the spectral information is important. Other applications such as road detection depend on a sharp and detailed display of the scene. Various criteria applied to scenes with different characteristics should be used to compare the pan-sharpening methods. The pan-sharpening methods in our research comprise rather common techniques like Brovey, IHS(Intensity Hue Saturation) transform, and PCA(Principal Component Analysis), and more complex approaches, including wavelet transformation. The extraction of matching pairs was performed through SIFT descriptor and Canny edge detector. The experiments showed that pan-sharpening techniques for spatial enhancement were effective for extracting point and linear features. As a result of the validation it clearly emphasized that a suitable pan-sharpening method has to be chosen with respect to the used spectral characteristic of the multispectral bands and the intended application. In future it is necessary to design hybrid pan-sharpening for the updating of features and land-use class of a map.