• Title/Summary/Keyword: Scale-Invariant Features


3D Object Recognition Using Appearance Model Space of Feature Point (특징점 Appearance Model Space를 이용한 3차원 물체 인식)

  • Joo, Seong Moon;Lee, Chil Woo
    • KIPS Transactions on Software and Data Engineering / v.3 no.2 / pp.93-100 / 2014
  • 3D object recognition from 2D images alone is difficult because each image varies with the viewing direction of the camera. Since the SIFT algorithm defines only local features of the projected images, recognition is particularly limited for input images with strong perspective transformation. In this paper, we propose an object recognition method that improves on the SIFT algorithm by using several sequential images captured while rotating a 3D object around a rotation axis. We exploit the geometric relationship between adjacent images and merge the images into a generated feature space during recognition. To isolate the effect of the proposed algorithm, we keep the camera position and illumination conditions constant. The method can recognize appearances of 3D objects that the previous approach, using the standard SIFT algorithm alone, cannot.
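The idea of merging features from several sequential views into one searchable model space can be sketched as follows. This is a hypothetical illustration with toy 4-D descriptors standing in for 128-D SIFT descriptors, not the authors' code:

```python
import numpy as np

def build_model_space(view_descriptors):
    """Merge descriptor sets from sequential views of a rotating object
    into a single feature space; remember which view each row came from."""
    descs = np.vstack(view_descriptors)                       # (N_total, D)
    view_ids = np.concatenate(
        [np.full(len(d), i) for i, d in enumerate(view_descriptors)])
    return descs, view_ids

def match_query(query_desc, model_descs):
    """Nearest-neighbor match in the merged space (Euclidean distance)."""
    dists = np.linalg.norm(model_descs - query_desc, axis=1)
    return int(np.argmin(dists)), float(dists.min())

# Toy example: three views, five 4-D descriptors each
views = [np.random.default_rng(i).random((5, 4)) for i in range(3)]
space, view_ids = build_model_space(views)
idx, dist = match_query(space[7], space)   # query with a known descriptor
```

Because all views share one space, a query taken from any viewing angle can match against features observed in any other view.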

Evaluation of Marker Images based on Analysis of Feature Points for Effective Augmented Reality (효과적인 증강현실 구현을 위한 특징점 분석 기반의 마커영상 평가 방법)

  • Lee, Jin-Young;Kim, Jongho
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.49-55 / 2019
  • This paper presents a marker image evaluation method, based on analyzing the distribution of objects in images and classifying images with repetitive patterns, for effective marker-based augmented reality (AR) system development. We measure the variance of feature point coordinates to identify marker images that are vulnerable to occlusion, since the distribution of objects determines how tracking performance degrades under partial occlusion. Moreover, we propose a method to classify images suitable for object recognition and tracking, based on the fact that the distributions of descriptor vectors in general images and in repetitive-pattern images differ significantly. Comprehensive experiments on marker images confirm that the proposed evaluation method distinguishes occlusion-vulnerable images and repetitive-pattern images very well. Furthermore, we suggest that the scale-invariant feature transform (SIFT) is superior to speeded-up robust features (SURF) for object tracking in marker images. The proposed method provides users with suitability information for various images and helps AR systems to be realized more effectively.

Recognition of Events by Human Motion for Context-aware Computing (상황인식 컴퓨팅을 위한 사람 움직임 이벤트 인식)

  • Cui, Yao-Huan;Shin, Seong-Yoon;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.47-57 / 2009
  • Event detection and recognition is an active and challenging topic in recent computer vision research. This paper describes a new method for recognizing events caused by human motion in video sequences from an office environment. The proposed approach analyzes human motion using Motion History Image (MHI) sequences and is invariant to body shape, the type or color of clothes, and the positions of target objects. The method has two advantages: it is less sensitive to illumination changes than methods that use the color information of the objects of interest, and it is scale invariant compared with methods that rely on prior knowledge such as the appearance or shape of the objects of interest. Combined with edge detection, geometric characteristics of the human shape in the MHI sequences serve as the features. A further advantage is that the event detection framework is easy to extend by inserting descriptions of new events. The proposed method is a core technology for event detection systems based on context-aware computing as well as for surveillance systems based on computer vision techniques.
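The MHI representation behind this approach follows a standard update rule (a generic formulation, not the paper's exact code): pixels that move in the current frame are stamped with a duration value tau, and all other pixels decay, so recent motion appears bright and older motion fades:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=10):
    """One Motion History Image update step: moving pixels are set to
    tau, non-moving pixels decay by 1 (clamped at 0)."""
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0)).astype(mhi.dtype)

# A pixel that moves once and then stops fades over the following frames
mhi = np.zeros((3, 3), dtype=np.int32)
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
mhi = update_mhi(mhi, mask)        # center pixel stamped with tau = 10
still = np.zeros((3, 3), dtype=bool)
for _ in range(4):
    mhi = update_mhi(mhi, still)   # decays: 10 -> 9 -> 8 -> 7 -> 6
```

The motion mask would normally come from frame differencing or background subtraction; shape features are then extracted from the resulting MHI, which is why the method does not depend on clothing color.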

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin;Jung, Minyoung;Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1505-1514 / 2022
  • Image matching is a crucial preprocessing step for the effective use of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL), which is attracting widespread interest, has proven to be an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging because DL results depend on the quantity and quality of the training dataset, and creating a training dataset from VHR satellite images is difficult. This study therefore examines the feasibility of a DL-based method for matching-pair extraction, the most time-consuming step of image registration, and analyzes the factors that affect accuracy depending on how the training dataset is configured when it is built from an existing, biased multi-sensor VHR image database. The training dataset was composed of correct and incorrect matching pairs, labeled true and false, extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN) proposed for matching-pair extraction is trained on the constructed dataset and measures similarity by passing the two images in parallel through two identical convolutional network branches. The results confirm that data acquired from a VHR satellite image database can be used as a DL training dataset and indicate the potential to improve the efficiency of the matching process through appropriate configuration of multi-sensor images. DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods thanks to their stable performance, and to develop further into an integrated DL-based image registration framework.
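The Siamese principle described above, two branches with identical shared weights whose outputs are compared, can be illustrated with a minimal linear stand-in for the paper's convolutional branches. This is a toy sketch under stated assumptions, not the SCNN itself:

```python
import numpy as np

def embed(x, W):
    """Shared embedding branch: one linear layer plus ReLU. In the real
    SCNN this would be a stack of convolutional layers; W is the weight
    matrix shared by both branches."""
    return np.maximum(W @ x, 0.0)

def siamese_distance(patch_a, patch_b, W):
    """Pass both patches through identical weights and compare embeddings;
    a smaller Euclidean distance suggests a correct matching pair."""
    return float(np.linalg.norm(embed(patch_a, W) - embed(patch_b, W)))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))           # shared weights for both branches
patch = rng.standard_normal(16)            # flattened toy image patch
same = siamese_distance(patch, patch, W)   # identical patches -> distance 0
diff = siamese_distance(patch, rng.standard_normal(16), W)
```

Weight sharing is the key design choice: because both inputs pass through the same parameters, the learned distance is symmetric and the network cannot treat the two sensors' images differently.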

Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects (실제 물체를 이용한 모바일 비전 기술 기반의 실감형 갤러그의 설계 및 구현)

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society / v.8 no.2 / pp.85-96 / 2008
  • Recently, research on augmented games as a new game genre has attracted much attention. An augmented game overlays virtual objects on an augmented reality (AR) environment, allowing players to interact with that environment by manipulating real and virtual objects. However, it is difficult to release existing augmented games to ordinary players, as they generally rely on very expensive and inconvenient 'backpack' systems. To solve this problem, several augmented games have been proposed for camera-equipped mobile devices, but they can only be played at a prepared location, since a 'color marker' or 'pattern marker' is needed to register the virtual objects with the real environment. Accordingly, this paper introduces an augmented game called augmented Galaga, based on the well-known traditional Galaga and executed on mobile devices, so that players can experience the game without any economic burden. Augmented Galaga uses real objects in real environments and recognizes them using scale-invariant features (SIFT) and Euclidean distance. Virtual aliens appear randomly around specific objects; several such objects are used to heighten interest, and players attack the aliens by pointing the mobile device at a specific object and clicking a button on the device. As a result, we expect augmented Galaga to provide an exciting experience without economic burden, based on a game paradigm in which the user interacts both with the physical world captured by the mobile camera and with the virtual aliens generated by the mobile device.
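Recognition by "SIFT features and Euclidean distance" usually means nearest-neighbor matching with a ratio test. A minimal sketch with toy 8-D descriptors in place of real 128-D SIFT descriptors (an assumption for illustration, not the game's actual code):

```python
import numpy as np

def count_matches(query_descs, object_descs, ratio=0.75):
    """Count ratio-test matches between query-frame descriptors and a
    stored object's descriptors: a match is kept only if the best
    Euclidean distance clearly beats the second best."""
    good = 0
    for q in query_descs:
        d = np.sort(np.linalg.norm(object_descs - q, axis=1))
        if d[0] < ratio * d[1]:   # best match clearly better than runner-up
            good += 1
    return good

rng = np.random.default_rng(1)
obj = rng.random((20, 8))                            # stored object model
noisy = obj[:5] + 0.001 * rng.standard_normal((5, 8))  # re-observed features
matches = count_matches(noisy, obj)
```

If the match count for a stored object exceeds a threshold, the object is considered present in the camera frame and aliens can be spawned around it.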


Feature-based Non-rigid Registration between Pre- and Post-Contrast Lung CT Images (조영 전후의 폐 CT 영상 정합을 위한 특징 기반의 비강체 정합 기법)

  • Lee, Hyun-Joon;Hong, Young-Taek;Shim, Hack-Joon;Kwon, Dong-Jin;Yun, Il-Dong;Lee, Sang-Uk;Kim, Nam-Kug;Seo, Joon-Beom
    • Journal of Biomedical Engineering Research / v.32 no.3 / pp.237-244 / 2011
  • In this paper, a feature-based registration technique is proposed for pre- and post-contrast lung CT images. It utilizes three-dimensional (3-D) features with their descriptors and estimates feature correspondences by nearest-neighbor matching in the feature space. We design a transformation model between the input image pairs using a free-form deformation (FFD) based on B-splines. Registration is achieved by minimizing an energy function that incorporates the smoothness of the FFD and the correspondence information, using a nonlinear conjugate gradient method. To deal with outliers in feature matching, our energy model integrates a robust estimator that discards outliers effectively by iteratively reducing a radius of confidence during minimization. Performance was evaluated in terms of accuracy and efficiency on seven pairs of clinical lung CT images. For quantitative assessment, a radiologist specialized in the thorax manually placed landmarks on each CT image pair. Compared with a conventional feature-based registration method, our algorithm showed improved accuracy and efficiency.
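The shrinking-confidence-radius idea can be sketched in isolation. This is a simplified stand-in for the paper's robust estimator: in the full algorithm the FFD is re-optimized and residuals are recomputed after each rejection round, whereas here the residuals are held fixed for clarity:

```python
import numpy as np

def robust_inliers(residuals, radii=(8.0, 4.0, 2.0, 1.0)):
    """Iteratively shrink a confidence radius and drop correspondences
    whose matching residual exceeds it; survivors are treated as inliers.
    (Real registration would re-estimate residuals between rounds.)"""
    keep = np.ones(len(residuals), dtype=bool)
    for r in radii:               # radii decrease as optimization proceeds
        keep &= residuals <= r
    return keep

# Two correspondences with large residuals get rejected as outliers
residuals = np.array([0.2, 0.5, 3.0, 0.8, 12.0])
inliers = robust_inliers(residuals)
```

Gradually tightening the radius, rather than applying the final threshold at once, lets early optimization steps use many correspondences while the final steps are driven only by the most reliable ones.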