• Title/Summary/Keyword: SIFT Features


Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung; Cho, Seong-Won; Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems, v.22 no.2, pp.168-173, 2012
  • This paper presents a new human identification algorithm that uses partial features from the uncovered portion of the face when a person wears a mask. After the face area is detected, features are extracted from the eye region above the mask, and identification is performed by comparing the acquired features with the registered ones. The SIFT (scale-invariant feature transform) algorithm is used for feature extraction; the extracted features are invariant to brightness, scale, and rotation of the image. The experimental results show the effectiveness of the suggested algorithm.
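
The pipeline described in this abstract (SIFT descriptors from the eye region, matched against registered features) can be sketched with OpenCV. This is a minimal illustration, not the paper's implementation: the Haar-cascade face detector, the upper-half-of-face crop, the ratio-test threshold, and the file names are all assumptions.

```python
import cv2

# Minimal sketch: extract SIFT descriptors from the eye region of a probe
# image and match them against a registered (gallery) image. The eye-region
# crop is a fixed fraction of the detected face box, which is an assumption;
# the paper's own detection step may differ.
sift = cv2.SIFT_create()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eye_region_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = gray[y:y + h // 2, x:x + w]   # upper half of the face (above a mask)
    _, desc = sift.detectAndCompute(roi, None)
    return desc

probe = eye_region_descriptors("probe.jpg")         # hypothetical file names
gallery = eye_region_descriptors("registered.jpg")

# Ratio-test matching; more good matches suggests the same identity.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(probe, gallery, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print("good matches:", len(good))
```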

Image-based Image Retrieval System Using Duplicated Point of PCA-SIFT (PCA-SIFT의 차원 중복점을 이용한 이미지 기반 이미지 검색 시스템)

  • Choi, GiRyong; Jung, Hye-Wuk; Lee, Jee-Hyoung
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.3, pp.275-279, 2013
  • Recently, as multimedia information has become widespread, many studies have addressed image-based image retrieval on the web. However, it is hard to find the matching images that users want because of the varied patterns in images. In this paper, we propose an efficient image-based image retrieval system for finding products in internet shopping malls. We extract features for image retrieval with the SIFT (Scale Invariant Feature Transform) algorithm, repeat keypoint matching in various dimensions using PCA-SIFT, and find the image the user is searching for by combining the results. To verify the efficiency of the proposed method, we compare its performance with that of SIFT and PCA-SIFT on images with various patterns. The proposed method shows the best discrimination when product labels are not included in the images.
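
A simplified sketch of matching at several reduced dimensions and keeping keypoints that match repeatedly ("duplicated points"). Note that PCA-SIFT proper projects normalized gradient patches onto a pre-trained eigenspace; here, as a stand-in, PCA is applied to standard 128-D SIFT descriptors, and the reduced dimensions, ratio-test threshold, and file names are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

sift = cv2.SIFT_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc.astype(np.float32)

query, candidate = descriptors("query.jpg"), descriptors("product.jpg")
stacked = np.vstack([query, candidate])

# Match at several reduced dimensions and count how often each query
# keypoint matches; repeated matches are the "duplicated points".
match_counts = {}
for dim in (20, 36, 64):                            # reduced dimensions (assumed)
    pca = PCA(n_components=dim).fit(stacked)
    q = pca.transform(query).astype(np.float32)
    c = pca.transform(candidate).astype(np.float32)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    for m, n in matcher.knnMatch(q, c, k=2):
        if m.distance < 0.75 * n.distance:
            match_counts[m.queryIdx] = match_counts.get(m.queryIdx, 0) + 1

duplicated = [i for i, count in match_counts.items() if count >= 2]
print("keypoints matched in at least two dimensions:", len(duplicated))
```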

Recognition of 3D Environment for Intelligent Robots (지능로봇을 위한 3차원 환경인식)

  • Jang, Dae-Sik
    • Journal of Internet Computing and Services, v.7 no.5, pp.135-145, 2006
  • This paper presents a novel approach to real-time recognition of the 3D environment and objects for intelligent robots. First, we establish the three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to the development of an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them with their models in the database based on 3D registration. 3) It models the geometric details on the fly, adaptively to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, and fast registration of scenes, as well as for overcoming the incomplete and noisy nature of point clouds. The experimental results show the feasibility of real-time and behavior-oriented 3D modeling of the workspace for robotic manipulative tasks.
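
A rough sketch of how SIFT keypoints can be given 3D positions from a rectified stereo pair, in the spirit of the "stereo-sis SIFT" features mentioned in the abstract. The focal length, baseline, epipolar tolerance, and file names are assumptions for illustration, and the principal point is omitted for brevity.

```python
import cv2
import numpy as np

FOCAL_PX, BASELINE_M = 700.0, 0.12     # assumed camera parameters

sift = cv2.SIFT_create()
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
points_3d = []
for m, n in matches:
    if m.distance >= 0.75 * n.distance:
        continue
    (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
    disparity = xl - xr
    if abs(yl - yr) > 2 or disparity <= 0:   # enforce rectified epipolar geometry
        continue
    z = FOCAL_PX * BASELINE_M / disparity    # depth from disparity
    points_3d.append((xl * z / FOCAL_PX, yl * z / FOCAL_PX, z))

print("SIFT keypoints with 3D positions:", len(points_3d))
```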


Dynamic Stitching Algorithm for 4-channel Surround View System using SIFT Features (SIFT 특징점을 이용한 4채널 서라운드 시스템의 동적 영상 정합 알고리즘)

  • Joongjin Kook; Daewoong Kang
    • Journal of the Semiconductor & Display Technology, v.23 no.1, pp.56-60, 2024
  • In this paper, we propose a SIFT feature-based dynamic stitching algorithm for the image calibration and correction of a 360-degree surround view system. Existing surround view systems require considerable processing time and cost in the calibration and correction process, because traditional marker patterns must be placed around the vehicle and the correction is performed manually. In this study, therefore, the images captured by the four fisheye cameras of the surround view system were corrected for lens distortion and then matched on common feature points in adjacent images through SIFT-based feature point extraction, enabling image stitching without a fixed marker pattern.
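
A minimal sketch of the marker-free stitching step for one pair of adjacent views: SIFT keypoints are matched, a homography is estimated with RANSAC, and one (already distortion-corrected) view is warped onto the other. File names, the ratio-test threshold, and the output canvas size are assumptions.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
img_front = cv2.imread("front.jpg")    # hypothetical adjacent camera views
img_left = cv2.imread("left.jpg")

kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img_front, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY), None)

# Ratio-test matching followed by RANSAC homography estimation.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the front view into the left view's coordinate frame and composite.
canvas = cv2.warpPerspective(img_front, H, (img_left.shape[1] * 2, img_left.shape[0]))
canvas[:img_left.shape[0], :img_left.shape[1]] = img_left
cv2.imwrite("stitched.jpg", canvas)
```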


A Scheme for Matching Satellite Images Using SIFT (SIFT를 이용한 위성사진의 정합기법)

  • Kang, Suk-Chen; Whoang, In-Teck; Choi, Kwang-Nam
    • Journal of Internet Computing and Services, v.10 no.4, pp.13-23, 2009
  • In this paper we propose an approach for localizing objects in satellite images. Our method exploits feature matching based on description vectors. We apply the Scale Invariant Feature Transform (SIFT) to object localization. First, we find keypoints of the satellite images and of the objects and generate description vectors for the keypoints. Next, we calculate the similarity between description vectors and obtain matched keypoints. Finally, we weight the pixels adjacent to the keypoints and determine the location of the matched object. Experiments on object localization using SIFT show good results on variously scaled and affine-transformed images. The proposed method uses Google Earth satellite images.
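
A rough sketch of keypoint-vote localization in the spirit of the method: SIFT matches between an object template and the satellite scene vote into an accumulator, the votes are spread to adjacent pixels, and the peak is taken as the object location. The Gaussian spreading stands in for the paper's adjacent-pixel weighting, and all file names and parameters are assumptions.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
scene = cv2.imread("satellite_scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)

kp_t, des_t = sift.detectAndCompute(template, None)
kp_s, des_s = sift.detectAndCompute(scene, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_s, k=2)

# Each good match votes at the position of its scene keypoint.
votes = np.zeros(scene.shape, dtype=np.float32)
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        x, y = map(int, kp_s[m.trainIdx].pt)
        votes[y, x] += 1.0

votes = cv2.GaussianBlur(votes, (0, 0), sigmaX=15)   # spread votes to neighbours
y, x = np.unravel_index(np.argmax(votes), votes.shape)
print("estimated object location:", (x, y))
```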


A Comparative Study of Local Features in Face-based Video Retrieval

  • Zhou, Juan; Huang, Lan
    • Journal of Computing Science and Engineering, v.11 no.1, pp.24-31, 2017
  • Face-based video retrieval has become an active and important branch of intelligent video analysis. Face profiling and matching is a fundamental step and is crucial to the effectiveness of video retrieval. Although many algorithms have been developed for processing static face images, their effectiveness in face-based video retrieval is still unknown, simply because videos have different resolutions, faces vary in scale, and different lighting conditions and angles are used. In this paper, we combined content-based and semantic-based image analysis techniques, and systematically evaluated four mainstream local features to represent face images in the video retrieval task: Harris operators, SIFT and SURF descriptors, and eigenfaces. Results of ten independent runs of 10-fold cross-validation on datasets consisting of TED (Technology Entertainment Design) talk videos showed the effectiveness of our approach, where the SIFT descriptors achieved an average F-score of 0.725 in video retrieval and thus were the most effective, while the SURF descriptors were computed in 0.3 seconds per image on average and were the most efficient in most cases.
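
A small timing harness in the spirit of this comparison, extracting each local feature type from the same face image and reporting keypoint counts and times. It is only a sketch: SURF lives in opencv-contrib (non-free builds) and may be unavailable, eigenfaces are a global representation and are omitted, and the file name and detector parameters are assumptions.

```python
import time
import cv2

img = cv2.imread("face_frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical frame

def timed(name, fn):
    start = time.perf_counter()
    kp = fn()
    print(f"{name}: {len(kp)} keypoints in {time.perf_counter() - start:.3f}s")

timed("Harris corners", lambda: cv2.goodFeaturesToTrack(
    img, maxCorners=500, qualityLevel=0.01, minDistance=5,
    useHarrisDetector=True))
timed("SIFT", lambda: cv2.SIFT_create().detect(img, None))
if hasattr(cv2, "xfeatures2d"):                            # contrib-only module
    timed("SURF", lambda: cv2.xfeatures2d.SURF_create(400).detect(img, None))
```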

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun; Moon, Won-Jun; Seo, Young-Ho; Kim, Dong-Wook
    • Journal of Broadcast Engineering, v.24 no.2, pp.234-242, 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image is a SIFT feature point. The dataset for this network consists of the DIV2K dataset cut into 33×33 patches and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of RobHess SIFT features extracted by setting the octave (scale) to 0, the sigma to 1.6, and the intervals to 3. Based on VGG-16, we construct increasingly deep networks of 13, 23, and 33 convolution layers and experiment with different ways of increasing the image scale. The result of using the sigmoid function as the activation function of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also has high extraction repeatability for distorted images.
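
A toy sketch of the patch-classification idea: a small VGG-style CNN that takes a 33×33 RGB crop and predicts whether its centre pixel is a SIFT keypoint. Layer counts and channel widths here are illustrative assumptions; the paper's networks are much deeper (13 to 33 convolution layers) and are trained against RobHess SIFT ground truth.

```python
import torch
import torch.nn as nn

class SiftPatchClassifier(nn.Module):
    """Binary classifier: is the centre pixel of a 33x33 patch a SIFT keypoint?"""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1))     # sigmoid vs. softmax head is the paper's comparison

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

model = SiftPatchClassifier()
patches = torch.randn(8, 3, 33, 33)   # batch of 33x33 RGB crops
print(model(patches).shape)           # (8, 1) keypoint probabilities
```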

Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita; Sarangi, Sunita; Patnaik, Srikanta; Sabut, Sukant
    • Journal of information and communication convergence engineering, v.12 no.4, pp.263-270, 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognizing an object. In this paper we analyze object detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared to the conventional SIFT algorithm, an object recognition system based on the FAST corner detector yields increased speed with little performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints. Similarly, the average time to find corner points was 0.651 seconds for detecting 1714 keypoints with the FAST method at threshold 30. Thus the FAST method detects corner points faster, with better-quality images for object recognition.
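
A quick sketch of the SIFT-versus-FAST comparison: detect keypoints on the same image with both detectors and time them, with FAST at threshold 30 as in the abstract. The test image path is an assumption.

```python
import time
import cv2

img = cv2.imread("test_object.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
start = time.perf_counter()
sift_kp = sift.detect(img, None)
print(f"SIFT: {len(sift_kp)} keypoints in {time.perf_counter() - start:.3f}s")

fast = cv2.FastFeatureDetector_create(threshold=30)
start = time.perf_counter()
fast_kp = fast.detect(img, None)
print(f"FAST: {len(fast_kp)} corners in {time.perf_counter() - start:.3f}s")
```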

Recognition and Modeling of 3D Environment based on Local Invariant Features (지역적 불변특징 기반의 3차원 환경인식 및 모델링)

  • Jang, Dae-Sik
    • Journal of the Korea Society of Computer and Information, v.11 no.3, pp.31-39, 2006
  • This paper presents a novel approach to real-time recognition of the 3D environment and objects for various applications such as intelligent robots, intelligent vehicles, and intelligent buildings. First, we establish the three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to the development of an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them with their models in the database based on 3D registration. 3) It models the geometric details on the fly, adaptively to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, and fast registration of scenes, as well as for overcoming the incomplete and noisy nature of point clouds.
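
A bare-bones RANSAC plane fit illustrating the "global plane features" step: given 3D points (for example from stereo-SIFT or a point cloud), find the dominant plane. The iteration count, inlier threshold, and synthetic point cloud are illustrative assumptions.

```python
import numpy as np

def dominant_plane(points, iters=500, threshold=0.02):
    """Fit the best-supported plane to an (N, 3) point array with RANSAC."""
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[np.random.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                         # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = np.count_nonzero(dist < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, p0)
    return best_model, best_inliers

cloud = np.random.rand(5000, 3)             # stand-in for a real point cloud
cloud[:3000, 2] = 0.5                       # synthetic horizontal plane at z = 0.5
model, support = dominant_plane(cloud)
print("plane normal:", model[0], "supporting points:", support)
```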


Automatic Registration Method for EO/IR Satellite Image Using Modified SIFT and Block-Processing (Modified SIFT와 블록프로세싱을 이용한 적외선과 광학 위성영상의 자동정합기법)

  • Lee, Kang-Hoon; Choi, Tae-Sun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.4 no.3, pp.174-181, 2011
  • A new registration method for IR and EO images is proposed in this paper. The IR sensor is applicable in many areas because it absorbs thermal radiation energy, unlike the EO sensor. However, it is difficult to extract and match features in IR images because of their low contrast compared to EO images. In order to register the two images, we use a modified SIFT (Scale Invariant Feature Transform) and block processing to increase feature distinctiveness. To remove outliers, we apply RANSAC (RANdom SAmple Consensus) to each block. Finally, we unify the matched features into a single coordinate system and remove outliers again. We used IR images in the 3~5 μm range, and our experimental results showed good robustness in registration with IR images.
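
A simplified sketch of the block-processing idea: split both images into a grid, match SIFT features block by block, keep each block's RANSAC inliers, then estimate one unified transform from the pooled inliers. Standard SIFT stands in for the paper's modified SIFT, an affine model stands in for its unified coordinate system, and the grid size, thresholds, and file names are assumptions.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
eo = cv2.imread("eo.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)
rows, cols, grid = eo.shape[0], eo.shape[1], 4

src_all, dst_all = [], []
for by in range(grid):
    for bx in range(grid):
        y0, y1 = by * rows // grid, (by + 1) * rows // grid
        x0, x1 = bx * cols // grid, (bx + 1) * cols // grid
        kp1, d1 = sift.detectAndCompute(eo[y0:y1, x0:x1], None)
        kp2, d2 = sift.detectAndCompute(ir[y0:y1, x0:x1], None)
        if d1 is None or d2 is None or len(kp1) < 4 or len(kp2) < 4:
            continue
        good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
                if m.distance < 0.8 * n.distance]
        if len(good) < 4:
            continue
        offset = np.float32([x0, y0])       # back to full-image coordinates
        src = np.float32([kp1[m.queryIdx].pt for m in good]) + offset
        dst = np.float32([kp2[m.trainIdx].pt for m in good]) + offset
        _, inl = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if inl is None:
            continue
        src_all.extend(src[inl.ravel() == 1])   # keep per-block inliers only
        dst_all.extend(dst[inl.ravel() == 1])

# Unified transform from all blocks' inliers, with a final RANSAC pass.
M, _ = cv2.estimateAffinePartial2D(np.float32(src_all), np.float32(dst_all),
                                   method=cv2.RANSAC)
registered = cv2.warpAffine(eo, M, (cols, rows))
cv2.imwrite("eo_registered_to_ir.png", registered)
```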