• Title/Summary/Keyword: Invariant Feature


Definition and Analysis of Shadow Features for Shadow Detection in Single Natural Image (단일 자연 영상에서 그림자 검출을 위한 그림자 특징 요소들의 정의와 분석)

  • Park, Ki Hong;Lee, Yang Sun
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.165-171
    • /
    • 2018
  • Shadow is a physical phenomenon observed in natural scenes and has a negative effect on various image processing systems such as intelligent video surveillance, traffic surveillance, and aerial imagery analysis. Therefore, shadow detection should be considered as a preprocessing step in all areas of computer vision. In this paper, we define and analyze various feature elements for shadow detection in a single natural image that does not require a reference image. The shadow features comprise the intensity, chromaticity, illuminant-invariant, and color-invariance images, as well as the entropy image, which indicates the uncertainty of the information. The results show that the chromaticity and illuminant-invariant images are effective for shadow detection. In future work, we will define a fusion map of the various shadow feature elements, continue studying shadow detection that can adapt to various lighting levels, and investigate shadow removal using the chromaticity and illuminant-invariant images.
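
The chromaticity and illuminant-invariant features mentioned in the abstract above can be illustrated with a short NumPy sketch. This is only a minimal illustration, not the authors' implementation; the use of G as the reference channel for the log band ratios and the epsilon guard are assumptions.

```python
import numpy as np

def chromaticity_image(rgb):
    """Normalized-RGB chromaticity: each channel divided by the channel sum.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Lit and shadowed versions of the same surface tend to keep similar
    chromaticity, which is why it is useful as a shadow feature.
    """
    s = rgb.sum(axis=2, keepdims=True) + 1e-6   # guard against division by zero
    return rgb / s

def log_band_ratios(rgb):
    """2D log band-ratio representation, e.g. log(R/G) and log(B/G),
    a common starting point for illuminant-invariant images."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)
```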

A Study on Automatic Coregistration and Band Selection of Hyperion Hyperspectral Images for Change Detection (변화탐지를 위한 Hyperion 초분광 영상의 자동 기하보정과 밴드선택에 관한 연구)

  • Kim, Dae-Sung;Kim, Yong-Il;Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.5
    • /
    • pp.383-392
    • /
    • 2007
  • This study focuses on co-registration and band selection, which are among the pre-processing steps required to apply change detection techniques to hyperspectral images. We carried out automatic co-registration using the SIFT algorithm, whose performance is well established in the computer vision field, and selected the bands for change detection by estimating image noise through pseudo-invariant features (PIFs), which reflect radiometric consistency. The EM algorithm was also applied to select the bands objectively. Hyperion images were used to test the proposed techniques, and the non-calibrated bands and striping noise contained in the Hyperion images were removed. The results show that the co-registration procedure is reliable, achieving an accuracy within 0.2 pixels (RMSE) for change detection, and that band selection, which otherwise depends on visual inspection, can be made objective by extracting the PIFs.
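
As a rough illustration of the SIFT-based automatic co-registration step, the sketch below matches SIFT keypoints between a reference band and a target band with OpenCV and estimates an aligning affine transform with RANSAC. It assumes OpenCV >= 4.4 (for cv2.SIFT_create) and 8-bit single-band inputs; the Lowe ratio and RANSAC threshold are illustrative values, not taken from the paper.

```python
import cv2
import numpy as np

def coregister_sift(ref_band, target_band, ransac_thresh=3.0):
    """Align target_band to ref_band using matched SIFT keypoints."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_band, None)
    kp_tgt, des_tgt = sift.detectAndCompute(target_band, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_tgt, des_ref, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp_tgt[m.queryIdx].pt for m in good])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good])
    # Robustly estimate the transform mapping target coordinates to reference coordinates.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=ransac_thresh)
    h, w = ref_band.shape[:2]
    return cv2.warpAffine(target_band, M, (w, h)), M
```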

Rotation and Translation Invariant Feature Extraction Using Angular Projection in Frequency Domain (주파수 영역에서 각도 투영법을 이용한 회전 및 천이 불변 특징추출)

  • Lee, Bum-Shik;Kim, Mun-Churl
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.699-704
    • /
    • 2006
  • This paper introduces a new approach to rotation- and translation-invariant image texture retrieval. We propose an angular projection that projects along the angular direction at a fixed spatial frequency in the polar-coordinate representation of the frequency domain, and we use the sum and standard deviation of the Fourier coefficients obtained through this angular projection as the feature vector. To implement the angular projection easily, a Radon transform is performed in polar coordinates. Experiments were carried out on MPEG-7 data, and the results show that the features discriminate well when retrieving various texture images. The proposed rotation- and translation-invariant feature extraction algorithm also achieves efficient retrieval rates for isotropic textures and for textures with local directionality.
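
A minimal sketch of the angular-projection idea: the Fourier magnitude is translation invariant, and aggregating it along the angular direction at each spatial frequency gives rotation-invariant statistics. The polar sampling below uses nearest-neighbour lookup rather than the Radon transform used in the paper, and the grid sizes are illustrative.

```python
import numpy as np

def angular_projection_features(img, n_radii=32, n_angles=180):
    """Rotation- and translation-invariant texture features via angular projection."""
    # Translation invariance: take the magnitude of the 2D Fourier transform.
    f = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = f.shape
    cy, cx = h / 2.0, w / 2.0
    max_r = min(cy, cx) - 1

    radii = np.linspace(1, max_r, n_radii)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    # Sample the spectrum on a polar grid (nearest neighbour for simplicity).
    ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).round().astype(int)
    polar = f[ys, xs]                                   # shape (n_radii, n_angles)

    # Angular projection: rotating the image only shifts the angular axis,
    # so the sum and standard deviation over angles are rotation invariant.
    return np.concatenate([polar.sum(axis=1), polar.std(axis=1)])
```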


Fast Computation of Zernike Moments Using Three Look-up Tables

  • Kim, Sun-Gi;Kim, Whoi-Yul;Kim, Young-Sum;Park, Chee-Hang
    • Journal of Electrical Engineering and Information Science
    • /
    • v.2 no.6
    • /
    • pp.156-161
    • /
    • 1997
  • Zernike moments have been among the most commonly used feature vectors for recognizing rotated patterns due to their rotation-invariant characteristics. In order to reduce their high computational cost, several methods have been proposed to lower the complexity. One of these methods, proposed by Mukundan and K. R. Ramakrishnan [1], however, is not rotation invariant. In this paper, we propose another method that not only reduces the computational cost but also preserves the rotation-invariant characteristics. In the experiments, we compare our method with others in terms of computing time and the accuracy of the moment features at different rotation angles of an object in the image.
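
For context, the sketch below computes a single Zernike moment magnitude directly from its definition, i.e. the expensive computation that look-up-table methods are designed to accelerate. It is not the authors' fast method, and the area-normalization convention used here is one of several in circulation.

```python
import numpy as np
from math import factorial

def zernike_moment_magnitude(img, n, m):
    """|Z_nm| of a 2D grayscale image mapped onto the unit disk.

    Rotating the image only changes the phase of Z_nm, so the magnitude is
    rotation invariant. Requires n >= |m| >= 0 and (n - |m|) even.
    """
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    xn = (2 * x - w + 1) / (w - 1)          # map pixel coordinates to [-1, 1]
    yn = (2 * y - h + 1) / (h - 1)
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0                        # keep only the unit disk

    # Radial polynomial R_n^|m|(rho).
    m_abs = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m_abs) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m_abs) // 2 - s)
                * factorial((n - m_abs) // 2 - s)))
        R += c * rho ** (n - 2 * s)

    V = R * np.exp(-1j * m * theta)                      # Zernike basis function
    Z = (n + 1) / np.pi * np.sum(img[mask] * V[mask])    # projection onto the basis
    return np.abs(Z)
```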


Improvement of ASIFT for Object Matching Based on Optimized Random Sampling

  • Phan, Dung;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents
    • /
    • v.9 no.2
    • /
    • pp.1-7
    • /
    • 2013
  • This paper proposes an efficient matching algorithm based on ASIFT (Affine Scale-Invariant Feature Transform), which is fully invariant to affine transformation. In our approach, we propose a method for reducing the cost of the similarity-measure matching and the number of outliers. First, we replace the Euclidean metric with a linear combination of the Manhattan and Chessboard metrics for measuring the similarity of keypoints. These two metrics are simple but efficient. Using our method, the computation time of the matching step is reduced and the number of correct matches is increased. By applying an Optimized Random Sampling Algorithm (ORSA), we remove most of the outlier matches to make the result meaningful. The method was tested on various combinations of affine transforms. The experimental results show that our method is superior to SIFT and ASIFT.
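
The replacement of the Euclidean metric can be sketched as a linear combination of the L1 (Manhattan) and L-infinity (Chessboard) distances between two descriptors; the mixing weight alpha below is an illustrative assumption, since the exact weights are not given in this abstract.

```python
import numpy as np

def combined_distance(desc_a, desc_b, alpha=0.5):
    """Linear combination of Manhattan and Chessboard distances.

    Both metrics only need absolute differences (no squares or square roots),
    which is what makes them cheaper than the Euclidean metric.
    """
    diff = np.abs(np.asarray(desc_a, float) - np.asarray(desc_b, float))
    manhattan = diff.sum()     # L1 distance
    chessboard = diff.max()    # L-infinity distance
    return alpha * manhattan + (1.0 - alpha) * chessboard
```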

Viewpoint Unconstrained Face Recognition Based on Affine Local Descriptors and Probabilistic Similarity

  • Gao, Yongbin;Lee, Hyo Jong
    • Journal of Information Processing Systems
    • /
    • v.11 no.4
    • /
    • pp.643-654
    • /
    • 2015
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays; however, real-world face recognition remains challenging. In this paper, we propose combining the Affine Scale-Invariant Feature Transform (Affine SIFT) with probabilistic similarity for face recognition under a large viewpoint change. Affine SIFT is an extension of the SIFT algorithm that detects affine-invariant local descriptors. It generates a series of different viewpoints using affine transformations, which allows for a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth, and Affine SIFT does not work well for significant changes in pose. To complement this, we combined it with a probabilistic similarity, which computes the log-likelihood between the probe and gallery face based on the sum of squared differences (SSD) distribution learned in an offline training process. Our experimental results show that our framework achieves noticeably better recognition accuracy than the other algorithms compared on the FERET database.
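
A minimal sketch of the probabilistic-similarity idea: model the SSD between aligned probe and gallery faces with two distributions learned offline (same identity vs. different identity) and score a pair by the log-likelihood ratio. The Gaussian form and the parameter names below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two aligned face images/patches."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sum(d * d))

def log_likelihood_ratio(probe, gallery, mu_same, var_same, mu_diff, var_diff):
    """Log-likelihood ratio of the SSD under two 1D Gaussians.

    mu_same/var_same and mu_diff/var_diff are assumed to be estimated in an
    offline training stage from same-identity and different-identity pairs.
    A higher value means the pair is more likely to be the same person.
    """
    s = ssd(probe, gallery)
    log_p_same = -0.5 * ((s - mu_same) ** 2 / var_same + np.log(2 * np.pi * var_same))
    log_p_diff = -0.5 * ((s - mu_diff) ** 2 / var_diff + np.log(2 * np.pi * var_diff))
    return log_p_same - log_p_diff
```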

Linear Regression-based 1D Invariant Image for Shadow Detection and Removal in Single Natural Image (단일 자연 영상에서 그림자 검출 및 제거를 위한 선형 회귀 기반의 1D 불변 영상)

  • Park, Ki-Hong
    • Journal of Digital Contents Society
    • /
    • v.19 no.9
    • /
    • pp.1787-1793
    • /
    • 2018
  • Shadow is a common phenomenon observed in natural scenes, but it has a negative influence on image analysis tasks such as object recognition, feature detection, and scene analysis. Therefore, detecting and removing the shadows contained in digital images must be considered a pre-processing step for image analysis. In this paper, existing methods for acquiring 1D invariant images, one of the feature elements for detecting and removing shadows in a single natural image, are reviewed, and a method for obtaining a 1D invariant image based on linear regression is proposed. The proposed method takes the logarithm of the band ratios between the channels of the RGB color image and obtains the grayscale image line by linear regression. The final 1D invariant image is obtained by projecting the log band-ratio image onto the estimated line. Experimental results show that the proposed method has lower computational complexity than the existing projection method based on entropy minimization, and that shadow detection and removal based on the 1D invariant image are performed effectively.
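
The linear-regression construction can be sketched as follows: form the 2D log band-ratio (log-chromaticity) values, fit a line to them by ordinary least squares, and project every point onto that line to obtain the grayscale 1D invariant image. The choice of G as the reference channel is an illustrative assumption.

```python
import numpy as np

def invariant_1d_image(rgb):
    """1D illuminant-invariant image from log band ratios via a fitted line."""
    eps = 1e-6
    rgb = rgb.astype(float) + eps
    # 2D log band-ratio coordinates, one point per pixel: (log(R/G), log(B/G)).
    u = np.log(rgb[..., 0] / rgb[..., 1]).ravel()
    v = np.log(rgb[..., 2] / rgb[..., 1]).ravel()

    # Fit v = slope * u + intercept by least squares (replacing the entropy
    # minimization search used by the earlier projection method).
    slope, _intercept = np.polyfit(u, v, deg=1)

    # Project each log band-ratio point onto the fitted line direction.
    direction = np.array([1.0, slope]) / np.hypot(1.0, slope)
    proj = u * direction[0] + v * direction[1]
    return proj.reshape(rgb.shape[:2])      # grayscale 1D invariant image
```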

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.442-448
    • /
    • 2011
  • There are many research results on self-localization techniques for mobile robots. In this paper, we present a self-localization technique based on features of the ceiling observed through a fisheye lens. The features obtained by SIFT (Scale-Invariant Feature Transform) can be matched between the previous image and the current image, and an optimal transformation is then derived. A fisheye lens naturally introduces distortion into its images, so it must be calibrated. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we show its feasibility in indoor environments.
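
As a generic stand-in for the fisheye-distortion handling described above (not the paper's own calibration model), the sketch below undistorts a ceiling image with OpenCV's fisheye module and then extracts SIFT features from the rectified image. The intrinsic matrix K and distortion vector D are assumed to come from a prior calibration, e.g. with cv2.fisheye.calibrate and a checkerboard.

```python
import cv2
import numpy as np

def undistorted_ceiling_features(bgr_img, K, D):
    """Undistort a fisheye ceiling image and extract SIFT keypoints from it.

    K: 3x3 camera matrix, D: 4-element fisheye distortion coefficients,
    both assumed to be known from an earlier calibration step.
    """
    h, w = bgr_img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    rectified = cv2.remap(bgr_img, map1, map2, interpolation=cv2.INTER_LINEAR)

    # SIFT features on the rectified ceiling view, ready to be matched
    # against the previous frame for self-localization.
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    return rectified, keypoints, descriptors
```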

Constructing 3D Outlines of Objects based on Feature Points using Monocular Camera (단일카메라를 사용한 특징점 기반 물체 3차원 윤곽선 구성)

  • Park, Sang-Heon;Lee, Jeong-Oog;Baik, Doo-Kwon
    • The KIPS Transactions: Part B
    • /
    • v.17B no.6
    • /
    • pp.429-436
    • /
    • 2010
  • This paper presents a method for extracting 3D outlines of objects from an image obtained with a monocular camera. The general outlines of an object are detected by the MOPS (Multi-Scale Oriented Patches) algorithm and their spatial coordinates are obtained. At the same time, the spatial coordinates of the feature points lying within the object outlines are obtained through the SIFT (Scale-Invariant Feature Transform) algorithm. The shape of the object is then captured by joining the spatial coordinates of the outlines and the SIFT feature points. Because the proposed method forms only the general outlines of objects, it enables rapid computation, and it also has the advantage of collecting detailed data, because the SIFT feature points supply information about the interior of the outlines.
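
A small sketch of the "SIFT points inside the object outline" step: here the outline comes from Otsu thresholding and findContours as a stand-in for the MOPS-based outline detection, since MOPS is not available in stock OpenCV; only the keypoints falling inside the detected outline are kept.

```python
import cv2

def keypoints_inside_outline(gray):
    """Return an object outline and the SIFT keypoints lying inside it.

    gray: 8-bit grayscale image. The outline is approximated by the largest
    contour of a binarized image (an assumption replacing MOPS).
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)

    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(gray, None)
    inside = [i for i, kp in enumerate(keypoints)
              if cv2.pointPolygonTest(outline, kp.pt, False) >= 0]
    kept_kps = [keypoints[i] for i in inside]
    kept_des = descriptors[inside] if descriptors is not None else None
    return outline, kept_kps, kept_des
```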

Classification of Feature Points Required for Multi-Frame Based Building Recognition (멀티 프레임 기반 건물 인식에 필요한 특징점 분류)

  • Park, Si-young;An, Ha-eun;Lee, Gyu-cheol;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.3
    • /
    • pp.317-327
    • /
    • 2016
  • The extraction of significant feature points from a video directly affects the performance of the proposed method. In particular, feature points in occlusion regions such as trees or people, or feature points extracted from the background rather than from objects (e.g., the sky or mountains), are insignificant and can degrade matching or recognition performance. This paper classifies the feature points required for building recognition using multiple frames in order to improve recognition performance. First, the primary feature points are extracted through SIFT (Scale-Invariant Feature Transform) and mismatched feature points are removed. To categorize the feature points in occlusion regions, RANSAC (Random Sample Consensus) is applied. Since the classified feature points are acquired through the matching process, one feature point may have multiple descriptors, so a process that consolidates them is also proposed. Experiments verify the effectiveness of the suggested method.
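
The SIFT-plus-RANSAC classification step can be sketched with OpenCV: match SIFT descriptors between two frames, drop ambiguous matches with a ratio test, and let RANSAC on a homography separate geometrically consistent matches from outliers such as occluded or background points. The ratio and reprojection thresholds are illustrative values, not taken from the paper.

```python
import cv2
import numpy as np

def classify_feature_points(frame_prev, frame_curr, ratio=0.75, reproj_thresh=5.0):
    """Split SIFT matches between two frames into RANSAC inliers and outliers."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame_prev, None)
    kp2, des2 = sift.detectAndCompute(frame_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # remove mismatches

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)

    keep = mask.ravel().astype(bool)
    inliers = [m for m, k in zip(good, keep) if k]
    outliers = [m for m, k in zip(good, keep) if not k]
    return inliers, outliers
```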