• Title/Summary/Keyword: Invariant Feature

Two-Dimensional Shape Description of Objects using The Contour Fluctuation Ratio (윤곽선 변동율을 이용한 물체의 2차원 형태 기술)

  • 김민기
    • Journal of Korea Multimedia Society / v.5 no.2 / pp.158-166 / 2002
  • In this paper, we propose a contour shape description method based on the contour fluctuation ratio (CFR). The CFR of a contour segment is the ratio of its line length to its curve length: the line length is the distance between the segment's two end points, and the curve length is the sum of the distances between every pair of adjacent points on the segment. Because each CFR is computed from a contour segment, the segments themselves must be rotation- and scale-invariant; we obtain such segments by generating interleaved segments from every point on the contour, with segment length proportional to the entire contour length. Depending on the unit segment length, the CFR describes either local or global characteristics of the contour shape. We therefore describe an object's shape by a feature vector representing the distribution of CFRs and compute similarity by comparing the feature vectors of corresponding unit-length segments. We implemented the proposed method and tested it on 165 rotated and scaled fish images of fifteen types. The results show that the method is not only invariant to rotation and scale but also superior to the NCCH and TRP methods in clustering power. (A sketch of the CFR computation follows below.)
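
The CFR itself is simple to state in code. Below is a minimal numpy sketch of the chord-to-arc ratio and the interleaved, proportional-length segmentation the abstract describes; the function names and the 10% unit length are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def contour_fluctuation_ratio(segment):
    """CFR of one contour segment, an (N, 2) array of points: the distance
    between the two end points (line length) divided by the sum of
    distances between adjacent points (curve length)."""
    line_len = np.linalg.norm(segment[-1] - segment[0])
    curve_len = np.linalg.norm(np.diff(segment, axis=0), axis=1).sum()
    return line_len / curve_len if curve_len > 0 else 1.0

def cfr_feature_vector(contour, unit_ratio=0.1):
    """CFRs of interleaved segments starting at every contour point, with
    segment length proportional to the whole contour length, so the
    description is invariant to rotation and (up to sampling) scale."""
    n = len(contour)
    seg_len = max(2, int(n * unit_ratio))
    closed = np.vstack([contour, contour[:seg_len]])  # wrap the closed contour
    return np.array([contour_fluctuation_ratio(closed[i:i + seg_len])
                     for i in range(n)])
```

Comparing the distributions of such vectors for several unit lengths would then give the local-to-global similarity measure the abstract describes.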

The Transition Invariant Feature Extraction of the Character using the Spherical Coordinate System (구 좌표계를 이용한 위치 불변 문자 특징 추출)

  • Seo, Choon-Weon
    • Journal of the Institute of Electronics Engineers of Korea IE / v.46 no.3 / pp.19-25 / 2009
  • This paper proposes a feature extraction method for character recognition systems that combines a centroid method with a spherical transform applied to the rectangular image coordinates, obtaining an average differential ratio above 78.14% for the character features. The differential-ratio results suggest that centroid-based spherical features can be made invariant to character translation. (A hedged sketch of the general idea follows below.)
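
The paper's exact spherical mapping is not spelled out in the abstract, so the sketch below only illustrates the general idea under an explicit assumption: each foreground pixel's (x, y, intensity) triple, taken relative to the intensity centroid, is converted to spherical coordinates, which removes the dependence on the character's position.

```python
import numpy as np

def centroid_spherical_features(img):
    """img: 2-D grayscale array of a character.
    Returns one (r, theta, phi) row per foreground pixel, measured from the
    intensity centroid, so translating the character leaves them unchanged."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    cy = np.average(ys, weights=w)          # intensity centroid
    cx = np.average(xs, weights=w)
    dx, dy, dz = xs - cx, ys - cy, w        # assumed rectangular coordinates
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    theta = np.arctan2(dy, dx)              # azimuth
    phi = np.arccos(np.clip(dz / np.maximum(r, 1e-9), -1.0, 1.0))  # polar angle
    return np.stack([r, theta, phi], axis=1)
```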

Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung; Cho, Seong-Won; Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.2 / pp.168-173 / 2012
  • This paper presents a new human identification algorithm that uses partial facial features from the uncovered portion of the face when a person wears a mask. After the face region is detected, features are extracted from the eye area above the mask, and identification is performed by comparing them with registered features. The scale-invariant feature transform (SIFT) is used for feature extraction; the extracted features are independent of brightness and invariant to image scale and rotation. Experimental results show the effectiveness of the suggested algorithm. (See the matching sketch below.)
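
As a rough illustration of the matching step, here is an OpenCV sketch that compares SIFT descriptors from a probe eye region against a registered one; the ratio-test threshold and the assumption that the eye regions are already cropped are mine, not the paper's.

```python
import cv2

def match_eye_region(probe_roi, gallery_roi, ratio=0.75):
    """Count ratio-test SIFT matches between two grayscale eye-region crops."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(probe_roi, None)
    _, d2 = sift.detectAndCompute(gallery_roi, None)
    if d1 is None or d2 is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = 0
    for pair in pairs:
        # Lowe's ratio test keeps only distinctive correspondences.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good
```

Identification would then pick the registered person whose template yields the most (or sufficiently many) good matches.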

Experimental Optimal Choice Of Initial Candidate Inliers Of The Feature Pairs With Well-Ordering Property For The Sample Consensus Method In The Stitching Of Drone-based Aerial Images

  • Shin, Byeong-Chun; Seo, Jeong-Kweon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1648-1672 / 2020
  • Among the several kinds of image registration for stitching separate, mutually overlapping images, one is feature-based registration using a common feature descriptor. In this study we generate mosaics of drone aerial images by feature-based registration with the scale-invariant feature transform descriptor. To verify the authenticity of the feature points and to obtain the mapping function, we employ a sample consensus method: exploiting an inherent characteristic of the sensed images, the geometric congruence between their feature points, we propose a novel hypothesis estimation of the stitching map driven by optimally chosen initial candidate inliers. Experimental results show the efficiency of the proposed method compared with the benchmark random sample consensus method (RANSAC); the well-ordering property defined in the paper and extensive stitching examples support its utility. Moreover, the proposed sample consensus scheme is uncomplicated and robust, and the severe mis-stitchings occasionally produced by RANSAC are markedly reduced, as measured by pixel differences. (A sketch of the seeded consensus idea follows below.)
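
The abstract's key idea, seeding the consensus search with well-ordered candidate inliers instead of purely random samples, can be sketched as follows. The ordering below uses descriptor distance as a stand-in for the paper's geometric-congruence well-ordering, and the seed size and tolerance are assumptions.

```python
import cv2
import numpy as np

def seeded_consensus_homography(src_pts, dst_pts, order, k=8, tol=3.0):
    """src_pts, dst_pts: (N, 2) float32 matched points between two images;
    order: indices sorted best-first by the chosen well-ordering."""
    seed = order[:k]
    H, _ = cv2.findHomography(src_pts[seed], dst_pts[seed], 0)  # least squares on seeds
    if H is None:
        return None, None
    # Score the hypothesis on all correspondences.
    proj = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    inliers = np.linalg.norm(proj - dst_pts, axis=1) < tol
    if inliers.sum() >= 4:
        # Refine on the whole consensus set, as in standard RANSAC-style schemes.
        H, _ = cv2.findHomography(src_pts[inliers], dst_pts[inliers], 0)
    return H, inliers
```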

Object Recognition by Invariant Feature Extraction in FLIR (적외선 영상에서의 불변 특징 정보를 이용한 목표물 인식)

  • 권재환; 이광연; 김성대
    • Proceedings of the IEEK Conference / 2000.11d / pp.65-68 / 2000
  • This paper describes an approach to extracting invariant features with a view-based representation and recognizing objects at high speed in forward-looking infrared (FLIR) imagery. We use a reformulated eigenspace technique, based on robust estimation, to extract features that are robust to outliers such as noise and clutter. After feature extraction, we recognize objects using a partial distance search for the Euclidean distance computation. Experimental results show that the proposed method improves the recognition rate compared with standard PCA. (See the partial distance search sketch below.)
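
The partial distance search mentioned above is a classic early-termination trick: the running squared Euclidean distance is abandoned as soon as it exceeds the best distance found so far. A minimal sketch, with illustrative names:

```python
import numpy as np

def partial_distance_search(query, codebook):
    """Return the index of the codebook row nearest to `query` (squared
    Euclidean distance), abandoning each candidate early once it is beaten."""
    best_idx, best_dist = -1, np.inf
    for i, ref in enumerate(codebook):
        dist = 0.0
        for q, r in zip(query, ref):
            dist += (q - r) ** 2
            if dist >= best_dist:   # early exit: cannot beat the best match
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx
```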

Feature-based Image Analysis for Object Recognition on Satellite Photograph (인공위성 영상의 객체인식을 위한 영상 특징 분석)

  • Lee, Seok-Jun; Jung, Soon-Ki
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.35-43 / 2007
  • This paper presents a system for image matching and recognition on artificial satellite photographs, based on feature detection and description techniques. We propose a set of parameters covering the varied environmental factors that arise in the image-handling process; the essence of the experiment is analyzing how the state of each parameter affects the match rate and recognition accuracy. The system is based on Lowe's SIFT (Scale-Invariant Feature Transform) algorithm. Descriptors extracted from local affine-invariant regions of satellite photographs from Google Earth are stored in a database and clustered by k-means over the 128-dimensional descriptor vectors; a label attached to each cluster then serves as guidance to the buildings appearing in the camera scene. The experiments vary the parameters and compare the resulting effects on image matching and recognition, and the implementation and experimental results for several queries are presented. (A sketch of the clustering step follows below.)
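
A hedged sketch of the database-building step: SIFT descriptors pooled over the satellite images are clustered with k-means, and each cluster centre acts as a labelled visual word. The cluster count and termination criteria below are assumptions.

```python
import cv2
import numpy as np

def build_descriptor_clusters(images, k=64):
    """images: iterable of grayscale arrays. Returns (labels, centers) from
    k-means over the pooled 128-dimensional SIFT descriptors."""
    sift = cv2.SIFT_create()
    descs = []
    for img in images:
        _, d = sift.detectAndCompute(img, None)
        if d is not None:
            descs.append(d)
    data = np.vstack(descs).astype(np.float32)   # (N, 128) descriptor matrix
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels, centers
```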

Rotation-Invariant Texture Classification Using Gabor Wavelet (Gabor 웨이블릿을 이용한 회전 변화에 무관한 질감 분류 기법)

  • Kim, Won-Hee; Yin, Qingbo; Moon, Kwang-Seok; Kim, Jong-Nam
    • Journal of Korea Multimedia Society / v.10 no.9 / pp.1125-1134 / 2007
  • In this paper, we propose a new approach to rotation-invariant texture classification based on Gabor wavelets. Conventional methods suffer low correct-classification rates on large texture databases. The proposed method defines two feature groups, a global feature vector and a local feature matrix, both computed from Gabor wavelet filter outputs; using them, we define an improved discriminant and obtain high classification rates on a large database. Exploiting the spectral symmetry of texture images reduced the number of tests by nearly 50%. As a result, the correct classification rate improves by 2.3% to 15.6%, depending on the comparison method, over 112 Brodatz texture classes. (A sketch of a Gabor feature vector follows below.)
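
A rough sketch of a Gabor-based global feature vector in this spirit: mean response magnitudes over scales and orientations, circularly shifted so the dominant orientation comes first, which is one simple way to cancel rotation. The filter parameters are illustrative assumptions, and the paper's local feature matrix and discriminant are not reproduced.

```python
import cv2
import numpy as np

def gabor_global_feature(img, scales=(4, 8, 16), n_orient=8):
    """Rotation-normalized global Gabor feature vector of a grayscale image."""
    img = img.astype(np.float32)
    feats = np.empty((len(scales), n_orient))
    for i, lam in enumerate(scales):          # lam: wavelength in pixels
        for j in range(n_orient):
            theta = np.pi * j / n_orient
            kern = cv2.getGaborKernel((31, 31), lam / 2.0, theta, lam, 0.5)
            feats[i, j] = np.abs(cv2.filter2D(img, cv2.CV_32F, kern)).mean()
    # Align orientations so the strongest summed response comes first.
    shift = int(np.argmax(feats.sum(axis=0)))
    return np.roll(feats, -shift, axis=1).ravel()
```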

Image Character Recognition using the Mellin Transform and BPEJTC (Mellin 변환 방식과 BPEJTC를 이용한 영상 문자 인식)

  • 서춘원; 고성원; 이병선
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.17 no.4 / pp.26-35 / 2003
  • A system that must recognize images in natural scenes as the same or different needs features invariant to rotation, scale, and translation. Among the many feature extraction methods investigated for recognition systems is the log-polar transform, which yields scale- and rotation-invariant features. In this paper we propose a character recognition method that combines a centroid method with an interpolated log-polar transform to obtain invariant character features, achieving a differential ratio above 50% for the character features. Using a BPEJTC with reference images made invariant by the Mellin-transform approach, the suggested system attains a recognition rate of about 90% and can recognize scaled and rotated input characters. The proposed image character recognition system based on the Mellin transform and the BPEJTC can therefore recognize characters with features invariant to rotation, scale, and translation. (A sketch of the log-polar stage follows below.)
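
The log-polar stage is easy to sketch with OpenCV: sampled around the intensity centroid, rotation of the input becomes a cyclic shift along one output axis and scaling a shift along the other, which is what makes the features invariant. The optical BPEJTC correlation stage is not reproduced here, and the output size is an assumption.

```python
import cv2

def centroid_log_polar(img, out_size=(128, 128)):
    """Log-polar resampling of a grayscale image about its intensity centroid."""
    m = cv2.moments(img.astype(float))
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # intensity centroid
    # Largest radius that stays inside the image from the centroid.
    max_r = min(cx, cy, img.shape[1] - cx, img.shape[0] - cy)
    return cv2.warpPolar(img, out_size, (cx, cy), max_r,
                         cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)
```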

Image Watermarking Scheme Based on Scale-Invariant Feature Transform

  • Lyu, Wan-Li; Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.10 / pp.3591-3606 / 2014
  • In this paper, a robust watermarking scheme is proposed that uses the scale-invariant feature transform (SIFT) algorithm in the discrete wavelet transform (DWT) domain. First, SIFT feature areas are extracted from the original image. Then a one-level DWT is applied to the selected feature areas, and the watermark is embedded by modifying the fractional portion of the horizontal or vertical high-frequency DWT coefficients. In the extraction phase, the embedded watermark is recovered directly from the watermarked image, without the original cover image. Experimental results show that the scheme is robust to both signal-processing and geometric attacks, and that it surpasses some previous schemes in watermark robustness and in the visual quality of the watermarked image. (A sketch of the embedding rule follows below.)
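
A hedged sketch of the embedding rule described above, assuming PyWavelets for the one-level DWT: each watermark bit is coded into the fractional part of a horizontal high-frequency coefficient, quantization-style. The SIFT area selection is omitted, and the step size and band choice are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt

def embed_bits(block, bits, step=1.0):
    """block: 2-D array (one selected feature area); bits: iterable of 0/1,
    no longer than the number of horizontal high-frequency coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(block.astype(float), "haar")
    flat = cH.ravel()                        # view: edits write through to cH
    for i, b in enumerate(bits):
        base = np.floor(flat[i] / step) * step
        # The fractional part codes the bit: 0.25*step -> 0, 0.75*step -> 1.
        flat[i] = base + (0.25 if b == 0 else 0.75) * step
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def extract_bits(block, n, step=1.0):
    """Blind extraction: read the fractional parts back from the same band."""
    _, (cH, _, _) = pywt.dwt2(block.astype(float), "haar")
    frac = np.mod(cH.ravel()[:n], step) / step
    return (frac >= 0.5).astype(int)
```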

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee; Lee, Jong-Shill; Ryu, Je-Goon; Lee, Eung-Hyuk; Hong, Seung-Hong; Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.4 / pp.173-180 / 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment, which requires accurate measurement of the relative location between the robot and the features. In this paper we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features in two images captured by two parallel cameras mounted on the front of the robot. Scale-invariant feature points are detected in each image with SIFT (scale-invariant feature transform), the feature points of the two images are matched, and the relative location is obtained by 3D reconstruction of the matched points. A conventional stereo camera demands high-precision extrinsic calibration and pixel-level matching between the two camera images; because our setup uses two separate cameras with scale-invariant feature points, the extrinsic parameters are easy to set up, the 3D reconstruction requires no additional sensor, and the results can simultaneously serve obstacle avoidance, map building, and localization. With the two cameras 20 cm apart, capturing 3 frames per second, the experiments show a maximum error of ±6 cm at ranges below 2 m and ±15 cm at ranges between 2 m and 4 m. (A triangulation sketch follows below.)
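
For two parallel cameras, the 3D reconstruction of a matched feature reduces to triangulation from horizontal disparity, Z = f·B/d. The sketch below uses the 20 cm baseline from the text but otherwise assumes illustrative calibration, including a principal point at the image origin.

```python
import numpy as np

def triangulate_parallel(pts_left, pts_right, focal_px, baseline_m=0.20):
    """pts_left, pts_right: (N, 2) matched pixel coordinates of the same
    SIFT features in the two images. Returns (N, 3) points in the
    left-camera frame (metres), assuming positive disparity."""
    disparity = pts_left[:, 0] - pts_right[:, 0]
    z = focal_px * baseline_m / np.maximum(disparity, 1e-6)  # depth from disparity
    # Back-project, assuming the principal point at the image origin.
    x = pts_left[:, 0] * z / focal_px
    y = pts_left[:, 1] * z / focal_px
    return np.stack([x, y, z], axis=1)
```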