• Title/Summary/Keyword: rotation-invariant


Image Feature Extraction Using Energy Field Analysis (에너지장 해석을 통한 영상 특징량 추출 방법 개발)

  • 김면희;이태영;이상룡
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.10a / pp.404-406 / 2002
  • In this paper, a method of image feature extraction is proposed. The method employs energy field analysis, an outlier removal algorithm, and ring projection; using this combination, rotation-, translation-, and scale-invariant feature extraction is achieved. The force field is exploited to automatically locate the extrema of a small number of potential energy wells and their associated potential channels. The image feature is then acquired from the relationship of these local extrema using the ring projection method.
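
A minimal sketch of the ring-projection step mentioned in the abstract above, assuming a 2-D grayscale array: averaging pixel intensities over concentric rings around the image centre yields a 1-D signature that rotation about the centre leaves unchanged. The function name and parameters are illustrative; the paper's energy-field analysis and outlier removal are not reproduced here.

```python
import numpy as np

def ring_projection(image: np.ndarray, n_rings: int = 64) -> np.ndarray:
    """Average pixel intensity over concentric rings around the image centre.

    Rotating the image about its centre only permutes pixels *within* each
    ring, so the resulting 1-D signature is (approximately) rotation invariant.
    """
    h, w = image.shape                      # expects a 2-D grayscale array
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    r_max = radius.max()
    # Bin every pixel into one of n_rings annuli and average per ring.
    ring_idx = np.minimum((radius / r_max * n_rings).astype(int), n_rings - 1)
    sums = np.bincount(ring_idx.ravel(), weights=image.ravel(), minlength=n_rings)
    counts = np.bincount(ring_idx.ravel(), minlength=n_rings)
    return sums / np.maximum(counts, 1)
```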


Rotation-invariant pattern recognition using an optical wavelet circular harmonic matched filter (광웨이브렛 원형고조 정합필터를 이용한 회전불변 패턴인식)

  • 이하운;김철수;김정우;김수중
    • Journal of the Korean Institute of Telematics and Electronics S / v.34S no.1 / pp.132-144 / 1997
  • A rotation-invariant pattern recognition filter is proposed that uses circular harmonic functions of the reference image wavelet-transformed with the Morlet, Mexican-hat, and Haar wavelet functions. Rotated reference images, images similar to the reference image, and images with added random noise are used as inputs; for the noisy inputs, recognition is applied after the random noise is removed by a transformed moving-average method with a proper threshold value and window size. The proposed optical wavelet circular harmonic matched filter (WCHMF) is a type of matched filter, so it can be applied to the 4f Vander Lugt optical correlation system. The SNR and discrimination capability of the proposed filter are compared with those of the conventional HF, the POCHF, and the BPOCHF. The proper wavelet function for the reference image used in this paper is determined by applying the Morlet, Mexican-hat, and Haar wavelet functions to the proposed filter; with the Morlet wavelet function, the proposed filter shows good SNR and discrimination capability together with rotation invariance.
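
As a rough illustration of the circular harmonic idea behind the filter above (not the authors' optical implementation), the sketch below resamples an image onto a polar grid and extracts a single circular harmonic component; the function name and grid sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def circular_harmonic(image, order=1, n_r=128, n_theta=256):
    """Extract the `order`-th circular harmonic component f_m(r) of an image.

    The image is resampled onto a polar (r, theta) grid about its centre and a
    Fourier series is taken along theta. In-plane rotation changes a single
    harmonic only by a phase factor, which is why a filter built from one
    harmonic gives rotation-invariant correlation magnitude.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # map_coordinates expects (row, col) = (y, x) sample positions.
    coords = np.vstack([(cy + rr * np.sin(tt)).ravel(),
                        (cx + rr * np.cos(tt)).ravel()])
    polar = map_coordinates(image.astype(float), coords, order=1).reshape(n_r, n_theta)
    # Fourier series along theta; keep the requested harmonic order.
    harmonics = np.fft.fft(polar, axis=1) / n_theta
    return r, harmonics[:, order]
```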


Pruning and Matching Scheme for Rotation Invariant Leaf Image Retrieval

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.6 / pp.280-298 / 2008
  • For efficient content-based image retrieval, diverse visual features such as color, texture, and shape have been widely used. In the case of leaf images, further improvement can be achieved based on the following observations. Most plants have a unique leaf shape that consists of one or more blades. Hence, blade-based matching can be more efficient than whole-shape matching, since the number and shape of blades are very effective for filtering out dissimilar leaves. Guaranteeing rotational invariance is critical for matching accuracy. In this paper, we propose a new shape representation, indexing, and matching scheme for leaf image retrieval. For leaf shape representation, we generate a distance curve, a sequence of distances between the leaf's center and all the contour points. For matching, we develop a blade-based matching algorithm called rotation invariant - partial dynamic time warping (RI-PDTW). To speed up the matching, we suggest two additional techniques: i) priority queue-based pruning of unnecessary blade sequences for rotational invariance, and ii) lower bound-based pruning of unnecessary partial dynamic time warping (PDTW) calculations. We implemented a prototype system on the GEMINI framework [1][2]. Experimental results show that our scheme achieves excellent performance compared to competitive schemes.
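
The distance-curve representation and a brute-force version of rotation-invariant DTW matching described above can be sketched as follows; the helper names are hypothetical, and the paper's blade segmentation, priority-queue pruning, and lower-bound pruning are not reproduced.

```python
import numpy as np

def distance_curve(contour: np.ndarray, n_samples: int = 128) -> np.ndarray:
    """Centroid-to-contour distance signature for an (N, 2) array of boundary
    points ordered along the outline, resampled to a fixed length and
    normalised for scale (the 'distance curve' of the abstract above)."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    d = d[idx]
    return d / d.max()

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Plain dynamic time warping cost between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def rotation_invariant_distance(a: np.ndarray, b: np.ndarray, step: int = 4) -> float:
    """Brute-force rotation invariance: try circular shifts of one signature
    and keep the best DTW cost (the paper prunes these candidates instead)."""
    return min(dtw(np.roll(a, s), b) for s in range(0, len(a), step))
```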

A Novel Fuzzy Neural Network and Learning Algorithm for Invariant Handwritten Character Recognition (변형에 무관한 필기체 문자 인식을 위한 퍼지 신경망과 학습 알고리즘)

  • Yu, Jeong-Su
    • Journal of The Korean Association of Information Education / v.1 no.1 / pp.28-37 / 1997
  • This paper presents a new neural network based on fuzzy sets and its application to invariant character recognition. The fuzzy neural network consists of five layers. Simulation results show that the network can recognize handwritten characters under distortion, translation, rotation, different sizes, and even noise (8~30%). Invariance to translation, distortion, different sizes, and noise is achieved by layer L2, and rotation invariance by layer L5. The network can recognize 108 training examples with a 100% recognition rate when they are shifted in eight directions by 1 pixel and 2 pixels, and it can also recognize all the distorted characters with a 100% recognition rate. The simulations show that the test patterns cover a ±20° range of rotation correctly. The proposed network can also correctly recall all the learned characters with a 100% recognition rate. The network is simple, its learning and recall speeds are very fast, and it also works for the segmentation and recognition of handwritten characters.


Panoramic Image Composition Algorithm through Scaling and Rotation Invariant Features (크기 및 회전 불변 특징점을 이용한 파노라마 영상 합성 알고리즘)

  • Kwon, Ki-Won;Lee, Hae-Yeoun;Oh, Duk-Hwan
    • The KIPS Transactions: Part B / v.17B no.5 / pp.333-344 / 2010
  • This paper addresses how to compose panoramic images from images of the same objects. With the spread of digital cameras, interest in generating panoramic images has grown. In this paper, we propose a panoramic image generation method using scaling- and rotation-invariant features. First, feature points are extracted from the input images and matched with a RANSAC algorithm. Then, after a perspective model is estimated, the input image is registered with this model. Since the SURF feature extraction algorithm is adopted, the proposed method is robust against geometric distortions such as scaling and rotation, and computational cost is also improved. In the experiments, the SURF features of the proposed method are compared with features from the Harris corner detector and the SIFT algorithm. The proposed method is tested by generating panoramic images from 640×480 images. Results show that it takes 0.4 seconds of computation on average and is more efficient than other schemes.
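
A minimal sketch of the feature-matching and perspective-model (homography) estimation pipeline described above, using OpenCV. The paper uses SURF, which requires an opencv-contrib build with the non-free modules; ORB is used here as a freely available stand-in, and the simple overwrite composition stands in for proper seam blending.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Estimate a perspective (homography) model between two overlapping
    images from matched local features and warp img2 onto img1's plane."""
    detector = cv2.ORB_create(2000)          # SURF stand-in; see note above
    k1, d1 = detector.detectAndCompute(img1, None)
    k2, d2 = detector.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched feature pairs while fitting the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[0:h, 0:w] = img1   # naive overwrite; real stitchers blend the seam
    return canvas
```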

Geometric Transform-Invariant Gait Recognition Using Modified Radon Transform (변형된 라돈 변환을 이용한 기하학적 형태 불변 보행인식)

  • Jang, Sang-Sik;Lee, Seung-Won;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.67-75 / 2011
  • This paper presents a scale- and rotation-invariant gait recognition method using the R-transform, which is computed by projecting the squared coefficients of the Radon transform. Since the R-transform is invariant to translation, rotation, and scaling, it is particularly suitable for extracting object poses without camera calibration. Coefficients of the R-transform are used to compute correlation, and the maximum correlation value determines the similarity between two gait images. The proposed method requires neither camera calibration nor geometric compensation; as a result, it makes robust gait recognition possible without additional compensation for translation, rotation, and scaling.
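
A small sketch of the R-transform as described above (squared Radon coefficients summed over the radial coordinate), with a correlation-based similarity as a stand-in for the paper's matching step; the function names and normalisation choices are assumptions.

```python
import numpy as np
from skimage.transform import radon

def r_transform(silhouette: np.ndarray, n_angles: int = 180) -> np.ndarray:
    """R-transform of a gait silhouette: the Radon transform is squared and
    summed over the radial coordinate for each projection angle."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(silhouette.astype(float), theta=theta, circle=False)
    r = np.sum(sinogram ** 2, axis=0)        # one value per projection angle
    return r / r.max()                        # amplitude normalisation

def gait_similarity(r1: np.ndarray, r2: np.ndarray) -> float:
    """Maximum normalised correlation over circular shifts of the R-transform,
    used here as the similarity score between two gait images."""
    r1 = (r1 - r1.mean()) / r1.std()
    r2 = (r2 - r2.mean()) / r2.std()
    return max(float(np.dot(np.roll(r1, s), r2)) / len(r1) for s in range(len(r1)))
```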

Shape Description and Recognition Using the Relative Distance-Curvature Feature Space (상대거리-곡률 특징 공간을 이용한 형태 기술 및 인식)

  • Kim Min-Ki
    • The KIPS Transactions: Part B / v.12B no.5 s.101 / pp.527-534 / 2005
  • Rotation and scale variations make shape description and recognition difficult because they change the locations of the points composing the shape. However, some geometrically invariant points, and the relations among them, are not changed by these variations. Therefore, if points in the image space described by the x-y coordinate system can be transformed into a new coordinate system that is invariant to rotation and scale, the problem of shape description and recognition becomes easier. This paper presents a shape description method based on a transformation from the image space into an invariant feature space with two axes: the relative distance from the centroid and the contour segment curvature (CSC). The relative distance describes how far a point departs from the centroid, and the CSC represents the degree of fluctuation in a contour segment. After the transformation, mesh features are used to describe the shape mapped onto the feature space. Experimental results show that the proposed method is robust to rotation and scale variations.
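
A rough sketch of mapping contour points into the (relative distance, curvature) feature space and summarising it with mesh (grid-occupancy) features. The exact CSC definition is not given in the abstract, so an arc-length/chord-length ratio is used here as a stand-in measure of fluctuation; names and bin counts are illustrative.

```python
import numpy as np

def rd_csc_features(contour: np.ndarray, window: int = 7, bins: int = 8) -> np.ndarray:
    """Map an (N, 2) contour into the (relative distance, curvature) space and
    summarise it with a mesh (2-D histogram) descriptor."""
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    rel_dist = dist / dist.max()                      # in [0, 1], scale invariant

    n = len(contour)
    csc = np.empty(n)
    for i in range(n):
        idx = [(i + k) % n for k in range(-window, window + 1)]
        seg = contour[idx]
        arc = np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1))
        chord = np.linalg.norm(seg[-1] - seg[0]) + 1e-9
        csc[i] = arc / chord - 1.0                    # 0 for a straight segment
    csc = csc / (csc.max() + 1e-9)

    # Mesh features: occupancy histogram over the 2-D feature space.
    hist, _, _ = np.histogram2d(rel_dist, csc, bins=bins, range=[[0, 1], [0, 1]])
    return (hist / hist.sum()).ravel()
```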

Rotation-Invariant Texture Classification Using Gabor Wavelet (Gabor 웨이블릿을 이용한 회전 변화에 무관한 질감 분류 기법)

  • Kim, Won-Hee;Yin, Qingbo;Moon, Kwang-Seok;Kim, Jong-Nam
    • Journal of Korea Multimedia Society / v.10 no.9 / pp.1125-1134 / 2007
  • In this paper, we propose a new approach to rotation-invariant texture classification based on Gabor wavelets. Conventional methods have low correct classification rates on large texture databases. In the proposed method, we define two feature groups, a global feature vector and a local feature matrix, both obtained from Gabor wavelet filtering. Using these feature groups, we define an improved discriminant and obtain high classification rates on a large texture database in the experiments. Owing to the spectral symmetry of texture images, the number of tests is reduced by nearly 50%. Consequently, the correct classification rate is improved by 2.3%~15.6% on the 112-class Brodatz texture set, depending on the comparison method.
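
A minimal sketch of a Gabor-wavelet global feature vector of the kind described above, using scikit-image. The orientation alignment shown is a common rotation-normalisation trick and only a stand-in for the paper's discriminant over global and local feature groups; the parameters are illustrative.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
    """Global Gabor feature vector: mean and standard deviation of the filter
    response magnitude at each (frequency, orientation)."""
    img = image.astype(float)
    feats = np.zeros((len(frequencies), n_orientations, 2))
    for i, f in enumerate(frequencies):
        for j in range(n_orientations):
            theta = np.pi * j / n_orientations
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats[i, j] = (mag.mean(), mag.std())
    # Circularly shift the orientation axis so the most energetic orientation
    # comes first -- a common rotation-normalisation trick.
    dominant = int(np.argmax(feats[..., 0].sum(axis=0)))
    feats = np.roll(feats, -dominant, axis=1)
    return feats.ravel()
```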


Geometrically Invariant Image Watermarking Using Connected Objects and Gravity Centers

  • Wang, Hongxia;Yin, Bangxu;Zhou, Linna
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2893-2912 / 2013
  • The design of geometrically invariant watermarking is one of the most challenging tasks in the digital image watermarking research area. To achieve robustness to geometrical attacks, an inherent characteristic of the image is usually used. In this paper, a geometrically invariant image watermarking scheme using connected objects and gravity centers is proposed. First, the gray-scale image is converted into a binary one, and the connected objects are obtained according to the connectedness of the binary image; the coordinates of these connected objects are then mapped back to the gray-scale image, and the gravity centers of the larger objects are chosen as the feature points for watermark embedding. After that, the line between each gravity center and the center of the whole image is rotated by an angle to form a sector, and finally the same version of the watermark is embedded into these sectors. Because image connectedness is topologically invariant to geometrical attacks such as scaling and rotation, and the gravity centers of the connected objects used as feature points are very stable, watermark synchronization is realized successfully under geometrical distortion. The proposed scheme can extract the watermark information without using the original image or a template. Simulation results show that the proposed scheme has good invisibility for watermarking applications and stronger robustness than previous feature-based watermarking schemes against geometrical attacks such as rotation, scaling, and cropping, and it can also resist common image processing operations including JPEG compression, noise addition, median filtering, and histogram equalization.
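
A small sketch of the feature-point (synchronisation) step described above: binarise the image, find connected objects, and keep the gravity centres of the larger ones. The sector construction and watermark embedding are not reproduced; the Otsu threshold and area ratio are assumptions.

```python
import cv2
import numpy as np

def watermark_feature_points(gray: np.ndarray, min_area_ratio: float = 0.005):
    """Gravity centres of the larger connected objects in a binarised image,
    usable as feature points for watermark synchronisation.

    Connectedness survives rotation and scaling, so these centroids can be
    re-located after geometric attacks.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    min_area = min_area_ratio * gray.shape[0] * gray.shape[1]
    points = []
    for label in range(1, n):                       # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            points.append(tuple(centroids[label]))  # (x, y) gravity centre
    return points
```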

Object Recognition for Markerless Augmented Reality Embodiment (마커 없는 증강 현실 구현을 위한 물체인식)

  • Paul, Anjan Kumar;Lee, Hyung-Jin;Kim, Young-Bum;Islam, Mohammad Khairul;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.13 no.1 / pp.126-133 / 2009
  • In this paper, we propose an object recognition technique for implementing markerless augmented reality. The Scale Invariant Feature Transform (SIFT) is used to find local features in object images. These features are invariant to scale, rotation, and translation, and partially invariant to illumination changes. The extracted features are distinctive and are matched against image features in the scene. If the trained image is properly matched, the object is expected to be found in the scene. In this paper, an object is found in a scene by matching template images generated from the first frame of the scene. Experimental results of object recognition for 4 kinds of objects show that the proposed technique performs well.
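
A minimal sketch of SIFT-based template matching of the kind described above, using OpenCV with Lowe's ratio test; the match-count threshold is an assumption, and the augmented-reality pose estimation is not covered.

```python
import cv2

def match_template_object(template_gray, scene_gray, min_matches=10):
    """Match a template image against a scene with SIFT keypoints; enough
    surviving ratio-test matches indicate the object is present.

    cv2.SIFT_create() is available in OpenCV >= 4.4 (older builds expose SIFT
    through the contrib xfeatures2d module instead).
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(template_gray, None)
    k2, d2 = sift.detectAndCompute(scene_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(d1, d2, k=2)
    # Lowe's ratio test keeps only clearly distinctive matches.
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good) >= min_matches, good
```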
