• Title/Summary/Keyword: Scale Invariant

Scale Invariant Auto-context for Object Segmentation and Labeling

  • Ji, Hongwei;He, Jiangping;Yang, Xin
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.8, pp.2881-2894, 2014
  • In complicated environments, context information plays an important role in image segmentation and labeling. The recently proposed auto-context algorithm is one of the more effective context-based methods. However, standard auto-context samples context locations using a fixed radius sequence, which makes it sensitive to large scale changes of objects. In this paper, we present a scale invariant auto-context (SIAC) algorithm, an improved version of auto-context. To achieve scale invariance, we iteratively approximate the optimal scale for the image and adopt the corresponding optimal radius sequence for context-location sampling, both in training and testing. In each iteration of SIAC, the current classification map is used to estimate the image scale, and the corresponding radius sequence is then used to choose context locations. The algorithm iteratively updates the classification maps, as well as the image scales, until convergence. We demonstrate SIAC on several image segmentation and labeling tasks; the results show an improvement over the standard auto-context algorithm when objects undergo large scale changes.

ESTIMATION OF SCALE PARAMETER FROM RAYLEIGH DISTRIBUTION UNDER ENTROPY LOSS

  • Chung, Youn-Shik
    • Journal of applied mathematics & informatics, v.2 no.1, pp.33-40, 1995
  • An entropy loss is derived for the scale parameter of the Rayleigh distribution. Under this entropy loss we obtain the best invariant estimators and the Bayes estimators of the scale parameter, and we compare the MLE with the proposed estimators.
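
For concreteness, the entropy loss and the resulting best invariant estimator can be sketched in a standard form (a reader's reconstruction under textbook assumptions, not necessarily the paper's notation):

```latex
% Rayleigh density with scale parameter \sigma:
\[
f(x \mid \sigma) = \frac{x}{\sigma^2}\, e^{-x^2/(2\sigma^2)}, \qquad x > 0,
\]
% and a common form of the entropy (Stein-type) loss for estimating \sigma^2:
\[
L(\sigma^2, \delta) = \frac{\delta}{\sigma^2} - \log\frac{\delta}{\sigma^2} - 1 .
\]
% Scale-invariant estimators take the form \delta_c = c \sum_i X_i^2.
% Since X_i^2/(2\sigma^2) is standard exponential, E[\sum_i X_i^2]/\sigma^2 = 2n,
% so the risk R(c) = 2nc - \log c + \mathrm{const} is minimized at c = 1/(2n):
\[
\delta^{\ast} = \frac{1}{2n} \sum_{i=1}^{n} X_i^2 ,
\]
% which coincides with the maximum likelihood estimator of \sigma^2.
```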

Two-Dimensional Shape Description of Objects using The Contour Fluctuation Ratio (윤곽선 변동율을 이용한 물체의 2차원 형태 기술)

  • 김민기
    • Journal of Korea Multimedia Society, v.5 no.2, pp.158-166, 2002
  • In this paper, we propose a contour shape description method that uses the CFR (contour fluctuation ratio) feature. The CFR is the ratio of the line length to the curve length of a contour segment, where the line length is the distance between the two end points of the segment and the curve length is the sum of the distances between all adjacent point pairs on the segment. Because each CFR is computed from a contour segment, the segments themselves must be rotation and scale invariant; we obtain such segments by using interleaved contour segments whose length is proportional to the entire contour length and which are generated from all points on the contour. The CFR can describe either local or global features of the contour shape, depending on the unit length of the contour segment. We therefore describe the shape of an object with a feature vector representing the distribution of CFRs, and calculate similarity by comparing the feature vectors of corresponding unit-length segments. We implemented the proposed method and experimented with 165 rotated and scaled fish images of fifteen types. The experimental results show that the proposed method is not only invariant to rotation and scale but also superior to the NCCH and TRP methods in clustering power.
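
The CFR defined in this abstract is straightforward to compute; a minimal sketch (an illustration, not the authors' implementation):

```python
import numpy as np

def contour_fluctuation_ratio(segment):
    """CFR of a contour segment: line length (distance between the two end
    points) divided by curve length (sum of distances between all adjacent
    point pairs). Straight segments give 1.0; wigglier segments give less."""
    pts = np.asarray(segment, dtype=float)
    line_length = np.linalg.norm(pts[-1] - pts[0])
    curve_length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    return line_length / curve_length if curve_length > 0 else 1.0
```

Because the ratio is dimensionless, it is unchanged by uniform scaling and rotation of the segment, which is why the feature is invariant once the segments themselves are sampled invariantly.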


Automatic Registration of High Resolution Satellite Images using Local Properties of Tie Points (지역적 매칭쌍 특성에 기반한 고해상도영상의 자동기하보정)

  • Han, You-Kyung;Byun, Young-Gi;Choi, Jae-Wan;Han, Dong-Yeob;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.353-359, 2010
  • In this paper, we propose an automatic image-to-image registration method for high resolution satellite images that uses local properties of tie points to improve registration accuracy. In addition to descriptor matching, the spatial distance between interest points of the reference and sensed images extracted by the Scale Invariant Feature Transform (SIFT) is used to extract tie points. Coefficients of an affine transform between the images are estimated from the invariant-descriptor-based matching, and the interest points of the sensed image are transformed into the reference coordinate system using these coefficients. The spatial distance between the transformed interest points of the sensed image and the interest points of the reference image is then calculated for a secondary matching step. A piecewise linear function is applied to the matched tie points for automatic registration of the high resolution images. The proposed method extracts spatially better-distributed tie points than the SIFT-based method alone.
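
The secondary, distance-based matching step described here can be sketched roughly as follows (an assumed simplification with illustrative names, not the authors' implementation):

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply x' = A @ x + t to an (N, 2) array of points."""
    return np.asarray(points, float) @ np.asarray(A, float).T + np.asarray(t, float)

def secondary_matches(ref_pts, sensed_pts, A, t, max_dist=3.0):
    """Warp sensed interest points with the affine coefficients from the
    descriptor-based matching, then pair each warped point with the nearest
    reference point within a spatial-distance threshold."""
    ref = np.asarray(ref_pts, float)
    warped = affine_transform(sensed_pts, A, t)
    pairs = []
    for i, p in enumerate(warped):
        d = np.linalg.norm(ref - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:            # keep only spatially consistent pairs
            pairs.append((i, j))
    return pairs
```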

MEGH: A New Affine Invariant Descriptor

  • Dong, Xiaojie;Liu, Erqi;Yang, Jie;Wu, Qiang
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.7, pp.1690-1704, 2013
  • An affine invariant descriptor is proposed that can effectively represent affine covariant regions. Estimating the main orientation remains problematic in many existing methods, such as SIFT (scale invariant feature transform) and SURF (speeded up robust features). Instead of aligning an estimated main orientation, this paper uses the ellipse orientation directly. According to the ellipse orientation, each affine covariant region is first divided into four sub-regions with equal angles. Because the division follows the ellipse orientation, the sub-regions are invariant to any rotation of the ellipse. The affine covariant regions are then normalized into a circular region. Finally, the gradients of the pixels in the circular region are calculated and a partition-based descriptor is created from them. Compared with existing descriptors, including MROGH, SIFT, GLOH, PCA-SIFT, and spin images, the proposed descriptor demonstrates superior performance in extensive experiments.
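
The equal-angle partition relative to the ellipse orientation can be sketched as follows (an assumed simplification of the sub-region assignment, not the authors' code):

```python
import numpy as np

def subregion_index(points, centre, ellipse_theta):
    """Assign each point to one of 4 equal-angle sub-regions measured
    relative to the ellipse orientation, so the partition rotates with the
    ellipse and the sub-regions stay rotation invariant."""
    pts = np.asarray(points, float) - np.asarray(centre, float)
    ang = (np.arctan2(pts[:, 1], pts[:, 0]) - ellipse_theta) % (2 * np.pi)
    return (ang // (np.pi / 2)).astype(int)   # indices 0..3
```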

Comparison of invariant pattern recognition algorithms (불변 패턴인식 알고리즘의 비교연구)

  • 강대성
    • Journal of the Korean Institute of Telematics and Electronics B, v.33B no.8, pp.30-41, 1996
  • This paper presents a comparative study of four pattern recognition algorithms that are invariant to translation, rotation, and scale change of the input object: object shape features (OSF), the geometrical Fourier-Mellin transform (GFMT), moment invariants (MI), and the centered polar exponential transform (CPET). Pattern description is one of the most important aspects of pattern recognition, and it is useful to describe an object's shape independently of translation, rotation, or size. We first discuss problems that arise in the conventional invariant pattern recognition algorithms, then analyze their performance using the same criterion. Computer simulations with several distorted images show that the CPET algorithm yields better performance than the other three.
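
CPET is closely related to a log-polar resampling about the object centre; a minimal nearest-neighbour sketch (an illustration of the idea, not the paper's implementation), in which rotation and uniform scaling about the centre become shifts along the theta and rho axes of the output:

```python
import numpy as np

def polar_exponential_transform(img, n_rho=32, n_theta=64):
    """Resample img on an exponential-radius polar grid about its centre.
    Rotations become shifts along the theta axis; scalings become shifts
    along the rho axis (nearest-neighbour sampling)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.linspace(0.0, np.log(r_max), n_rho)           # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]
    ys = np.clip(np.round(cy + r * np.sin(theta)), 0, h - 1).astype(int)
    xs = np.clip(np.round(cx + r * np.cos(theta)), 0, w - 1).astype(int)
    return img[ys, xs]                                     # shape (n_rho, n_theta)
```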


Best Invariant Estimators In the Scale Parameter Problem

  • Choi, Kuey-Chung
    • Honam Mathematical Journal, v.13 no.1, pp.53-63, 1991
  • In this paper we first present the elements of the theory of families of distributions, and corresponding estimators, having structural properties which are preserved under certain groups of transformations; this is called the "invariance principle". The invariance principle is an intuitively appealing decision principle which is frequently used, even in classical statistics. It is interesting not only in its own right, but also because of its strong relationship with several other proposed approaches to statistics, including the fiducial inference of Fisher [3, 4], the structural inference of Fraser [5], and the use of noninformative priors of Jeffreys [6]. Unfortunately, space precludes the discussion of fiducial and structural inference; many of the key ideas in these approaches are, however, brought out in the discussion of invariance and its relationship to the use of noninformative priors. The principle is then applied to the problem of finding the best scale invariant estimator in the scale parameter problem. Finally, several examples are given.
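
The scale-parameter setting of the invariance principle can be summarized in the standard textbook form (a reader's sketch, not the paper's notation):

```latex
% Scale family and scale group:
\[
f(x \mid \sigma) = \frac{1}{\sigma}\, f_0\!\left(\frac{x}{\sigma}\right), \qquad
\mathcal{G} = \{\, g_c : x \mapsto c\,x,\ c > 0 \,\}.
\]
% A loss is invariant when L(c\sigma, c\delta) = L(\sigma, \delta), and an
% estimator is scale invariant when
\[
\delta(c\,x) = c\,\delta(x) \quad \text{for all } c > 0 .
\]
% Every invariant estimator then has constant risk, so the best invariant
% estimator is obtained by minimizing
\[
R(\delta) = E_{\sigma = 1}\, L\!\left(1, \delta(X)\right)
\]
% over the class of invariant estimators.
```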


Size, Scale and Rotation Invariant Proposed Feature vectors for Trademark Recognition

  • Faisal Zafa, Muhammad;Mohamad, Dzulkifli
    • Proceedings of the IEEK Conference, 2002.07c, pp.1420-1423, 2002
  • The classification and recognition of two-dimensional trademark patterns independently of their position, orientation, size, and scale is discussed through two proposed feature vectors. The paper presents experiments on the two feature vectors, showing size invariance and scale invariance respectively; both are equally invariant to rotation. The feature extraction is based on local as well as global statistics of the image, and the feature vectors have appealing mathematical simplicity and versatility. The results so far show the best performance of the developed system when based on these unique sets of features. The goal is achieved by segmenting the image using a connected-component (nearest-neighbour) algorithm. The second part of this work considers using back-propagation neural networks (BPN) for the learning and matching tasks, by simply feeding in the feature vectors. The effectiveness of the proposed feature vectors is tested with various trademarks not used in the learning phase.
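
The connected-component segmentation step mentioned in this abstract can be sketched with a simple 4-neighbour BFS labelling (an illustration, not the authors' nearest-neighbour variant):

```python
from collections import deque

def connected_components(grid):
    """Label 4-connected foreground components of a binary image.
    grid: list of rows of 0/1; returns a label matrix (0 = background)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not labels[sy][sx]:
                next_label += 1                      # start a new component
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels
```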


Comparative Analysis of the Performance of SIFT and SURF (SIFT 와 SURF 알고리즘의 성능적 비교 분석)

  • Lee, Yong-Hwan;Park, Je-Ho;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology, v.12 no.3, pp.59-64, 2013
  • Accurate and robust image registration is an important task in many applications, such as image retrieval and computer vision. Image registration requires several essential steps: feature detection, extraction, matching, and reconstruction of the image. Among these, feature extraction not only plays a key role but also strongly affects overall performance. Two representative algorithms for extracting image features are the scale invariant feature transform (SIFT) and speeded up robust features (SURF). In this paper, we present and evaluate the two methods, focusing on a comparative analysis of their performance. Experiments on accurate and robust feature detection cover various conditions such as scale change, rotation, and affine transformation. The experimental trials revealed that the SURF algorithm produced significantly better results than SIFT in both feature-point extraction and matching time.
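
The detection core underlying this comparison can be illustrated with the difference-of-Gaussians (DoG) extremum search that SIFT builds on; this is a single-octave pure-NumPy sketch, not the OpenCV implementations the paper benchmarks:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along rows and columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def dog_extrema(img, sigma1=1.0, sigma2=1.6, thresh=0.01):
    """Keypoint candidates: strong local extrema of one DoG layer."""
    img = img.astype(float)
    d = gaussian_blur(img, sigma2) - gaussian_blur(img, sigma1)
    keypoints = []
    for y in range(1, d.shape[0] - 1):
        for x in range(1, d.shape[1] - 1):
            v = d[y, x]
            patch = d[y - 1:y + 2, x - 1:x + 2]
            if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                keypoints.append((y, x))
    return keypoints
```

SURF instead approximates the Hessian determinant with box filters evaluated on an integral image, which is the main source of the speed advantage reported above.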