• Title/Summary/Keyword: Lloyd algorithm

Search Results: 14

On-line Vector Quantizer Design Using Simulated Annealing Method (Simulated Annealing 방법을 이용한 온라인 벡터 양자화기 설계)

  • Song, Geun-Bae; Lee, Haeng-Se
    • The KIPS Transactions:PartB / v.8B no.4 / pp.343-350 / 2001
  • Vector quantizer design requires a learning algorithm that minimizes a multidimensional objective function. The generalized Lloyd algorithm (GLA) is today the most widely used algorithm for vector quantizer design. The GLA generates the codebook in batch mode and is a descent algorithm that monotonically decreases the objective function. The Kohonen learning algorithm (KLA), in contrast, is an online vector quantizer design algorithm in which the codebook is updated while the training vectors arrive; it was originally proposed by Kohonen for neural network training. Like the GLA, the KLA can be regarded as a descent algorithm. Although both algorithms are convenient to use and operate stably, they therefore suffer from convergence to local minima. We discuss the application of the simulated annealing (SA) method to this problem. SA is, so far, the only method for which convergence to the global minimum is (statistically) guaranteed without being trapped in local minima. We first survey previous work applying SA to the GLA. We then propose a new SA-based online learning algorithm, obtained by applying the SA method to online vector quantizer design, which we call the OLVQ-SA algorithm. Vector quantization experiments on Gauss-Markov sources and speech data show that the proposed method consistently produces better codebooks than the KLA.
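The abstract does not give the exact OLVQ-SA update rule, so the following is only a rough sketch of the general idea, assuming a Kohonen-style online step plus an annealed random perturbation; all names and the cooling schedule are illustrative.

```python
# Illustrative sketch of an SA-perturbed online VQ update (NOT the paper's
# exact OLVQ-SA): a Kohonen-style descent step toward each input vector,
# plus additive noise whose variance decays with an annealing temperature,
# so early updates can escape shallow local minima.
import numpy as np

def olvq_sa_sketch(train, K, T0=1.0, alpha0=0.1, decay=0.999, seed=0):
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), K, replace=False)].astype(float)
    T, alpha = T0, alpha0                       # temperature and learning rate
    for x in train:
        i = np.argmin(np.linalg.norm(codebook - x, axis=1))  # nearest codeword
        noise = rng.normal(scale=np.sqrt(T), size=x.shape)   # annealed jitter
        codebook[i] += alpha * (x - codebook[i]) + noise
        T *= decay                               # cool down
        alpha *= decay
    return codebook

rng = np.random.default_rng(1)
data = rng.normal(size=(2000, 4)).cumsum(axis=1)   # crude correlated source
print(olvq_sa_sketch(data, K=16).shape)            # (16, 4)
```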


Fast K-Means Clustering Algorithm using Prediction Data (예측 데이터를 이용한 빠른 K-Means 알고리즘)

  • Jee, Tae-Chang; Lee, Hyun-Jin; Lee, Yill-Byung
    • The Journal of the Korea Contents Association / v.9 no.1 / pp.106-114 / 2009
  • In this paper we propose a fast K-means clustering algorithm. Its main characteristic is that it speeds up the algorithm by predicting which data points are likely to change cluster. When computing the distance to each cluster center at each stage to assign the nearest prototype, overall computation time can be reduced by recomputing only for those data whose cluster membership may change. The distance information produced by the K-means algorithm itself is used to predict which input data may change cluster, and using that information also makes the algorithm less sensitive to the number of dimensions. The proposed method was compared with the original K-means method (Lloyd's) and the improved method KMHybrid. We show that it significantly outperforms both in computation speed on large data sets with many dimensions and a large number of clusters.
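The abstract does not spell out how the candidate set is predicted, so the sketch below uses one plausible reading (hypothetical names throughout): keep, for every point, the gap between its closest and second-closest centers, and reassign only points whose gap is small compared with how far the centers moved.

```python
# A minimal sketch of the "prediction" idea, not the authors' code: points
# with a large best/second-best distance gap cannot change cluster when the
# centers move only a little, so their reassignment step is skipped.
import numpy as np

def fast_kmeans_sketch(X, K, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)].astype(float)
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    part = np.partition(d, 1, axis=1)
    gap = part[:, 1] - part[:, 0]          # best vs. second-best distance gap
    for _ in range(iters):
        old = centers.copy()
        for k in range(K):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
        shift = np.linalg.norm(centers - old, axis=1).max()
        # heuristic: only points whose gap is small relative to how far the
        # centers moved can change cluster; recompute distances just for them
        risky = gap <= 2 * shift
        if risky.any():
            d_r = np.linalg.norm(X[risky][:, None] - centers[None], axis=2)
            assign[risky] = d_r.argmin(axis=1)
            p = np.partition(d_r, 1, axis=1)
            gap[risky] = p[:, 1] - p[:, 0]
        if shift < 1e-9:
            break
    return centers, assign

X = np.random.default_rng(2).normal(size=(1000, 8))
centers, labels = fast_kmeans_sketch(X, K=10)
```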

Estimation of A New Initial Parameter for the Lloyd-Max Algorithm (로이드-맥스 알고리즘을 위한 새로운 초기 파라메타의 추정)

  • Joo, Eon Kyeong
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.7 / pp.26-32 / 1994
  • The Lloyd-Max algorithm is an iterative scheme for designing the minimum mean square error quantizer. It is conceptually very simple and easy to program, but its convergence and accuracy depend primarily on the accuracy of the initial parameter. In this paper, a new initial parameter that converges to a specific value as the number of output levels grows is selected, and an estimator based on a curve-fitting technique is suggested. The performance of the proposed method is shown to be superior to that of existing methods in both accuracy and convergence.
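For reference, the textbook Lloyd-Max iteration that the paper's initial-parameter estimator feeds into looks as follows on empirical data (a sketch; the uniform-quantile initialization merely stands in for the paper's estimator):

```python
# Lloyd-Max iteration for a minimum-MSE scalar quantizer: alternate between
# placing decision thresholds at midpoints of the output levels and resetting
# each level to the conditional mean (centroid) of its cell.
import numpy as np

def lloyd_max(samples, n_levels, iters=100, tol=1e-10):
    samples = np.sort(np.asarray(samples, dtype=float))
    # crude uniform-quantile initialization; the paper's contribution is a
    # better initial parameter that speeds up and stabilizes this loop
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(iters):
        th = (levels[:-1] + levels[1:]) / 2            # thresholds at midpoints
        cells = np.searchsorted(th, samples)           # cell index per sample
        new = np.array([samples[cells == k].mean() if (cells == k).any()
                        else levels[k] for k in range(n_levels)])
        done = np.abs(new - levels).max() < tol
        levels = new
        if done:
            break
    th = (levels[:-1] + levels[1:]) / 2
    mse = np.mean((samples - levels[np.searchsorted(th, samples)]) ** 2)
    return levels, mse

x = np.random.default_rng(3).normal(size=50_000)
levels, mse = lloyd_max(x, n_levels=8)
print(levels.round(3), round(mse, 5))
```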


Decombined Distributed Parallel VQ Codebook Generation Based on MapReduce (맵리듀스를 사용한 디컴바인드 분산 VQ 코드북 생성 방법)

  • Lee, Hyunjin
    • Journal of Digital Contents Society / v.15 no.3 / pp.365-371 / 2014
  • In the era of big data, algorithms built for the existing IT environment cannot run directly on a distributed architecture such as Hadoop, so new distributed algorithms that fit a distributed framework such as MapReduce are needed. Lloyd's algorithm, commonly used for vector quantization, has recently been implemented with MapReduce. In this paper, we propose a decombined distributed VQ codebook generation algorithm, built on a MapReduce-based distributed VQ codebook generation algorithm, to obtain results faster. Applying the proposed algorithm to big data showed higher performance than the conventional method.
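One Lloyd iteration expressed as a MapReduce job can be pictured as below; this is a plain-Python emulation with illustrative key/value conventions, not the paper's Hadoop code:

```python
# map: emit (nearest-codeword-id, (vector, 1)) per input vector;
# reduce: sum vectors and counts per key, then average to new codewords;
# a driver repeats the job until the codebook stabilizes.
import numpy as np
from collections import defaultdict

def map_phase(chunk, codebook):
    for x in chunk:
        k = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
        yield k, (x, 1)

def reduce_phase(pairs, codebook):
    acc = defaultdict(lambda: (0.0, 0))
    for k, (x, c) in pairs:
        s, n = acc[k]
        acc[k] = (s + x, n + c)
    new = codebook.copy()
    for k, (s, n) in acc.items():
        new[k] = s / n                     # centroid of cell k
    return new

rng = np.random.default_rng(4)
data = rng.normal(size=(5000, 2))
codebook = data[rng.choice(len(data), 8, replace=False)].copy()
for _ in range(10):                        # driver: one MapReduce job per round
    chunks = np.array_split(data, 4)       # stand-ins for HDFS input splits
    pairs = [p for ch in chunks for p in map_phase(ch, codebook)]
    codebook = reduce_phase(pairs, codebook)
```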

A Performance Improvement of GLCM Based on Nonuniform Quantization Method (비균일 양자화 기법에 기반을 둔 GLCM의 성능개선)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.2 / pp.133-138 / 2015
  • This paper presents a performance improvement of the gray level co-occurrence matrix (GLCM), which is widely used to analyze image texture, based on nonuniform quantization. The nonuniform quantization is computed with the Lloyd algorithm, a recursive technique that minimizes the mean square error. Quantizing the image nonuniformly into a small number of intensity levels reduces the dimension of the GLCM, which in turn reduces the computational load of generating the GLCM and of calculating texture parameters from it. The proposed method was applied to thirty 120×120-pixel images with 256 gray levels, analyzing texture via six parameters: angular second moment, contrast, variance, entropy, correlation, and inverse difference moment. The experimental results show that the proposed method needs less computation time and memory than the conventional 256-level GLCM without quantization. In particular, 16 gray levels obtained by nonuniform quantization give better texture-analysis performance than 48, 32, 12, or 8 levels.
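A compact sketch of this pipeline, assuming the standard Haralick feature definitions (the lloyd_quantize helper is illustrative):

```python
# Lloyd-quantize the image to a few nonuniform gray levels, then build a
# small GLCM and compute texture features from it.
import numpy as np

def lloyd_quantize(img, n_levels, iters=50):
    x = img.astype(float).ravel()
    lv = np.quantile(x, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(iters):                 # MSE-minimizing recursion
        th = (lv[:-1] + lv[1:]) / 2
        idx = np.searchsorted(th, x)
        for k in range(n_levels):
            if (idx == k).any():
                lv[k] = x[idx == k].mean()
    return np.searchsorted((lv[:-1] + lv[1:]) / 2, img.astype(float))

def glcm(q, n_levels, dx=1, dy=0):
    # co-occurrence counts of level pairs at the given pixel offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    m = np.zeros((n_levels, n_levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

img = np.random.default_rng(5).integers(0, 256, size=(120, 120))
q = lloyd_quantize(img, n_levels=16)       # 16 nonuniform levels, not 256
P = glcm(q, 16)                            # 16x16 matrix instead of 256x256
asm = (P ** 2).sum()                       # angular second moment
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()
entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
```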

Improved Spectral-reflectance(SR) Estimation Using Set of Principle Components Separately Organized for Each SR Population with Similar SRs (유사 분광반사율 모집단별로 구성된 주성분 집합을 이용한 개선된 분광반사율 추정)

  • 권오설; 이철희; 이호근; 하영호
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.2 / pp.11-19 / 2003
  • This paper proposes an algorithm that reduces the estimation error of surface spectral reflectance (SR) using a conventional 3-band RGB camera. In the proposed method, the estimation error is reduced by using adaptive principal components (PCs) for each color region. To build the adaptive PC sets, n SR populations are organized into n PC sets using the Lloyd quantizer design algorithm. The Macbeth ColorChecker provides the initial representative SR values for the total color population of 1485 Munsell color chips; the Munsell chips are divided into subsets, and a set of adaptive PCs is organized for each subset. Experiments show that the proposed method improves estimation performance compared with both of the two 3-band PCA methods and the 5-band Wiener method.
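The training side can be sketched as follows on stand-in data (the reflectance matrix and cluster count are hypothetical; the actual method seeds the Lloyd design with Macbeth ColorChecker values over the Munsell chips):

```python
# Partition a population of spectral reflectances with a Lloyd-style k-means,
# then keep a separate set of principal components per partition.
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)].copy()
    a = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        a = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for k in range(K):
            if (a == k).any():
                C[k] = X[a == k].mean(axis=0)
    return C, a

rng = np.random.default_rng(6)
srs = rng.random((1485, 31))               # stand-in for reflectances (31 bands)
centers, assign = kmeans(srs, K=6)         # Lloyd-style partition of SR space
pcs = {}
for k in range(6):                         # separate PC set per SR population
    sub = srs[assign == k]
    if len(sub) >= 3:
        mu = sub.mean(axis=0)
        _, _, Vt = np.linalg.svd(sub - mu, full_matrices=False)
        pcs[k] = (mu, Vt[:3])              # mean + first 3 PCs for this cluster
```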

Soft-Decision Based Quantization of the Multimedia Signal Considering the Outliers in Rate-Allocation and Distortion (이상 비트율 할당과 신호왜곡 문제점을 고려한 멀티미디어 신호의 연판정 양자화 방법)

  • Lim, Jong-Wook; Noh, Myung-Hoon; Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.286-293 / 2010
  • There are two major conventional quantization approaches: resolution-constrained quantization (RCQ) and entropy-constrained quantization (ECQ). Although RCQ works well at a fixed transmission rate, it produces distortion outliers because its cell sizes differ. ECQ, in contrast, constrains the cell size but produces rate outliers. We propose cell-size-constrained vector quantization (CCVQ), which improves the generalized Lloyd algorithm (GLA). The CCVQ algorithm makes a soft decision between RCQ and ECQ by using a flexible penalty measure that depends on the cell size. Although the proposed method slightly increases the overall mean distortion, it reduces the distortion outliers.
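The abstract does not define the penalty, so the sketch below is only one way to read the soft decision: an ECQ-style Lagrangian term scaled by a knob lam, with lam = 0 reducing to the plain GLA assignment.

```python
# Hedged sketch, not the paper's exact CCVQ: in the GLA assignment step,
# bias each codeword by a function of its current cell size, interpolating
# between plain RCQ (lam = 0) and ECQ-like behavior (larger lam).
import numpy as np

def ccvq_sketch(X, K, lam=0.05, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)].copy()
    counts = np.full(K, len(X) / K)                    # start from equal cells
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2) ** 2
        # ECQ-style bias: codewords with small cells get a larger penalty;
        # lam = 0 recovers the plain GLA / RCQ assignment
        penalty = -lam * np.log(counts / len(X) + 1e-12)
        a = (d + penalty[None]).argmin(axis=1)
        for k in range(K):
            if (a == k).any():
                C[k] = X[a == k].mean(axis=0)
        counts = np.bincount(a, minlength=K).astype(float)
    return C, a

X = np.random.default_rng(7).normal(size=(3000, 2))
codebook, labels = ccvq_sketch(X, K=16)
```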

Signatures Verification by Using Nonlinear Quantization Histogram Based on Polar Coordinate of Multidimensional Adjacent Pixel Intensity Difference (다차원 인접화소 간 명암차의 극좌표 기반 비선형 양자화 히스토그램에 의한 서명인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.5 / pp.375-382 / 2016
  • This paper presents signature verification using a nonlinear quantization histogram in polar coordinates, based on multi-dimensional adjacent-pixel intensity differences. The multi-dimensional adjacent-pixel intensity differences are the intensity differences between pairs of pixels in the horizontal, vertical, diagonal, and anti-diagonal directions around a reference pixel. The polar coordinates are obtained from the rectangular coordinates by pairing the horizontal with the vertical difference and the diagonal with the anti-diagonal difference, respectively. The nonlinear quantization histogram is then computed by nonuniformly quantizing the polar-coordinate values with the Lloyd algorithm, a recursive method. The polar-coordinate histogram of 4-directional intensity differences is used both to better capture the correlation between pixels and to reduce the computational load by decreasing the number of histogram bins. The nonlinear quantization both reflects the character of the intensity variations between pixels more faithfully and yields a low-level histogram. The proposed method was applied to verifying 90 signature images (3 persons × 30 signatures each) of 256×256 pixels, using the city-block, Euclidean, ordinal-value, and normalized cross-correlation matching measures. The experimental results show that the proposed method is superior to the linear quantization histogram, and that Euclidean distance is the best matching measure.
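A sketch of the feature pipeline under the reading above (helper names and bin counts are illustrative, not the paper's exact parameters):

```python
# 4-directional intensity differences around each pixel, paired into polar
# coordinates, then Lloyd-quantized (magnitude) into a short joint histogram.
import numpy as np

def lloyd_levels(x, n, iters=50):
    lv = np.quantile(x, (np.arange(n) + 0.5) / n)      # initial levels
    for _ in range(iters):
        th = (lv[:-1] + lv[1:]) / 2
        idx = np.searchsorted(th, x)
        for k in range(n):
            if (idx == k).any():
                lv[k] = x[idx == k].mean()             # centroid update
    return lv

img = np.random.default_rng(8).integers(0, 256, (256, 256)).astype(float)
c = img[1:-1, 1:-1]                                    # reference pixels
dh = img[1:-1, 2:] - c                                 # horizontal difference
dv = img[2:, 1:-1] - c                                 # vertical difference
dd = img[2:, 2:] - c                                   # diagonal difference
da = img[2:, :-2] - c                                  # anti-diagonal difference
# pair (dh, dv) and (dd, da) into polar coordinates
r = np.concatenate([np.hypot(dh, dv).ravel(), np.hypot(dd, da).ravel()])
ang = np.concatenate([np.arctan2(dv, dh).ravel(), np.arctan2(da, dd).ravel()])
lv = lloyd_levels(r, 16)                               # nonuniform magnitude levels
rbin = np.searchsorted((lv[:-1] + lv[1:]) / 2, r)      # 16 magnitude bins
abin = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8  # 8 uniform angle bins
hist = np.bincount(rbin * 8 + abin, minlength=128).astype(float)
hist /= hist.sum()                                     # feature vector for matching
```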

Sample-Adaptive Product Quantization and Design Algorithm (표본 적응 프러덕트 양자화와 설계 알고리즘)

  • 김동식; 박섭형
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.12B / pp.2391-2400 / 1999
  • Vector quantization (VQ) is an efficient data compression technique for low bit-rate applications, but its major disadvantage is an encoding complexity that increases dramatically with vector dimension and bit rate. Even though a modified VQ can reduce the encoding complexity, it is nearly impossible to implement such a VQ at a high bit rate or a large vector dimension because of the enormous memory required for the codebook and the very large training sequence (TS). To overcome this difficulty, this paper proposes a novel structurally constrained VQ for the high-bit-rate, large-vector-dimension case that attains VQ-level performance; it can also be extended to low bit-rate applications. The proposed scheme takes the form of a feed-forward adaptive quantizer with a short adaptation period, so we call it the sample-adaptive product quantizer (SAPQ). SAPQ provides a 2~3 dB improvement over Lloyd-Max scalar quantizers.
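The sample-adaptive idea can be illustrated minimally as follows (sizes and codebook construction are made up; the paper's design algorithm is not reproduced here):

```python
# Keep several candidate scalar codebooks; for each input block, pick the one
# that quantizes the whole block with least squared error, so the quantizer
# adapts every few samples at the cost of a little side information.
import numpy as np

def quantize_block(block, levels):
    th = (levels[:-1] + levels[1:]) / 2                # decision thresholds
    q = levels[np.searchsorted(th, block)]
    return q, float(np.sum((block - q) ** 2))

rng = np.random.default_rng(9)
L, dim = 8, 16                                         # levels per codebook, block size
# four candidate scalar codebooks tuned to different signal scales
codebooks = [np.sort(rng.normal(scale=s, size=L)) for s in (0.5, 1.0, 2.0, 4.0)]
x = rng.normal(size=dim) * rng.choice([0.5, 4.0])      # block with unknown scale
errs = [quantize_block(x, cb)[1] for cb in codebooks]
best = int(np.argmin(errs))                            # 2 bits of side info per block
xq, err = quantize_block(x, codebooks[best])
```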


Vector Quantization Using Cascaded Cauchy/Kohonen training (Cauchy/Kohonen 순차 결합 학습법을 사용한 벡터양자화)

  • Song, Geun-Bae; Han, Man-Geun; Lee, Haeng-Se
    • The KIPS Transactions:PartB / v.8B no.3 / pp.237-242 / 2001
  • Like the classical GLA, the Kohonen learning algorithm (KLA) approaches a solution of the error function by gradient descent, and can therefore become trapped in local optima. To overcome this problem of the KLA, we propose applying Cauchy learning, a form of simulated annealing. This method, however, has the drawback of slow learning. To improve on this, this paper proposes another learning method that cascades Cauchy learning and Kohonen learning. As a result, the proposed method overcomes the local-optimum problem, as Cauchy learning does, while shortening the learning time.
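A rough sketch of such a cascade (the schedule and constants are illustrative, not the authors' exact method):

```python
# A Cauchy-annealing phase whose heavy-tailed jumps help escape local optima,
# followed by plain Kohonen updates that converge quickly near the solution.
import numpy as np

def cascade_vq(train, K, anneal_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    C = train[rng.choice(len(train), K, replace=False)].astype(float)
    for t, x in enumerate(train):
        i = np.argmin(np.linalg.norm(C - x, axis=1))   # winner codeword
        alpha = 0.1 * (1 - t / len(train))             # decaying Kohonen gain
        C[i] += alpha * (x - C[i])                     # Kohonen step
        if t < anneal_steps:                           # Cauchy phase, early only
            T = 1.0 / (1 + t)                          # fast (Cauchy) cooling
            C[i] += 0.01 * T * rng.standard_cauchy(size=x.shape)
    return C

data = np.random.default_rng(10).normal(size=(10_000, 2))
codebook = cascade_vq(data, K=16)
```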
