• Title/Abstract/Keyword: 가중치적용 (weight application)


The design method for a vector codebook using a variable weight and employing an improved splitting method (개선된 미세분할 방법과 가변적인 가중치를 사용한 벡터 부호책 설계 방법)

  • Cho, Che-Hwang
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.4, pp.462-469, 2002
  • While conventional K-means algorithms use a fixed weight throughout all learning iterations when designing a vector codebook, the proposed method employs a variable weight across iterations. A weight value of two or more, outside the convergent region, is applied to obtain new codevectors in the initial learning iterations. To design a better codebook, the number of iterations that use the variable weight must be decreased as the initial weight value grows. To enhance the splitting method used to generate an initial codebook, we also propose a new scheme that reduces the error between a representative vector and the members of the training vectors: the representative vector with the maximum squared error is rejected, while the vector with the minimum error is split, which yields better initial codevectors.
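
As a rough illustration of the variable-weight update described in the abstract above, the sketch below moves each codevector toward its cluster centroid by a factor w that starts at two or more and later falls back to the ordinary K-means update (w = 1). The parameter names, the nearest-neighbour assignment step, and the simple iteration schedule are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def weighted_kmeans_codebook(train, codebook, iters=20, w0=2.0, weighted_iters=5):
    """Sketch of K-means codebook design with a variable update weight.

    w0 >= 1 is used for the first `weighted_iters` iterations and then the
    weight falls back to 1 (the ordinary centroid update). These parameter
    names are illustrative, not taken from the paper.
    """
    codebook = codebook.copy()
    for it in range(iters):
        # assign each training vector to its nearest codevector
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        w = w0 if it < weighted_iters else 1.0
        for k in range(len(codebook)):
            members = train[labels == k]
            if len(members) == 0:
                continue
            centroid = members.mean(axis=0)
            # w = 1 reproduces standard K-means; w > 1 overshoots toward the centroid
            codebook[k] = codebook[k] + w * (centroid - codebook[k])
    return codebook
```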

A Study on Quantitative Measurement of Metadata Quality for Journal Articles (학술지 기사에 대한 메타데이터 품질의 계량화 방법에 관한 연구)

  • Lee, Yong-Gu;Kim, Byung-Kyu
    • Journal of the Korean Society for Information Management, v.28 no.1, pp.309-326, 2011
  • Most metadata quality measurements employ simple techniques based on counting error records. This study presents a new quantitative measurement of metadata quality that uses advanced weighting schemes to overcome the limitations of existing techniques. Entropy, user tasks, and usage statistics were used to calculate the weights, which were then combined into integrated weights and applied to actual journal article metadata. Entropy weights were found to reflect the characteristics of the data itself, while user tasks identified the metadata elements required to satisfy users' information needs. The integrated weights produced balanced measures that were not skewed by the influence of error elements, indicating that the new method is suitable for the quantitative measurement of metadata quality.
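
One ingredient mentioned above, entropy-based weighting of metadata elements, can be sketched as follows; the element names and the normalization to a unit sum are illustrative assumptions, not the study's exact formulation.

```python
import math
from collections import Counter

def entropy_weights(records, elements):
    """Entropy-based weight per metadata element (illustrative sketch).

    records: list of dicts mapping element name -> value
    elements: element names to weight, e.g. ["title", "author", "year"]
    """
    weights = {}
    for el in elements:
        values = [r.get(el) for r in records if r.get(el)]
        counts = Counter(values)
        n = sum(counts.values())
        # Shannon entropy of the value distribution of this element
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values()) if n else 0.0
        weights[el] = entropy
    total = sum(weights.values()) or 1.0
    return {el: w / total for el, w in weights.items()}  # normalize to sum to 1
```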

Allocation of Water Supplied by Multi-Purpose Dam Using the Estimate of Weighting Factors (가중치산정을 통한 다목적댐 용수의 배분 방안)

  • Yi, Choong-Sung;Choi, Seung-An;Shim, Myung-Pil;Jung, Kwan-Sue
    • Journal of Korea Water Resources Association, v.37 no.8, pp.663-674, 2004
  • In this study, a principle of water allocation based on efficiency, equity, and sustainability is proposed, and weighting factors are estimated from sectoral factors and regional factors. The former represent relative weights among water uses, and the latter represent the physical characteristics of water demand sites. The AHP (Analytic Hierarchy Process) is applied to estimate the sectoral factors, while the regional factors are obtained by compounding regional-characteristic factors and regional-scale factors that reflect socioeconomic statistics. Using these weighting factors, water allocation rules for dams are developed and applied to Andong dam, which supplies water to parts of Busan, Daegu, and Goryeong Gun, under a water-deficit scenario. The results show that allocation by priorities assigns the entire water shortage to the lowest-ranked water sectors or regions, while allocation by relative weighting factors disperses the burden of the shortage across all sectors and regions.
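
A minimal sketch of the AHP step used for the sectoral factors is given below, assuming a reciprocal pairwise comparison matrix and the principal-eigenvector method; the example matrix and its sector interpretation are hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Estimate AHP weights from a reciprocal pairwise comparison matrix.

    Uses the principal eigenvector; the example matrix below is made up
    for illustration (e.g. municipal vs. industrial vs. agricultural use).
    """
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, eigvals.real.argmax()].real
    return principal / principal.sum()   # weights sum to 1

# Hypothetical 3x3 comparison of water-use sectors
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))  # relative sectoral weights
```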

Automatic Text Categorization by using Normalized Term Frequency Weighting (정규화 용어빈도가중치에 의한 자동문서분류)

  • 김수진;김민수;백장선;박혁로
    • Proceedings of the Korean Information Science Society Conference, 2003.04c, pp.510-512, 2003
  • In this paper, we define a normalized term frequency weight based on the Box-Cox transformation as a term weighting method for automatic text categorization and apply it to document classification. The Box-Cox transformation is a statistical transformation used to bring data toward a normal distribution; we adapt it to propose a new term frequency weighting scheme. A term that occurs too often or too rarely in a document becomes less important, which means that term importance can be modeled as a normal distribution over frequency. Moreover, compared with conventional term frequency weighting formulas, the normalized weighting method computes the weight differently for each term, so it is more general than fixed schemes such as the logarithm or the square root. In experiments on 8,000 newspaper articles divided into four groups, the normalized term frequency weighting achieved superior classification accuracy in every case, which shows that the proposed method is valid.
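
A minimal sketch of a Box-Cox style normalized term-frequency weight is given below; it relies on scipy's maximum-likelihood estimate of the Box-Cox lambda, so each term receives its own transformation rather than a fixed log or square root. The function name and the z-score normalization are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np
from scipy import stats

def normalized_tf_weights(tf_counts):
    """Box-Cox style normalized term-frequency weights (illustrative sketch).

    tf_counts: 1-D array of a term's frequencies across documents (all > 0).
    scipy estimates the Box-Cox lambda per term by maximum likelihood,
    so each term gets its own transformation.
    """
    tf = np.asarray(tf_counts, dtype=float)
    transformed, lam = stats.boxcox(tf)                      # per-term lambda
    z = (transformed - transformed.mean()) / (transformed.std() + 1e-12)
    return z, lam

# Example: raw frequencies of one term across six documents
weights, lam = normalized_tf_weights([1, 2, 2, 3, 10, 25])
```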


Dynamic Weight Round Robin Scheduling Algorithm with Load (부하를 고려한 동적 가중치 기반 라운드로빈 스케쥴링 알고리즘)

  • Kim, Sung;Kim, Kyong-Hoon;Ryu, Jae-Sang;Nam, Ji-Seung
    • Proceedings of the Korea Information Processing Society Conference, 2001.10b, pp.1295-1298, 2001
  • We propose a dynamic weight-based round robin scheduling algorithm for dynamic load balancing of servers that provide multimedia streaming services. The conventional weighted round robin algorithm assigns weights using only each server's processing capacity, so it suffers from dynamic load imbalance when requests surge. To resolve this, the proposed dynamic weighted round robin algorithm assigns weights using not only each server's processing capacity but also its dynamic load, so it adapts well to load imbalance and keeps the load balanced. The proposed algorithm computes a weight from each server's processing capacity and applies it to the server's dynamically changing load value. As a result, the dynamic load imbalance problem was resolved and finer-grained load control became possible.
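
The core idea, scaling a server's static capacity weight by its current load, can be sketched roughly as follows; the field names and the specific scaling formula are assumptions for illustration, not the paper's algorithm.

```python
def pick_server(servers):
    """Dynamic weighted selection sketch.

    servers: list of dicts with 'name', 'capacity' (static) and 'load'
    (current, dynamic). The static capacity weight is scaled down by the
    current load, so a busy server receives proportionally fewer requests.
    """
    def effective_weight(s):
        return s["capacity"] / (1.0 + s["load"])
    return max(servers, key=effective_weight)

servers = [
    {"name": "s1", "capacity": 100, "load": 80},
    {"name": "s2", "capacity": 60,  "load": 10},
]
print(pick_server(servers)["name"])  # "s2": lower capacity but far less loaded
```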


Minimum Spanning Tree Algorithm for Deletion of Maximum Weight Edge within a Cycle (한 사이클 내에서 최대 가중치 간선을 제거하기 위한 최소 신장트리 알고리즘)

  • Choi, Myeong-Bok;Han, Tae-Yong;Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.2, pp.35-42, 2014
  • This paper suggests a method that obtains the minimum spanning tree (MST) more easily and rapidly than existing approaches. The suggested algorithm first simplifies the graph by reducing its number of edges: when the degree of the vertices involved is 3 or more, the maximum-weight edge is eliminated, which yields a reduced edge population. It then repeatedly eliminates the maximum-weight edge within a cycle. When the proposed population-reduction and maximum-weight-edge deletion algorithms were applied to 9 different graphs, the procedure was executed once per remaining cycle and the MST was obtained easily. The method reduced the number of cycles by 66% and obtained the MST in at least 2 and at most 8 iterations, solely by deleting maximum-weight edges.
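
The "delete the maximum-weight edge on a cycle" idea can be sketched as a reverse-delete procedure, since an edge lies on a cycle exactly when removing it keeps the graph connected; the implementation below is a generic illustration under that reading, not the paper's exact algorithm.

```python
from collections import defaultdict, deque

def mst_reverse_delete(n, edges):
    """Reverse-delete sketch: repeatedly drop the heaviest edge that sits
    on a cycle. Assumes a connected graph with vertices 0..n-1 and edges
    given as (u, v, weight) tuples.
    """
    kept = sorted(edges, key=lambda e: e[2], reverse=True)

    def connected(edge_list):
        adj = defaultdict(list)
        for u, v, _ in edge_list:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {0}, deque([0])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return len(seen) == n

    result = list(kept)
    for e in kept:                       # heaviest edges first
        trial = [f for f in result if f is not e]
        if connected(trial):             # e was on a cycle, safe to delete
            result = trial
    return result

# Example: square with a heavy diagonal
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4), (0, 2, 5)]
print(mst_reverse_delete(4, edges))      # the three edges of the MST
```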

Convergence Properties of an Adaptive Learning Algorithm Employing a Ramp Threshold Function (Ramp 임계 함수를 적용한 적응 학습 알고리즘의 수렴성)

  • 박소희;조제황
    • Proceedings of the Korea Institute of Convergence Signal Processing, 2000.08a, pp.121-124, 2000
  • By applying a ramp threshold function to the output of a single-layer neural network whose weights are adjusted by an adaptive learning algorithm, we obtain the stationary points of the weights for zero-mean Gaussian random vector inputs and derive the corresponding adaptive learning algorithm.
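
A generic sketch of a single-layer unit with a ramp (clipped linear) output and an LMS-style weight update is shown below; it is not the paper's derivation, and the step size, saturation level, and update rule are illustrative assumptions.

```python
import numpy as np

def ramp(x, a=1.0):
    """Ramp threshold: linear in [-a, a], saturating outside (illustrative)."""
    return np.clip(x, -a, a)

def adaptive_step(w, x, d, mu=0.01, a=1.0):
    """One LMS-style weight update for a single-layer unit with a ramp output.

    x is a zero-mean Gaussian input vector and d the desired response;
    the gradient of the ramp is 1 in the linear region and 0 when saturated.
    """
    y = ramp(w @ x, a)
    e = d - y
    grad = 1.0 if abs(w @ x) <= a else 0.0
    return w + mu * e * grad * x
```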


The Comparison of Estimation Methods for the Missing Rainfall Data with spatio-temporal Variability (시공간적 변동성을 고려한 강우의 결측치 추정 방법의 비교)

  • Kim, Byung-Sik;Noh, Hui-Seong;Kim, Hung-Soo
    • Journal of Wetlands Research, v.13 no.2, pp.189-197, 2011
  • This paper reviews the application of data-driven methods, distance-weighted methods (IDWM, IEWM, CCWM, ANN), and a radar-data-based method for estimating missing rainfall data. To evaluate these methods, statistics were compared using radar and station rainfall data from the Imjin River basin. The RMSE values calculated for CCWM and ANN ranged from 1.4 to 1.79 mm, while the RMSE values for estimates based on radar rainfall data ranged from 0.05 to 2.26 mm. Radar rainfall data capture spatial characteristics better than station rainfall data. The results suggest that estimates based on radar data can improve the estimation of missing rainfall data.
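
One of the distance-weighted methods mentioned above, inverse distance weighting (IDWM), can be sketched as follows; the coordinates, power parameter, and function signature are illustrative assumptions.

```python
import numpy as np

def idw_estimate(target_xy, station_xy, station_rain, power=2.0):
    """Inverse distance weighting (IDWM) sketch for a missing rainfall value.

    target_xy: (x, y) of the gauge with the missing record;
    station_xy: (n, 2) coordinates of neighbouring gauges;
    station_rain: their observed rainfall at the same time step.
    """
    station_xy = np.asarray(station_xy, float)
    station_rain = np.asarray(station_rain, float)
    d = np.linalg.norm(station_xy - np.asarray(target_xy, float), axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power       # closer gauges weigh more
    return float((w * station_rain).sum() / w.sum())

# Hypothetical example: three gauges around the missing one
print(idw_estimate((0, 0), [(1, 0), (0, 2), (3, 3)], [10.0, 8.0, 4.0]))
```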

Latent Semantic Indexing Analysis of K-Means Document Clustering for Changing Index Terms Weighting (색인어 가중치 부여 방법에 따른 K-Means 문서 클러스터링의 LSI 분석)

  • Oh, Hyung-Jin;Go, Ji-Hyun;An, Dong-Un;Park, Soon-Chul
    • The KIPS Transactions: Part B, v.10B no.7, pp.735-742, 2003
  • In information retrieval systems, document clustering provides user convenience and visual effects by rearranging retrieved documents according to specific topics. In this paper, we cluster documents using the K-Means algorithm and present the effect of the index term weighting scheme on the clustering. To verify the experiment, we apply a Latent Semantic Indexing approach to visualize and analyze the clustering results in 2-dimensional space. Experimental results show that when local weighting, global weighting, and a normalization factor are applied together, the clusters are denser in 2-dimensional space than with similar or identical weighting schemes. In particular, the combination of logarithmic local and global weighting is noticeable.
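
A common instance of the local x global x normalization scheme discussed above (log local weight, log-based global weight, cosine normalization) can be sketched as follows; the exact weighting formulas used in the paper may differ.

```python
import math
from collections import Counter

def weighted_doc_vectors(docs):
    """Local x global x normalization term weighting sketch for clustering.

    docs: list of token lists. Local weight: 1 + log(tf); global weight:
    log(N / df); each document vector is cosine-normalized afterwards.
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        vec = {t: (1 + math.log(f)) * math.log(n_docs / df[t]) for t, f in tf.items()}
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
        weighted.append({t: v / norm for t, v in vec.items()})  # normalization factor
    return weighted
```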

Analysis and Implementation of Speech/Music Classification for 3GPP2 SMV Codec Employing SVM Based on Discriminative Weight Training (SMV코덱의 음성/음악 분류 성능 향상을 위한 최적화된 가중치를 적용한 입력벡터 기반의 SVM 구현)

  • Kim, Sang-Kyun;Chang, Joon-Hyuk;Cho, Ki-Ho;Kim, Nam-Soo
    • The Journal of the Acoustical Society of Korea, v.28 no.5, pp.471-476, 2009
  • In this paper, we apply discriminative weight training to support vector machine (SVM) based speech/music classification for the selectable mode vocoder (SMV) of 3GPP2. In our approach, the speech/music decision rule is expressed as an SVM discriminant function that incorporates optimally weighted SMV features obtained by a minimum classification error (MCE) method; unlike previous work, a different weight is assigned to each SMV feature. The performance of the proposed approach is evaluated under various conditions and yields better results than the conventional SVM-based scheme.
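
A rough sketch of MCE-style discriminative training of per-feature weights for a linear decision function is shown below; the plain gradient update, the sigmoid smoothing, and the variable names are illustrative assumptions rather than the paper's formulation, which applies the weights inside the SVM discriminant.

```python
import numpy as np

def mce_feature_weights(X, y, v, b, steps=200, lr=0.05):
    """Discriminative training of per-feature weights (MCE-style sketch).

    Decision function: f(x) = v . (lam * x) + b, where v, b come from an
    already-trained linear classifier and lam is a per-feature weight
    vector adjusted to reduce a sigmoid-smoothed classification error.
    X: (n, d) features, y: labels in {-1, +1}.
    """
    lam = np.ones(X.shape[1])
    for _ in range(steps):
        f = (X * lam) @ v + b                      # discriminant values
        d = -y * f                                 # misclassification measure
        s = 1.0 / (1.0 + np.exp(-d))               # smoothed error
        # d(d)/d(lam_j) = -y * v_j * x_j
        grad = ((s * (1 - s) * -y)[:, None] * X * v[None, :]).mean(axis=0)
        lam -= lr * grad
    return lam
```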