Gradient Descent Approach for Value-Based Weighting

  • Chang-Hwan Lee (Department of Information and Communications, Dongguk University)
  • Joo-Hyun Bae (Department of Information and Communications, Dongguk University)
  • Received : 2010.05.18
  • Accepted : 2010.07.30
  • Published : 2010.10.31

Abstract

Naive Bayesian learning has been widely used in data mining and performs surprisingly well in many applications. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. Whereas current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning, and we develop new gradient-descent-based methods for both value weighting and feature weighting in naive Bayesian learning. The performance of the proposed methods was compared with the attribute weighting method and standard naive Bayesian learning, and the value weighting method performed better in most cases.
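The idea in the abstract can be sketched in code: each (attribute, value) pair gets its own weight, used as an exponent on the conditional probability P(value | class), and the weights are tuned by gradient descent. The following is a minimal, illustrative Python sketch, assuming a softmax log-loss training objective (the abstract does not specify the paper's exact loss function); all function names are our own.

```python
import math
from collections import defaultdict

def train_value_weighted_nb(X, y, epochs=200, lr=0.1):
    """Value-weighted naive Bayes (illustrative sketch).

    Each (attribute, value) pair gets its own weight, used as an
    exponent on P(value | class); the weights are learned by gradient
    descent on a softmax log-loss (an assumed objective)."""
    classes = sorted(set(y))
    n_attr = len(X[0])

    # Count attribute values per class and collect each attribute's domain.
    counts = {c: [defaultdict(int) for _ in range(n_attr)] for c in classes}
    class_n = defaultdict(int)
    domains = [set() for _ in range(n_attr)]
    for xi, yi in zip(X, y):
        class_n[yi] += 1
        for a, v in enumerate(xi):
            counts[yi][a][v] += 1
            domains[a].add(v)

    # Laplace-smoothed log-probabilities: log P(c) and log P(v | c).
    log_prior = {c: math.log(class_n[c] / len(y)) for c in classes}
    log_cond = {c: [{v: math.log((counts[c][a][v] + 1) /
                                 (class_n[c] + len(domains[a])))
                     for v in domains[a]}
                    for a in range(n_attr)]
                for c in classes}

    # One weight per (attribute, value); all 1.0 recovers plain naive Bayes.
    w = [{v: 1.0 for v in domains[a]} for a in range(n_attr)]

    def scores(xi):
        # log P(c) + sum_a w[a][x_a] * log P(x_a | c)
        return {c: log_prior[c] + sum(w[a][v] * log_cond[c][a][v]
                                      for a, v in enumerate(xi))
                for c in classes}

    def softmax(s):
        m = max(s.values())
        z = sum(math.exp(v - m) for v in s.values())
        return {c: math.exp(v - m) / z for c, v in s.items()}

    # Stochastic gradient descent on -log softmax(score)[true class].
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = softmax(scores(xi))
            for a, v in enumerate(xi):
                grad = sum((p[c] - (1.0 if c == yi else 0.0)) *
                           log_cond[c][a][v] for c in classes)
                w[a][v] -= lr * grad
    return classes, w, scores

def predict(classes, scores, xi):
    """Classify xi (each value must have appeared in training)."""
    s = scores(xi)
    return max(classes, key=s.get)
```

Because the weights start at 1.0, the model begins as standard naive Bayes and gradient descent only departs from it where doing so lowers the training loss; a per-attribute (rather than per-value) weight vector gives the coarser feature-weighting variant the abstract also mentions.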

The naive Bayesian algorithm is applied in many areas of data mining and shows good performance. However, this learning method assumes that all attributes carry equal weight, and this assumption sometimes degrades accuracy. To address this problem, a number of studies have proposed adjusting the attribute weights in naive Bayesian learning. In this study, we go a step beyond the existing approach of weighting attributes and investigate a new approach that assigns a weight to each attribute value. To compute these attribute-value weights, we propose a method that calculates them using gradient descent. The proposed algorithm was compared with the attribute weighting approach on a number of datasets and was found to provide better performance in most cases.

References

  1. Claire Cardie and Nicholas Howe. Improving minority class prediction using case-specific feature weights. In Proceedings of the Fourteenth International Conference on Machine Learning, pp. 57-65, 1997.
  2. Peter Clark and Robin Boswell. Rule induction with CN2: some recent improvements. In EWSL-91: Proceedings of the European Working Session on Learning on Machine Learning, pp. 151-163, 1991.
  3. Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, NY, USA, 1991.
  4. Pedro Domingos and Michael Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2-3), 1997.
  5. U. Fayyad and K. Irani. Multi-interval discretization of continuous-valued attributes for classification learning. Proceedings of Thirteenth International Joint Conference on Artificial Intelligence. Morgan Kaufmann, 1993.
  6. Thomas Gartner and Peter A. Flach. WBCSVM: weighted Bayesian classification based on support vector machines. In Proceedings of the Eighteenth International Conference on Machine Learning, 2001.
  7. Mark Hall. A decision tree-based attribute weighting filter for naive Bayes. Knowledge-Based Systems, 20(2), 2007.
  8. P. Henrici. Two remarks on the Kantorovich inequality. American Mathematical Monthly, 68:904-906, 1961. https://doi.org/10.2307/2311698
  9. Pat Langley and Stephanie Sage. Induction of selective Bayesian classifiers. In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 399-406, 1994.
  10. Ron Kohavi and George H. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997. https://doi.org/10.1016/S0004-3702(97)00043-X
  11. S. Kullback and R. A. Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86, 1951. https://doi.org/10.1214/aoms/1177729694
  12. C. Merz, P. Murphy, and D. Aha. UCI repository of machine learning databases. 1997.
  13. J. Ross Quinlan. C4.5: programs for machine learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1993.
  14. C. A. Ratanamahatana and D. Gunopulos. Feature selection for the naive Bayesian classifier using decision trees. Applied Artificial Intelligence, 17(5-6):475-487, 2003. https://doi.org/10.1080/713827175
  15. Dietrich Wettschereck, David W. Aha, and Takao Mohri. A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artificial Intelligence Review, 11:273, 1997. https://doi.org/10.1023/A:1006593614256
  16. Harry Zhang and Shengli Sheng. Learning weighted naive bayes with accurate ranking. In ICDM '04: Proceedings of the Fourth IEEE International Conference on Data Mining, 2004.