Learning Reference Vectors by the Nearest Neighbor Network

  • Published: 1994.07.01

Abstract

The nearest neighbor classification rule is widely used because it is simple and its error rate is asymptotically less than twice the Bayes theoretical minimum error. However, the method basically uses the whole set of training patterns as reference vectors, so both storage and classification time increase as the number of training patterns grows. LVQ (Learning Vector Quantization) resolves this problem by training the reference vectors instead of storing all the training patterns. But LVQ is a heuristic algorithm with no theoretical background: it has no terminating condition and requires many iterations to reach a meaningful result. This paper proposes a new training method for the reference vectors that minimizes a given error function. The nearest neighbor network, a network version of the nearest neighbor classification rule, is proposed. The network is functionally identical to the nearest neighbor classification rule, and the reference vectors are represented by the weights between the nodes. The network is trained to minimize the error function with respect to the weights by the steepest descent method. The learning algorithm is derived, and it is shown that the proposed method can adjust more reference vectors than LVQ in each iteration. Experiments showed that the proposed method requires fewer iterations and yields a lower error rate than LVQ2.
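The abstract does not give the paper's actual error function or update equations, so the following is only a minimal sketch of the general idea: reference vectors adjusted by steepest descent on a differentiable error, with soft nearest-neighbor weights so that many reference vectors (not just the single nearest one, as in plain LVQ) receive an update from each training pattern. The function name `train_reference_vectors` and the parameters `lr`, `beta`, and `epochs`, as well as the squared-distance soft-min error, are assumptions made for illustration.

```python
# Sketch only: gradient-based reference vector training under assumed details,
# not the paper's exact formulation.
import numpy as np

def train_reference_vectors(X, y, refs, ref_labels, lr=0.05, beta=5.0, epochs=50):
    """Adjust reference vectors (rows of `refs`) by steepest descent.

    Each training pattern x contributes a gradient to every reference vector
    through soft nearest-neighbor weights, so many reference vectors move per
    iteration, in contrast to LVQ-style single-winner updates.
    """
    for _ in range(epochs):
        for x, label in zip(X, y):
            d = np.sum((refs - x) ** 2, axis=1)      # squared distances to all references
            w = np.exp(-beta * d)
            w /= w.sum()                              # soft nearest-neighbor weights
            sign = np.where(ref_labels == label, 1.0, -1.0)
            # Pull same-class references toward x and push other-class references
            # away, weighted by how close each reference is to x.
            refs += lr * (sign * w)[:, None] * (x - refs)
    return refs
```

With a larger `beta` the soft weights concentrate on the nearest reference and the update approaches an LVQ-like single-winner rule; smaller values spread the adjustment over more reference vectors per iteration.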

Keywords