• Title/Summary/Keyword: Multilayer Perceptrons


Increasing Output Nodes for Performance Improvement of Multilayer Perceptrons (다층퍼셉트론의 성능향상을 위한 출력노드 수 증가)

  • Oh, Sang-Hoon
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.13-15 / 2006
  • When we use the multilayer perceptron model for pattern classification problems, we allocate one output node to each class. In this paper, we increase the number of output nodes per class and investigate the performance of multilayer perceptrons through simulations of isolated-word recognition problems (see the sketch below).

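A minimal, hedged NumPy sketch of the idea: a one-hidden-layer MLP whose output layer allocates M nodes to every class, with the class score taken as the sum of that class's nodes. The network sizes, M = 4, and the sum-based decision rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(X, W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)                      # hidden activations
    Z = H @ W2 + b2                               # raw output scores
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)       # softmax over all output nodes

def class_scores(Y, n_classes, nodes_per_class):
    # Collapse the M output nodes allocated to each class into one score.
    return Y.reshape(len(Y), n_classes, nodes_per_class).sum(axis=2)

n_in, n_hidden, n_classes, M = 8, 16, 3, 4        # M output nodes per class (assumed)
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, n_classes * M))
b2 = np.zeros(n_classes * M)

X = rng.normal(size=(5, n_in))
Y = mlp_forward(X, W1, b1, W2, b2)
print(class_scores(Y, n_classes, M).argmax(axis=1))   # predicted classes
```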

On the Noise Robustness of Multilayer Perceptrons (다층퍼셉트론의 잡음 강건성)

  • Oh, Sang-Hoon
    • Proceedings of the Korea Contents Association Conference / 2003.11a / pp.213-217 / 2003
  • In this paper, we analyze the noise robustness of MLPs (multilayer perceptrons). As a preprocessing stage for MLPs to improve noise robustness, we also consider ICA (independent component analysis) and PCA (principal component analysis). After analyzing the noise reduction effect of PCA and ICA, we verify the noise robustness of MLPs through handwritten-digit recognition simulations (the PCA half is sketched below).

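A minimal sketch of the PCA half of the preprocessing idea, assuming the clean data lie near a low-dimensional subspace: project noisy inputs onto the leading principal components and reconstruct. The low-rank data model, noise level, and number of retained components are assumptions; ICA is omitted to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)

latent = rng.normal(size=(500, 10))               # data confined to a 10-dim subspace
clean = latent @ rng.normal(size=(10, 64))
noisy = clean + rng.normal(scale=0.5, size=clean.shape)

mu = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mu, full_matrices=False)
P = Vt[:10].T                                     # top-10 principal directions (64 x 10)

denoised = (noisy - mu) @ P @ P.T + mu            # project and reconstruct

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print("MSE vs clean before PCA:", mse(noisy, clean))
print("MSE vs clean after  PCA:", mse(denoised, clean))
```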

Performance Improvement of Multilayer Perceptrons with Increased Output Nodes (다층퍼셉트론의 출력 노드 수 증가에 의한 성능 향상)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.9 no.1 / pp.123-130 / 2009
  • When we apply MLPs (multilayer perceptrons) to pattern classification problems, we generally allocate one output node to each class, and the index of the output node denotes the class. In contrast, this paper proposes increasing the number of output nodes per class to improve the performance of MLPs. As theoretical background, we derive the misclassification probability in two-class problems with additional outputs, under the assumption that the two classes are equally probable and the outputs are uniformly distributed within each class. Simulations of 50-isolated-word recognition also show the effectiveness of our method (a Monte Carlo illustration follows below).
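
A hedged Monte Carlo illustration of the setting described above: two equiprobable classes with M output nodes each and outputs uniform on [0, 1]. The per-node bias d toward the correct class and the summed-output decision rule are modeling assumptions for illustration; the paper's closed-form derivation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def misclassification_rate(M, d, trials=200_000):
    # Each of the correct class's M nodes is uniform on [0, 1] plus a bias d;
    # the wrong class's M nodes are plain uniform. Decide by summed output.
    correct = rng.uniform(0, 1, (trials, M)).sum(axis=1) + M * d
    wrong = rng.uniform(0, 1, (trials, M)).sum(axis=1)
    return float(np.mean(wrong > correct))

for M in (1, 2, 4, 8):
    print(f"M={M}: estimated misclassification = {misclassification_rate(M, 0.1):.4f}")
# The per-node advantage accumulates across M nodes while the uniform noise
# averages out, so the estimated error falls as M grows.
```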

Optimal Learning Rates in Gradient Descent Training of Multilayer Perceptrons (다층퍼셉트론의 강하 학습을 위한 최적 학습률)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.4 no.3 / pp.99-105 / 2004
  • This paper proposes optimal learning rates for the gradient descent training of multilayer perceptrons: a separate learning rate for the weights associated with each neuron, and a separate one for assigning the virtual hidden targets associated with each training pattern. The effectiveness of the proposed learning rates was demonstrated on handwritten-digit recognition and isolated-word recognition tasks, and very fast learning convergence was obtained (a per-neuron sketch follows below).

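A minimal sketch of per-neuron learning rates in plain gradient descent: each neuron's fan-in weight vector gets its own step size, here eta normalized by that neuron's gradient magnitude. This normalization rule is an illustrative stand-in, not the optimal rates derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(200, 4))
t = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)   # toy binary target

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
eta = 0.1

def per_neuron_step(W, G, eta, eps=1e-8):
    norms = np.linalg.norm(G, axis=0) + eps            # one norm per neuron (column)
    return W - (eta / norms) * G

for epoch in range(300):
    H = np.tanh(X @ W1 + b1)
    y = 1 / (1 + np.exp(-(H @ W2 + b2)))               # sigmoid output
    e = y - t                                          # MSE error signal
    gW2 = H.T @ e / len(X); gb2 = e.mean(axis=0)
    dH = (e @ W2.T) * (1 - H ** 2)                     # backpropagated hidden error
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 = per_neuron_step(W2, gW2, eta); b2 -= eta * gb2
    W1 = per_neuron_step(W1, gW1, eta); b1 -= eta * gb1

H = np.tanh(X @ W1 + b1)
y = 1 / (1 + np.exp(-(H @ W2 + b2)))
print("final MSE:", float(np.mean((y - t) ** 2)))
```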

An Analysis of Noise Robustness for Multilayer Perceptrons and Its Improvements (다층퍼셉트론의 잡음 강건성 분석 및 향상 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.9 no.1 / pp.159-166 / 2009
  • In this paper, we analyze the noise robustness of MLPs (multilayer perceptrons) by deriving the probability density function (p.d.f.) of the output nodes under additive input noise, and the misclassification ratio as an integral of that p.d.f. We also propose linear preprocessing methods to improve noise robustness, considering ICA (independent component analysis) and PCA (principal component analysis) as preprocessing stages for MLPs. After analyzing the noise reduction effect of PCA and ICA from the viewpoint of SNR (signal-to-noise ratio), we verify the preprocessing effects through simulations of handwritten-digit recognition problems (an SNR sketch follows below).
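
A hedged sketch of the SNR viewpoint mentioned above: measure the signal-to-noise ratio before and after projecting onto a PCA subspace. The rank-12 signal model, noise level, and retained dimension are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

signal = rng.normal(size=(1000, 12)) @ rng.normal(size=(12, 64))   # rank-12 signal
noise = rng.normal(scale=0.7, size=signal.shape)
noisy = signal + noise

_, _, Vt = np.linalg.svd(noisy - noisy.mean(axis=0), full_matrices=False)
PPt = Vt[:12].T @ Vt[:12]                  # projector onto the top-12 subspace

def snr_db(sig, noi):
    return 10 * np.log10(np.sum(sig ** 2) / np.sum(noi ** 2))

print("SNR before PCA: %.2f dB" % snr_db(signal, noise))
# Most noise energy lies outside the retained subspace and is discarded.
print("SNR after  PCA: %.2f dB" % snr_db(signal @ PPt, noise @ PPt))
```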

Local-step Optimization in Online Update Learning of Multilayer Perceptrons (다층신경망을 위한 온라인방식 학습의 개별학습단계 최적화 방법)

  • Lee, Tae-Seung; Choi, Ho-Jin
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.700-702 / 2004
  • A local-step optimization method is proposed to supplement global-step optimization methods, which adopt an online update mode for the internal weights and use the error energy as the stopping criterion when training multilayer perceptrons (MLPs). The method is applied to standard online error backpropagation (EBP), and its performance is evaluated on a speaker verification system (a per-pattern line-search sketch follows below).

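One plausible reading of the local-step idea, sketched below: during online error backpropagation, each pattern's update tries a few candidate step sizes and commits the one that most reduces that pattern's error. A single sigmoid layer is used for brevity, and the three-candidate line search is an assumption, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

W = rng.normal(0, 0.5, (4, 1))                        # one sigmoid layer for brevity
X = rng.normal(size=(100, 4))
t = (X @ rng.normal(size=(4, 1)) > 0).astype(float)

def pattern_error(W, x, tk):
    y = 1 / (1 + np.exp(-(x @ W)))
    return float(np.sum((y - tk) ** 2)), y

eta = 0.2
for epoch in range(5):
    for x, tk in zip(X, t):
        x = x.reshape(1, -1); tk = tk.reshape(1, -1)
        _, y = pattern_error(W, x, tk)
        g = x.T @ ((y - tk) * y * (1 - y))            # online EBP gradient
        # Local-step optimization: keep the best of three candidate steps.
        trials = [(pattern_error(W - s * g, x, tk)[0], s)
                  for s in (0.5 * eta, eta, 2.0 * eta)]
        best_err, best_s = min(trials)
        W = W - best_s * g

print("avg training error:", pattern_error(W, X, t)[0] / len(X))
```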

Classification of Imbalanced Data Using Multilayer Perceptrons (다층퍼셉트론에 의한 불균형 데이터의 학습 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.141-148 / 2009
  • Recently, there have been many research efforts focused on imbalanced-data classification problems, since they are pervasive but hard to solve. Approaches to imbalanced-data problems can be categorized into data-level approaches using re-sampling, algorithmic-level approaches using cost functions, and ensembles of basic classifiers for performance improvement. As an algorithmic-level approach, this paper proposes using multilayer perceptrons with higher-order error functions. These error functions intensify the training of minority-class patterns and weaken the training of majority-class patterns. Mammography and thyroid data sets are used to verify the superiority of the proposed method over other methods such as the mean-squared-error, two-phase, and threshold-moving methods (a higher-order error sketch follows below).
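
A minimal sketch of a higher-order error function, E = |t - y|^n / n, whose error signal |t - y|^(n-1) emphasizes poorly fitted (typically minority-class) patterns far more than MSE does. The specific form and the choice n = 4 are illustrative assumptions, not necessarily the functions used in the paper.

```python
import numpy as np

def error_and_delta(y, t, n):
    e = np.abs(t - y)
    E = np.mean(e ** n) / n
    delta = -np.sign(t - y) * e ** (n - 1)   # per-pattern error signal dE/dy
    return E, delta

y = np.array([0.9, 0.8, 0.1])    # two well-fitted patterns, one badly fitted
t = np.array([1.0, 1.0, 1.0])

for n in (2, 4):
    _, delta = error_and_delta(y, t, n)
    print(f"n={n}: |delta| = {np.abs(delta).round(4)}")
# With n=4 the badly fitted third pattern dominates the update, which is the
# intended intensification of minority-class training.
```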

A New Hidden Error Function for Training of Multilayer Perceptrons (다층 퍼셉트론의 층별 학습 가속을 위한 중간층 오차 함수)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.5 no.6 / pp.57-64 / 2005
  • LBL (layer-by-layer) algorithms have been proposed to accelerate the training of MLPs (multilayer perceptrons). In LBL algorithms, each layer needs its own error function for optimization, and the error function for the hidden layer has a great effect on performance. Accordingly, this paper proposes a new hidden-layer error function to improve the performance of LBL algorithms for MLPs. The hidden-layer error function is derived from the mean squared error of the output layer. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks, and very fast learning convergence was obtained (an LBL sketch follows below).

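A hedged sketch of layer-by-layer training: the output layer is solved exactly by least squares given the current hidden activations, then the hidden layer takes a gradient step on its own error signal. The paper's derived hidden-layer error function is not reproduced; a standard backpropagated error stands in for it.

```python
import numpy as np

rng = np.random.default_rng(6)

X = rng.normal(size=(300, 6))
T = (X[:, :1] > 0).astype(float)                  # toy targets in {0, 1}

W1 = rng.normal(0, 0.5, (6, 12))
eta = 0.1

for epoch in range(50):
    H = np.tanh(X @ W1)
    # LBL stage 1: closed-form least-squares output weights (linear output).
    W2, *_ = np.linalg.lstsq(H, T, rcond=None)
    # LBL stage 2: gradient step on the hidden layer for its error signal.
    E = H @ W2 - T                                # output-layer error
    dH = (E @ W2.T) * (1 - H ** 2)                # placeholder hidden error
    W1 -= eta * X.T @ dH / len(X)

H = np.tanh(X @ W1)
W2, *_ = np.linalg.lstsq(H, T, rcond=None)
print("final MSE:", float(np.mean((H @ W2 - T) ** 2)))
```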

Design of Multilayer Perceptrons for Pattern Classifications (패턴인식 문제에 대한 다층퍼셉트론의 설계 방법)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.99-106 / 2010
  • Multilayer perceptrons (MLPs), or feed-forward neural networks, are widely applied in many areas because of their function approximation capabilities. When implementing MLPs for application problems, we must determine various parameters and training methods. In this paper, we discuss the design of MLPs, especially for pattern classification problems. The discussion covers how to decide the number of nodes in each layer, how to initialize the weights of MLPs, how to choose among various error functions for training, the imbalanced-data problem, and deep architectures (an initialization sketch follows below).
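
Two of the design choices listed above are easy to illustrate. The sketch below uses a fan-in-scaled uniform weight initialization and a simple width heuristic; both are common conventions offered as assumptions, not necessarily the paper's recommendations.

```python
import numpy as np

rng = np.random.default_rng(7)

def init_layer(fan_in, fan_out):
    # Small weights scaled by fan-in keep tanh neurons out of saturation.
    limit = 1.0 / np.sqrt(fan_in)
    return rng.uniform(-limit, limit, (fan_in, fan_out)), np.zeros(fan_out)

n_in, n_classes = 64, 10
n_hidden = 2 * n_in              # a simple width heuristic; problem-dependent

W1, b1 = init_layer(n_in, n_hidden)
W2, b2 = init_layer(n_hidden, n_classes)
print(W1.shape, W2.shape)        # (64, 128) (128, 10)
```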

A Method of Determining the Scale Parameter for Robust Supervised Multilayer Perceptrons

  • Park, Ro-Jin
    • Communications for Statistical Applications and Methods / v.14 no.3 / pp.601-608 / 2007
  • Lee et al. (1999) proposed a unique but universal robust objective function, replacing the squared objective function, for the radial basis function network, and demonstrated some of its advantages. In this article, the robust objective function of Lee et al. (1999) is adapted to the multilayer perceptron (MLP). The shape of the robust objective function is determined by a scale parameter, and another method of determining a proper value of that parameter is proposed (a robust-loss sketch follows below).
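
Since the objective function of Lee et al. (1999) is not given here, the sketch below uses the Huber loss as a stand-in robust objective whose shape is likewise controlled by a scale parameter c: quadratic for small residuals, linear (outlier-resistant) beyond c.

```python
import numpy as np

def huber(r, c):
    # Quadratic inside [-c, c], linear outside: large residuals are down-weighted.
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * (a - 0.5 * c))

residuals = np.array([0.1, 0.5, 2.0, 10.0])   # last value acts as an outlier
for c in (0.5, 1.0, 2.0):
    print(f"c={c}: losses = {huber(residuals, c).round(3)}")
# A common data-driven choice ties c to a robust spread estimate of the
# residuals, e.g. a multiple of their median absolute deviation (an
# assumption for illustration, not the rule proposed in the paper).
```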