Title/Summary/Keyword: Multi-layer perceptrons

Efficient weight initialization method in multi-layer perceptrons

  • Han, Jaemin; Sung, Shijoong; Hyun, Changho
    • Proceedings of the Korean Operations and Management Science Society Conference / 1995.09a / pp.325-333 / 1995
  • Back-propagation is the most widely used algorithm for supervised learning in multi-layer feed-forward networks, but it converges very slowly. In this paper, a new weight initialization method for multi-layer perceptrons, called rough map initialization, is proposed. To overcome the long convergence time, which may be due to the random weight initialization used in existing multi-layer perceptrons, the rough map method initializes weights by exploiting the relationship between input and output features through the singular value decomposition (SVD) technique. The results of this initialization procedure are compared with random initialization on encoder and XOR problems.
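
As a rough illustration of the idea (the abstract does not give the exact rough-map procedure), the sketch below seeds first-layer weights from the SVD of the input-output cross-correlation instead of drawing them at random. The choice of correlation matrix, the singular-value scaling, and the random padding are assumptions made only for this example.

    import numpy as np

    def svd_weight_init(X, Y, n_hidden, rng=np.random.default_rng(0)):
        """Seed first-layer weights from the SVD of the input-output
        cross-correlation matrix instead of purely random values."""
        # The cross-correlation captures which input directions carry
        # the most information about the targets.
        C = X.T @ Y                                     # (n_inputs, n_outputs)
        U, s, _ = np.linalg.svd(C, full_matrices=False)
        k = min(n_hidden, U.shape[1])
        W = np.empty((X.shape[1], n_hidden))
        W[:, :k] = U[:, :k] * np.sqrt(s[:k] / s.sum())  # informed columns
        if n_hidden > k:                                # pad the rest randomly
            W[:, k:] = rng.normal(scale=0.1, size=(X.shape[1], n_hidden - k))
        return W

    # XOR, one of the benchmark problems mentioned in the abstract
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = svd_weight_init(X, Y, n_hidden=2)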

A New Hidden Error Function for Layer-By-Layer Training of Multi-layer Perceptrons (다층 퍼셉트론의 층별 학습을 위한 중간층 오차 함수)

  • Oh, Sang-Hoon
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.364-370 / 2005
  • LBL (Layer-By-Layer) algorithms have been proposed to accelerate the training of MLPs (multilayer perceptrons). In LBL algorithms, each layer needs an error function for its optimization, and the error function chosen for the hidden layer in particular has a great effect on performance. Accordingly, this paper proposes a new hidden-layer error function to improve the LBL algorithm for MLPs. The hidden-layer error function is derived from the mean squared error of the output layer. The effectiveness of the proposed error function is demonstrated on handwritten-digit recognition and isolated-word recognition tasks, where very fast learning convergence is obtained.
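
The abstract does not spell the function out, so the following is only a plausible sketch of a hidden-layer error derived from the output-layer MSE: desired hidden activations are back-projected through the fixed output weights, and the actual hidden activations are scored against them. The back-projection and clipping are illustrative assumptions, not the paper's exact derivation.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def hidden_error(H, W2, D):
        """Illustrative hidden-layer error for layer-by-layer training:
        back-project the desired outputs D through the (fixed) output
        weights W2 to get target hidden activations, then score the
        actual hidden activations H against them with MSE."""
        H_target = D @ np.linalg.pinv(W2)       # least-squares back-projection
        H_target = np.clip(H_target, 0.0, 1.0)  # stay inside sigmoid's range
        return 0.5 * np.mean((H - H_target) ** 2)

    # Toy usage: 8 patterns, 4 inputs, 5 hidden units, 3 outputs
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 4))
    W1 = rng.normal(scale=0.1, size=(4, 5))
    W2 = rng.normal(scale=0.1, size=(5, 3))
    D = rng.uniform(size=(8, 3))                # desired outputs
    print(hidden_error(sigmoid(X @ W1), W2, D))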

Learning of multi-layer perceptrons with 8-bit data precision (8비트 데이타 정밀도를 가지는 다층퍼셉트론의 역전파 학습 알고리즘)

  • 오상훈; 송윤선
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.209-216 / 1996
  • In this paper, we propose a learning method for multi-layer perceptrons (MLPs) with 8-bit data precision. The method uses the cross-entropy cost function to remove the slope term of the error signal in the output layer. To reduce the possibility of overflow, 16-bit weighted-sum results are converted into 8-bit data with an appropriate range. In the forward propagation, the range for this bit conversion is determined using the saturation property of the sigmoid function; in the backward propagation, it is derived from the probability density function of the back-propagated signal. In a simulation study classifying handwritten digits from the CEDAR database, our method shows generalization performance similar to error back-propagation learning with 16-bit precision.
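
A minimal sketch of the forward-pass bit conversion may help. The range bound below (|a| <= 8, where the sigmoid is effectively saturated, since sigmoid(8) is about 0.9997) is an assumed value chosen for illustration; the paper derives its ranges from the saturation property and from the density of the back-propagated signal.

    import numpy as np

    A_MAX = 8.0   # assumed clipping bound: sigmoid is saturated beyond |a| = 8

    def to_int8(a):
        """Quantize a real-valued pre-activation to a signed 8-bit code."""
        return np.round(np.clip(a, -A_MAX, A_MAX) / A_MAX * 127).astype(np.int8)

    def from_int8(q):
        """Map an 8-bit code back to its (quantized) real value."""
        return q.astype(np.float64) / 127.0 * A_MAX

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 16))
    W = rng.normal(scale=0.5, size=(16, 10))
    pre = x @ W                      # a 16-bit accumulator in a real implementation
    h = 1.0 / (1.0 + np.exp(-from_int8(to_int8(pre))))   # forward pass in 8 bits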

Improving the Error Back-Propagation Algorithm of Multi-Layer Perceptrons with a Modified Error Function (역전파 학습의 오차함수 개선에 의한 다층퍼셉트론의 학습성능 향상)

  • 오상훈; 이영직
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.6 / pp.922-931 / 1995
  • In this paper, we propose a modified error function to improve the EBP (error back-propagation) algorithm for multi-layer perceptrons. With the modified error function, an output node of the MLP generates a strong error signal when it is far from the desired value and a weak error signal when it is close. This accelerates the learning speed of the EBP algorithm in the initial stage and prevents over-specialization to the training patterns in the final stage. The effectiveness of the modification is verified through simulations of handwritten digit recognition.

Hierarchical Architecture of Multilayer Perceptrons for Performance Improvement (다층퍼셉트론의 계층적 구조를 통한 성능향상)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association / v.10 no.6 / pp.166-174 / 2010
  • Based on the theoretical result that multi-layer feedforward neural networks with enough hidden nodes are universal approximators, three-layer MLPs (multi-layer perceptrons) consisting of input, hidden, and output layers are commonly used for many application problems. However, this conventional three-layer architecture shows poor generalization performance in some applications whose input vectors combine various kinds of features. To improve performance, this paper proposes a hierarchical MLP architecture for the case where each part of the input carries its own distinct information: the input vector is divided into sub-vectors, and each sub-vector is presented to a separate MLP. These lower-level MLPs are connected to a higher-level MLP, which makes the final decision. The proposed method is verified through simulations on a protein disorder prediction problem.
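
The architecture is easy to picture in code. The sketch below (written in PyTorch with illustrative layer sizes; the paper itself predates such libraries) routes each sub-vector through its own lower-level MLP and feeds the concatenated outputs to a higher-level MLP for the final decision.

    import torch
    import torch.nn as nn

    class HierarchicalMLP(nn.Module):
        """Each input sub-vector gets its own lower-level MLP; a
        higher-level MLP combines their outputs for the final decision.
        All layer sizes here are illustrative choices."""

        def __init__(self, sub_dims, hidden=16, sub_out=8, n_classes=2):
            super().__init__()
            self.sub_dims = list(sub_dims)
            self.lower = nn.ModuleList(
                nn.Sequential(nn.Linear(d, hidden), nn.Sigmoid(),
                              nn.Linear(hidden, sub_out), nn.Sigmoid())
                for d in self.sub_dims)
            self.upper = nn.Sequential(
                nn.Linear(sub_out * len(self.sub_dims), hidden), nn.Sigmoid(),
                nn.Linear(hidden, n_classes))

        def forward(self, x):
            # Route each sub-vector through its dedicated lower-level MLP,
            # then let the higher-level MLP make the final decision.
            parts = torch.split(x, self.sub_dims, dim=1)
            feats = [mlp(p) for mlp, p in zip(self.lower, parts)]
            return self.upper(torch.cat(feats, dim=1))

    # A 30-dimensional input made of three 10-dimensional sub-vectors
    model = HierarchicalMLP(sub_dims=[10, 10, 10])
    logits = model(torch.randn(4, 30))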

A 2-D Image Camera Calibration using a Mapping Approximation of Multi-Layer Perceptrons (다층퍼셉트론의 정합 근사화에 의한 2차원 영상의 카메라 오차보정)

  • 이문규; 이정화
    • Journal of Institute of Control, Robotics and Systems / v.4 no.4 / pp.487-493 / 1998
  • Camera calibration is the process of determining the coordinate relationship between a camera image and its real-world space. Accurate calibration of a camera is necessary for applications that involve quantitative measurement from camera images. However, if the camera plane is parallel or nearly parallel to the calibration board on which the 2-dimensional objects are defined (a situation called "ill-conditioned"), existing solution procedures do not apply well. In this paper, we propose a neural-network-based approach to camera calibration for 2D images formed by a mono camera or a pair of cameras. Multi-layer perceptrons are trained to transform the coordinates of each image point into world coordinates. The validity of the approach is tested with data points covering the whole 2D space concerned. Experimental results for both the mono-camera and stereo-camera cases indicate that the proposed approach is comparable to Tsai's method [8]. For the stereo-camera case in particular, the approach works better than Tsai's method as the angle between the camera optical axis and the Z-axis increases. Therefore, we believe the approach could be an alternative solution procedure for ill-conditioned camera calibration.
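
A minimal sketch of the idea, using scikit-learn's MLPRegressor on synthetic point correspondences. The ground-truth transform, network size, and normalized coordinates are illustrative assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic point correspondences stand in for a real calibration
    # board; the "true" mapping below is an affine transform plus a mild
    # lens-like distortion, purely for illustration.
    rng = np.random.default_rng(0)
    img = rng.uniform(-1.0, 1.0, size=(500, 2))          # normalized (u, v)
    world = img @ np.array([[0.5, 0.1], [-0.1, 0.5]]) + 0.3
    world += 0.05 * img ** 2                             # mild distortion

    # Learn the image -> world mapping directly, as in the abstract.
    mlp = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                       max_iter=5000, random_state=0).fit(img, world)
    print("mean abs error:", np.abs(mlp.predict(img) - world).mean())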

Multi-layer Neural Network with Hybrid Learning Rules for Improved Robust Capability (Robustness를 형성시키기 위한 Hybrid 학습법칙을 갖는 다층구조 신경회로망)

  • 정동규; 이수영
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.8 / pp.211-218 / 1994
  • In this paper we develop a hybrid learning rule to improve the robustness of multi-layer perceptrons. In most neural networks, the activation of a neuron is determined by a nonlinear transformation of the weighted sum of its inputs. By investigating the behaviour of hidden-layer activations, a new learning algorithm is developed that improves the robustness of multi-layer perceptrons. Unlike other methods, which reduce network complexity by placing restrictions on synaptic weights, our error-back-propagation-based method increases the complexity of the underlying problem by imposing a saturation requirement on hidden-layer neurons. We also found that the additional gradient-descent term arising from this requirement corresponds to the Hebbian rule, so the algorithm effectively incorporates Hebbian learning into the error back-propagation rule. Computer simulations demonstrate fast learning convergence as well as improved robustness for classification and hetero-association of patterns.
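
A sketch of what such a hybrid update could look like: the standard back-propagated gradient plus a Hebbian-style term that pushes hidden activations away from 0.5 toward saturation. The centered form outer(x, h - 0.5) and the coefficients are assumptions for illustration; the paper derives its extra term from an explicit saturation requirement.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def hybrid_update(W1, x, delta_h, h, lr=0.1, beta=0.01):
        """One hidden-layer weight update combining the usual EBP
        gradient step with a Hebbian-style term. Adding beta * outer(x,
        h - 0.5) moves each unit's pre-activation by beta*(h-0.5)*||x||^2,
        i.e. away from 0.5 and toward saturation at 0 or 1."""
        bp_term = np.outer(x, delta_h)       # standard back-propagated gradient
        hebb_term = np.outer(x, h - 0.5)     # drives each unit toward 0 or 1
        return W1 + lr * bp_term + beta * hebb_term

    rng = np.random.default_rng(0)
    x = rng.normal(size=6)                    # one input pattern
    W1 = rng.normal(scale=0.1, size=(6, 4))
    h = sigmoid(x @ W1)
    delta_h = rng.normal(scale=0.05, size=4)  # back-propagated error signal
    W1 = hybrid_update(W1, x, delta_h, h)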

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon; Lee, Young-Jik
    • ETRI Journal / v.17 no.1 / pp.11-22 / 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning. It also suppresses the over-specialization to training patterns that occurs in algorithms based on the cross-entropy cost function, a function which markedly reduces learning time. Like the cross-entropy function, the new function accelerates the EBP algorithm by letting an output node of the MLP generate a strong error signal when it is far from the desired value; moreover, it prevents over-specialization by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits from the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. The method also shows a mean squared error of 0.627 on the test patterns, better than the 0.667 of the cross-entropy method. These results demonstrate that the new method excels in learning speed as well as in generalization.
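
The shape of the output-node error signal is the heart of the argument, so a small comparison may help. The cubic signal below merely illustrates the behaviour the abstract describes (strong when the output is far from the target, weak when close); it is not necessarily the exact function proposed in the paper.

    import numpy as np

    # Error signal at a sigmoid output node as a function of its distance
    # from the desired value d.
    def delta_mse(d, y):
        return (d - y) * y * (1.0 - y)    # slope term vanishes at saturation

    def delta_cross_entropy(d, y):
        return d - y                      # slope term removed: linear signal

    def delta_modified(d, y):
        return (d - y) ** 3               # strong when far, weak when close

    y = np.linspace(0.01, 0.99, 5)        # output values; desired d = 1
    for name, f in [("MSE", delta_mse), ("cross-entropy", delta_cross_entropy),
                    ("modified", delta_modified)]:
        print(f"{name:14s}", np.round(f(1.0, y), 4))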

Comparative Analysis on Error Back Propagation Learning and Layer By Layer Learning in Multi-Layer Perceptrons (다층퍼셉트론의 오류역전파 학습과 계층별 학습의 비교 분석)

  • 곽영태
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.5 / pp.1044-1051 / 2003
  • This paper surveys EBP (error back-propagation) learning, the cross-entropy function, and LBL (layer-by-layer) learning, which are used for training MLPs (multi-layer perceptrons), and compares the merits and demerits of each method on handwritten digit recognition. Although EBP learning is slower than the other methods in the initial stage, its generalization capability is the best. The cross-entropy function, which makes up for the weak points of EBP learning, is faster than EBP, but its generalization capability is worse because the error signal of the output layer trains toward the target vector linearly. LBL learning is the fastest in the initial stage; however, it cannot improve further after a certain point and has the lowest generalization capability. Based on these observations, the paper proposes criteria for selecting a learning method when applying MLPs.

Optimization of Design Parameters of a Linear Induction Motor for the Propulsion of Metro (신경회로망을 이용한 경전철 차량추진용 선형유도전동기의 설계변수 최적화)

  • Im, Dal-Ho; Park, Seung-Chan; Lee, Il-Ho
    • Proceedings of the KIEE Conference / 1995.11a / pp.55-58 / 1995
  • An optimum design method for electric machines using neural networks is presented. In this method, two multi-layer perceptrons, an analysis network and a design network, are used in the optimization process. A preliminary model of a linear induction motor for a subway is designed by the electric and magnetic loading distribution method and then optimized by the presented method.
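
As a sketch of the surrogate-based idea (with the paper's separate design network simplified away in favor of a direct numerical search): an "analysis" MLP learns the mapping from design parameters to a performance measure, and the design is then optimized against that surrogate. The toy objective and all sizes are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.neural_network import MLPRegressor

    # A toy quadratic stands in for the real motor analysis.
    rng = np.random.default_rng(0)
    designs = rng.uniform(-1, 1, size=(300, 3))          # 3 design parameters
    performance = ((designs - 0.3) ** 2).sum(axis=1)     # analysis result (loss)

    # "Analysis" network: learn design parameters -> performance.
    analysis_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                random_state=0).fit(designs, performance)

    # Optimize the design against the surrogate's prediction.
    res = minimize(lambda p: float(analysis_net.predict(p.reshape(1, -1))[0]),
                   x0=np.zeros(3), method="Nelder-Mead")
    print("optimized design:", np.round(res.x, 3))       # should approach 0.3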
