• Title/Summary/Keyword: K2-learning algorithm

Dropout Genetic Algorithm Analysis for Deep Learning Generalization Error Minimization

  • Park, Jae-Gyun;Choi, Eun-Soo;Kang, Min-Soo;Jung, Yong-Gyu
    • International Journal of Advanced Culture Technology, v.5 no.2, pp.74-81, 2017
  • Many companies now deploy systems based on artificial intelligence. The accuracy of such systems depends on the amount of training data and on choosing an appropriate algorithm, but it is often difficult to obtain training data with a large number of instances, and small datasets suffer large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA (Dropout Genetic Algorithm), which applies a machine-learning-based genetic algorithm to deep-learning-based dropout so that relatively high accuracy can be expected even from small datasets. The key idea is to let the genetic algorithm determine the active state of the nodes, with a new fitness function defined from the gradient of the loss function. DGA compensates for the stochastic inconsistency of dropout and also mitigates the genetic algorithm's problems with fitness-function complexity and limited model expressiveness. In experiments on MNIST data, the proposed algorithm reaches 75.3% accuracy versus 41.4% when only dropout is used, showing that DGA outperforms dropout alone.
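
The core of DGA, as described, is a genetic search over binary masks that switch hidden nodes on or off, scored through the network's loss. A minimal sketch of that idea follows; the toy network, data, and GA settings (population size, mutation rate, truncation selection) are illustrative assumptions, not the authors' implementation.

```python
# A toy genetic search over binary dropout masks, scored by the network loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a fixed one-hidden-layer network (random weights for brevity).
X = rng.normal(size=(256, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W1 = rng.normal(scale=0.5, size=(20, 32))
W2 = rng.normal(scale=0.5, size=(32, 2))

def loss_with_mask(mask):
    """Cross-entropy loss when hidden units are gated by a binary mask."""
    h = np.maximum(X @ W1, 0.0) * mask           # ReLU hidden layer, masked
    logits = h @ W2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def fitness(mask):
    return -loss_with_mask(mask)                 # lower loss -> higher fitness

pop_size, n_hidden, n_gen = 30, 32, 40
pop = rng.integers(0, 2, size=(pop_size, n_hidden))

for _ in range(n_gen):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        pa, pb = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_hidden)                        # one-point crossover
        child = np.concatenate([pa[:cut], pb[cut:]])
        flip = rng.random(n_hidden) < 0.05                     # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("best mask keeps", int(best.sum()), "of", n_hidden, "hidden units")
```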

A New Learning Algorithm of Neuro-Fuzzy Modeling Using Self-Constructed Clustering

  • Ryu, Jeong-Woong;Song, Chang-Kyu;Kim, Sung-Suk;Kim, Sung-Soo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.5 no.2, pp.95-101, 2005
  • This paper proposes a learning algorithm for neuro-fuzzy modeling that uses a learning rule to adapt the clustering. The algorithm partitions the data, assigns a rule to each partition, and optimizes the parameters using a predetermined threshold in a self-constructing clustering procedure. To improve the clustering, the learning method of the neuro-fuzzy model is extended so that the overall model is trained with error-derivative learning. The effect of the proposed method is demonstrated in simulations and compared with previous approaches.
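
The self-constructing step can be illustrated with a small sketch: a new cluster is opened whenever an incoming sample lies farther than a predetermined threshold from every existing centre, and otherwise the nearest centre is updated. The threshold value and synthetic data below are assumptions for illustration only, not the paper's procedure.

```python
# Threshold-based self-constructing clustering on a synthetic 2-D stream.
import numpy as np

def self_constructing_clusters(data, threshold):
    """Grow cluster centres incrementally from a stream of samples."""
    centres, counts = [], []
    for x in data:
        if not centres:
            centres.append(x.astype(float)); counts.append(1); continue
        d = [np.linalg.norm(x - c) for c in centres]
        k = int(np.argmin(d))
        if d[k] > threshold:                    # too far from everything: open a new cluster
            centres.append(x.astype(float)); counts.append(1)
        else:                                   # otherwise update the running mean of cluster k
            counts[k] += 1
            centres[k] += (x - centres[k]) / counts[k]
    return np.array(centres)

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 2))
                  for m in ([0, 0], [3, 3], [0, 3])])
print(self_constructing_clusters(rng.permutation(data), threshold=1.5))
```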

A Case Study on Machine Learning Applications and Performance Improvement in Learning Algorithm (기계학습 응용 및 학습 알고리즘 성능 개선방안 사례연구)

  • Lee, Hohyun;Chung, Seung-Hyun;Choi, Eun-Jung
    • Journal of Digital Convergence, v.14 no.2, pp.245-258, 2016
  • This paper presents ways to obtain significant results by improving the performance of learning algorithms in research that applies machine learning. Research papers reporting results from machine learning methods were collected as the data for this case study, and suitable machine learning methods were selected and suggested for each field. As a result, SVM for engineering, decision-tree algorithms for medical science, and SVM for the remaining fields proved efficient in terms of frequency of use and classification/prediction performance. By analyzing these application cases, a general characterization of application plans is drawn: machine learning application has three steps, (1) data collection, (2) learning from the data with an algorithm, and (3) a significance test on the algorithm, and performance is improved at each step by combining algorithms. Ways of improving performance are classified as multiple machine learning structure modeling, $+\alpha$ machine learning structure modeling, and so forth.
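
The three application steps named in the abstract can be sketched in a few lines. The sketch below assumes scikit-learn and SciPy, uses a built-in dataset as a stand-in for collected data, and compares an SVM with a decision tree via a paired t-test as one possible significance test; none of this is taken from the paper itself.

```python
# (1) data collection, (2) learning with an algorithm, (3) significance test.
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)              # (1) data collection (stand-in dataset)

svm_scores = cross_val_score(SVC(), X, y, cv=10)         # (2) learning: SVM
tree_scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                              X, y, cv=10)               # (2) learning: decision tree

t_stat, p_value = ttest_rel(svm_scores, tree_scores)     # (3) significance test across folds
print(f"SVM {svm_scores.mean():.3f} vs tree {tree_scores.mean():.3f}, p = {p_value:.4f}")
```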

A Study on Implementation of a Real Time Learning Controller for Direct Drive Manipulator (직접 구동형 매니퓰레이터를 위한 학습 제어기의 실시간 구현에 관한 연구)

  • Jeon, Jong-Wook;An, Hyun-Sik;Lim, Mee-Seub;Kim, Kwon-Ho;Kim, Kwang-Bae;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference, 1993.07a, pp.369-372, 1993
  • In this work, an iterative learning controller is applied to continuous-trajectory control of a two-link direct drive robot manipulator, and both computer simulation and a real-time experiment are carried out. To improve control performance, an iterative learning control algorithm is adopted, a sufficient condition for convergence is derived, and the conventional control algorithm is extended from it. Simulation results show that the extended learning control algorithm achieves better performance than the conventional one, and the experimental results confirm this.
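
The trial-to-trial update behind iterative learning control can be illustrated on a toy first-order plant. The sketch below uses a generic P-type rule, u_{k+1}(t) = u_k(t) + gamma * e_k(t+1), with an assumed plant and learning gain; it is not the paper's extended algorithm nor the two-link manipulator.

```python
# P-type iterative learning control on a toy first-order discrete plant.
import numpy as np

T = 50
t = np.arange(T)
y_ref = np.sin(2 * np.pi * t / T)               # desired trajectory, y_ref[0] = 0

a, b = 0.5, 0.5                                  # toy stable plant: y[k+1] = a*y[k] + b*u[k]
def run_trial(u):
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(T)
gamma = 1.0                                      # learning gain chosen so errors contract trial to trial
for trial in range(30):
    e = y_ref - run_trial(u)                     # tracking error of the current trial
    u[:-1] += gamma * e[1:]                      # u_{k+1}(t) = u_k(t) + gamma * e_k(t+1)

print("max tracking error after 30 trials:", np.abs(y_ref - run_trial(u)).max())
```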

Improving Noise Tolerance in Hopfield Networks

  • Kim, Young-Tae;Park, Jeong-Hyun
    • Journal of the Korean Data and Information Science Society, v.8 no.2, pp.111-118, 1997
  • Adding a noise tolerance factor to the relaxation learning algorithm in a Hopfield network improves noise tolerance without affecting storage capacity. The new algorithm is called the pseudo-relaxation algorithm, and its convergence has been proved. It is also shown that the noise tolerance factor does not affect learning speed.
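
A relaxation-style rule with a noise-tolerance margin can be sketched as follows: every stored pattern must clear a margin kappa at every unit, and weights are nudged whenever the margin is violated. The network size, load, margin, and step size below are assumptions, not the paper's exact pseudo-relaxation rule.

```python
# Margin-based relaxation learning for pattern storage, plus a noisy-recall test.
import numpy as np

rng = np.random.default_rng(2)
N, P, kappa, eta = 64, 6, 1.0, 0.1
patterns = rng.choice([-1, 1], size=(P, N))

W = np.zeros((N, N))
for _ in range(200):                           # sweep until every margin constraint holds
    violations = 0
    for x in patterns:
        h = W @ x
        bad = x * h < kappa                    # units whose margin is too small
        violations += int(bad.sum())
        W[bad] += eta * np.outer(x[bad], x)    # relaxation step toward the constraint
    np.fill_diagonal(W, 0.0)                   # no self-connections
    if violations == 0:
        break

# Recall test: flip a few bits of a stored pattern and iterate the network.
x = patterns[0].copy()
x[rng.choice(N, 5, replace=False)] *= -1
for _ in range(10):
    x = np.sign(W @ x)
    x[x == 0] = 1
print("noisy probe converged to the stored pattern:", bool((x == patterns[0]).all()))
```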

Adaptive Fuzzy Neural Control of Unknown Nonlinear Systems Based on Rapid Learning Algorithm

  • Kim, Hye-Ryeong;Kim, Jae-Hun;Kim, Euntai;Park, Mignon
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.09b, pp.95-98, 2003
  • In this paper, an adaptive fuzzy neural control scheme for unknown nonlinear systems based on a rapid learning algorithm is proposed for optimal parameterization. The advantages of fuzzy control and neural network techniques are combined to develop an adaptive fuzzy control system that updates the nonlinear parameters of the controller. The fuzzy neural network (FNN), constructed as an equivalent four-layer connectionist network, learns to control a process by updating its membership functions. The free parameters of the FNN controller are adjusted on-line according to the control law and adaptive law so that the plant tracks a given trajectory, with initial values obtained by off-line preprocessing. To improve the convergence of the learning process, a rapid learning algorithm is proposed that combines the error back-propagation algorithm with Aitken's $\delta^2$ algorithm. The heart of this approach is to reduce the computational burden during FNN learning and to improve convergence speed. Simulation results for a nonlinear plant demonstrate the control effectiveness of the proposed system for optimal parameterization.
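
Aitken's $\delta^2$ acceleration, which the abstract combines with back-propagation, extrapolates a slowly converging sequence from three successive iterates. The standalone sketch below applies it to an assumed linearly convergent toy sequence rather than to FNN weights.

```python
# Aitken's delta-squared extrapolation on a linearly convergent toy sequence.
def aitken(x0, x1, x2):
    """Aitken extrapolation: x* ~= x0 - (x1 - x0)**2 / (x2 - 2*x1 + x0)."""
    denom = x2 - 2.0 * x1 + x0
    return x0 - (x1 - x0) ** 2 / denom if abs(denom) > 1e-12 else x2

# Toy sequence x_{n+1} = 0.9*x_n + 0.1, which converges slowly to the limit 1.0.
x = [0.0]
for _ in range(6):
    x.append(0.9 * x[-1] + 0.1)

print(f"plain iterate    {x[-1]:.6f}")
print(f"Aitken estimate  {aitken(x[-3], x[-2], x[-1]):.6f}  (limit 1.0)")
```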

INCREMENTAL INDUCTIVE LEARNING ALGORITHM IN THE FRAMEWORK OF ROUGH SET THEORY AND ITS APPLICATION

  • Bang, Won-Chul;Bien, Zeung-Nam
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1998.06a, pp.308-313, 1998
  • In this paper we discuss a type of inductive learning called learning from examples, whose task is to induce general descriptions of concepts from specific instances of those concepts. In many real-life situations, however, new instances can be added to the set of instances. For such cases, an algorithm is proposed, for the first time within the framework of rough set theory, that finds a minimal set of rules for a decision table without recalculating over the overall set of instances. The learning method presented here is based on the rough set concept proposed by Pawlak [2][11]. An algorithm for finding a minimal set of rules is presented using reduct-change theorems that give criteria for minimal recalculation, together with an illustrative example. Finally, the proposed learning algorithm is applied to a fuzzy system to learn sampled I/O data.
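
The rough-set machinery the algorithm builds on, indiscernibility classes over condition attributes and the lower and upper approximations of a decision class, can be shown on a toy decision table; the attribute values and decisions below are invented for illustration and are not from the paper.

```python
# Lower and upper approximations of a decision class in a toy decision table.
from collections import defaultdict

# Each row: (condition attribute values, decision).
table = [
    (("high", "yes"), "flu"),
    (("high", "no"),  "flu"),
    (("low",  "yes"), "flu"),
    (("low",  "no"),  "healthy"),
    (("low",  "yes"), "healthy"),   # conflicts with row 2 -> boundary region
]

# Indiscernibility classes: objects with identical condition attribute values.
classes = defaultdict(list)
for i, (cond, _) in enumerate(table):
    classes[cond].append(i)

target = {i for i, (_, d) in enumerate(table) if d == "flu"}
lower = {i for ids in classes.values() if set(ids) <= target for i in ids}
upper = {i for ids in classes.values() if set(ids) & target for i in ids}

print("lower approximation (certainly flu):", sorted(lower))
print("upper approximation (possibly flu): ", sorted(upper))
```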

Robustness of 2nd-order Iterative Learning Control for a Class of Discrete-Time Dynamic Systems

  • Kim, Yong-Tae
    • Journal of the Korean Institute of Intelligent Systems, v.14 no.3, pp.363-368, 2004
  • In this paper, the robustness of a second-order iterative learning control (ILC) method for a class of linear and nonlinear discrete-time dynamic systems is studied. The second-order ILC method has a PD-type learning algorithm based on both time-domain and iteration-domain performance. It is proved that the second-order ILC method is robust in the presence of state disturbances, measurement noise, and initial state error. In the absence of state disturbances, measurement noise, and initialization error, convergence of the second-order ILC algorithm is guaranteed. A numerical example shows the robustness and convergence properties for different learning parameters.
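
One simple form of a second-order, PD-type iterative learning update keeps the inputs of the two most recent trials and corrects them with proportional and difference terms of the current error. The sketch below uses an assumed toy plant, gains, and a small repeating disturbance; it is not the paper's algorithm or its robustness analysis.

```python
# Second-order PD-type iterative learning update on a toy discrete plant.
import numpy as np

T = 60
t = np.arange(T)
y_ref = 0.5 * (1.0 - np.cos(2 * np.pi * t / T))      # smooth reference with y_ref[0] = 0

def run_trial(u):
    """Toy plant y[k+1] = 0.5*y[k] + 0.5*u[k] plus a small repeating disturbance."""
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = 0.5 * y[k] + 0.5 * u[k] + 0.01 * np.sin(0.3 * k)
    return y

w, kp, kd = 0.7, 0.8, 0.2                            # iteration weight and PD learning gains
u_prev = np.zeros(T)                                 # input of trial k-1
u_curr = np.zeros(T)                                 # input of trial k
for trial in range(40):
    e = y_ref - run_trial(u_curr)
    u_next = w * u_curr + (1.0 - w) * u_prev           # blend the two most recent trials
    u_next[:-1] += kp * e[1:] + kd * (e[1:] - e[:-1])  # PD-type correction from the current error
    u_prev, u_curr = u_curr, u_next

print("max tracking error after learning:", np.abs(y_ref - run_trial(u_curr)).max())
```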

Cluster Analysis Algorithms Based on the Gradient Descent Procedure of a Fuzzy Objective Function

  • Rhee, Hyun-Sook;Oh, Kyung-Whan
    • Journal of Electrical Engineering and Information Science, v.2 no.6, pp.191-196, 1997
  • Fuzzy clustering plays an important role in solving many problems, and the fuzzy c-means (FCM) algorithm is the one most frequently used. However, some fixed points of the FCM algorithm, known from Tucker's counterexample, are not reasonable solutions, and FCM cannot perform on-line learning because it is basically a batch learning scheme. This paper presents unsupervised learning networks that attempt to overcome these shortcomings of the conventional clustering algorithm by integrating the optimization function of the FCM algorithm into unsupervised learning networks. The learning rule of the proposed scheme is derived formally from the gradient descent procedure of a fuzzy objective function. From this derivation, two fuzzy cluster analysis algorithms, a batch learning version and an on-line learning version, are devised. They are tested on several data sets and compared with FCM, and the experimental results show that the proposed algorithms find reasonable solutions on Tucker's counterexample.
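
The gradient-descent view of the fuzzy objective $J=\sum_k\sum_i u_{ik}^m\|x_k-v_i\|^2$ suggests a simple on-line rule: compute the closed-form memberships for each sample and move every centre a small step along the negative gradient. The learning rate, fuzzifier, and synthetic data below are assumptions, not the paper's derivation; the centres typically settle near the generating means.

```python
# On-line gradient descent on the fuzzy c-means objective.
import numpy as np

def memberships(x, centres, m=2.0):
    """Closed-form FCM memberships of one sample for every centre."""
    d = np.maximum(np.linalg.norm(x - centres, axis=1), 1e-12)
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(loc=mu, scale=0.4, size=(100, 2))
                  for mu in ([0, 0], [4, 0], [2, 3])])
centres = data[rng.choice(len(data), 3, replace=False)].astype(float)
eta, m = 0.05, 2.0

for _ in range(20):                                  # on-line passes over the data
    for x in rng.permutation(data):
        u = memberships(x, centres, m)
        # Per-sample gradient wrt each centre is -2*u_i^m*(x - v_i);
        # step along the negative gradient (constant absorbed into eta).
        centres += eta * (u[:, None] ** m) * (x - centres)

print(np.round(centres, 2))
```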

A on-line learning algorithm for recurrent neural networks using variational method (변분법을 이용한 재귀신경망의 온라인 학습)

  • Oh, Won-Geun;Suh, Byung-Suhl
    • Journal of Institute of Control, Robotics and Systems, v.2 no.1, pp.21-25, 1996
  • In this paper we suggest a general-purpose RNN training algorithm derived from optimal control concepts and variational methods. First, learning is regarded as an optimal control problem; then, using variational methods, we obtain optimal weights given by a two-point boundary-value problem. Finally, a modified gradient descent algorithm is applied to the RNN for on-line training. This algorithm is intended for learning complex dynamic mappings between time-varying I/O data. It is useful for nonlinear control, identification, and signal processing applications of RNNs because its storage requirement is low and on-line learning is possible. Simulation results for nonlinear plant identification are presented.
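
The forward-state/backward-costate structure that a variational treatment produces can be sketched with a small NumPy RNN trained on-line over short windows: the state equation runs forward, the adjoint recursion runs backward, and a gradient-descent step is taken per window. The architecture, task, window length, and learning rate are assumptions, not the paper's formulation.

```python
# Windowed forward-state / backward-costate training of a tiny RNN, on-line.
import numpy as np

rng = np.random.default_rng(4)
n_h, lr, window = 8, 0.05, 20
Wx = rng.normal(scale=0.3, size=(n_h, 1))      # input weights
Wh = rng.normal(scale=0.3, size=(n_h, n_h))    # recurrent weights
Wo = rng.normal(scale=0.3, size=(1, n_h))      # output weights

series = np.sin(0.2 * np.arange(2000))         # task: predict the next sample of a sine
h_prev = np.zeros(n_h)

for start in range(0, len(series) - window - 1, window):
    xs = series[start:start + window]
    ds = series[start + 1:start + window + 1]

    # Forward sweep of the state equation h_t = tanh(Wx*x_t + Wh*h_{t-1}).
    hs, h = [], h_prev
    for x in xs:
        h = np.tanh(Wx[:, 0] * x + Wh @ h)
        hs.append(h)
    ys = np.array([Wo[0] @ h for h in hs])

    # Backward sweep of the costate (adjoint) equation, accumulating gradients.
    dWx, dWh, dWo = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wo)
    delta_next = np.zeros(n_h)
    for t in reversed(range(window)):
        err = ys[t] - ds[t]
        lam = Wo[0] * err + Wh.T @ delta_next          # costate at time t
        delta = lam * (1.0 - hs[t] ** 2)               # through the tanh nonlinearity
        h_before = hs[t - 1] if t > 0 else h_prev
        dWo[0] += err * hs[t]
        dWh += np.outer(delta, h_before)
        dWx[:, 0] += delta * xs[t]
        delta_next = delta

    for W, g in ((Wx, dWx), (Wh, dWh), (Wo, dWo)):     # on-line gradient-descent step
        W -= lr * g / window
    h_prev = hs[-1]

print("mean squared error on the last window:", float(((ys - ds) ** 2).mean()))
```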
