• Title/Summary/Keyword: Incremental Learning Method

Incremental Support Vector Learning Method for Function Approximation (함수 근사를 위한 점증적 서포트 벡터 학습 방법)

  • 임채환;박주영
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.135-138
    • /
    • 2002
  • This paper addresses an incremental learning method for regression. The SVM (support vector machine) is a recently proposed learning method. In general, training a support vector machine requires solving a QP (quadratic programming) problem, and for very large or incrementally growing datasets solving the full QP may be impractical. This paper therefore presents an incremental support vector learning method for function approximation problems. (A code sketch of the chunk-wise idea follows this entry.)

  • PDF
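
The paper's exact incremental scheme is not reproduced here, but a common way to avoid re-solving the full QP is to carry only the support vectors of the current model forward and retrain on them together with each new data chunk. The following sketch illustrates that idea with scikit-learn's SVR on a toy regression target; the kernel, chunking, and parameters are illustrative assumptions.

```python
# A minimal sketch of chunk-wise incremental SVR (not the authors' exact algorithm):
# keep only the support vectors of the current model and retrain on them plus each
# new data chunk, so the full QP is never solved over the whole accumulated dataset.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(600)   # noisy target function

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
X_keep = np.empty((0, 1))
y_keep = np.empty(0)

for X_chunk, y_chunk in zip(np.array_split(X, 6), np.array_split(y, 6)):
    X_train = np.vstack([X_keep, X_chunk])
    y_train = np.concatenate([y_keep, y_chunk])
    model.fit(X_train, y_train)
    # carry only the support vectors forward as a compressed summary of past data
    X_keep = X_train[model.support_]
    y_keep = y_train[model.support_]
    print(f"chunk done: kept {len(model.support_)} support vectors")
```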

A Learning Algorithm to Find the Optimized Network Structure in an Incremental Model (점증적 모델에서 최적의 네트워크 구조를 구하기 위한 학습 알고리즘)

  • Lee Jong-Chan;Cho Sang-Yeop
    • Journal of Internet Computing and Services
    • /
    • v.4 no.5
    • /
    • pp.69-76
    • /
    • 2003
  • In this paper we present a new learning algorithm for pattern classification. The algorithm addresses a problem of incremental learning: the network structure can become too complex when noise patterns are included in the training data set. Our approach uses a pruning method that terminates the learning process according to a predefined criterion. In this process, an iterative model with a three-layer feedforward structure is derived from the incremental model by appropriate manipulations; note that this structure is not fully connected between the upper and lower layers. To verify the effectiveness of the pruning method, the network is retrained with EBP (error backpropagation). The results show that the proposed algorithm is effective in terms of both system performance and the number of nodes in the network structure. (A prune-and-retrain sketch follows this entry.)

  • PDF
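
The derivation of the iterative model is not reproduced here; the sketch below only illustrates the prune-then-retrain step the abstract describes, using PyTorch's magnitude pruning on a toy three-layer feedforward classifier and retraining it with backpropagation. The architecture, pruning ratio, and task are illustrative assumptions.

```python
# A minimal sketch (not the paper's exact procedure): prune the weakest connections of a
# three-layer feedforward network by weight magnitude, then retrain the sparse network
# with error backpropagation (EBP), mirroring the prune-then-retrain idea in the abstract.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
X = torch.randn(400, 8)
y = (X[:, 0] * X[:, 1] > 0).long()          # toy two-class pattern classification task

net = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def train(steps):
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print("initial training loss:", train(500))

# zero out the 70% smallest-magnitude weights per layer (layers are no longer fully connected)
for layer in (net[0], net[2]):
    prune.l1_unstructured(layer, name="weight", amount=0.7)

print("loss after pruning:   ", loss_fn(net(X), y).item())
print("loss after retraining:", train(500))     # retrain the pruned network with backprop
```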

Selective Incremental Learning for Face Tracking Using Staggered Multi-Scale LBP (얼굴 추적에서의 Staggered Multi-Scale LBP를 사용한 선택적인 점진 학습)

  • Lee, Yonggeol;Choi, Sang-Il
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.115-123
    • /
    • 2015
  • Incremental learning methods perform well in face tracking, but they have the drawback of being sensitive to tracking errors in the previous frame caused by environmental changes. In this paper, we propose a selective incremental learning method for tracking a face more reliably under various conditions. The proposed method is robust to illumination variation because it uses LBP (Local Binary Pattern) features for each individual frame. We select the patches to be used in incremental learning with Staggered Multi-Scale LBP, which prevents tracking errors from the previous frame from propagating into the model. The experimental results show that the proposed method improves face tracking performance on videos with environmental changes such as illumination variation. (A patch-selection sketch follows this entry.)
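
The Staggered Multi-Scale LBP descriptor itself is not specified here, so the sketch below uses a plain multi-scale LBP histogram from scikit-image as a stand-in and keeps only the candidate patches whose description stays close to the current face model, which is the selection step that keeps a mistracked frame from corrupting the incremental update. Patch sizes, radii, and the chi-square threshold are illustrative.

```python
# A rough sketch of the selection idea (a stand-in, not the paper's Staggered Multi-Scale LBP):
# describe each candidate patch with multi-scale LBP histograms and keep for incremental
# learning only the patches whose description stays close to the current face model.
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_hist(patch, radii=(1, 2, 3)):
    """Concatenate uniform-LBP histograms computed at several radii (multi-scale)."""
    feats = []
    for r in radii:
        n_points = 8 * r
        lbp = local_binary_pattern(patch, n_points, r, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def select_patches(patches, model_hist, threshold=0.5):
    """Return only the patches close enough (chi-square distance) to the face model."""
    selected = []
    for p in patches:
        h = multiscale_lbp_hist(p)
        chi2 = 0.5 * np.sum((h - model_hist) ** 2 / (h + model_hist + 1e-10))
        if chi2 < threshold:
            selected.append(p)
    return selected

# toy usage: a reference face patch, a good candidate, and a flat background patch
rng = np.random.default_rng(0)
face = rng.integers(0, 256, (32, 32)).astype(np.uint8)
model_hist = multiscale_lbp_hist(face)
candidates = [face, np.full((32, 32), 128, dtype=np.uint8)]
print("patches kept for incremental update:", len(select_patches(candidates, model_hist)))
```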

Stepwise Constructive Method for Neural Networks Using a Flexible Incremental Algorithm (Flexible Incremental 알고리즘을 이용한 신경망의 단계적 구축 방법)

  • Park, Jin-Il;Jung, Ji-Suk;Cho, Young-Im;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.574-579
    • /
    • 2009
  • It is difficult to construct an optimized neural network for complex nonlinear regression problems because of issues such as selecting the network structure and avoiding the overtraining caused by noise. In this paper, we propose a stepwise constructive method for neural networks using a flexible incremental algorithm. As hidden nodes are added, the flexible incremental algorithm adaptively controls their number using a validation dataset so as to minimize the prediction residual error. The ELM (Extreme Learning Machine) is used for fast training. The proposed neural network can act as a universal approximator without user intervention in the training process, and it also trains faster and uses fewer hidden nodes. Experimental results on various benchmark datasets show that the proposed method performs better than previous methods on real-world regression problems. (A node-growing ELM sketch follows this entry.)
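
As a rough illustration of the flexible incremental idea, the sketch below grows a plain ELM by adding hidden nodes in small batches, re-solves the output weights by least squares after each addition, and stops when the validation error no longer improves. The growth step size and stopping rule are assumptions, not the paper's exact procedure.

```python
# A small sketch of the flexible-incremental idea, assuming a plain ELM: hidden nodes are
# added in batches, output weights are re-solved by least squares each time, and growth
# stops when the validation error stops improving.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 2))
y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(400)
X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

def elm_fit_predict(X_tr, y_tr, X_val, W, b):
    H_tr = np.tanh(X_tr @ W + b)                   # random hidden layer
    beta = np.linalg.pinv(H_tr) @ y_tr             # least-squares output weights
    return np.tanh(X_val @ W + b) @ beta

W, b = np.empty((2, 0)), np.empty(0)
best_err, best_nodes = np.inf, 0
for _ in range(20):                                # add hidden nodes 5 at a time
    W = np.hstack([W, rng.standard_normal((2, 5))])
    b = np.concatenate([b, rng.standard_normal(5)])
    err = np.mean((elm_fit_predict(X_tr, y_tr, X_val, W, b) - y_val) ** 2)
    if err < best_err - 1e-5:
        best_err, best_nodes = err, W.shape[1]
    elif W.shape[1] - best_nodes >= 15:            # no improvement for 3 growth steps: stop
        break
print(f"chosen hidden nodes: {best_nodes}, validation MSE: {best_err:.4f}")
```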

Fault-tolerant control system for once-through steam generator based on reinforcement learning algorithm

  • Li, Cheng;Yu, Ren;Yu, Wenmin;Wang, Tianshu
    • Nuclear Engineering and Technology
    • /
    • v.54 no.9
    • /
    • pp.3283-3292
    • /
    • 2022
  • Based on the Deep Q-Network (DQN) reinforcement learning algorithm, an active fault-tolerance method with incremental actions is proposed for the control system of a once-through steam generator (OTSG) subject to sensor faults. We first establish an OTSG model as the interaction environment for the reinforcement learning agent. The agent chooses an action according to the system state obtained from the pressure sensor; the incremental action gradually approaches the optimal strategy for the current fault, and the agent then updates its network using the rewards obtained during interaction. In this way, the active fault-tolerant control of the OTSG is transformed into the agent's decision-making process. Comparison experiments against a traditional reinforcement learning algorithm (RL) with fixed strategies show that the proposed fault-tolerant controller acts accurately and rapidly under sensor faults, so the OTSG pressure is stabilized near the set-point and the OTSG runs normally and stably. (A compact DQN sketch with incremental actions follows this entry.)
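
The authors' OTSG model is not available here, so the sketch below uses a toy first-order pressure model to show the incremental-action idea: the DQN agent observes the pressure error and picks a small increment for the control input, which is nudged toward the set-point step by step. The dynamics, reward, network size, and hyperparameters are all illustrative.

```python
# A compact, generic DQN sketch with incremental actions (a toy stand-in, not the authors'
# OTSG model): the agent observes the pressure error and chooses an increment for the
# control input, so the control signal is adjusted gradually toward the set-point.
import random
from collections import deque
import torch
import torch.nn as nn

SETPOINT, INCREMENTS = 10.0, [-0.2, 0.0, 0.2]    # candidate incremental actions

def step(pressure, control, action_idx):
    control += INCREMENTS[action_idx]
    pressure += 0.1 * (control - pressure)        # toy first-order pressure dynamics
    reward = -abs(pressure - SETPOINT)            # penalize deviation from the set-point
    return pressure, control, reward

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(INCREMENTS)))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=5000), 0.99, 0.2

for episode in range(50):
    pressure, control = 8.0, 8.0
    for t in range(100):
        state = torch.tensor([pressure - SETPOINT, control - SETPOINT])
        a = (random.randrange(len(INCREMENTS)) if random.random() < eps
             else int(q_net(state).argmax()))
        pressure, control, r = step(pressure, control, a)
        next_state = torch.tensor([pressure - SETPOINT, control - SETPOINT])
        buffer.append((state, a, r, next_state))
        if len(buffer) >= 64:                     # one Q-learning update per environment step
            batch = random.sample(buffer, 64)
            s = torch.stack([b[0] for b in batch])
            a_b = torch.tensor([b[1] for b in batch])
            r_b = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            target = r_b + gamma * q_net(s2).max(dim=1).values.detach()
            pred = q_net(s).gather(1, a_b.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
print("final pressure error:", round(pressure - SETPOINT, 3))
```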

Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1042-1045
    • /
    • 2003
  • In this paper, we propose a speaker identification system that uses vowels, which carry speaker-specific characteristics. The system is divided into a speech feature extraction part and a speaker identification part. The feature extraction part extracts the speaker's features; voiced speech carries characteristics that distinguish speakers. For vowel extraction, formants obtained from frequency analysis of voiced speech are used, and the vowel 'a', which has distinctive formants, is extracted from the text. Pitch, formants, intensity, log area ratios, LP coefficients, and cepstral coefficients are candidate features; the cepstral coefficients, which show the best speaker identification performance among them, are used. The speaker identification part distinguishes speakers using a neural network, with 12th-order cepstral coefficients as the learning input. The network is an MLP trained with BP (backpropagation), and hidden and output nodes are added incrementally. The nodes in the incremental learning neural network are interconnected via weighted links, and each node in a layer is generally connected to each node in the succeeding layer, with the output nodes providing the network's output. Through vowel extraction and incremental learning, the proposed system needs little training data, reduces training time, and improves the identification rate. (A node-growing MLP sketch follows this entry.)

  • PDF
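
The sketch below illustrates only the node-growing aspect of such a system: a small MLP classifier trained with backpropagation whose output layer gains one node each time a new speaker is enrolled. Random vectors stand in for the 12th-order cepstral features, and the enrolment procedure, layer sizes, and retraining schedule are assumptions.

```python
# A toy sketch of the node-growing idea (not the authors' system): a small MLP speaker
# classifier trained with backpropagation whose output layer gains one node each time a
# new speaker is enrolled, so previously learned weights are kept and only extended.
import numpy as np

rng = np.random.default_rng(0)
DIM, HIDDEN = 12, 20                         # 12th-order cepstral coefficients, as in the paper

def toy_speaker_data(speaker_id, n=80):
    """Stand-in for cepstral features of one speaker's vowel frames."""
    center = rng.standard_normal(DIM) * 2.0
    return center + 0.5 * rng.standard_normal((n, DIM)), np.full(n, speaker_id)

W1 = 0.1 * rng.standard_normal((DIM, HIDDEN))
W2 = np.zeros((HIDDEN, 0))                   # output layer starts empty and grows per speaker

def train(X, y, epochs=200, lr=0.05):
    global W1, W2
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # hidden activations
        scores = H @ W2
        P = np.exp(scores - scores.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)    # softmax over the enrolled speakers
        G = P.copy(); G[np.arange(len(y)), y] -= 1.0
        dW2 = H.T @ G / len(y)
        dH = (G @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ dH / len(y)
        W1 -= lr * dW1; W2 -= lr * dW2       # backpropagation updates

X_all, y_all = np.empty((0, DIM)), np.empty(0, dtype=int)
for speaker in range(4):                     # enrol speakers one at a time
    Xs, ys = toy_speaker_data(speaker)
    W2 = np.hstack([W2, 0.1 * rng.standard_normal((HIDDEN, 1))])   # grow one output node
    X_all, y_all = np.vstack([X_all, Xs]), np.concatenate([y_all, ys])
    train(X_all, y_all)
    acc = ((np.tanh(X_all @ W1) @ W2).argmax(axis=1) == y_all).mean()
    print(f"speakers enrolled: {speaker + 1}, training accuracy: {acc:.2f}")
```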

Efficient Incremental Learning using the Preordered Training Data (미리 순서가 매겨진 학습 데이타를 이용한 효과적인 증가학습)

  • Lee, Sun-Young;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.97-107
    • /
    • 2000
  • Incremental learning generally reduces training time and increases the generalization of a neural network by selecting training data incrementally during training. However, existing incremental learning methods repeatedly re-evaluate the importance of the training data every time additional data are selected. In this paper, an incremental learning algorithm is proposed for pattern classification problems that evaluates the importance of each piece of data only once, before training starts. The importance of a data point depends on how close it is to the decision boundary, and the paper presents an algorithm that orders the data by their distance to the decision boundary using clustering. Experimental results on two artificial and real-world classification problems show that the proposed incremental learning method significantly reduces the size of the training set without decreasing generalization performance. (An ordering sketch follows this entry.)

  • PDF
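
The paper's clustering-based ordering is not reproduced exactly; the sketch below scores each sample once with a rough boundary-closeness estimate derived from per-class k-means centroids, sorts the training set by that score, and then trains a classifier on growing prefixes of the ordering. The scoring rule, cluster count, and model are illustrative.

```python
# A rough stand-in for the ordering idea (details differ from the paper): estimate each
# sample's closeness to the decision boundary once, using per-class k-means centroids,
# sort the training set by that score, and then train on growing prefixes of the ordering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# one-time importance evaluation: distance to own-class centroids vs. other-class centroids
centroids = {c: KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[y == c]).cluster_centers_
             for c in np.unique(y)}

def boundary_score(x, c):
    d_own = np.linalg.norm(centroids[c] - x, axis=1).min()
    d_other = min(np.linalg.norm(centroids[o] - x, axis=1).min()
                  for o in centroids if o != c)
    return d_other - d_own                     # small score = close to the boundary

order = np.argsort([boundary_score(x, c) for x, c in zip(X, y)])
X, y = X[order], y[order]                      # boundary-near samples come first

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300, random_state=0)
for frac in (0.1, 0.2, 0.4):                   # incremental training on growing prefixes
    n = int(frac * len(X))
    net.fit(X[:n], y[:n])
    print(f"trained on {n} samples, accuracy on full set: {net.score(X, y):.3f}")
```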

Design and Implementation of Incremental Learning Technology for Big Data Mining

  • Min, Byung-Won;Oh, Yong-Sun
    • International Journal of Contents
    • /
    • v.15 no.3
    • /
    • pp.32-38
    • /
    • 2019
  • Traditional mining techniques make it difficult to handle and manage the Big Data generated by various digital media and sensors. In addition, when new data are continuously accumulated over a growing volume of text, problems such as memory shortages and a heavy learning burden arise, because the entire dataset, including data already analyzed and collected, is re-analyzed inefficiently. In this paper, we propose a general-purpose classifier and its structure to solve these problems. We depart from current feature-reduction methods and introduce a new scheme that adopts only the changed elements when new features are partially accumulated in this free-style learning environment. The incremental learning module, built through gradually progressive formation, learns only the changed parts of the data without re-processing the current accumulation, whereas traditional methods re-learn the entire dataset whenever data are added or changed. Users can also freely merge new data with previous data through the resource-management procedure whenever re-learning is needed. Our analysis confirms that the method performs well for data processing in a Big Data environment because of its learning efficiency, and a comparison with NB and SVM shows an accuracy of approximately 95% for all three models. We expect this method to be a viable, high-performance, and accurate substitute for large computing systems in Big Data analysis using a PC cluster environment. (A partial-fit sketch follows this entry.)
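
As a loose illustration of learning only the newly added part, the sketch below uses scikit-learn stand-ins: a HashingVectorizer gives a fixed feature space, so old documents never need re-processing, and partial_fit updates a naive Bayes model with each new batch alone. The classes, documents, and feature size are toy assumptions, not the paper's system.

```python
# A minimal sketch of incremental text classification (not the paper's architecture):
# HashingVectorizer keeps the feature space fixed so new documents need no re-processing
# of old ones, and partial_fit updates the model with only the newly arrived batch.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = MultinomialNB()
classes = ["sports", "economy"]

batches = [
    (["the striker scored twice", "interest rates rose again"], ["sports", "economy"]),
    (["the goalkeeper saved a penalty", "the stock market fell sharply"], ["sports", "economy"]),
]
for docs, labels in batches:                       # each new batch is learned on its own
    clf.partial_fit(vectorizer.transform(docs), labels, classes=classes)

print(clf.predict(vectorizer.transform(["the midfielder was injured"])))
```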

Face Recognition System with SVDD-based Incremental Learning Scheme (SVDD기반의 점진적 학습기능을 갖는 얼굴인식 시스템)

  • Kang, Woo-Sung;Na, Jin-Hee;Ahn, Ho-Seok;Choi, Jin-Young
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.1
    • /
    • pp.66-72
    • /
    • 2006
  • In face recognition, the learning speed is very important because the system must be retrained whenever the dataset grows. In existing methods, training time increases rapidly with the amount of data, which makes training on a large dataset difficult. To overcome this problem, we propose an SVDD (Support Vector Domain Description)-based learning method that can learn a face dataset rapidly and incrementally. The experimental results show that the training speed of the proposed method is much faster than that of other methods. Moreover, our face recognition system can improve its accuracy gradually by learning faces incrementally in real environments with illumination changes. (A per-class SVDD sketch follows this entry.)

  • PDF
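
The sketch below approximates the SVDD-based scheme with scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD: each person has a one-class model, an incremental update refits it on its current support vectors plus the new face samples, and a query is assigned to the class with the highest decision score. The features, kernel parameters, and enrolment loop are illustrative.

```python
# A small sketch of the per-class SVDD idea (a stand-in: OneClassSVM with an RBF kernel
# is closely related to SVDD). Each person gets a one-class model; an incremental update
# refits the model on its current support vectors plus the newly added face samples.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

class SVDDFaceModel:
    def __init__(self):
        self.svm = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.1)
        self.X_keep = None                     # support vectors carried between updates

    def update(self, X_new):
        X = X_new if self.X_keep is None else np.vstack([self.X_keep, X_new])
        self.svm.fit(X)
        self.X_keep = X[self.svm.support_]     # keep only support vectors for the next update
        return self

    def score(self, X):
        return self.svm.decision_function(X)

# toy "face feature" streams for two persons, arriving in two batches each
persons = {name: rng.standard_normal(5) for name in ("alice", "bob")}
models = {name: SVDDFaceModel() for name in persons}
for _ in range(2):
    for name, center in persons.items():
        models[name].update(center + 0.3 * rng.standard_normal((30, 5)))

query = persons["alice"] + 0.3 * rng.standard_normal((1, 5))
print(max(models, key=lambda name: models[name].score(query)[0]))   # highest score wins
```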

INCREMENTAL INDUCTIVE LEARNING ALGORITHM IN THE FRAMEWORK OF ROUGH SET THEORY AND ITS APPLICATION

  • Bang, Won-Chul;Bien, Zeung-Nam
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.308-313
    • /
    • 1998
  • In this paper we discuss a type of inductive learning called learning from examples, whose task is to induce general descriptions of concepts from specific instances of those concepts. In many real-life situations, however, new instances can be added to the set of instances. For such cases, we first propose, within the framework of rough set theory, an algorithm that finds a minimal set of rules for decision tables without recalculating over the overall set of instances. The learning method presented here is based on the rough set concept proposed by Pawlak [2][11]. An algorithm for finding a minimal set of rules is presented using reduct-change theorems, which give criteria for minimal recalculation, together with an illustrative example. Finally, the proposed learning algorithm is applied to a fuzzy system to learn sampled I/O data. (A simplified rule-induction sketch follows this entry.)

  • PDF
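
The reduct-change theorems of the paper are not reproduced; the sketch below only conveys the incremental flavour of rough-set rule induction: instances with identical condition-attribute values form indiscernibility classes, a consistent class yields a certain rule, and adding one instance touches only the class it falls into. The attributes and decision table are toy examples.

```python
# A simplified sketch in the spirit of the abstract (not the paper's reduct-change algorithm):
# instances with identical condition-attribute values form indiscernibility classes, each
# consistent class yields a certain rule, and adding a new instance only updates the single
# class it falls into rather than recomputing over the whole decision table.
from collections import defaultdict

CONDITIONS = ("temperature", "headache")              # condition attributes of the decision table
groups = defaultdict(set)                             # indiscernibility class -> observed decisions

def add_instance(row):
    """Incrementally add one instance: only its own indiscernibility class is touched."""
    groups[tuple(row[a] for a in CONDITIONS)].add(row["flu"])

def certain_rules():
    """A class whose instances all share one decision yields a certain rule."""
    return {key: next(iter(d)) for key, d in groups.items() if len(d) == 1}

for row in [
    {"temperature": "high",   "headache": "yes", "flu": "yes"},
    {"temperature": "high",   "headache": "no",  "flu": "yes"},
    {"temperature": "normal", "headache": "no",  "flu": "no"},
]:
    add_instance(row)
print("rules:", certain_rules())

# a conflicting instance arrives; the ("high", "no") class becomes inconsistent and its rule drops
add_instance({"temperature": "high", "headache": "no", "flu": "no"})
print("rules after update:", certain_rules())
```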