• Title/Summary/Keyword: Incremental Learning

The Speaker Identification Using Incremental Learning (Incremental Learning을 이용한 화자 인식)

  • Sim, Kwee-Bo;Heo, Kwang-Seung;Park, Chang-Hyun;Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.5 / pp.576-581 / 2003
  • A speech signal carries speaker-specific features. In this paper, we propose a speaker identification system that uses incremental learning based on a neural network. The speech signal recorded through a microphone is passed through endpoint detection and divided into voiced and unvoiced segments. The extracted 12th-order cepstrum coefficients are used as input data for the neural network. Incremental learning is a learning algorithm in which the already-learned weights are kept and only the new weights, created when a new speaker is added, are trained. The architecture of the neural network is extended with the number of speakers, so the system can learn without a restriction on the number of speakers.
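
As a rough illustration of the incremental scheme this abstract describes, the sketch below (hypothetical names, NumPy only) adds one output unit per newly enrolled speaker and trains only that unit's weights while the previously learned weights stay frozen.

```python
# Minimal sketch (not the authors' code) of the incremental-learning idea: enrolling a new
# speaker appends one output unit and trains only its weight column; earlier weights stay frozen.
import numpy as np

class IncrementalSpeakerNet:
    def __init__(self, n_inputs=12, n_hidden=8, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.W_hidden = self.rng.normal(0, 0.1, (n_inputs, n_hidden))  # shared, kept frozen here
        self.W_out = np.zeros((n_hidden, 0))                           # one column per enrolled speaker

    def _hidden(self, X):
        return np.tanh(X @ self.W_hidden)

    def add_speaker(self, X_new, y_new, lr=0.05, epochs=200):
        """Append one output unit; y_new is 1 for the new speaker's frames, 0 for others."""
        w = self.rng.normal(0, 0.1, (self.W_out.shape[0], 1))
        H = self._hidden(X_new)
        for _ in range(epochs):
            pred = 1.0 / (1.0 + np.exp(-H @ w))               # sigmoid output of the new unit
            grad = H.T @ (pred - np.asarray(y_new).reshape(-1, 1)) / len(X_new)
            w -= lr * grad                                    # only the new weights are updated
        self.W_out = np.hstack([self.W_out, w])               # old columns remain untouched

    def identify(self, X):
        scores = 1.0 / (1.0 + np.exp(-self._hidden(X) @ self.W_out))
        return scores.argmax(axis=1)                          # index of the most likely speaker
```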

Incremental Support Vector Learning Method for Function Approximation (함수 근사를 위한 점증적 서포트 벡터 학습 방법)

  • 임채환;박주영
    • Proceedings of the IEEK Conference / 2002.06c / pp.135-138 / 2002
  • This paper addresses an incremental learning method for regression. The SVM (support vector machine) is a recently proposed learning method. In general, training a support vector machine requires solving a QP (quadratic programming) problem, which can be impractical for very large or incrementally growing datasets. This paper therefore presents an incremental support vector learning method for function approximation problems.
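
The following is a generic chunk-wise approximation of incremental SVR, not the authors' algorithm: each round solves a small QP over the previous support vectors plus the new chunk (scikit-learn's SVR is assumed to be available).

```python
# Chunk-wise incremental SVR sketch: after each chunk only the current support vectors are
# retained and combined with the incoming data, so every QP stays small.
import numpy as np
from sklearn.svm import SVR

def incremental_svr(chunks, C=10.0, epsilon=0.1):
    """Fit an RBF-kernel SVR over a stream of (X, y) chunks."""
    model = SVR(kernel="rbf", C=C, epsilon=epsilon)
    X_keep = np.empty((0, chunks[0][0].shape[1]))
    y_keep = np.empty(0)
    for X_new, y_new in chunks:
        X_train = np.vstack([X_keep, X_new])
        y_train = np.concatenate([y_keep, y_new])
        model.fit(X_train, y_train)
        sv = model.support_                        # indices of the support vectors
        X_keep, y_keep = X_train[sv], y_train[sv]  # carry only the support set forward
    return model

# Example: approximate a noisy sine function delivered in three chunks.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=300)
chunks = [(X[i:i + 100], y[i:i + 100]) for i in range(0, 300, 100)]
model = incremental_svr(chunks)
```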

Speaker Identification Based on Incremental Learning Neural Network

  • Heo, Kwang-Seung;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.76-82 / 2005
  • A speech signal carries various speaker-specific features, which are extracted by speech signal processing and used by the speaker identification system. In this paper, we propose a speaker identification system that uses incremental learning based on a neural network. The speech signal recorded through the microphone is blocked into frames of 1024 samples, and an energy criterion divides the signal into voiced and unvoiced segments. The extracted 12th-order LPC cepstrum coefficients are used as input data for the neural network. Speakers are identified with an MLP of 12 input nodes, 8 hidden nodes, and 4 output nodes; the number of output nodes corresponds to the number of identified speakers, with the first output node excited by the first speaker. Incremental learning begins when a new speaker is enrolled: the already-learned weights are kept and only the new weights created for the added speaker are trained. This overcomes the drawback of having to retrain a neural network from scratch whenever a new speaker is entered. The architecture of the network is extended with the number of speakers, so the system can learn without a restriction on the number of speakers.
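
A rough sketch of the front end described here (framing into 1024-sample blocks, an energy-based voiced/unvoiced split, and an LPC-derived cepstrum) is given below; it is illustrative only, with librosa assumed for the LPC fit and an arbitrary energy threshold.

```python
# Illustrative front end: frame the signal, keep high-energy (voiced) frames, and convert
# each frame's LPC polynomial into 12 cepstral coefficients via the standard recursion.
import numpy as np
import librosa

def frame_signal(signal, frame_len=1024):
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)

def lpc_cepstrum(frame, order=12):
    """Cepstral coefficients derived from the LPC polynomial of one frame (a[0] == 1)."""
    a = librosa.lpc(frame.astype(float), order=order)
    c = np.zeros(order + 1)
    for n in range(1, order + 1):
        acc = sum(k * c[k] * a[n - k] for k in range(1, n))
        c[n] = -a[n] - acc / n
    return c[1:]

def voiced_cepstra(signal, frame_len=1024, energy_ratio=0.5):
    frames = frame_signal(signal, frame_len)
    energy = (frames ** 2).sum(axis=1)
    voiced = frames[energy > energy_ratio * energy.mean()]   # crude voiced/unvoiced split
    return np.array([lpc_cepstrum(f) for f in voiced])       # one 12-dim vector per voiced frame
```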

A New Incremental Learning Algorithm with Probabilistic Weights Using Extended Data Expression

  • Yang, Kwangmo;Kolesnikova, Anastasiya;Lee, Won Don
    • Journal of information and communication convergence engineering / v.11 no.4 / pp.258-267 / 2013
  • A new incremental learning algorithm using the extended data expression, based on probabilistic compounding, is presented in this paper. The incremental learning algorithm generates an ensemble of weak classifiers and combines them into a strong classifier through weighted majority voting to improve classification performance. We introduce a new probabilistic weighted majority voting founded on the extended data expression, in which the class distribution of each classifier's output is used to combine the classifiers. UChoo, a decision tree classifier for the extended data expression, is used as the base classifier because its extended output defines a class distribution. The extended data expression and the UChoo classifier are powerful techniques for classification and rule-refinement problems; here the extended data expression is applied to obtain probabilistic results with probabilistic majority voting. To show its performance advantages, the new algorithm is compared with Learn++, an incremental ensemble-based algorithm.
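
The sketch below illustrates probabilistic weighted majority voting over an incrementally grown ensemble. The UChoo classifier is not available here, so a scikit-learn decision tree's predict_proba stands in for the class-distribution output; the sketch also assumes every data batch contains all classes so the probability columns align.

```python
# Generic probabilistic weighted majority voting: one weak classifier per incoming batch,
# each weighted by its training accuracy, combined through its predicted class distribution.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ProbabilisticEnsemble:
    def __init__(self):
        self.members, self.weights = [], []

    def add_batch(self, X, y):
        """Train one weak classifier on the new batch (the incremental step)."""
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        err = 1.0 - clf.score(X, y)
        self.members.append(clf)
        self.weights.append(np.log((1.0 - err + 1e-9) / (err + 1e-9)))  # low-error members weigh more

    def predict(self, X):
        # Weighted sum of the members' class distributions, then argmax over classes.
        votes = sum(w * clf.predict_proba(X) for w, clf in zip(self.weights, self.members))
        return self.members[0].classes_[np.argmax(votes, axis=1)]
```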

Efficient Incremental Learning using the Preordered Training Data (미리 순서가 매겨진 학습 데이타를 이용한 효과적인 증가학습)

  • Lee, Sun-Young;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications / v.27 no.2 / pp.97-107 / 2000
  • Incremental learning generally reduces training time and improves the generalization of a neural network by selecting training data incrementally during training. However, existing incremental learning methods re-evaluate the importance of the training data every time they select additional data. In this paper, an incremental learning algorithm is proposed for pattern classification problems that evaluates the importance of each piece of data only once, before training starts. The importance of a data point depends on how close it is to the decision boundary, and the proposed algorithm orders the data by this distance using clustering. Experimental results on artificial and real-world classification problems show that the proposed incremental learning method significantly reduces the size of the training set without decreasing generalization performance.
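
One plausible reading of the ordering step, sketched below with scikit-learn's KMeans: cluster each class, score every sample by how much closer it is to its own class's centroids than to the other classes', and sort ascending so boundary-near samples come first. This is an interpretation, not the paper's exact procedure.

```python
# Rank samples by an approximate distance to the decision boundary using class-wise clustering.
import numpy as np
from sklearn.cluster import KMeans

def boundary_order(X, y, clusters_per_class=3):
    """Return sample indices ordered from 'nearest to the boundary' outward."""
    centroids, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=clusters_per_class, n_init=10).fit(X[y == c])
        centroids.append(km.cluster_centers_)
        labels.extend([c] * clusters_per_class)
    centroids, labels = np.vstack(centroids), np.array(labels)

    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # sample-to-centroid distances
    own = np.where(labels[None, :] == y[:, None], d, np.inf).min(axis=1)
    other = np.where(labels[None, :] != y[:, None], d, np.inf).min(axis=1)
    margin = other - own            # small margin -> close to the decision boundary
    return np.argsort(margin)
```

Training then proceeds on growing prefixes of `X[boundary_order(X, y)]`, so the importance ranking is computed only once before learning starts.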

Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1042-1045 / 2003
  • In this paper, we propose a speaker identification system that uses vowels, which carry speaker-specific characteristics. The system is divided into a speech feature extraction part and a speaker identification part. The feature extraction part extracts the speaker's features: voiced speech has characteristics that distinguish speakers, and for vowel extraction the formants of the voiced speech are obtained through frequency analysis, so that the vowel /a/, which has distinct formants, is extracted from the text. Pitch, formants, intensity, log area ratio, LP coefficients, and cepstral coefficients are candidate features; the cepstral coefficients, which show the best speaker identification performance among these methods, are used. The identification part distinguishes speakers using a neural network, with 12th-order cepstral coefficients as the learning input. The network structure is an MLP trained by BP (backpropagation), and hidden and output nodes are added incrementally. The nodes of the incremental learning neural network are interconnected via weighted links, with each node in a layer generally connected to each node in the succeeding layer and the output nodes providing the network output. Through vowel extraction and incremental learning, the proposed system uses less training data, reduces learning time, and improves the identification rate.
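
The vowel-spotting step can be illustrated with standard LPC-root formant estimation, as in the hypothetical sketch below; the /a/ formant ranges are rough textbook values, not taken from the paper.

```python
# Estimate formants from the roots of a frame's LPC polynomial and flag /a/-like frames.
import numpy as np
import librosa

def formants(frame, sr, order=12):
    """Approximate formant frequencies (Hz) of one frame."""
    a = librosa.lpc(frame.astype(float), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = np.angle(roots) * sr / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * sr / np.pi     # rough bandwidth estimate
    keep = (freqs > 90) & (bws < 400)             # drop spurious, very wide resonances
    return np.sort(freqs[keep])

def looks_like_a(frame, sr):
    """Very rough /a/ detector: F1 roughly 600-1100 Hz, F2 roughly 1000-1700 Hz."""
    f = formants(frame, sr)
    if len(f) < 2:
        return False
    f1, f2 = f[0], f[1]
    return 600 < f1 < 1100 and 1000 < f2 < 1700
```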

Speaker Identification using Incremental Neural Network and LPCC (Incremental Neural Network 과 LPCC을 이용한 화자인식)

  • 허광승;박창현;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.12a / pp.341-344 / 2002
  • Speech carries the characteristics of its speakers. This paper introduces a speaker identification system that uses incremental learning based on a neural network. Sentences recorded through a computer are transformed into the frequency domain by an FFT, and vowels are extracted using the formants that characterize them. LPC processing of the extracted vowels yields coefficients that carry the speaker's characteristics. Through the LPCC process and vector quantization, 10 feature points are fed as input for learning, and speaker identification is performed by a neural network whose hidden and output layers grow with the number of speakers.
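
A minimal sketch of the vector-quantization step, assuming a k-means codebook of 10 entries built from the LPCC frames (all parameter choices are illustrative, not taken from the paper):

```python
# Quantize a variable number of LPCC frames into a fixed 10-entry codebook that can serve
# as a fixed-size network input.
import numpy as np
from sklearn.cluster import KMeans

def lpcc_codebook(lpcc_vectors, codebook_size=10):
    km = KMeans(n_clusters=codebook_size, n_init=10).fit(lpcc_vectors)
    return km.cluster_centers_          # shape: (10, lpcc_order)

# Example: 200 frames of 12-dim LPCC collapse to a (10, 12) codebook.
frames = np.random.default_rng(0).normal(size=(200, 12))
net_input = lpcc_codebook(frames).flatten()
```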

The Relationship Between Learning Orientation and Incremental Innovation, and the Moderating Effect of Tenure (학습지향성이 점진적 혁신에 미치는 효과 및 재직기간의 조절효과)

  • Ahn, Kwan-Young
    • Journal of the Korea Safety Management & Science / v.12 no.3 / pp.249-255 / 2010
  • This paper studies the relationship between learning orientation and incremental innovation (process, operational, and service innovation), and the moderating effect of tenure, in the telecommunication service sector. Based on responses from 241 employees, multiple regression analysis shows that learning orientation has positive relationships with process, operational, and service innovation. The moderation analysis shows that the relationships between learning orientation and all three incremental innovation factors are stronger for longer-tenured employees than for shorter-tenured ones.
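
The "moderating effect" reported here corresponds to an interaction term in a multiple regression; the sketch below shows that setup on synthetic data (statsmodels assumed), not the paper's dataset.

```python
# Moderated regression illustration: a positive coefficient on the interaction term means
# tenure strengthens the learning orientation -> innovation relationship.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 241
df = pd.DataFrame({
    "learning_orientation": rng.normal(size=n),
    "tenure": rng.uniform(1, 20, size=n),
})
# Synthetic outcome in which the effect of learning orientation grows with tenure.
df["process_innovation"] = (0.3 * df.learning_orientation
                            + 0.02 * df.tenure
                            + 0.05 * df.learning_orientation * df.tenure
                            + rng.normal(scale=0.5, size=n))

model = smf.ols("process_innovation ~ learning_orientation * tenure", data=df).fit()
print(model.summary())
```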

Fault-tolerant control system for once-through steam generator based on reinforcement learning algorithm

  • Li, Cheng;Yu, Ren;Yu, Wenmin;Wang, Tianshu
    • Nuclear Engineering and Technology / v.54 no.9 / pp.3283-3292 / 2022
  • Based on the Deep Q-Network (DQN) reinforcement learning algorithm, an active fault-tolerance method with incremental actions is proposed for the control system of a once-through steam generator (OTSG) subject to sensor faults. We first establish an OTSG model as the interaction environment for the reinforcement learning agent. The agent chooses an action according to the system state obtained from the pressure sensor; the incremental actions let it gradually approach the optimal strategy for the current fault, and the agent updates its network from the rewards obtained during interaction. In this way, the active fault-tolerant control of the OTSG is cast as the agent's decision-making process. Comparison experiments against a traditional reinforcement learning algorithm (RL) with fixed strategies show that the proposed active fault-tolerant controller controls accurately and rapidly under sensor faults, so that the OTSG pressure is stabilized near the set-point and the OTSG runs normally and stably.
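
A compact sketch of the incremental-action idea: the DQN's discrete actions are small increments applied to the current actuator command rather than absolute set values. The environment, network size, and increment values below are placeholders, not the paper's OTSG model.

```python
# Epsilon-greedy choice of a control-signal increment from a small Q-network.
import random
import torch
import torch.nn as nn

INCREMENTS = [-0.02, -0.005, 0.0, 0.005, 0.02]   # candidate changes to the control signal

class QNet(nn.Module):
    def __init__(self, state_dim=3, n_actions=len(INCREMENTS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

def select_increment(qnet, state, epsilon=0.1):
    """Choose an increment given the (possibly faulty) sensor state."""
    if random.random() < epsilon:
        return random.choice(INCREMENTS)
    with torch.no_grad():
        q = qnet(torch.as_tensor(state, dtype=torch.float32))
    return INCREMENTS[int(q.argmax())]

# In the control loop the chosen increment nudges the actuator command toward the set-point,
# and the reward penalizes the remaining pressure error, e.g.:
#   command += select_increment(qnet, state)
#   reward = -abs(pressure - setpoint)
```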

A Learning Algorithm to find the Optimized Network Structure in an Incremental Model (점증적 모델에서 최적의 네트워크 구조를 구하기 위한 학습 알고리즘)

  • Lee Jong-Chan;Cho Sang-Yeop
    • Journal of Internet Computing and Services / v.4 no.5 / pp.69-76 / 2003
  • In this paper we present a new learning algorithm for pattern classification. The algorithm addresses a problem of incremental learning algorithms: the network structure becomes too complex when noise patterns are included in the training data set. Our approach uses a pruning method that terminates the learning process according to a predefined criterion. In this process, an iterative model with a 3-layer feedforward structure is derived from the incremental model by appropriate manipulations; note that this network is not fully connected between the upper and lower layers. To verify the effectiveness of the pruning method, the resulting network is retrained by EBP (error backpropagation). The results show that the proposed algorithm is effective in terms of both system performance and the number of nodes in the network structure.
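
A generic prune-then-retrain sketch in PyTorch, illustrating the kind of structure reduction discussed; the paper's pruning is driven by a predefined stopping criterion, whereas this sketch simply masks the smallest-magnitude weights before retraining with backpropagation.

```python
# Prune a small feedforward classifier to a sparsely connected structure, then retrain it.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

net = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 4))

# Zero out the 70% smallest-magnitude weights of the first layer; the pruning mask keeps
# them at zero during retraining, so the layers are no longer fully connected.
prune.l1_unstructured(net[0], name="weight", amount=0.7)

def retrain_ebp(net, X, y, epochs=100, lr=0.01):
    """Retrain the pruned network with plain error backpropagation (EBP)."""
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()
    return net

# After retraining, prune.remove(net[0], "weight") makes the sparse weights permanent.
```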
