• Title/Summary/Keyword: Incremental Data Learning


Data selection method for Incremental learning using prior evaluation of data importance (데이터 중요도의 사전 평가를 이용한 증가학습을 위한 데이터 선택 방법)

  • 이선영;조성준;방승양
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.339-341 / 1998
  • Multilayer perceptron training can be divided into active learning and passive learning depending on whether the training data are actively selected. Existing active learning methods define a measure of the importance of each training example and select the training data according to that measure. These methods require either complex computations, such as a Hessian approximation, or repeated calculations to evaluate data importance during the selection process. This paper proposes an active training-data selection method based on unsupervised learning that requires no repeated calculations during selection, and analyzes its convergence characteristics and generalization performance. Comparative experiments show that the proposed method improves convergence speed with only simple computations and does not fall behind existing active learning methods in generalization.
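The abstract does not spell out which unsupervised method performs the selection, so the sketch below is only a minimal illustration of the general idea, assuming k-means clustering is used to pick a small set of representative examples before training a multilayer perceptron; all function names and parameters are illustrative.

```python
# Minimal sketch: pick representative training examples with unsupervised
# clustering (k-means assumed), then train an MLP only on those points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def select_representatives(X, n_select=100, random_state=0):
    """Cluster the pool and keep the example closest to each centroid."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=random_state).fit(X)
    chosen = []
    for c in range(n_select):
        members = np.where(km.labels_ == c)[0]
        if len(members) == 0:
            continue
        d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(d)])
    return np.array(chosen)

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
idx = select_representatives(X, n_select=100)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X[idx], y[idx])
```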


An Incremental Method Using Sample Split Points for Global Discretization (전역적 범주화를 위한 샘플 분할 포인트를 이용한 점진적 기법)

  • 한경식;이수원
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.849-858 / 2004
  • Most supervised learning algorithms can be applied only after continuous variables have been transformed into categorical ones at a preprocessing stage, in order to avoid the difficulty of processing continuous variables. This preprocessing stage, called global discretization, uses a class distribution list called bins. However, when the data are large and the range of the variable to be discretized is very wide, many sorting and merging operations must be performed to produce a single bin, because most global discretization methods require one. Moreover, because existing methods perform discretization in batch mode, any newly added data forces discretization to be repeated from scratch to rebuild the categories affected by that data. This paper proposes a method that extracts sample points and performs discretization from these sample points in order to solve these problems. Because the proposed approach does not require merging to produce a single bin, it is efficient when large data sets must be discretized. In this study, experiments on real and synthetic datasets compared the proposed method with an existing one.
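As a rough illustration of discretization driven by sampled split points (the paper's exact sampling and bin-construction rules are not given in the abstract, so the details below are assumptions), candidate cut points can be drawn from a sorted sample of the variable and all values binned against them:

```python
# Sketch: global discretization using split points taken from a random sample,
# avoiding repeated sort/merge passes over the full data set.
import numpy as np

def sample_split_points(values, n_bins=10, sample_size=1000, seed=0):
    """Estimate cut points from a sample instead of the full column."""
    rng = np.random.default_rng(seed)
    sample = np.sort(rng.choice(values, size=min(sample_size, len(values)), replace=False))
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]   # evenly spaced quantiles of the sample
    return np.quantile(sample, qs)

def discretize(values, cut_points):
    """Map each continuous value to a category index."""
    return np.searchsorted(cut_points, values, side="right")

x = np.random.default_rng(1).normal(size=100_000)
cuts = sample_split_points(x, n_bins=8)
codes = discretize(x, cuts)                    # integer category per value
```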

Evaluation Method of College English Education Effect Based on Improved Decision Tree Algorithm

  • Dou, Fang
    • Journal of Information Processing Systems / v.18 no.4 / pp.500-509 / 2022
  • With the rapid development of educational informatization, teaching methods have become increasingly diverse, but the large volume of information data makes it difficult to evaluate the effect of English education on both teaching subjects and objects. Therefore, this study adopts the concept of incremental learning and an eigenvalue interval algorithm to improve the weighted decision tree, and builds an English education effect evaluation model based on association rules. According to the results, the average information-classification accuracy of the improved decision tree algorithm is 96.18%, the classification error rate can be as low as 0.02%, and resistance to overfitting is good. The classification error rate between the improved decision tree algorithm and the original decision tree does not exceed 1%. The proposed educational evaluation method can effectively provide early warning based on academic situation analysis, improve teachers' professional skills at an accelerated pace, and perfect the education system.

Incremental Generation of A Decision Tree Using Global Discretization For Large Data (대용량 데이터를 위한 전역적 범주화를 이용한 결정 트리의 순차적 생성)

  • Han, Kyong-Sik;Lee, Soo-Won
    • The KIPS Transactions:PartB / v.12B no.4 s.100 / pp.487-498 / 2005
  • Recently, research has focused on decision tree algorithms that can handle large datasets. However, because most of these algorithms process data in batch mode, they must rebuild the tree from scratch whenever new data are added. A more efficient approach to reducing this rebuilding cost is to build the tree incrementally. Representative incremental tree construction algorithms are BOAT and ITI, and most of them use local discretization to handle numeric data. However, because discretization requires sorted numeric data, when processing large datasets a global discretization method that sorts all data only once is more suitable than a local method that sorts at every node. This paper proposes an incremental tree construction method that efficiently rebuilds the tree using global discretization to handle numeric data. When new data are added, the categories affected by the data must be recreated, and the tree structure must then be changed in accordance with the category changes. The paper proposes a method that extracts sample points and performs discretization from these sample points to recreate categories efficiently, and uses confidence intervals and a tree restructuring method to adjust the tree structure to the category changes. In this study, an experiment using a people database was conducted to compare the proposed method with an existing method that uses local discretization.
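The abstract describes recreating categories from sample points when new data arrive and then adjusting the tree; as a loose sketch of only the first half of that idea (the confidence-interval test and tree restructuring are not reproduced, and the reservoir-sampling helper below is an assumption), categories can be re-derived from a maintained sample without re-sorting all data:

```python
# Sketch: keep a reservoir sample of the numeric column so categories can be
# recomputed cheaply whenever a new batch of data arrives.
import numpy as np

class SamplePointBins:
    def __init__(self, capacity=2000, n_bins=8, seed=0):
        self.capacity, self.n_bins = capacity, n_bins
        self.rng = np.random.default_rng(seed)
        self.sample = np.empty(0)
        self.seen = 0

    def update(self, new_values):
        """Fold a new batch into the reservoir, then refresh the cut points."""
        for v in np.asarray(new_values, dtype=float):
            self.seen += 1
            if len(self.sample) < self.capacity:
                self.sample = np.append(self.sample, v)
            else:
                j = self.rng.integers(0, self.seen)
                if j < self.capacity:
                    self.sample[j] = v
        qs = np.linspace(0, 1, self.n_bins + 1)[1:-1]
        self.cut_points = np.quantile(self.sample, qs)
        return self.cut_points

bins = SamplePointBins()
bins.update(np.random.default_rng(2).normal(size=5000))            # initial batch
bins.update(np.random.default_rng(3).normal(3, 1, size=1000))      # new data shifts the bins
```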

A Selective Induction Framework for Improving Prediction in Financial Markets

  • Kim, Sung Kun
    • Journal of Information Technology Applications and Management / v.22 no.3 / pp.1-18 / 2015
  • Financial markets are characterized by large numbers of complex, interacting factors that are ill understood and frequently difficult to measure. Mathematical models developed in finance are precise formulations of theories of how these factors interact to produce the market value of a financial asset. While these models are quite good at predicting market values, because the underlying forces and their interactions are not precisely understood, the model value nevertheless deviates to some extent from the observable market value. In this paper we propose a framework for augmenting the predictive capability of a mathematical model with a learning component that is primed with an initial set of historical data and then adjusts its behavior after each prediction event.
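A very small sketch of pairing a fixed mathematical model with a learning component that is primed on history and keeps adjusting after each prediction event; the linear residual corrector and the stand-in pricing model used here are assumptions for illustration, not the paper's framework.

```python
# Sketch: wrap a fixed pricing model with a learner that corrects its residuals
# and is updated after every observed outcome (SGD regressor assumed).
import numpy as np
from sklearn.linear_model import SGDRegressor

def pricing_model(features):
    """Hypothetical stand-in for a mathematical model's value estimate."""
    return 10.0 + 2.0 * features[:, 0]

rng = np.random.default_rng(0)
corrector = SGDRegressor()

# Prime on historical data: learn the model's residuals.
X_hist = rng.normal(size=(500, 3))
y_hist = pricing_model(X_hist) + 0.5 * X_hist[:, 1] + rng.normal(0, 0.1, 500)
corrector.fit(X_hist, y_hist - pricing_model(X_hist))

# After each new prediction event, fold the observed error back in.
X_new = rng.normal(size=(1, 3))
prediction = pricing_model(X_new) + corrector.predict(X_new)
observed = pricing_model(X_new) + 0.5 * X_new[:, 1]
corrector.partial_fit(X_new, observed - pricing_model(X_new))
```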

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • The Journal of Information Technology and Database / v.8 no.1 / pp.95-104 / 2001
  • In the text classification domain, labeling the training documents is an expensive process because it requires human expertise and is a tedious, time-consuming task. Therefore, it is important to reduce the manual labeling of training documents while improving the text classifier. Selective sampling, a form of active learning, reduces the number of training documents that need to be labeled by examining the unlabeled documents and selecting the most informative ones for manual labeling. We apply this methodology to Naive Bayes, a classifier renowned as a successful method in text classification. One of the most important issues in selective sampling is the criterion used when selecting training documents from the large pool of unlabeled documents. In this paper, we propose two measures for this criterion: the Mean Absolute Deviation (MAD) and the entropy measure. The experimental results, using the Reuters 21578 corpus, show that the proposed learning method improves the Naive Bayes text classifier more than the existing ones.
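A minimal sketch of entropy-based selective sampling for a Naive Bayes text classifier; the MAD measure and the paper's exact selection loop are omitted, and the scikit-learn components and toy documents below are assumptions for illustration.

```python
# Sketch: pick the unlabeled documents whose predicted class distribution has
# the highest entropy, so they can be manually labeled and added to training.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled_docs = ["grain exports rise", "oil prices fall"]
labels = ["grain", "crude"]
unlabeled_docs = ["wheat shipment delayed", "crude futures climb", "markets quiet"]

vec = CountVectorizer()
nb = MultinomialNB().fit(vec.fit_transform(labeled_docs), labels)

proba = nb.predict_proba(vec.transform(unlabeled_docs))
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
query_order = np.argsort(-entropy)                    # most uncertain first
print([unlabeled_docs[i] for i in query_order[:2]])   # candidates for manual labeling
```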


Biological Early Warning System for Toxicity Detection (독성 감지를 위한 생물 조기 경보 시스템)

  • Kim, Sung-Yong;Kwon, Ki-Yong;Lee, Won-Don
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.9 / pp.1979-1986 / 2010
  • A biological early warning system detects toxicity by observing the behavior of organisms in water. The system uses a classifier to judge the existence and amount of toxicity in the water. Boosting is one possible way to improve classifier performance: it repeatedly changes the training example set by focusing on the examples the base classifier finds difficult. As a result, prediction performance improves for events that are hard to classify, but the information contained in easily classified events is discarded. In this paper, an incremental learning method is proposed to overcome this shortcoming by using the extended data expression. In this algorithm, the decision tree classifier defines class distribution information using the weight parameter of the extended data expression, exploiting the necessary information not only from well-classified but also from weakly classified events. Experimental results show that the new algorithm outperforms the earlier Learn++ method, which does not use the weight parameter.
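The extended data expression and its weight parameter are specific to the paper and are not reproduced here; the sketch below only illustrates the surrounding Learn++-style idea, in which each incoming batch trains one weighted decision tree and the ensemble votes (scikit-learn components assumed).

```python
# Sketch: incremental ensemble; each data batch trains one decision tree with
# per-example weights, and predictions are a majority vote over all trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class IncrementalTreeEnsemble:
    def __init__(self):
        self.trees = []

    def add_batch(self, X, y, sample_weight=None):
        tree = DecisionTreeClassifier(max_depth=5)
        tree.fit(X, y, sample_weight=sample_weight)
        self.trees.append(tree)

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])
        # Majority vote across trees (ties broken toward the lowest label).
        return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

rng = np.random.default_rng(0)
ens = IncrementalTreeEnsemble()
for _ in range(3):                                   # three successive batches
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)
    w = np.where(np.abs(X[:, 0]) < 0.2, 2.0, 1.0)    # emphasize borderline examples
    ens.add_batch(X, y, sample_weight=w)
print(ens.predict(rng.normal(size=(5, 4))))
```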

Design of Digit Recognition System Realized with the Aid of Fuzzy RBFNNs and Incremental-PCA (퍼지 RBFNNs와 증분형 주성분 분석법으로 실현된 숫자 인식 시스템의 설계)

  • Kim, Bong-Youn;Oh, Sung-Kwun;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.56-63 / 2016
  • In this study, we introduce the design of a Fuzzy RBFNNs-based digit recognition system using incremental PCA in order to recognize handwritten digits. Principal Component Analysis (PCA) is a widely adopted dimensionality reduction algorithm, but it incurs high computational overhead for feature extraction when high-dimensional images or a large amount of training data are used. To alleviate this problem, incremental PCA is adopted for computationally efficient processing as well as incremental learning of high-dimensional data in the feature extraction stage. The architecture of the Fuzzy Radial Basis Function Neural Network (RBFNN) consists of three functional modules: the condition, conclusion, and inference parts. In the condition part, the input space is partitioned using fuzzy clustering realized by the Fuzzy C-Means (FCM) algorithm, which is used instead of Gaussian functions in order to reflect the characteristics of the input data. In the conclusion part, the connection weights take extended and diverse polynomial forms such as constant, linear, quadratic, and modified quadratic. Experimental results on the benchmark MNIST handwritten digit database demonstrate the effectiveness and efficiency of the proposed digit recognition system when compared with other studies.
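For the incremental-PCA feature-extraction step, here is a minimal sketch using scikit-learn's IncrementalPCA; the fuzzy RBFNN classifier itself is not sketched, and the random arrays merely stand in for batches of flattened MNIST-like digit images.

```python
# Sketch: fit PCA incrementally over mini-batches of flattened digit images,
# so the full high-dimensional data set never has to be held in memory at once.
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=50)
rng = np.random.default_rng(0)

# Stand-in for batches of 28x28 images streamed from disk.
for _ in range(10):
    batch = rng.random((500, 784))                  # 500 flattened images per batch
    ipca.partial_fit(batch)

features = ipca.transform(rng.random((32, 784)))    # reduced inputs for the classifier
print(features.shape)                               # (32, 50)
```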

Fault Detection Algorithm of Charge-discharge System of Hybrid Electric Vehicle Using SVDD (SVDD기법을 이용한 하이브리드 전기자동차 충-방전시스템의 고장검출 알고리듬)

  • Na, Sang-Gun;Yang, In-Beom;Heo, Hoon
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.21 no.11 / pp.997-1004 / 2011
  • A fault detection algorithm for the charge and discharge system that ensures the safe use of hybrid electric vehicles is proposed in this paper. The algorithm can be used as a complement to existing fault detection techniques for charge and discharge systems. The proposed algorithm uses the SVDD technique together with two additional methods for learning a large amount of data: one incrementally learns the data, and the other removes data that do not affect the next learning stage using a new data reduction technique, in which the data to remove are selected using lines connecting support vectors. With the proposed method, data processing speed is drastically improved and the storage space used is remarkably reduced compared with conventional methods that use the SVDD technique alone. Battery data and speed data from a commercial hybrid electric vehicle are used in this study. A fault boundary is produced via the SVDD technique from the system's inputs and outputs during normal operation, without mathematical modeling. A fault detection simulation is then performed using artificial fault data and the obtained fault boundary. In the simulation, the fault detection time of the proposed algorithm is compared with that of the peak-peak method, and the proposed algorithm is shown to detect faults in regions where the conventional peak-peak method cannot.
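A compact sketch of the fault-boundary idea: a one-class boundary is learned from normal-operation data and new samples outside it are flagged as faults. scikit-learn's OneClassSVM is used here as a stand-in for SVDD (closely related but not identical), the feature columns are hypothetical, and the paper's incremental learning and support-vector-based data reduction are not reproduced.

```python
# Sketch: learn a boundary around normal operating data, then flag points that
# fall outside it as faults (OneClassSVM used as an SVDD stand-in).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical normal-operation samples: [battery_voltage, vehicle_speed]
normal = np.column_stack([rng.normal(300, 5, 1000), rng.normal(60, 10, 1000)])

scaler = StandardScaler().fit(normal)
boundary = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(scaler.transform(normal))

# New measurements: one normal, one artificially faulty (voltage collapse).
new = np.array([[301.0, 58.0], [220.0, 55.0]])
flags = boundary.predict(scaler.transform(new))      # +1 = normal, -1 = fault
print(flags)
```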

Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook;Chung, Hoo-jung;Park, So-Young;Kwak, Young-Jae;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.29 no.9 / pp.654-668 / 2002
  • In this paper, we present an empirical study on improving Korean text chunking with machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic method that searches the space of feature sets, using the performance estimated by a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth the data sparseness, we suggest using a general part-of-speech tag set and selective lexical information, taking the characteristics of the Korean language into account. Experimental results showed that chunk tags and lexical information within a given context window are important features, while spacing-unit information is less important than the others, independently of the machine learning technique used. Furthermore, using selective lexical information gives not only a smoothing effect but also a smaller feature space than using all lexical information. Korean text chunking based on memory-based learning and decision tree learning with the selected feature space achieved precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively.
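A small sketch of searching the feature-set space with estimated performance as the measure of "incremental usefulness"; the chunking features and learners from the paper are replaced by generic placeholders, and the greedy forward search with cross-validated accuracy is an assumed concretization of the heuristic search described in the abstract.

```python
# Sketch: greedy forward search over feature sets, keeping a feature only if
# it improves the learner's cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def greedy_feature_search(X, y, n_features):
    selected, best_score = [], 0.0
    remaining = list(range(n_features))
    while remaining:
        scores = {f: cross_val_score(DecisionTreeClassifier(max_depth=5),
                                     X[:, selected + [f]], y, cv=3).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:         # no incremental usefulness
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # only features 0 and 3 matter
print(greedy_feature_search(X, y, n_features=8))
```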