• Title/Summary/Keyword: Incremental Learning Algorithm (점진적 학습 알고리즘)

A New Incremental Instance-Based Learning Using Recursive Partitioning (재귀분할을 이용한 새로운 점진적 인스턴스 기반 학습기법)

  • Han Jin-Chul;Kim Sang-Kwi;Yoon Chung-Hwa
    • The KIPS Transactions:PartB / v.13B no.2 s.105 / pp.127-132 / 2006
  • K-NN (k-Nearest Neighbors), a well-known instance-based learning algorithm, simply stores the entire set of training patterns in memory and classifies a test pattern with a distance function. K-NN is proven to show satisfactory performance, but it is notorious for its memory usage and lengthy computation time. Various studies in the literature have tried to minimize both, and the NGE (Nested Generalized Exemplar) theory is one of them. In this paper, we propose RPA (Recursive Partition Averaging) and IRPA (Incremental RPA), an incremental version of RPA. RPA partitions the entire pattern space recursively and generates a representative from each partition. Because RPA tends to produce an excessive number of partitions as the number of features in a pattern increases, we also present IRPA, which reduces the number of representative patterns by processing the training set incrementally. The proposed methods exhibit performance comparable to k-NN while storing far fewer patterns, and better results than the EACH system, which implements the NGE theory.
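
As a rough illustration of the RPA idea (not the authors' exact procedure), the sketch below recursively splits the feature space at median values, keeps one averaged representative per leaf partition, and classifies with 1-NN over the representatives; the depth and size thresholds are assumptions for the example.

```python
import numpy as np

def rpa_representatives(X, y, depth=0, max_depth=4, min_size=5):
    """Recursively split the pattern space (X: ndarray, y: ndarray) and
    return one (mean pattern, majority class) representative per leaf."""
    if depth == max_depth or len(X) <= min_size or len(set(y)) == 1:
        majority = max(set(y), key=list(y).count)
        return [(X.mean(axis=0), majority)]
    f = depth % X.shape[1]                # cycle through features by depth
    t = np.median(X[:, f])                # split at the median value
    left, right = X[:, f] <= t, X[:, f] > t
    if left.all() or right.all():         # degenerate split: stop here
        majority = max(set(y), key=list(y).count)
        return [(X.mean(axis=0), majority)]
    return (rpa_representatives(X[left], y[left], depth + 1, max_depth, min_size)
            + rpa_representatives(X[right], y[right], depth + 1, max_depth, min_size))

def classify(reps, x):
    """1-NN over the partition representatives instead of all training data."""
    return min(reps, key=lambda r: np.linalg.norm(r[0] - x))[1]
```

With representatives in hand, classification touches only the leaf averages rather than every stored training pattern, which is the memory saving the abstract describes.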

Efficient Construction and Training Multilayer Perceptrons by Incremental Pattern Selection (점진적 패턴 선택에 의한 다층 퍼셉트론의 효율적 구성 및 학습)

  • Jang, Byeong-Tak
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.429-438 / 1996
  • An incremental learning algorithm is presented that constructs a multilayer perceptron whose size is optimal for solving a given problem. Unlike conventional algorithms, in which a fixed-size training set is processed repeatedly, the method uses an increasing number of critical examples to find a necessary and sufficient number of hidden units for learning the entire data set. Experimental results on handwritten digit recognition show that network size optimization combined with incremental pattern selection generalizes significantly better and converges faster than conventional methods.
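
A minimal sketch of the pattern-selection loop only (the paper also grows the hidden layer; the network size, step sizes, and the use of scikit-learn's MLPClassifier here are assumptions for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def incremental_pattern_selection(X, y, start=50, step=50, seed=0):
    """Train on a growing subset of 'critical' examples: begin with a few
    random patterns, then repeatedly add patterns the current net gets wrong."""
    rng = np.random.default_rng(seed)
    chosen = list(rng.choice(len(X), size=start, replace=False))
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    while True:
        net.fit(X[chosen], y[chosen])
        taken = set(chosen)
        wrong = [i for i in np.flatnonzero(net.predict(X) != y) if i not in taken]
        if not wrong:                     # the subset now suffices for all data
            return net, chosen
        chosen += list(rng.choice(wrong, size=min(step, len(wrong)), replace=False))
```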

Committee Learning Classifier based on Attribute Value Frequency (속성 값 빈도 기반의 전문가 다수결 분류기)

  • Lee, Chang-Hwan;Jung, In-Chul;Kwon, Young-S.
    • Journal of KIISE:Databases / v.37 no.4 / pp.177-184 / 2010
  • These days, many kinds of data, including sensor, delivery, credit, and stock data, are generated continuously and in massive quantities. It is difficult to learn from such data because they are large in volume and their concepts change quickly. To handle these problems, learning methods based on sliding windows over time have been used, but these approaches must rebuild the model every time new data arrive, which costs considerable time and effort. Very simple incremental learning methods are therefore needed. The Bayesian method is one example, but it has the disadvantage of requiring prior knowledge (probabilities) of the data. In this study, we propose a learning method based on attribute values that can be applied even when the prior probabilities of the data are unknown. The main idea is that each attribute value is regarded as an expert learner, and combining the expert learners leads to better results. Experimental results show that our method learns from data very quickly and performs well compared with current learning methods (decision tree and Bayesian).
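
A minimal sketch of the "each attribute value is an expert" idea under assumed details (majority vote among the experts; counts update one example at a time, so no model rebuilding is needed when new data arrive):

```python
from collections import defaultdict

class ValueFrequencyCommittee:
    """Each (attribute, value) pair acts as an expert that votes for the
    class it has co-occurred with most often; counts update incrementally."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # (attr, val) -> class -> n

    def learn_one(self, x, label):
        """x is a list of attribute values for one example."""
        for attr, val in enumerate(x):
            self.counts[(attr, val)][label] += 1

    def predict(self, x):
        votes = defaultdict(int)
        for attr, val in enumerate(x):
            expert = self.counts.get((attr, val))
            if expert:                    # each seen expert votes for its best class
                votes[max(expert, key=expert.get)] += 1
        return max(votes, key=votes.get) if votes else None
```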

A New Rule-Generation Algorithm (새로운 규칙 생성 알고리즘)

  • Kim Sang-kwi;Yoon Chung-hwa
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.721-723 / 2005
  • MBR (Memory Based Reasoning), which is widely used for pattern classification, assigns a test pattern the class of the closest training pattern by computing distances to the training patterns stored in memory, and therefore cannot explain the criteria by which a test pattern is classified. In this paper, we use the RPA (Recursive Partition Averaging) technique to generate IF-THEN rules that can explain the classification criteria, and we propose a rule-pruning algorithm that removes unnecessary conditions to improve the generalization performance of the generated rules, together with an incremental rule-extraction algorithm that reduces the number of generated rules.
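
The rule form and the greedy condition-pruning step might look like the following sketch, where a rule is a list of interval conditions over RPA partitions and `evaluate` is a hypothetical callback returning validation accuracy:

```python
def matches(rule, x):
    """rule: list of (feature_index, low, high) interval conditions
    describing one hyperrectangular RPA partition."""
    return all(lo <= x[f] <= hi for f, lo, hi in rule)

def prune_rule(rule, evaluate):
    """Greedily drop any interval condition whose removal does not lower
    the rule's validation accuracy (evaluate: rule -> float)."""
    best, kept = evaluate(rule), list(rule)
    for cond in list(kept):
        trial = [c for c in kept if c is not cond]
        if evaluate(trial) >= best:       # condition was unnecessary
            kept, best = trial, evaluate(trial)
    return kept
```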

Incremental Conceptual Clustering Using Modified Category Utility (변형된 Category Utility를 이용한 점진 개념학습)

  • Kim Pyo Jae;Choi Jin Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2005.04a / pp.193-197 / 2005
  • COBWEB, an incremental conceptual learning algorithm, classifies instances without class labels based on their attributes and values, building a classification tree in which each node corresponds to a class, i.e., a set of similar instances. Category utility is used as the criterion for grouping similar instances into the same class; it partitions the classes so as to maximize intra-class similarity and inter-class difference. The category utility used in the original COBWEB can be viewed as a tradeoff between class size and predictive accuracy, which biases learning toward larger classes at the cost of slightly lower predictive accuracy. This bias produces unnecessary class nodes (spurious nodes) in the classification tree, making the learned class concepts hard to understand. In this paper, considering the attribute-value distributions of a class and its instances, we propose a modified category utility that adds a weight proportional to the association between a class and an attribute, and we show through experiments on datasets that the proposed category utility mitigates the existing bias toward large class sizes.
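
For reference, the standard category utility that the paper modifies can be computed as below; the partition representation is an assumption, and the authors' weighted variant would additionally scale the per-attribute terms by a class-attribute association weight.

```python
def category_utility(partition):
    """Standard COBWEB category utility: the mean over classes of
    P(C_k) * sum_ij [P(A_i=v_ij | C_k)^2 - P(A_i=v_ij)^2].
    partition: list of (p_class, cond), where cond maps
    (attr, value) -> P(attr=value | class)."""
    # unconditional value probabilities, marginalized over the classes
    uncond = {}
    for p_c, cond in partition:
        for av, p in cond.items():
            uncond[av] = uncond.get(av, 0.0) + p_c * p
    gain = 0.0
    for p_c, cond in partition:
        gain += p_c * (sum(p * p for p in cond.values())
                       - sum(q * q for q in uncond.values()))
    return gain / len(partition)
```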

The Incremental Learning Method of Variable Slope Backpropagation Algorithm Using Representative Pattern (대표 패턴을 사용한 가변 기울기 역전도 알고리즘의 점진적 학습방법)

  • Shim Bum-Sik;Yoon Chung-Hwa
    • Journal of the Korea Society of Computer and Information / v.3 no.1 / pp.95-112 / 1998
  • The error backpropagation algorithm is widely used in areas such as associative memory, speech recognition, pattern recognition, and robotics. However, whenever a new training pattern must be added, the network has to be retrained from the very beginning on all previous patterns together with the new one, so the required training time grows rapidly as the number of patterns increases. An incremental learning method is therefore essential in situations where large amounts of data are learned periodically and additively. In this study, we propose a method that keeps the existing network structure and performs additional learning using representative patterns. To evaluate the efficiency of the proposed technique, we apply it to the Monk's data and the Iris data, which are standard benchmarks in the machine learning field.
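
One way to realize the idea, sketched under assumed details (random per-class exemplars stand in for the paper's representative-pattern selection, and `net` is any classifier with a `fit` method that can be retrained in place):

```python
import numpy as np

def representatives(X, y, per_class=10, seed=0):
    """Condense old training data into a few representative patterns per
    class (random exemplars here; the paper selects them more carefully)."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        keep += list(rng.choice(idx, size=min(per_class, len(idx)), replace=False))
    return X[keep], y[keep]

def incremental_update(net, X_old, y_old, X_new, y_new):
    """Retrain the existing network on representatives of the old data
    plus the newly arrived patterns, instead of the full history."""
    Xr, yr = representatives(X_old, y_old)
    X = np.vstack([Xr, X_new])
    y = np.concatenate([yr, y_new])
    net.fit(X, y)                         # warm-started backprop in practice
    return net
```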

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering / v.7 no.9 / pp.351-360 / 2018
  • LWR (Locally Weighted Regression) is traditionally a lazy learning model: it computes a prediction for a given input, the query point, from a regression fit over a short interval in which training data closer to the query point receive higher weights. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data sample, and the quality of the prediction can vary with this choice; however, little research has addressed the selection or combination of multiple LWR models. In this study, after generating an initial LWR model from an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a suitable indicator function and assess the LWR models on other sample data sets to overcome data-set bias. We adopt an eager learning strategy that gradually generates and stores an LWR model whenever data are generated in each section. To obtain a prediction at a specific point in time, an LWR model is built from newly generated data within a predetermined interval and then combined, by a genetic algorithm, with the existing LWR models of that section. The proposed method shows better results than selecting among multiple LWR models with a simple averaging method. The results are also compared with predictions from multiple regression analysis on real data, such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
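
The LWR building block itself can be sketched as a weighted least-squares fit around the query; the Gaussian kernel and bandwidth `tau` are assumptions, and the paper's contribution layers a GA-driven combination of many such models on top.

```python
import numpy as np

def lwr_predict(X, y, query, tau=1.0):
    """Locally weighted linear regression: weight training points by a
    Gaussian kernel around the query and solve weighted least squares."""
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * tau ** 2))
    Xb = np.hstack([np.ones((len(X), 1)), X])   # add a bias column
    W = np.diag(w)
    beta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    return np.array([1.0, *query]) @ beta
```

An ensemble along the paper's lines would evaluate several such models (differing in sample set and kernel) at the query point and let a genetic algorithm search for their combination weights.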

Gradient Estimation for Progressive Photon Mapping (점진적 광자 매핑을 위한 기울기 계산 기법)

  • Donghee Jeon;Jeongmin Gu;Bochang Moon
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.141-147 / 2024
  • Progressive photon mapping is a widely adopted rendering technique that performs kernel-density estimation on photons progressively generated from light sources. Its hyperparameter, which controls the reduction rate of the density-estimation kernel, strongly affects the quality of the rendered image because of the bias-variance tradeoff of the pixel estimates in photon-mapped results. The errors of the rendered pixel estimates can be minimized by estimating the optimal parameters with gradient-based optimization techniques. To this end, we derive the gradients of the pixel estimates with respect to the parameters of progressive photon mapping and verify the estimated gradients by comparing them with finite differences. The gradients estimated in this paper can be applied in future work to an online learning algorithm that performs progressive photon mapping and parameter optimization simultaneously.
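
As a sketch, the usual progressive radius-shrinkage schedule and the finite-difference check the abstract mentions look like this (the radius update is the standard one from the progressive photon mapping literature, not necessarily the authors' exact formulation; `estimate` is a hypothetical pixel-estimate routine):

```python
def ppm_radius_sq(r0_sq, alpha, iterations):
    """Progressive photon mapping radius shrinkage: at pass i the squared
    radius is scaled by (i + alpha) / (i + 1), so alpha controls how fast
    the kernel shrinks, i.e., the bias-variance tradeoff in the abstract."""
    r_sq = r0_sq
    for i in range(1, iterations + 1):
        r_sq *= (i + alpha) / (i + 1)
    return r_sq

def finite_difference(estimate, alpha, h=1e-4):
    """Central difference used to sanity-check an analytic d(pixel)/d(alpha)."""
    return (estimate(alpha + h) - estimate(alpha - h)) / (2 * h)
```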

Gradient Descent Approach for Value-Based Weighting (점진적 하강 방법을 이용한 속성값 기반의 가중치 계산방법)

  • Lee, Chang-Hwan;Bae, Joo-Hyun
    • The KIPS Transactions:PartB / v.17B no.5 / pp.381-388 / 2010
  • Naive Bayesian learning has been widely used in many data mining applications and performs surprisingly well in many of them. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning, and we develop new gradient-descent methods for both value weighting and feature weighting in this context. The performance of the proposed methods is compared with attribute weighting and standard naive Bayesian learning, and the value weighting method performs better in most cases.
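
A minimal sketch of value weighting with gradient descent, under assumed details: weights live on (attribute, value) pairs, the model score is log P(c) plus value-weighted log-likelihood terms, and one step follows the gradient of the negative log-likelihood of the true class.

```python
import numpy as np

def log_score(x, c, log_prior, log_lik, w):
    """Value-weighted naive Bayes score:
    log P(c) + sum_i w[(i, x_i)] * log P(x_i | c).
    w: dict (attr, value) -> weight, e.g. initialized to 1.0;
    log_lik: dict (attr, value, class) -> log P(value | class)."""
    return log_prior[c] + sum(w[(i, v)] * log_lik[(i, v, c)]
                              for i, v in enumerate(x))

def sgd_step(x, y, classes, log_prior, log_lik, w, lr=0.1):
    """One gradient-descent step on the negative log-likelihood of the
    true class y; raises weights of values that support the correct class."""
    scores = np.array([log_score(x, c, log_prior, log_lik, w) for c in classes])
    post = np.exp(scores - scores.max())
    post /= post.sum()                    # softmax posterior over classes
    for i, v in enumerate(x):
        # dNLL/dw = sum_c P(c|x) log P(v|c) - log P(v|y)
        grad = sum(post[k] * log_lik[(i, v, c)] for k, c in enumerate(classes))
        grad -= log_lik[(i, v, y)]
        w[(i, v)] -= lr * grad
```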