• Title/Summary/Keyword: Selective Incremental Learning

Selective Incremental Learning for Face Tracking Using Staggered Multi-Scale LBP (얼굴 추적에서의 Staggered Multi-Scale LBP를 사용한 선택적인 점진 학습)

  • Lee, Yonggeol;Choi, Sang-Il
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.115-123
    • /
    • 2015
  • The incremental learning method performs well in face tracking. However, it has a drawback in that it is sensitive to tracking errors from the previous frame caused by environmental changes. In this paper, we propose a selective incremental learning method to track a face more reliably under various conditions. The proposed method is robust to illumination variation because it uses LBP (Local Binary Pattern) features for each individual frame. We select the patches to be used in incremental learning by using Staggered Multi-Scale LBP, which prevents the propagation of tracking errors that occurred in the previous frame. The experimental results show that the proposed method improves face tracking performance on videos with environmental changes such as illumination variation.
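
The patch-selection step described in the abstract above can be illustrated with a minimal sketch: multi-scale LBP histograms are computed per patch, and only patches that remain consistent with the current appearance model are passed to the incremental update. The scales, the chi-square distance, and the threshold below are assumptions for illustration; the paper's staggered sampling layout is not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_histogram(patch, scales=((8, 1), (16, 2), (24, 3))):
    """Concatenate uniform-LBP histograms computed at several (P, R) scales."""
    feats = []
    for n_points, radius in scales:
        lbp = local_binary_pattern(patch, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def select_patches_for_update(patches, model_hist, threshold=0.35):
    """Keep only patches whose histogram stays close (chi-square distance)
    to the appearance model; the rest are treated as likely tracking errors
    and excluded from the incremental learning step."""
    selected = []
    for patch in patches:
        h = multiscale_lbp_histogram(patch)
        chi2 = 0.5 * np.sum((h - model_hist) ** 2 / (h + model_hist + 1e-10))
        if chi2 < threshold:
            selected.append(patch)
    return selected
```

Patches rejected by `select_patches_for_update` are simply left out of the model update, which is how the sketch mimics the idea of stopping tracking errors from propagating into later frames.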

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • Proceedings of the Korea Database Society Conference
    • /
    • 2001.06a
    • /
    • pp.331-341
    • /
    • 2001
  • In this paper, we propose a new learning model for the Naive Bayes text classifier. In this model, the classifier is retrained with a small number of training documents selected from a pool of unlabeled documents. We show that, following this learning method, the accuracy of the text classifier can be improved significantly at a small cost. Determining the training documents in this way, where an algorithm selects the most informative documents from the unlabeled pool and a human expert then labels them, is called selective sampling. In this paper, we apply this selective sampling approach to the Naive Bayes text classifier. The proposed learning method uses the Mean Absolute Deviation (MAD) and entropy measures as the criteria for selecting retraining documents from the unlabeled pool. Experiments show that the proposed learning method improves the performance of the Naive Bayes text classifier more than the existing method based on a confidence measure.
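
A minimal sketch of the entropy criterion mentioned above, assuming a scikit-learn `MultinomialNB` classifier; the batch size and the way the unlabeled pool is vectorized are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def entropy_scores(clf, X_unlabeled):
    """Posterior entropy per unlabeled document; a higher value means the
    current classifier is less certain, i.e. the document is more informative."""
    proba = clf.predict_proba(X_unlabeled)
    return -np.sum(proba * np.log(proba + 1e-12), axis=1)

def select_for_labeling(clf, X_unlabeled, batch_size=10):
    """Return indices of the documents to hand to a human expert for labeling."""
    scores = entropy_scores(clf, X_unlabeled)
    return np.argsort(scores)[::-1][:batch_size]

# Typical loop: train on the labeled seed set, pick a batch, have it labeled,
# retrain, and repeat until the labeling budget is exhausted.
# clf = MultinomialNB().fit(X_labeled, y_labeled)
# idx = select_for_labeling(clf, X_unlabeled)
```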

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • The Journal of Information Technology and Database
    • /
    • v.8 no.1
    • /
    • pp.95-104
    • /
    • 2001
  • In the text classification domain, labeling the training documents is an expensive process because it requires human expertise and is a tedious, time-consuming task. Therefore, it is important to reduce the manual labeling of training documents while improving the text classifier. Selective sampling, a form of active learning, reduces the number of training documents that need to be labeled by examining the unlabeled documents and selecting the most informative ones for manual labeling. We apply this methodology to Naive Bayes, a text classifier renowned as a successful method in text classification. One of the most important issues in selective sampling is the criterion used when selecting the training documents from the large pool of unlabeled documents. In this paper, we propose two measures that determine this criterion: the Mean Absolute Deviation (MAD) and the entropy measure. The experimental results, using the Reuters-21578 corpus, show that the proposed learning method improves the Naive Bayes text classifier more than the existing ones.
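
The Mean Absolute Deviation criterion can be sketched in the same spirit as the entropy measure above. The reading below (a flat class posterior yields a small deviation and therefore marks an uncertain, informative document) is one plausible interpretation for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def mad_scores(proba):
    """Mean absolute deviation of each document's class posterior around its
    mean; a nearly uniform posterior yields a small MAD (high uncertainty)."""
    mean = proba.mean(axis=1, keepdims=True)
    return np.abs(proba - mean).mean(axis=1)

def select_by_mad(clf, X_unlabeled, batch_size=10):
    """Select the documents with the flattest posteriors for manual labeling."""
    scores = mad_scores(clf.predict_proba(X_unlabeled))
    return np.argsort(scores)[:batch_size]
```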

Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook;Chung, Hoo-jung;Park, So-Young;Kwak, Young-Jae;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.9
    • /
    • pp.654-668
    • /
    • 2002
  • In this paper, we present an empirical study on improving Korean text chunking with machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic method that searches through the space of feature sets, using the performance estimated by a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth the data sparseness, we suggest using a general part-of-speech tag set and selective lexical information, taking the characteristics of the Korean language into consideration. Experimental results showed that chunk tags and lexical information within a given context window are important features, while spacing-unit information is less important than the others, and that these findings are independent of the machine learning technique used. Furthermore, using selective lexical information provides not only a smoothing effect but also a smaller feature space than using all lexical information. Korean text chunking based on memory-based learning and decision tree learning with the selected feature set achieved precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively.
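
The heuristic search through the space of feature sets described above can be sketched as a greedy forward selection in which a candidate feature type is kept only when it raises the estimated performance of the learner (its "incremental usefulness"). The learner, the cross-validated scoring, and the `make_matrix` helper below are hypothetical placeholders, not the paper's implementation.

```python
from sklearn.model_selection import cross_val_score

def greedy_feature_search(make_matrix, candidates, docs, y, learner, cv=5):
    """make_matrix(docs, feature_set) builds a design matrix for a given set of
    feature types (e.g. chunk tags, POS tags, lexical items, spacing units).
    Feature types are added one at a time while estimated accuracy improves."""
    selected, best_score = [], 0.0
    improved = True
    while improved:
        improved = False
        for feat in (f for f in candidates if f not in selected):
            score = cross_val_score(learner, make_matrix(docs, selected + [feat]),
                                    y, cv=cv).mean()
            if score > best_score:
                best_score, best_feat, improved = score, feat, True
        if improved:
            selected.append(best_feat)
    return selected, best_score
```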

A Selective Induction Framework for Improving Prediction in Financial Markets

  • Kim, Sung Kun
    • Journal of Information Technology Applications and Management
    • /
    • v.22 no.3
    • /
    • pp.1-18
    • /
    • 2015
  • Financial markets are characterized by large numbers of complex, interacting factors which are ill-understood and frequently difficult to measure. Mathematical models developed in finance are precise formulations of theories of how these factors interact to produce the market value of a financial asset. Although these models are quite good at predicting market values, because these forces and their interactions are not precisely understood, the model value nevertheless deviates to some extent from the observable market value. In this paper we propose a framework for augmenting the predictive capability of a mathematical model with a learning component which is primed with an initial set of historical data and then adjusts its behavior after each prediction event.
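
A minimal sketch of the general idea, assuming the mathematical model is available as a callable that maps observed factors to a model value: a learning component is primed with historical residuals and keeps adjusting after each prediction event. The averaging rule and window size are illustrative assumptions, not the paper's framework.

```python
import numpy as np

class CorrectedModel:
    """Wraps a mathematical pricing model with a simple learning component
    that tracks how far the model value deviates from the observed market
    value and adjusts future predictions accordingly."""

    def __init__(self, math_model, window=250):
        self.math_model = math_model   # callable: features -> model value
        self.window = window           # how many recent residuals to keep
        self.residuals = []

    def predict(self, x):
        base = self.math_model(x)
        if not self.residuals:
            return base
        # Illustrative correction: mean of the recent residual history.
        return base + float(np.mean(self.residuals[-self.window:]))

    def update(self, x, observed_value):
        """Called after each prediction event with the observed market value."""
        self.residuals.append(observed_value - self.math_model(x))
```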

Mechanism and Application Methodology of Mental Practice (정신 연습의 기전과 적용 방법)

  • Kim Jong-soon;Lee Keun-heui;Bae Sung-soo
    • The Journal of Korean Physical Therapy
    • /
    • v.15 no.2
    • /
    • pp.75-84
    • /
    • 2003
  • The purpose of this study was to review the mechanism and application methodology of mental practice. Mental practice is the symbolic rehearsal of physical activity in the absence of any gross muscular movement. Humans have the ability to generate mental correlates of perceptual and motor events without any triggering external stimulus, a function known as imagery. Practice produces both internal and external sensory consequences, which are thought to be essential for learning to occur. It is for this reason that mental practice, the rehearsal of a skill in imagination rather than by overt physical activity, has intrigued theorists, especially those interested in cognitive processes. Several studies in sport psychology have shown that mental practice can be effective in optimizing the execution of movements in athletes and can help novice learners in the incremental acquisition of new skilled behaviors. There are many theories explaining the positive effect of mental practice on skill learning and performance. The most tenable include symbolic learning theory, psychoneuromuscular theory, Paivio's theory, regional cerebral blood flow theory, motivation theory, modeling theory, mental and muscle movement nodes theory, insight theory, selective attention theory, and attention-arousal set theory. The factors influencing the effects of mental practice include the form of application, the application period, the length of the mental practice sessions, the number of repetitions, and whether physical practice is also performed.
