• Title/Summary/Keyword: Training examples

A Co-training Method based on Classification Using Unlabeled Data (비분류표시 데이타를 이용하는 분류 기반 Co-training 방법)

  • 윤혜성;이상호;박승수;용환승;김주한
    • Journal of KIISE:Software and Applications / v.31 no.8 / pp.991-998 / 2004
  • In many practical learning problems, including the bioinformatics area, there is a small amount of labeled data along with a large pool of unlabeled data. Labeled examples are fairly expensive to obtain because they require human effort. In contrast, unlabeled examples can be gathered inexpensively without an expert. A common method of using unlabeled data for data classification and analysis is co-training. This method uses a small set of labeled examples to learn a classifier in each of two views. Each classifier is then applied to all unlabeled examples, and co-training detects the examples on which each classifier makes the most confident predictions. After some iterations, new classifiers are learned on the expanded training data and the number of labeled examples is increased. In this paper, we propose a new co-training strategy using unlabeled data and evaluate our method with two classifiers and two experimental data sets: WebKB and BIND XML data. Our experiments show that the proposed co-training technique effectively improves the classification accuracy when the number of labeled examples is very small.
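The generic co-training loop this abstract describes can be sketched as follows. The two-view synthetic data, the GaussianNB classifiers, and all parameters are illustrative stand-ins, not the paper's actual setup:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic binary problem with two redundant feature "views".
n = 400
y = rng.integers(0, 2, n)
view1 = y[:, None] + rng.normal(0.0, 0.8, (n, 2))
view2 = y[:, None] + rng.normal(0.0, 0.8, (n, 2))

L_idx = list(range(20))          # indices of the small labeled seed set
L_y = y[:20].tolist()            # their (true) labels
unlabeled = set(range(20, n))

for _ in range(5):               # a few co-training iterations
    c1 = GaussianNB().fit(view1[L_idx], L_y)
    c2 = GaussianNB().fit(view2[L_idx], L_y)
    for clf, view in ((c1, view1), (c2, view2)):
        pool = sorted(unlabeled)
        proba = clf.predict_proba(view[pool])
        conf = proba.max(axis=1)
        # Move the most confidently predicted examples to the labeled set,
        # tagged with the classifier's own prediction (pseudo-labels).
        for i in np.argsort(conf)[-10:]:
            idx = pool[i]
            unlabeled.discard(idx)
            L_idx.append(idx)
            L_y.append(int(proba[i].argmax()))
```

Each classifier teaches the other through the shared labeled pool, which is the essential mechanism of co-training.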

Selection of An Initial Training Set for Active Learning Using Cluster-Based Sampling (능동적 학습을 위한 군집기반 초기훈련집합 선정)

  • 강재호;류광렬;권혁철
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.859-868 / 2004
  • We propose a method of selecting initial training examples for active learning so that it can reach high accuracy faster with fewer further queries. Our method is based on the assumption that an active learner reaches higher performance when given an initial training set consisting of diverse and typical examples rather than similar and special ones. To obtain a good initial training set, we first cluster the examples with the k-means clustering algorithm to find groups of similar examples. Then a representative example, the one closest to the cluster's centroid, is selected from each cluster. After these representative examples are labeled by querying the user for their categories, they can be used as initial training examples. We also suggest a method of using the centroids themselves as initial training examples by labeling them with the categories of the corresponding representative examples. Experiments with various text data sets have shown that an active learner starting from the initial training set selected by our method reaches higher accuracy faster than one starting from a randomly generated initial training set.
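The selection step the abstract describes can be sketched as below; the random pool, cluster count, and use of scikit-learn's KMeans are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (300, 5))   # the unlabeled pool

k = 10                                # desired initial-training-set size
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# One representative per cluster: the example closest to the centroid.
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[dists.argmin()]))

# `representatives` would now be shown to the user for labeling.
```

Picking one nearest-to-centroid example per cluster yields a set that is both diverse (clusters are far apart) and typical (centroids sit in dense regions).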

Effective Adversarial Training by Adaptive Selection of Loss Function in Federated Learning (연합학습에서의 손실함수의 적응적 선택을 통한 효과적인 적대적 학습)

  • Suchul Lee
    • Journal of Internet Computing and Services / v.25 no.2 / pp.1-9 / 2024
  • Although federated learning is designed to be safer than centralized methods in terms of security and privacy, it still has many vulnerabilities. An attacker performing an adversarial attack intentionally manipulates the deep learning model by injecting carefully crafted input data, that is, adversarial examples, into the client's training data to induce misclassification. A common defense strategy against this is so-called adversarial training, which preemptively teaches the model the characteristics of adversarial examples. Existing research assumes a scenario where all clients are under adversarial attack, but considering that the number of clients in federated learning is very large, this is far from reality. In this paper, we experimentally examine adversarial training in a scenario where only some of the clients are under attack. Through experiments, we found a trade-off in which the classification accuracy for normal samples decreases as the classification accuracy for adversarial examples increases. To exploit this trade-off effectively, we present a method that performs adversarial training by adaptively selecting a loss function depending on whether the client is attacked.
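The per-client selection idea can be sketched as a toy loss switch; the mixing weight, the specific loss forms, and the toy probabilities are assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(p, y):
    """Mean negative log-likelihood of integer labels y under probabilities p."""
    return float(-np.mean(np.log(p[np.arange(len(y)), y] + 1e-12)))

def client_loss(p_clean, p_adv, y, attacked, lam=0.7):
    """Adaptively selected training loss (illustrative): attacked clients mix
    in a loss on adversarial examples, clean clients train on clean loss only."""
    clean = cross_entropy(p_clean, y)
    if not attacked:
        return clean
    return (1.0 - lam) * clean + lam * cross_entropy(p_adv, y)

# Toy predicted probabilities for 2 examples, 2 classes.
p_clean = np.array([[0.9, 0.1], [0.2, 0.8]])
p_adv   = np.array([[0.6, 0.4], [0.5, 0.5]])
y = np.array([0, 1])

loss_clean_client = client_loss(p_clean, p_adv, y, attacked=False)
loss_attacked_client = client_loss(p_clean, p_adv, y, attacked=True)
```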

An Active Co-Training Algorithm for Biomedical Named-Entity Recognition

  • Munkhdalai, Tsendsuren;Li, Meijing;Yun, Unil;Namsrai, Oyun-Erdene;Ryu, Keun Ho
    • Journal of Information Processing Systems / v.8 no.4 / pp.575-588 / 2012
  • Exploiting unlabeled text data with a relatively small labeled corpus has been an active and challenging research topic in text mining, due to the recent growth of the amount of biomedical literature. Biomedical named-entity recognition is an essential prerequisite task before effective text mining of biomedical literature can begin. This paper proposes an Active Co-Training (ACT) algorithm for biomedical named-entity recognition. ACT is a semi-supervised learning method in which two classifiers based on two different feature sets iteratively learn from informative examples that have been queried from the unlabeled data. We design a new classification problem to measure the informativeness of an example in unlabeled data. In this classification problem, the examples are classified based on a joint view of a feature set to be informative/non-informative to both classifiers. To form the training data for the classification problem, we adopt a query-by-committee method. Therefore, in the ACT, both classifiers are considered to be one committee, which is used on the labeled data to give the informativeness label to each example. The ACT method outperforms the traditional co-training algorithm in terms of f-measure as well as the number of training iterations performed to build a good classification model. The proposed method tends to efficiently exploit a large amount of unlabeled data by selecting a small number of examples having not only useful information but also a comprehensive pattern.
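The committee-based informativeness measure can be sketched as follows; the two-view synthetic data and GaussianNB members are stand-ins for the paper's feature sets and classifiers, and simple prediction disagreement stands in for its informativeness labeling:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)
view1 = y[:, None] + rng.normal(0.0, 1.0, (n, 2))
view2 = y[:, None] + rng.normal(0.0, 1.0, (n, 2))

labeled = np.arange(30)
unlabeled = np.arange(30, n)

# The committee: both view-classifiers trained on the labeled data.
c1 = GaussianNB().fit(view1[labeled], y[labeled])
c2 = GaussianNB().fit(view2[labeled], y[labeled])

# An unlabeled example is "informative" when the committee disagrees on it:
# at least one classifier still has something to learn from its true label.
p1 = c1.predict(view1[unlabeled])
p2 = c2.predict(view2[unlabeled])
informative = unlabeled[p1 != p2]
```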

An Empirical Study on Factors for Effective Total Quality Management Education (효과적인 종합적 품질경영(TQM)교육 실행의 성공요인에 관한 연구)

  • 서창적;김재환
    • Journal of Korean Society for Quality Management / v.28 no.3 / pp.68-81 / 2000
  • In this paper, we studied the four stages of quality-related education and training and identified alignment factors that influence successful TQM education and training. Based on extensive literature reviews, four stages are extracted: quality concepts training, quality tools training, special topics training, and leadership training. We also determine the alignment factors. A research model including the above factors is developed and tested statistically. The perceived data are collected from the quality-department managers of 140 Korean firms through a survey. The results show that the alignment factors that lead to success in quality-related education and training are using relevant examples and implementing training at the top in quality concepts training, providing time and opportunity to master skills in quality tools training, organizing courses into a logical curriculum in special topics training, and providing ongoing feedback in leadership training. We also offer numerous suggestions that can help organizations develop effective training programs to meet their objectives.

Generation and Selection of Nominal Virtual Examples for Improving the Classifier Performance (분류기 성능 향상을 위한 범주 속성 가상예제의 생성과 선별)

  • Lee, Yu-Jung;Kang, Byoung-Ho;Kang, Jae-Ho;Ryu, Kwang-Ryel
    • Journal of KIISE:Software and Applications / v.33 no.12 / pp.1052-1061 / 2006
  • This paper presents a method of using virtual examples to improve classification accuracy for data with nominal attributes. Most previous research on virtual examples focused on data with numeric attributes and used domain-specific knowledge to generate virtual examples useful for a particular target learning algorithm. Instead of using domain-specific knowledge, our method samples virtual examples from a naive Bayesian network constructed from the given training set. A sampled example is considered useful if it contributes to an increase in the network's conditional likelihood when added to the training set. A set of useful virtual examples can be collected by repeating this process of sampling followed by evaluation. Experiments have shown that virtual examples collected this way can help various learning algorithms derive classifiers of improved accuracy.
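The sample-then-evaluate loop can be sketched with a plain naive Bayes model over nominal attributes; the tiny data set, smoothing, and acceptance rule details are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny nominal data set: 2 attributes with 3 values each, binary class.
X = rng.integers(0, 3, (60, 2))
y = (X.sum(axis=1) > 2).astype(int)

def nb_fit(X, y, n_vals=3, alpha=1.0):
    """Laplace-smoothed class prior and per-attribute conditionals."""
    prior = np.array([(y == c).sum() + alpha for c in (0, 1)], float)
    prior /= prior.sum()
    cond = np.full((2, X.shape[1], n_vals), alpha)
    for c in (0, 1):
        for j in range(X.shape[1]):
            vals, cnt = np.unique(X[y == c, j], return_counts=True)
            cond[c, j, vals] += cnt
    cond /= cond.sum(axis=2, keepdims=True)
    return prior, cond

def cond_loglik(X, y, prior, cond):
    """Log-likelihood of the class labels given the attributes, summed over X."""
    ll = 0.0
    for x, c in zip(X, y):
        joint = np.log(prior) + np.array(
            [sum(np.log(cond[k, j, x[j]]) for j in range(len(x)))
             for k in (0, 1)])
        ll += joint[c] - np.logaddexp(joint[0], joint[1])
    return ll

prior, cond = nb_fit(X, y)
base = cond_loglik(X, y, prior, cond)

accepted = []
for _ in range(50):
    # Sample a virtual example (class first, then attributes) from the model.
    c = int(rng.choice(2, p=prior))
    vx = np.array([rng.choice(3, p=cond[c, j]) for j in range(X.shape[1])])
    p2, q2 = nb_fit(np.vstack([X, vx]), np.append(y, c))
    # Keep it only if it raises the conditional likelihood of the data.
    if cond_loglik(X, y, p2, q2) > base:
        accepted.append((vx, c))
        X, y = np.vstack([X, vx]), np.append(y, c)
        prior, cond = nb_fit(X, y)
        base = cond_loglik(X, y, prior, cond)
```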

Cluster-Based Selection of Diverse Query Examples for Active Learning (능동적 학습을 위한 군집화 기반의 다양한 복수 문의 예제 선정 방법)

  • Kang, Jae-Ho;Ryu, Kwang-Ryel;Kwon, Hyuk-Chul
    • Journal of Intelligence and Information Systems / v.11 no.1 / pp.169-189 / 2005
  • In order to derive a better classifier with a limited number of training examples, active learning alternately repeats a querying stage for category labeling and a subsequent learning stage for rebuilding the classifier with the newly expanded training set. To relieve the user of the burden of labeling, especially in an on-line environment, it is important to minimize the number of querying steps as well as the total number of query examples. We can derive a good classifier in a small number of querying steps using only a small number of examples if, at each querying step, we can present the user with multiple diverse, representative, and ambiguous examples. In this paper, we propose a cluster-based batch query selection method which can select diverse, representative, and highly ambiguous examples for efficient active learning. Experiments with various text data sets have shown that our method can derive a better classifier than other methods that consider only ambiguity as the criterion for selecting multiple query examples.
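The batch selection the abstract describes can be sketched as "filter by ambiguity, then diversify by clustering"; the margin-based ambiguity score, pool sizes, and batch size are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)
X = y[:, None] + rng.normal(0.0, 1.2, (n, 4))

labeled = np.arange(40)
unlabeled = np.arange(40, n)

clf = GaussianNB().fit(X[labeled], y[labeled])
proba = clf.predict_proba(X[unlabeled])
# Ambiguity: small margin between the two class probabilities.
margin = np.abs(proba[:, 0] - proba[:, 1])
ambiguous = unlabeled[np.argsort(margin)[:50]]   # 50 most ambiguous examples

# Diversity: cluster the ambiguous examples and query one per cluster.
k = 5                                            # batch size per querying step
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[ambiguous])
batch = []
for c in range(k):
    members = ambiguous[km.labels_ == c]
    d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    batch.append(int(members[d.argmin()]))       # representative per cluster
```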

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim;Myung Gyo Oh;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.4 / pp.621-631 / 2023
  • Adversarial training improves the robustness of deep neural networks to adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring that even a small perturbation of the input layer causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model is reduced in various untrained situations such as clean samples or other attack techniques. An architectural perspective is therefore necessary to improve feature representation power and solve this problem. In this paper, we apply an attention module that generates an attention map of an input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher against various attacks such as PGD, FGSM, and BIM, and against more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
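The PGD attack used inside adversarial training can be sketched on a toy logistic model (the fixed weights and attack hyperparameters are illustrative; the paper's attention module is not modeled here):

```python
import numpy as np

# A fixed logistic model standing in for the trained network.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss(x, y):
    """Binary cross-entropy of the logistic model on a single example."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def pgd_attack(x, y, eps=0.3, alpha=0.1, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                         # d(loss)/dx for this model
        x_adv = x_adv + alpha * np.sign(grad)      # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the ball
    return x_adv

x = np.array([0.2, -0.1, 0.4])
y = 1
x_adv = pgd_attack(x, y)
```

In PGD adversarial training, `x_adv` would replace (or accompany) `x` in each training batch.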

Preform Design of Backward Extrusion Based on Inference of Analytical Knowledge (해석적 지식 추론을 통한 후방 압출품의 예비 성형체 설계)

  • 김병민
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 1999.03b / pp.84-87 / 1999
  • This paper presents a preform design method that combines the analytic method and inference of learned knowledge with a neural network. The analytic method is a finite element method used to simulate backward extrusion with pre-defined process parameters. A multi-layer network trained with the back-propagation algorithm learns from training examples drawn from the simulation results. The design procedure works in two directions: in the first, the neural network infers the deformed shape from the pre-defined process parameters; in the other, the network infers the process parameters from the deformed shape. The latter is especially useful for designing the preform: from the desired final shape it is possible to determine process parameters such as friction, stroke, and tooling geometry. The proposed method helps the shop floor decide the process parameters and preform shapes for producing a sound product.
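The two inference directions can be sketched with a pair of small regression networks; the linear surrogate standing in for the FEM results, the two shape descriptors, and scikit-learn's MLPRegressor are all assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for FEM results: process parameters (e.g. friction, stroke)
# mapped to two shape descriptors of the deformed preform.
params = rng.uniform(0.0, 1.0, (200, 2))
shapes = np.column_stack([
    1.0 + 0.5 * params[:, 0] + 0.3 * params[:, 1],
    0.8 - 0.2 * params[:, 0] + 0.6 * params[:, 1],
])

# Forward network: process parameters -> deformed shape.
fwd = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(params, shapes)
# Inverse network: desired shape -> process parameters (the design direction).
inv = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(shapes, params)

pred_shape = fwd.predict(params[:1])
pred_params = inv.predict(shapes[:1])
```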

GENIE : A learning intelligent system engine based on neural adaptation and genetic search (GENIE : 신경망 적응과 유전자 탐색 기반의 학습형 지능 시스템 엔진)

  • 장병탁
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1996.10a / pp.27-34 / 1996
  • GENIE is a learning-based engine for building intelligent systems. Learning in GENIE proceeds by incrementally modeling its human or technical environment using a neural network and a genetic algorithm. The neural network represents the knowledge for solving a given task and has the ability to grow its structure. The genetic algorithm provides the neural network with training examples by actively exploring the example space of the problem. Integrated into the GENIE system architecture, the genetic algorithm and the neural network build a virtually self-teaching autonomous learning system. This paper describes the structure of GENIE and its learning components. The performance is demonstrated on a robot learning problem. We also discuss the lessons learned from experiments with GENIE and point out further possibilities for effectively hybridizing genetic algorithms with neural networks and other soft-computing techniques.
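The "GA supplies the learner with examples" idea can be sketched as follows; a lookup-table learner stands in for GENIE's neural network, and the teacher function, population size, and mutation scheme are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    """The environment being modeled (an assumed target function)."""
    return np.sin(3.0 * x)

# A trivial learner: a 1-nearest-neighbor lookup of its training examples.
train_x = list(rng.uniform(-1, 1, 5))
train_y = [teacher(t) for t in train_x]

def predict(x):
    i = int(np.argmin([abs(x - t) for t in train_x]))
    return train_y[i]

def fitness(x):
    """The GA rewards candidate examples the learner currently gets wrong."""
    return abs(teacher(x) - predict(x))

pop = rng.uniform(-1, 1, 20)
for _ in range(15):                               # GA generations
    f = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(f)[-10:]]            # select the fittest half
    children = parents + rng.normal(0, 0.1, 10)   # mutate
    pop = np.clip(np.concatenate([parents, children]), -1, 1)

best = pop[int(np.argmax([fitness(x) for x in pop]))]
train_x.append(float(best))       # the GA's find becomes a new training example
train_y.append(teacher(best))
```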
