• Title/Summary/Keyword: Labeled Data


Software Fault Prediction using Semi-supervised Learning Methods (세미감독형 학습 기법을 사용한 소프트웨어 결함 예측)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.3
    • /
    • pp.127-133
    • /
    • 2019
  • Most studies of software fault prediction concern supervised learning models that use only labeled training data. Although supervised learning usually shows high prediction performance, most development groups do not have sufficient labeled data. Unsupervised learning models, which are trained only on unlabeled data, are difficult to build and show poor performance. Semi-supervised learning models, which use both labeled and unlabeled data, can solve these problems. Among semi-supervised techniques, self-training requires the fewest assumptions and constraints. In this paper, we implemented several models using self-training algorithms and evaluated them using Accuracy and AUC. As a result, YATSI showed the best performance.
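
A minimal sketch of the generic self-training idea behind this entry, using scikit-learn's SelfTrainingClassifier rather than the paper's YATSI implementation; the synthetic metrics, the random-forest base learner, and the 0.8 confidence threshold are illustrative assumptions.

```python
# Self-training sketch: unlabeled modules are marked with -1 and pseudo-labeled
# iteratively by a base classifier; evaluated with Accuracy and AUC as in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                  # placeholder software metrics
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder fault labels

y = y_true.copy()
unlabeled = rng.random(len(y)) < 0.8            # pretend 80% of modules are unlabeled
y[unlabeled] = -1

base = RandomForestClassifier(n_estimators=100, random_state=0)
model = SelfTrainingClassifier(base, threshold=0.8)   # adopt pseudo-labels above 0.8 confidence
model.fit(X, y)

proba = model.predict_proba(X[unlabeled])[:, 1]
print("Accuracy:", accuracy_score(y_true[unlabeled], model.predict(X[unlabeled])))
print("AUC     :", roc_auc_score(y_true[unlabeled], proba))
```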

Learning Deep Representation by Increasing ConvNets Depth for Few Shot Learning

  • Fabian, H.S. Tan;Kang, Dae-Ki
    • International journal of advanced smart convergence
    • /
    • v.8 no.4
    • /
    • pp.75-81
    • /
    • 2019
  • Although recent deep learning methods provide satisfactory results in large-data domains, they yield poor performance on few-shot classification tasks. Training a model with strong performance, i.e., a deep convolutional neural network, depends heavily on a huge dataset, and the number of labeled classes can be extremely large. The cost of human annotation and the scarcity of data among classes have drastically limited the capability of current image classification models. In contrast, humans are excellent at learning or recognizing new, unseen classes from merely a small set of labeled examples. Few-shot learning aims to train a classification model with limited labeled samples to recognize new classes that were never seen during training. In this paper, we increase the backbone depth of the embedding network in order to learn the intra-class variation. By increasing the depth of the embedding module, we achieve competitive performance due to the minimized intra-class variation.
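
A minimal PyTorch sketch of a deeper convolutional embedding module of the kind this entry describes; the block count, channel width, input size, and the nearest-prototype classification at the end are illustrative assumptions, not the paper's exact architecture.

```python
# Deeper embedding network for few-shot learning: stack more conv blocks than the
# common 4-block baseline, then classify a query by its nearest class prototype.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class DeepEmbedding(nn.Module):
    """Embedding module whose depth is a constructor argument."""
    def __init__(self, depth=6, width=64):
        super().__init__()
        blocks = [conv_block(3, width)]
        blocks += [conv_block(width, width) for _ in range(depth - 1)]
        self.features = nn.Sequential(*blocks)

    def forward(self, x):
        return self.features(x).flatten(1)     # one embedding vector per image

net = DeepEmbedding(depth=6).eval()
support = torch.randn(5 * 5, 3, 84, 84)        # toy 5-way 5-shot support set
query = torch.randn(1, 3, 84, 84)
with torch.no_grad():
    prototypes = net(support).view(5, 5, -1).mean(dim=1)   # per-class mean embeddings
    pred = torch.cdist(net(query), prototypes).argmin(dim=1)
print("predicted class index:", pred.item())
```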

Technology-Focused Business Diversification Support Methodology Using Item Network (아이템 네트워크를 활용한 기술 중심 사업 다각화 기회 탐색 지원 방법론)

  • Bae, Kukjin;Kim, Ji-Eun;Kim, Namgyu
    • Journal of Information Technology Services
    • /
    • v.19 no.3
    • /
    • pp.17-34
    • /
    • 2020
  • Recently, various attempts have been made to discover promising items and technologies. However, there are very few data-driven approaches to supporting business diversification by companies that hold specific technologies. Therefore, there is a need for a methodology that can detect items related to a specific technology and recommend highly marketable items among them as business diversification targets. In this paper, we devise a Labeled Item Network for a business diversification consulting support system. Our research consists of three sub-studies. In Sub-study 1, we find the proper source documents to build the item network and construct an item dictionary. In Sub-study 2, we derive the Labeled Item Network and devise four indices for item evaluation. Finally, in Sub-study 3, we introduce an application scenario of our methodology and describe the results of a real-case analysis. The Labeled Item Network, one of the main outcomes of this study, can identify the relationships between items as well as the meaning of each relationship. We expect that more specific business diversification opportunities can be found with the Labeled Item Network. The proposed methodology can help many SMEs diversify their business on the basis of their own technology.
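
A minimal sketch, using networkx, of how a labeled item network might be assembled from item co-occurrence in source documents. The toy documents, the relation labels, and the single degree-based index computed here are illustrative assumptions; the paper's item dictionary and its four evaluation indices are not reproduced.

```python
# Build an item co-occurrence graph whose edges carry a relation label, then
# score items with one simple index (weighted degree) as a stand-in for evaluation.
from itertools import combinations
import networkx as nx

documents = [                                    # hypothetical source documents
    {"items": ["lithium battery", "separator film"], "label": "component-of"},
    {"items": ["lithium battery", "electric vehicle"], "label": "used-in"},
    {"items": ["separator film", "electric vehicle"], "label": "used-in"},
]

G = nx.Graph()
for doc in documents:
    for a, b in combinations(doc["items"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1               # repeated co-occurrence strengthens the edge
        else:
            G.add_edge(a, b, weight=1, label=doc["label"])

# Weighted degree as a rough proxy for how broadly an item connects to others.
centrality = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G}
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```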

Class Specific Autoencoders Enhance Sample Diversity

  • Kumar, Teerath;Park, Jinbae;Ali, Muhammad Salman;Uddin, AFM Shahab;Bae, Sung-Ho
    • Journal of Broadcast Engineering
    • /
    • v.26 no.7
    • /
    • pp.844-854
    • /
    • 2021
  • Semi-supervised learning (SSL) and few-shot learning (FSL) have shown impressive performance even when the volume of labeled data is very limited. However, SSL and FSL can suffer a significant performance degradation if the diversity gap between the labeled and unlabeled data is high. To reduce this diversity gap, we propose a novel scheme that relies on an autoencoder for generating pseudo examples. Specifically, the autoencoder is trained on a specific class using the available labeled data, and the decoder of the trained autoencoder is then used to generate N samples of that class from N random noise vectors sampled from a standard normal distribution. The above process is repeated for all the classes. Consequently, the generated data reduce the diversity gap and enhance the model performance. Extensive experiments on the MNIST and FashionMNIST datasets for SSL and FSL verify the effectiveness of the proposed approach in terms of classification accuracy and robustness against adversarial attacks.
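
A minimal PyTorch sketch of the per-class autoencoder idea described above: train an autoencoder on the labeled examples of one class, then feed standard-normal noise through its decoder to produce pseudo examples of that class. The MLP layer sizes, MSE reconstruction loss, and training length are illustrative assumptions.

```python
# Per-class autoencoder: decode N(0, I) noise into pseudo examples of a single class.
import torch
import torch.nn as nn

class ClassAE(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def generate_pseudo_examples(labeled_x, n_samples, latent=32, epochs=100):
    """Train one autoencoder on a single class, then decode random noise into pseudo samples."""
    ae = ClassAE(labeled_x.shape[1], latent)
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(labeled_x), labeled_x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        noise = torch.randn(n_samples, latent)      # z ~ N(0, I)
        return ae.decoder(noise)                    # N pseudo examples of this class

# Toy usage: a handful of labeled 28x28 images (flattened) of one class.
class_images = torch.rand(10, 784)
pseudo = generate_pseudo_examples(class_images, n_samples=50)
print(pseudo.shape)   # torch.Size([50, 784]); repeat for every class
```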

Improving Human Activity Recognition Model with Limited Labeled Data using Multitask Semi-Supervised Learning (제한된 라벨 데이터 상에서 다중-태스크 반 지도학습을 사용한 동작 인지 모델의 성능 향상)

  • Prabono, Aria Ghora;Yahya, Bernardo Nugroho;Lee, Seok-Lyong
    • Database Research
    • /
    • v.34 no.3
    • /
    • pp.137-147
    • /
    • 2018
  • A key to a well-performing human activity recognition (HAR) system built with machine learning is the availability of a substantial amount of labeled data. Collecting sufficient labeled data is an expensive and time-consuming task. To build a HAR system in a new environment (i.e., the target domain) with very limited labeled data, it is unfavorable to naively reuse the data or the trained classifier from an existing environment (i.e., the source domain) as-is, because of the domain difference. While traditional machine learning approaches cannot address such a distribution mismatch, transfer learning leverages knowledge from existing, well-established source domains to help build an accurate classifier in the target domain. In this work, we propose a transfer learning approach that creates an accurate HAR classifier from very limited data through a multitask neural network. Minimizing the classifier loss functions for the source and target domains is treated as two different tasks. The knowledge transfer is performed by simultaneously minimizing the loss functions of both tasks with a single neural network model. Furthermore, we utilize the unlabeled data in an unsupervised manner to help the model training. The experimental results show that the proposed approach consistently outperforms existing approaches.
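
A minimal PyTorch sketch of the multitask idea in this entry: one shared network with separate source-domain and target-domain heads, trained by jointly minimizing both classification losses. The layer sizes, input dimension, equal loss weighting, and the omission of the unsupervised term are illustrative assumptions.

```python
# Multitask transfer: source and target classification are two heads on a shared encoder,
# and their losses are summed and minimized together in one model.
import torch
import torch.nn as nn

class MultitaskHAR(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes=6):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.source_head = nn.Linear(hidden, n_classes)   # task 1: source domain
        self.target_head = nn.Linear(hidden, n_classes)   # task 2: target domain

    def forward(self, x, domain):
        h = self.shared(x)
        return self.source_head(h) if domain == "source" else self.target_head(h)

model = MultitaskHAR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Toy batches: plenty of labeled source data, very few labeled target samples.
xs, ys = torch.randn(128, 64), torch.randint(0, 6, (128,))
xt, yt = torch.randn(8, 64), torch.randint(0, 6, (8,))

for _ in range(100):
    opt.zero_grad()
    loss = ce(model(xs, "source"), ys) + ce(model(xt, "target"), yt)  # sum of both task losses
    loss.backward()
    opt.step()
```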

A Survey of Transfer and Multitask Learning in Bioinformatics

  • Xu, Qian;Yang, Qiang
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.257-268
    • /
    • 2011
  • Machine learning and data mining have found many applications in biological domains, where we look to build predictive models based on labeled training data. In practice, however, high-quality labeled data is scarce, and labeling new data incurs high costs. Transfer and multitask learning offer an attractive alternative by allowing useful knowledge to be extracted from data in auxiliary domains and transferred, which helps counter the lack of data in the target domain. In this article, we survey recent advances in transfer and multitask learning for bioinformatics applications. In particular, we survey several key bioinformatics application areas, including sequence classification, gene expression data analysis, biological network reconstruction, and biomedical applications.

Semi-supervised Learning for the Positioning of a Smartphone-based Robot (스마트폰 로봇의 위치 인식을 위한 준 지도식 학습 기법)

  • Yoo, Jaehyun;Kim, H. Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.6
    • /
    • pp.565-570
    • /
    • 2015
  • Supervised machine learning has become popular for discovering context descriptions from sensor data. However, collecting the large amount of labeled training data needed to guarantee good performance requires a great deal of expense and time. For this reason, semi-supervised learning has recently been developed, as it achieves good performance while using only a small amount of labeled data. In existing semi-supervised learning algorithms, unlabeled data are used to build a graph Laplacian that represents the intrinsic data geometry. In this paper, we represent the unlabeled data as a spatial-temporal dataset by considering objects that move smoothly over time and space. The developed algorithm is evaluated for position estimation of a smartphone-based robot. Compared with other state-of-the-art semi-supervised learning methods, our algorithm produces more accurate location estimates.
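
A minimal sketch of graph-Laplacian semi-supervised regression of the kind this entry builds on: construct a k-NN graph over labeled and unlabeled samples, then solve for the unlabeled positions with the harmonic solution. The plain RBF edge weights and the synthetic data are assumptions; the spatial-temporal weighting the paper introduces is not reproduced.

```python
# Graph-Laplacian SSL: solve L_uu f_u = -L_ul y_l for the unlabeled positions f_u.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_ssl_regression(X, y, labeled_mask, k=10, gamma=1.0):
    W = kneighbors_graph(X, k, mode="distance", include_self=False).toarray()
    W = np.exp(-gamma * W ** 2) * (W > 0)          # RBF weights on k-NN edges only
    W = np.maximum(W, W.T)                         # symmetrize the affinity matrix
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    u, l = ~labeled_mask, labeled_mask
    return np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ y[l])

# Toy data: sensor features X with 2-D positions y, only 20% labeled.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, :2] + 0.1 * rng.normal(size=(200, 2))     # positions loosely tied to features
labeled = rng.random(200) < 0.2
estimates = laplacian_ssl_regression(X, y, labeled)
print("mean position error:", np.abs(estimates - y[~labeled]).mean())
```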

Semi-Supervised SAR Image Classification via Adaptive Threshold Selection (선별적인 임계값 선택을 이용한 준지도 학습의 SAR 분류 기술)

  • Jaejun Do;Minjung Yoo;Jaeseok Lee;Hyoi Moon;Sunok Kim
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.3
    • /
    • pp.319-328
    • /
    • 2024
  • Semi-supervised learning is a good way to train a classification model using a small amount of labeled data and a large amount of unlabeled data. We applied semi-supervised learning to a synthetic aperture radar (SAR) image classification model, for which datasets are difficult to create and therefore limited. To address this difficulty, semi-supervised learning uses a model trained on the small amount of labeled data to generate and learn from pseudo labels. However, many papers use a single fixed threshold to create the pseudo labels. In this paper, we present a semi-supervised SAR image classification method that applies a different threshold to each class, instead of having all classes share one fixed threshold, to improve SAR classification performance with a small amount of labeled data.
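
A minimal sketch of per-class pseudo-label thresholding as contrasted with a single fixed cutoff. Deriving each class's threshold from a confidence quantile is an illustrative assumption, not the paper's exact selection rule, and the softmax outputs are synthetic.

```python
# Per-class thresholds: keep an unlabeled sample as a pseudo label only if its
# max softmax confidence exceeds the threshold assigned to its predicted class.
import numpy as np

def pseudo_label(probs, class_thresholds):
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = conf >= class_thresholds[preds]
    return preds[keep], keep

# Toy softmax outputs for 1000 unlabeled SAR chips over 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
preds = probs.argmax(axis=1)

fixed = np.full(4, 0.9)                                     # one cutoff shared by all classes
adaptive = np.array([np.quantile(probs[preds == c].max(axis=1), 0.7)
                     for c in range(4)])                    # e.g. 70th percentile per class

for name, th in [("fixed", fixed), ("adaptive", adaptive)]:
    labels, keep = pseudo_label(probs, th)
    print(name, "pseudo-labeled:", int(keep.sum()), "samples")
```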

Utilizing Minimal Label Data for Tomato Leaf Disease Classification: An Approach through Recursive Learning Based on YOLOv8 (토마토 잎 병해 분류를 위한 최소 라벨 데이터 활용: YOLOv8 기반 재귀적 학습 방식을 통한 접근)

  • Junhyuk Lee;Namhyoung Kim
    • The Journal of Bigdata
    • /
    • v.9 no.1
    • /
    • pp.61-73
    • /
    • 2024
  • Class imbalance is one of the significant challenges in deep learning tasks, and it is particularly pronounced in areas with limited data. This study proposes a new approach that utilizes minimal labeled data to effectively classify tomato leaf diseases. We introduce a recursive learning method using the YOLOv8 model. By using the model's detection predictions on training images as additional training data, the amount of labeled data is progressively increased. Unlike conventional data augmentation and up/down-sampling techniques, this method seeks to fundamentally solve the class imbalance problem by maximizing the utility of real data. Based on the secured labeled data, tomato leaves were extracted and diseases were classified using the EfficientNet model. This process achieved a high accuracy of 98.92%. Notably, a 12.9% improvement over the baseline was observed in the detection of late blight, the disease with the least data. This research presents a methodology that addresses data imbalance issues while offering high-precision disease classification, and we expect it to be applicable to other crops.
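
A hedged sketch of a recursive labeling loop with the ultralytics YOLO API: train on the currently labeled set, predict on unlabeled images, keep confident detections as new YOLO-format labels, and retrain. The directory names, the tomato_leaf.yaml dataset config, the number of rounds, and the 0.6 confidence cutoff are hypothetical placeholders; the EfficientNet disease-classification stage described above is omitted.

```python
# Recursive pseudo-labeling loop around YOLOv8 (ultralytics).
from pathlib import Path
from ultralytics import YOLO

UNLABELED_DIR = Path("images/unlabeled")       # hypothetical unlabeled leaf images
LABEL_DIR = Path("labels/pseudo")              # where new YOLO-format labels are written
LABEL_DIR.mkdir(parents=True, exist_ok=True)

model = YOLO("yolov8n.pt")
for round_idx in range(3):                     # a few recursive rounds
    # 1) (Re)train on the currently labeled data (initial labels + accumulated pseudo labels).
    model.train(data="tomato_leaf.yaml", epochs=20)

    # 2) Predict on unlabeled images and keep confident detections as pseudo labels.
    results = model.predict(source=str(UNLABELED_DIR), conf=0.6)
    for r in results:
        lines = []
        for box, cls in zip(r.boxes.xywhn.tolist(), r.boxes.cls.tolist()):
            lines.append(f"{int(cls)} " + " ".join(f"{v:.6f}" for v in box))
        if lines:   # write a YOLO-format label file so the next round can train on it
            (LABEL_DIR / (Path(r.path).stem + ".txt")).write_text("\n".join(lines))
```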

Synthesis and evaluation of 64Cu-labeled avidin for lymph node imaging

  • Kang, Choong Mo;Kim, Hyunjung;Lee, Yong Jin;Choe, Yearn Seong
    • Journal of Radiopharmaceuticals and Molecular Probes
    • /
    • v.5 no.1
    • /
    • pp.54-60
    • /
    • 2019
  • Sentinel lymph node (SLN) imaging plays an important role in surgery for patients with breast cancer and melanoma. In this study, avidin (Av), a tetrameric protein glycosylated with mannose and N-acetylglucosamine molecules, was labeled with ⁶⁴Cu and then evaluated for LN imaging. ⁶⁴Cu-labeled NeutrAvidin™ (NAv), a non-glycosylated form of Av, was used for comparison. 1,4,7,10-Tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-conjugated Av and NAv were prepared from the corresponding proteins and DOTA-NHS ester and then labeled with copper-64 and purified using PD-10 columns. The numbers of DOTA molecules conjugated to Av and NAv were 4.9 and 3.3, respectively. [⁶⁴Cu]Cu-DOTA-conjugated Av and NAv were prepared in 93% and 73% radiochemical yields, respectively. An in vitro serum stability study showed that copper-64 remained stable on all radiotracers for 24 h (>97%). MicroPET/CT images showed that high radioactivity accumulated in LNs within 15 min after footpad injection of the radiotracers. Tissue distribution data in mice demonstrated significantly higher uptake in the popliteal (PO) LN than in the lumbar (LU) LN for ⁶⁴Cu-labeled Av (relative %ID/g excluding the injection sites: 66.2% and 26.0%, respectively) compared with ⁶⁴Cu-labeled NAv (43.0% and 49.2%, respectively). These results suggest that the mannose molecules on Av enabled the radiotracer to be retained in the first LN after mouse footpad injection.