• Title/Summary/Keyword: Training Datasets

Improving methods for normalizing biomedical text entities with concepts from an ontology with (almost) no training data at BLAH5 the CONTES

  • Ferre, Arnaud;Ba, Mouhamadou;Bossy, Robert
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.20.1-20.5
    • /
    • 2019
  • Entity normalization, or entity linking in the general domain, is an information extraction task that aims to annotate/bind words or expressions in raw text to semantic references, such as concepts of an ontology. An ontology consists minimally of a formally organized vocabulary or hierarchy of terms, which captures knowledge of a domain. Presently, machine-learning methods, often coupled with distributional representations, achieve good performance. However, they require large training datasets, which are not always available, especially for tasks in specialized domains. CONTES (CONcept-TErm System) is a supervised method that addresses entity normalization with ontology concepts using small training datasets. CONTES has several limitations: it does not scale well to very large ontologies, it tends to overgeneralize predictions, and it lacks valid representations for out-of-vocabulary words. Here, we assess different methods for reducing the dimensionality of the ontology representation. We also propose calibrating parameters to make the predictions more accurate, and a specific method to address the problem of out-of-vocabulary words.
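
At its core, CONTES learns a supervised linear mapping from term embeddings into a vector space derived from the ontology, and the extensions above include reducing the dimensionality of that ontology representation. A minimal, hypothetical sketch of this kind of pipeline, using TruncatedSVD and ridge regression from scikit-learn as stand-ins on toy random data (not the authors' code):

```python
# Hedged sketch of a CONTES-style linear projection from term embeddings to
# ontology-concept vectors, with truncated SVD to shrink the concept space.
# All names and data below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data: 200 training mentions embedded in 50-d word-vector space,
# each annotated with one of 300 ontology concepts.
n_mentions, dim_words, n_concepts = 200, 50, 300
X = rng.normal(size=(n_mentions, dim_words))          # mention embeddings
y = rng.integers(0, n_concepts, size=n_mentions)      # gold concept ids

# Concept representation: each concept is a binary vector over all concepts,
# with 1s on itself and its ancestors (here a random sparse stand-in).
C = (rng.random((n_concepts, n_concepts)) < 0.02).astype(float)
np.fill_diagonal(C, 1.0)

# Dimensionality reduction of the ontology representation (one of the
# options the paper proposes to assess).
svd = TruncatedSVD(n_components=32, random_state=0)
C_low = svd.fit_transform(C)

# Supervised step: regress mention embeddings onto the (reduced) vectors
# of their gold concepts.
reg = Ridge(alpha=1.0).fit(X, C_low[y])

def predict(mention_vec):
    """Project a new mention and pick the nearest concept by cosine similarity."""
    proj = reg.predict(mention_vec[None, :])
    sims = (C_low @ proj.T).ravel() / (
        np.linalg.norm(C_low, axis=1) * np.linalg.norm(proj) + 1e-12)
    return int(np.argmax(sims))

print(predict(rng.normal(size=dim_words)))
```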

Semi-Supervised Domain Adaptation on LiDAR 3D Object Detection with Self-Training and Knowledge Distillation (자가학습과 지식증류 방법을 활용한 LiDAR 3차원 물체 탐지에서의 준지도 도메인 적응)

  • Jungwan Woo;Jaeyeul Kim;Sunghoon Im
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.3
    • /
    • pp.346-351
    • /
    • 2023
  • With the release of numerous open driving datasets, the demand for domain adaptation in perception tasks has increased, particularly when transferring knowledge from rich datasets to novel domains. However, it is difficult to handle the shifts 1) in the sensor domain caused by heterogeneous LiDAR sensors and 2) in the environmental domain caused by different environmental factors. We overcome domain differences in the semi-supervised setting with three-stage model parameter training. First, we pre-train the model on the source dataset with object scaling based on statistics of the object size. Then we fine-tune the partially frozen model weights with copy-and-paste augmentation, in which the 3D points inside box labels are copied from one scene and pasted into other scenes. Finally, we use a knowledge distillation method that updates the student network using a moving-average teacher network, along with self-training on pseudo labels. Test-time augmentation with varying z values is employed to predict the final results. Our method achieved 3rd place in the 3D Perception for Autonomous Driving challenge at the ECCV 2022 workshop.
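
One common realization of the EMA-teacher plus pseudo-label scheme described above is a mean-teacher loop. The PyTorch sketch below uses a toy linear head in place of the 3D detector and assumed hyperparameters; it is illustrative only, not the authors' implementation:

```python
# Hedged sketch of the third stage: an exponential-moving-average (EMA) teacher
# provides pseudo targets for the student, and the teacher then tracks the student.
import copy
import torch

def ema_update(teacher, student, momentum=0.999):
    """Teacher weights become a moving average of student weights (distillation target)."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

# Toy stand-in for a 3D detector head, just to make the loop runnable.
student = torch.nn.Linear(8, 4)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD(student.parameters(), lr=0.01)

for step in range(100):
    batch = torch.randn(16, 8)                       # unlabeled target-domain scans
    with torch.no_grad():
        pseudo = teacher(batch)                      # teacher predictions as pseudo labels
    loss = torch.nn.functional.mse_loss(student(batch), pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student)                     # teacher tracks the student
```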

High-Resolution Satellite Image Super-Resolution Using Image Degradation Model with MTF-Based Filters

  • Minkyung Chung;Minyoung Jung;Yongil Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.4
    • /
    • pp.395-407
    • /
    • 2023
  • Super-resolution (SR) has great significance in image processing because it enables downstream vision tasks at high spatial resolution. Recently, SR studies have adopted deep learning networks and achieved remarkable SR performance compared to conventional example-based methods. Deep-learning-based SR models generally require low-resolution (LR) images and the corresponding high-resolution (HR) images as a training dataset. Due to the difficulty of obtaining real-world LR-HR datasets, most SR models have used only HR images and generated LR images with a predefined degradation such as bicubic downsampling. However, SR models trained on such simple image degradation do not reflect the properties of real images and often yield deteriorated SR quality when applied to real-world images. In this study, we propose an image degradation model for HR satellite images based on the modulation transfer function (MTF) of the imaging sensor. Because the proposed method determines the image degradation from the sensor properties, it is more suitable for training SR models on remote sensing images. Experimental results on HR satellite image datasets demonstrate the effectiveness of applying MTF-based filters to construct a more realistic LR-HR training dataset.
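
A widely used way to build an MTF-matched degradation is to approximate the sensor point spread function with a Gaussian whose MTF equals a given value at the LR Nyquist frequency. The sketch below follows that approximation with assumed parameter values; the paper's actual filters may differ:

```python
# Hedged sketch of building an MTF-matched Gaussian blur and downsampling an HR
# image to synthesize an LR training pair. The Gaussian-PSF approximation and the
# MTF-at-Nyquist value are illustrative assumptions, not the paper's exact filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def mtf_sigma(mtf_at_nyquist, scale):
    """Std. dev. (in HR pixels) of a Gaussian whose MTF equals the given value
    at the LR Nyquist frequency f = 0.5 / scale cycles per HR pixel."""
    f_nyq = 0.5 / scale
    return np.sqrt(-np.log(mtf_at_nyquist) / (2.0 * np.pi**2 * f_nyq**2))

def degrade(hr, scale=4, mtf_at_nyquist=0.25):
    """Blur with the MTF-matched Gaussian, then decimate by `scale`."""
    sigma = mtf_sigma(mtf_at_nyquist, scale)
    blurred = gaussian_filter(hr.astype(np.float32), sigma=sigma)
    return blurred[::scale, ::scale]

hr = np.random.rand(256, 256).astype(np.float32)   # stand-in HR satellite band
lr = degrade(hr, scale=4, mtf_at_nyquist=0.25)
print(lr.shape)   # (64, 64)
```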

Training Method for Enhancing Classification Accuracy of Kuzushiji-MNIST/49 using Deep Learning based on CNN (CNN기반 딥러닝을 이용한 Kuzushiji-MNIST/49 분류의 정확도 향상을 위한 학습 방안)

  • Park, Byung-Seo;Lee, Sungyoung;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.3
    • /
    • pp.355-363
    • /
    • 2020
  • In this paper, we propose a deep learning training method for accurately classifying the Kuzushiji-MNIST and Kuzushiji-49 datasets of ancient and medieval Japanese characters. We experimentally analyze the latest convolutional neural network architectures to select the most suitable network, and then use the selected network to determine the amount of training needed to classify the Kuzushiji-MNIST and Kuzushiji-49 datasets. In addition, training is conducted with high accuracy by applying augmentation methods such as Mixup and Random Erasing. As a result, the proposed method achieves high accuracy: 99.75% on MNIST, 99.07% on Kuzushiji-MNIST, and 97.56% on Kuzushiji-49. We expect this deep-learning-based technology to provide a good research basis for researchers studying East Asian and Western history, literature, and culture.
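
For reference, the Mixup augmentation mentioned above blends pairs of training images and mixes their labels in the loss. A minimal PyTorch sketch with a toy CNN and standard Mixup settings (assumed, not necessarily the paper's exact configuration):

```python
# Hedged sketch of Mixup applied to a 28x28 single-channel batch (Kuzushiji-MNIST size).
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Return convex combinations of shuffled pairs plus both label sets."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

# Toy CNN standing in for the selected network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(), torch.nn.Linear(8 * 28 * 28, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 1, 28, 28)                  # stand-in image batch
y = torch.randint(0, 10, (32,))                 # stand-in labels
x_mixed, y_a, y_b, lam = mixup_batch(x, y)
logits = model(x_mixed)
# Mixed loss: weight the two label sets by the mixing coefficient.
loss = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
opt.zero_grad(); loss.backward(); opt.step()
```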

Applying Token Tagging to Augment Dataset for Automatic Program Repair

  • Hu, Huimin;Lee, Byungjeong
    • Journal of Information Processing Systems
    • /
    • v.18 no.5
    • /
    • pp.628-636
    • /
    • 2022
  • Automatic program repair (APR) techniques focus on automatically repairing bugs in programs and providing correct patches for developers, which have been investigated for decades. However, most studies have limitations in repairing complex bugs. To overcome these limitations, we developed an approach that augments datasets by utilizing token tagging and applying machine learning techniques for APR. First, to alleviate the data insufficiency problem, we augmented datasets by extracting all the methods (buggy and non-buggy methods) in the program source code and conducting token tagging on non-buggy methods. Second, we fed the preprocessed code into the model as an input for training. Finally, we evaluated the performance of the proposed approach by comparing it with the baselines. The results show that the proposed approach is efficient for augmenting datasets using token tagging and is promising for APR.
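
As a rough illustration of the token tagging step, each token of a (non-buggy) method can be paired with a coarse category tag so the tagged sequence can serve as additional training data. The sketch below uses Python's tokenize module on a Python snippet purely as a stand-in for the paper's language-specific tagging scheme:

```python
# Hedged illustration of token tagging on a non-buggy method: every token is
# paired with a coarse category tag (NAME, OP, NUMBER, ...).
import io
import tokenize

method_src = '''
def add(a, b):
    return a + b
'''

tagged = []
for tok in tokenize.generate_tokens(io.StringIO(method_src).readline):
    # Skip pure layout tokens; keep only content-bearing ones.
    if tok.type in (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
                    tokenize.DEDENT, tokenize.ENDMARKER):
        continue
    tag = tokenize.tok_name[tok.type]          # e.g. NAME, OP, NUMBER
    tagged.append((tok.string, tag))

print(tagged)
# e.g. [('def', 'NAME'), ('add', 'NAME'), ('(', 'OP'), ('a', 'NAME'), ...]
```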

An enhanced feature selection filter for classification of microarray cancer data

  • Mazumder, Dilwar Hussain;Veilumuthu, Ramachandran
    • ETRI Journal
    • /
    • v.41 no.3
    • /
    • pp.358-370
    • /
    • 2019
  • The main aim of this study is to select the optimal set of genes from microarray cancer datasets that contribute to the prediction of specific cancer types. This study proposes an enhancement of the feature selection filter algorithm based on Joe's normalized mutual information and its use for gene selection. The proposed algorithm is implemented and evaluated on seven benchmark microarray cancer datasets, namely, central nervous system, leukemia (binary), leukemia (3 class), leukemia (4 class), lymphoma, mixed lineage leukemia, and small round blue cell tumor, using five well-known classifiers: naive Bayes, radial basis function network, instance-based classifier, decision table, and decision tree. An average increase in prediction accuracy of 5.1% is observed across all seven datasets, averaged over all five classifiers. The average reduction in training time is 2.86 seconds. The performance of the proposed method is also compared with those of three other popular mutual-information-based feature selection filters, namely, information gain, gain ratio, and symmetric uncertainty, and it compares favorably across all five classifiers and all datasets.
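
An NMI-based filter of this kind scores each (discretized) gene by its normalized mutual information with the class label and keeps the top-ranked genes. The sketch below uses scikit-learn's generic NMI on toy data as a stand-in; the paper's contribution is a filter based specifically on Joe's normalization:

```python
# Hedged sketch of an NMI-based feature selection filter for microarray data.
# Data, bin count, and the NMI variant are illustrative assumptions.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # 60 samples x 500 genes (toy microarray)
y = rng.integers(0, 2, size=60)                # binary cancer labels

def discretize(col, bins=5):
    """Equal-width binning so mutual information can be computed on discrete values."""
    edges = np.linspace(col.min(), col.max(), bins + 1)[1:-1]
    return np.digitize(col, edges)

scores = np.array([
    normalized_mutual_info_score(y, discretize(X[:, j])) for j in range(X.shape[1])
])
top_genes = np.argsort(scores)[::-1][:20]      # indices of the 20 highest-scoring genes
print(top_genes)
```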

ETLi: Efficiently annotated traffic LiDAR dataset using incremental and suggestive annotation

  • Kang, Jungyu;Han, Seung-Jun;Kim, Nahyeon;Min, Kyoung-Wook
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.630-639
    • /
    • 2021
  • Autonomous driving requires a computerized perception of the environment for safety and machine-learning evaluation. Recognizing semantic information is difficult, as the objective is to instantly recognize and distinguish items in the environment. Training a model with real-time semantic capability and high reliability requires extensive and specialized datasets. However, generalized datasets are unavailable and are typically difficult to construct for specific tasks. Hence, a light detection and ranging semantic dataset suitable for semantic simultaneous localization and mapping and specialized for autonomous driving is proposed. This dataset is provided in a form that can be easily used by users familiar with existing two-dimensional image datasets, and it contains various weather and light conditions collected from a complex and diverse practical setting. An incremental and suggestive annotation routine is proposed to improve annotation efficiency. A model is trained to simultaneously predict segmentation labels and suggest class-representative frames. Experimental results demonstrate that the proposed algorithm yields a more efficient dataset than uniformly sampled datasets.
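
A suggestive-annotation loop typically ranks unannotated frames by how informative the model expects them to be. The sketch below uses mean per-point prediction entropy as an illustrative stand-in for the paper's class-representative frame suggestion:

```python
# Hedged sketch of suggestive annotation: frames whose predicted label distribution
# is most uncertain are proposed to the annotator first. The entropy criterion and
# all sizes below are assumptions, not the paper's exact suggestion rule.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_points, n_classes = 50, 2000, 8

# Stand-in for per-point class probabilities from the trained segmentation model.
probs = rng.dirichlet(np.ones(n_classes), size=(n_frames, n_points))

def frame_uncertainty(p):
    """Mean per-point entropy of the predicted class distribution."""
    return float(np.mean(-np.sum(p * np.log(p + 1e-12), axis=-1)))

scores = np.array([frame_uncertainty(probs[i]) for i in range(n_frames)])
suggested = np.argsort(scores)[::-1][:5]       # 5 frames to annotate next
print(suggested)
```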

K-Means Clustering with Deep Learning for Fingerprint Class Type Prediction

  • Mukoya, Esther;Rimiru, Richard;Kimwele, Michael;Mashava, Destine
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.3
    • /
    • pp.29-36
    • /
    • 2022
  • In deep learning classification tasks, most models assume that all labels are available in the training datasets; as such, strategies for learning new concepts from unlabeled datasets are scarce. In fingerprint classification, most datasets are labelled by subject/individual, and datasets labelled with finger type classes are scarce. In this paper, the authors develop approaches for classifying fingerprint images into the major known fingerprint classes. Our study provides a flexible method to learn new classes of fingerprints. Our classifier model combines a clustering technique with deep learning to cluster, and thereby label, fingerprint images into appropriate classes. The K-means clustering strategy exploits label uncertainty and high-density regions in the unlabeled data. Using a similarity index, five clusters are created. Deep learning is then used to train a model on a publicly available fingerprint dataset with known finger class types. A prediction step then assigns classes to the clusters using the trained model. Our proposed model performs better and has lower computational cost when learning new classes, significantly reducing the labelling cost of fingerprint images.
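
The two-step idea above can be sketched as: cluster unlabeled fingerprint features with k-means, then let a classifier trained on a labeled dataset vote on each cluster's class. The following toy example uses scikit-learn with random stand-in features; it is not the authors' model:

```python
# Hedged sketch: k-means clustering of unlabeled fingerprints, then cluster labeling
# by majority vote of a classifier trained on labeled data. Features are stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
classes = 5                                     # five major fingerprint classes

# Labeled source dataset (e.g. features extracted from a public dataset).
X_labeled = rng.normal(size=(300, 16))
y_labeled = rng.integers(0, classes, size=300)
clf = LogisticRegression(max_iter=500).fit(X_labeled, y_labeled)

# Unlabeled target fingerprints: cluster, then label each cluster by majority vote.
X_unlabeled = rng.normal(size=(200, 16))
cluster_ids = KMeans(n_clusters=classes, n_init=10, random_state=0).fit_predict(X_unlabeled)

for c in range(classes):
    member_preds = clf.predict(X_unlabeled[cluster_ids == c])
    majority = np.bincount(member_preds, minlength=classes).argmax()
    print(f"cluster {c} -> predicted class {majority}")
```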

Validation of Semantic Segmentation Dataset for Autonomous Driving (승용자율주행을 위한 의미론적 분할 데이터셋 유효성 검증)

  • Gwak, Seoku;Na, Hoyong;Kim, Kyeong Su;Song, EunJi;Jeong, Seyoung;Lee, Kyewon;Jeong, Jihyun;Hwang, Sung-Ho
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.104-109
    • /
    • 2022
  • For autonomous driving research using AI, datasets collected from road environments play an important role. In other countries, various datasets such as CityScapes, A2D2, and BDD have already been released, but datasets suited to the domestic road environment are still lacking. This paper analyzes and verifies a dataset that reflects the Korean driving environment. To verify the training dataset, class imbalance was checked by comparing the number of pixels and instances per class. To compare against and verify the constructed dataset, the similar A2D2 dataset was trained with the same deep learning model, ConvNeXt, and per-class IoU and mIoU were compared between the two datasets. This paper confirms that the collected dataset, which reflects the Korean driving environment, is suitable for training.
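
The two validation checks described above can be sketched as per-class pixel counts (for class imbalance) and per-class IoU / mIoU between predictions and ground truth. A minimal NumPy sketch on toy masks (class ids and shapes are illustrative):

```python
# Hedged sketch of dataset validation: pixel counts per class and per-class IoU / mIoU.
import numpy as np

rng = np.random.default_rng(0)
n_classes = 5
gt = rng.integers(0, n_classes, size=(10, 128, 128))       # ground-truth masks
pred = rng.integers(0, n_classes, size=(10, 128, 128))     # model predictions

# 1) Class imbalance: pixel count per class over the whole dataset.
pixel_counts = np.bincount(gt.ravel(), minlength=n_classes)
print("pixels per class:", pixel_counts)

# 2) Per-class IoU and mIoU.
ious = []
for c in range(n_classes):
    inter = np.logical_and(gt == c, pred == c).sum()
    union = np.logical_or(gt == c, pred == c).sum()
    ious.append(inter / union if union else float("nan"))
print("per-class IoU:", np.round(ious, 3), "mIoU:", np.nanmean(ious).round(3))
```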

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim;Hui Do Jung;Beakcheol Jang
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.127-135
    • /
    • 2023
  • This research paper explores the application of FinBERT, a BERT-based model pre-trained on financial-domain text, for sentiment analysis in the financial domain, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to effectively utilizing the FinBERT model for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning the FinBERT model, emphasizing the performance of various datasets and hyperparameters on sentiment analysis tasks. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination is a learning rate of 5e-5 and a batch size of 64, which performs consistently well across all datasets. Furthermore, because the FinBERT model improves significantly more on our general-domain Twitter data than on our general-domain news data, we also question whether the model should be further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters within the fine-tuning process of financial sentiment analysis models.
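
A typical fine-tuning setup matching the reported best hyperparameters (learning rate 5e-5, batch size 64) might look like the Hugging Face sketch below; the checkpoint name, toy dataset, and epoch count are assumptions, not the authors' exact configuration:

```python
# Hedged sketch of fine-tuning a FinBERT checkpoint for 3-way financial sentiment.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "ProsusAI/finbert"                      # assumed public FinBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Toy labeled sentences standing in for the annotated news/Twitter datasets.
raw = Dataset.from_dict({
    "text": ["Shares surged after strong earnings.",
             "The company warned of weaker demand."],
    "label": [2, 0],                                 # e.g. 0=negative, 1=neutral, 2=positive
})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length",
                                         max_length=64), batched=True)

args = TrainingArguments(
    output_dir="finbert-sentiment",
    learning_rate=5e-5,                              # best-performing rate per the abstract
    per_device_train_batch_size=64,                  # best-performing batch size
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=tokenized).train()
```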