• Title/Summary/Keyword: Training Datasets

Optimizing Language Models through Dataset-Specific Post-Training: A Focus on Financial Sentiment Analysis (데이터 세트별 Post-Training을 통한 언어 모델 최적화 연구: 금융 감성 분석을 중심으로)

  • Hui Do Jung;Jae Heon Kim;Beakcheol Jang
    • Journal of Internet Computing and Services / v.25 no.1 / pp.57-67 / 2024
  • This research investigates training methods that enable large language models to accurately identify sentiments and comprehend information about rising and falling fluctuations in the financial domain. The main goal is to identify suitable datasets that allow these models to effectively understand expressions related to financial increases and decreases. For this purpose, we selected sentences from the Wall Street Journal containing relevant financial terms, together with sentences generated by GPT-3.5-turbo-1106, for post-training. We assessed the impact of these datasets on language model performance using Financial PhraseBank, a benchmark dataset for financial sentiment analysis. Our findings demonstrate that the post-trained FinBERT, a model specialized in finance, outperformed the similarly post-trained BERT, a general-domain model. Moreover, post-training with actual financial news proved more effective than using generated sentences, although in scenarios requiring higher generalization, models trained on generated sentences performed better. This suggests that aligning the model's domain with the target domain and choosing the right dataset are crucial for enhancing a language model's understanding and sentiment prediction accuracy. These results offer a methodology for optimizing language model performance in financial sentiment analysis tasks and suggest future research directions for more nuanced language understanding and sentiment analysis in finance. The research provides valuable insights not only for the financial sector but also for language model training across various domains.
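
The post-training step described here amounts to continued masked-language-model training on domain sentences before fine-tuning for sentiment classification. Below is a minimal sketch using Hugging Face Transformers; the checkpoint name, file path, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: domain-adaptive post-training (continued MLM training) on financial sentences.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "bert-base-uncased"   # assumed checkpoint; a finance-specialized FinBERT works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# One financial sentence per line, e.g. filtered WSJ text or GPT-generated sentences (assumed file).
corpus = load_dataset("text", data_files={"train": "financial_sentences.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="post_trained", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()   # the post-trained encoder is then fine-tuned and evaluated on Financial PhraseBank
```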

A Study on Transferring Cloud Dataset for Smoke Extraction Based on Deep Learning (딥러닝 기반 연기추출을 위한 구름 데이터셋의 전이학습에 대한 연구)

  • Kim, Jiyong;Kwak, Taehong;Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.695-706 / 2022
  • Medium- and high-resolution optical satellites have proven effective in detecting wildfire areas. However, smoke plumes generated by wildfires scatter the visible light incident on the surface, thereby hindering accurate monitoring of the area where a wildfire occurs. A technology to extract smoke in advance is therefore required. Deep learning is expected to improve the accuracy of smoke extraction, but the lack of training datasets limits its application. For clouds, which share the property of scattering visible light, a large amount of training data has already been accumulated. The purpose of this study is to develop a deep learning-based smoke extraction technique, and the limitation caused by the lack of datasets was overcome by applying transfer learning with a cloud dataset. To check the effectiveness of transfer learning, a small-scale smoke extraction training set was constructed, and smoke extraction performance was compared before and after applying transfer learning with a public cloud dataset. As a result, performance was enhanced not only in the visible wavelength band but also in the near-infrared (NIR) and short-wave infrared (SWIR) bands. These results suggest that the lack of datasets, a critical limitation for applying deep learning to smoke extraction, can be overcome, and that the resulting advancement of smoke extraction technology will benefit wildfire monitoring.
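
The transfer-learning step can be sketched as follows, assuming PyTorch and a generic encoder-decoder segmentation network: weights learned on a public cloud dataset initialize training on the small smoke set. The model class, checkpoint path, and data loader are placeholders, not the authors' configuration.

```python
# Sketch: fine-tune a cloud-pretrained segmentation model on a small smoke dataset.
import torch
import torch.nn as nn

def fine_tune_from_cloud(model: nn.Module, smoke_loader, cloud_ckpt="cloud_pretrained.pt",
                         freeze_encoder=True, epochs=20, lr=1e-4):
    # 1) Transfer: start from weights learned on the cloud-extraction task.
    model.load_state_dict(torch.load(cloud_ckpt), strict=False)

    # 2) Optionally freeze the encoder so the small smoke set only adapts the decoder.
    if freeze_encoder and hasattr(model, "encoder"):
        for p in model.encoder.parameters():
            p.requires_grad = False

    optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = nn.BCEWithLogitsLoss()   # binary smoke / no-smoke mask

    model.train()
    for _ in range(epochs):
        for images, masks in smoke_loader:      # multispectral patches and smoke masks
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
    return model
```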

Small Sample Face Recognition Algorithm Based on Novel Siamese Network

  • Zhang, Jianming;Jin, Xiaokang;Liu, Yukai;Sangaiah, Arun Kumar;Wang, Jin
    • Journal of Information Processing Systems / v.14 no.6 / pp.1464-1479 / 2018
  • In face recognition, the number of available training samples for a single category is sometimes insufficient, so the performance of models trained with convolutional neural networks is not ideal. This paper proposes a small-sample face recognition algorithm based on a novel Siamese network, which does not require abundant training samples. The algorithm designs and realizes a new Siamese network model, SiameseFace1, which takes pairs of face images as inputs and maps them to a target space so that the $L_2$ norm distance in the target space represents the semantic distance in the input space. The mapping is learned by the neural network through supervised training. Moreover, a more lightweight Siamese network model, SiameseFace2, is designed to reduce the number of network parameters without losing accuracy. We also present a new method to generate training data and expand the number of training samples per category in the AR and Labeled Faces in the Wild (LFW) datasets, which improves the recognition accuracy of the models. Four loss functions are adopted to carry out experiments on the AR and LFW datasets. The results show that the contrastive loss function combined with the new Siamese network models proposed in this paper can effectively improve face recognition accuracy.
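
A minimal sketch of the pairing objective described in the abstract: a shared embedding network maps each image of a pair into the target space, and a contrastive loss pulls same-identity pairs together while pushing different-identity pairs beyond a margin by their $L_2$ distance. The toy network below is a stand-in, not the SiameseFace1 or SiameseFace2 architecture.

```python
# Sketch: Siamese embedding with a contrastive loss on L2 distance (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbeddingNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return self.features(x)

def contrastive_loss(emb1, emb2, same_identity, margin=1.0):
    d = F.pairwise_distance(emb1, emb2)                     # L2 distance in target space
    pos = same_identity * d.pow(2)                          # pull matching pairs together
    neg = (1 - same_identity) * F.relu(margin - d).pow(2)   # push mismatches beyond the margin
    return (pos + neg).mean()

net = TinyEmbeddingNet()
x1, x2 = torch.randn(8, 3, 96, 96), torch.randn(8, 3, 96, 96)   # a batch of face-image pairs
labels = torch.randint(0, 2, (8,)).float()                      # 1 = same person, 0 = different
loss = contrastive_loss(net(x1), net(x2), labels)
loss.backward()
```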

Comprehensive analysis of deep learning-based target classifiers in small and imbalanced active sonar datasets (소량 및 불균형 능동소나 데이터세트에 대한 딥러닝 기반 표적식별기의 종합적인 분석)

  • Geunhwan Kim;Youngsang Hwang;Sungjin Shin;Juho Kim;Soobok Hwang;Youngmin Choo
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.329-344 / 2023
  • In this study, we comprehensively analyze the generalization performance of various deep learning-based active sonar target classifiers when applied to small and imbalanced active sonar datasets. To generate the active sonar datasets, we use data from two oceanic experiments conducted at different times and in different ocean areas. Each sample in the active sonar datasets is a time-frequency domain image extracted from the audio signal of a contact after the detection process. For the comprehensive analysis, we utilize 22 Convolutional Neural Network (CNN) models. The two datasets are used alternately as the train/validation set and the test set. To estimate the variance in the output of the target classifiers, the train/validation/test procedure is repeated 10 times. Hyperparameters for training are optimized using Bayesian optimization. The results demonstrate that shallow CNN models show superior robustness and generalization performance compared to most of the deep CNN models. The results of this paper can serve as a valuable reference for future research directions in deep learning-based active sonar target classification.
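
The hyperparameter tuning step can be sketched with a Bayesian optimization library such as Optuna; `build_cnn` and `evaluate_on_validation` are hypothetical helpers standing in for training one of the CNN classifiers on the sonar time-frequency images, and the search space is illustrative.

```python
# Sketch: Bayesian optimization of training hyperparameters with Optuna.
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])

    model = build_cnn(dropout=dropout)                   # hypothetical model factory
    val_accuracy = evaluate_on_validation(model, lr=lr,  # hypothetical train/evaluate helper
                                          batch_size=batch_size)
    return val_accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```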

On the Performance of Cuckoo Search and Bat Algorithms Based Instance Selection Techniques for SVM Speed Optimization with Application to e-Fraud Detection

  • AKINYELU, Andronicus Ayobami;ADEWUMI, Aderemi Oluyinka
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1348-1375 / 2018
  • Support Vector Machine (SVM) is a well-known machine learning classification algorithm that has been widely applied to many data mining problems with good accuracy. However, SVM classification speed decreases as dataset size increases. Some applications, such as video surveillance and intrusion detection, require a classifier to be trained very quickly and on large datasets. Hence, this paper introduces two filter-based instance selection techniques for optimizing SVM training speed. Fast classification is often achieved at the expense of classification accuracy, and some applications, such as phishing and spam email classifiers, are very sensitive to even a slight drop in classification accuracy. Hence, this paper also introduces two wrapper-based instance selection techniques for improving SVM predictive accuracy and training speed. The wrapper- and filter-based techniques are inspired by the Cuckoo Search Algorithm and the Bat Algorithm. The proposed techniques are validated on three popular e-fraud types: credit card fraud, spam email, and phishing email. In addition, the proposed techniques are validated on 20 other datasets provided by the UCI data repository. Moreover, statistical analysis is performed, and the experimental results reveal that the filter-based and wrapper-based techniques significantly improved SVM classification speed. The results also reveal that the wrapper-based techniques improved SVM predictive accuracy in most cases.
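
The wrapper-based idea can be sketched as follows: a candidate solution is a binary mask over the training instances, and its fitness is the validation accuracy of an SVM trained only on the selected instances. A plain random search stands in for the Cuckoo Search / Bat Algorithm outer loop, and synthetic data replaces the e-fraud datasets.

```python
# Sketch: wrapper-based instance selection for SVM, with a random-search stand-in
# for the nature-inspired metaheuristic that proposes candidate masks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    if mask.sum() < 10:                     # avoid degenerate subsets
        return 0.0
    clf = SVC(kernel="rbf").fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)          # wrapper objective: validation accuracy

rng = np.random.default_rng(0)
best_mask, best_fit = None, -1.0
for _ in range(30):                         # stand-in for the metaheuristic search loop
    mask = rng.random(len(X_tr)) < 0.2      # keep roughly 20% of the instances
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

print(f"selected {best_mask.sum()} of {len(X_tr)} instances, val acc = {best_fit:.3f}")
```

Training the SVM on the selected subset is what yields the speed-up; the metaheuristic's job is to find a small subset whose classifier still scores well on held-out data.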

Robust URL Phishing Detection Based on Deep Learning

  • Al-Alyan, Abdullah;Al-Ahmadi, Saad
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.7 / pp.2752-2768 / 2020
  • Phishing websites can have devastating effects on governmental, financial, and social services, as well as on individual privacy. Currently, many phishing detection solutions are evaluated using small datasets and, thus, are prone to sampling issues, such as representing legitimate websites by only high-ranking websites, which could make their evaluation less relevant in practice. Phishing detection solutions which depend only on the URL are attractive, as they can be used in limited systems, such as with firewalls. In this paper, we present a URL-only phishing detection solution based on a convolutional neural network (CNN) model. The proposed CNN takes the URL as the input, rather than using predetermined features such as URL length. For training and evaluation, we have collected over two million URLs in a massive URL phishing detection (MUPD) dataset. We split MUPD into training, validation and testing datasets. The proposed CNN achieves approximately 96% accuracy on the testing dataset; this accuracy is achieved with URL schemes (such as HTTP and HTTPS) removed from the URL. Our proposed solution achieved better accuracy compared to an existing state-of-the-art URL-only model on a published dataset. Finally, the results of our experiment suggest keeping the CNN up-to-date for better results in practice.
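
A character-level CNN that consumes the raw URL string (rather than hand-crafted features) can be sketched as below; the vocabulary size, embedding width, and filter settings are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: URL-only phishing classifier as a character-level CNN (PyTorch).
import torch
import torch.nn as nn

MAX_LEN, VOCAB = 200, 128          # URLs truncated/padded to 200 chars, ASCII vocabulary

def encode_url(url: str) -> torch.Tensor:
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    ids += [0] * (MAX_LEN - len(ids))                  # pad with 0
    return torch.tensor(ids)

class UrlCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.conv = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.classifier = nn.Linear(64, 1)             # logit: phishing vs. legitimate

    def forward(self, x):                              # x: (batch, MAX_LEN) char ids
        h = self.embed(x).transpose(1, 2)              # -> (batch, 32, MAX_LEN)
        h = self.conv(h).squeeze(-1)                   # -> (batch, 64)
        return self.classifier(h)

model = UrlCNN()
# URL scheme (http/https) already stripped, as in the paper; example strings are illustrative.
batch = torch.stack([encode_url("example.com/login"),
                     encode_url("secure-verify-account.example.xyz/update")])
logits = model(batch)                                  # train with BCEWithLogitsLoss
```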

An overview of deep learning in the field of dentistry

  • Hwang, Jae-Joon;Jung, Yun-Hoa;Cho, Bong-Hae;Heo, Min-Suk
    • Imaging Science in Dentistry / v.49 no.1 / pp.1-7 / 2019
  • Purpose: Artificial intelligence (AI), represented by deep learning, can be used for real-life problems and is applied across all sectors of society, including the medical and dental fields. The purpose of this study is to review articles about deep learning applied to the field of oral and maxillofacial radiology. Materials and Methods: A systematic review was performed using the PubMed, Scopus, and IEEE Xplore databases to identify English-language articles using deep learning. The variables extracted from 25 articles included network architecture, amount of training data, evaluation results, pros and cons, study object, and imaging modality. Results: The Convolutional Neural Network (CNN) was the main network component used. The number of published papers and the size of training datasets tended to increase, covering various fields of dentistry. Conclusion: Public dental datasets need to be constructed, and data standardization is necessary for the clinical application of deep learning in the dental field.

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael;Oh, Youngon;Chung, Jinwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.4 / pp.747-761 / 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. At present, the best-known datasets are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which can be applied to vehicle detection from UAVs. The value of an integrated dataset lies not only in its quantity and photo quality but also in its diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, into a form suitable for YOLOv4. The trained model presents good test results, especially for small objects, rotated objects, and compact and dense objects, and meets real-time detection requirements. In future work, we will integrate one more aerial image dataset acquired by our lab to increase the number and diversity of training samples while still meeting the real-time requirements.
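
The dataset-integration step ultimately requires converting each source dataset's annotations into the common normalized label format that YOLOv4 consumes. A minimal sketch, assuming the source annotations provide absolute pixel corner coordinates (each source dataset needs its own parser):

```python
# Sketch: convert an absolute pixel bounding box into a YOLO-format label line
# (class_id, x_center, y_center, width, height, all normalized to [0, 1]).
def to_yolo_label(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    xc = (xmin + xmax) / 2.0 / img_w
    yc = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: a vehicle box from a 1920x1080 aerial image, written into the merged label set.
print(to_yolo_label(830, 420, 910, 475, 1920, 1080))
```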

Privacy-Preserving Deep Learning using Collaborative Learning of Neural Network Model

  • Hye-Kyeong Ko
    • International journal of advanced smart convergence / v.12 no.2 / pp.56-66 / 2023
  • The goal of deep learning is to extract complex features from multidimensional data and to use those features to create models that connect input and output. Deep learning is a process of learning nonlinear features and functions from complex data, and the user data employed to train deep learning models has become the focus of privacy concerns. Companies that collect users' sensitive personal information, such as images and voices, keep this data for indefinite periods of time. Users cannot delete their personal information, and they cannot limit the purposes for which the data is used. This study designs a deep learning method with privacy protection based on distributed collaborative learning, so that multiple participants can train neural network models collaboratively without sharing their input datasets. Unlike traditional deep learning, participants are never shown each other's training datasets during the model training process, which prevents direct leaks of personal information. The study uses a method that can selectively share subsets via an optimization algorithm based on modified distributed stochastic gradient descent, and the results showed that it is possible to learn with improved accuracy while protecting personal information.
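
A minimal sketch of selective sharing in the spirit of modified distributed SGD: each participant trains on its private data and uploads only a fraction of its largest-magnitude gradient entries to a shared parameter store, so raw training data never leaves the participant. The flat parameter vector on the server side is a simplification for illustration, not the paper's exact protocol.

```python
# Sketch: selective gradient sharing for privacy-preserving collaborative learning.
import torch

class SelectiveSGDParticipant:
    def __init__(self, model, share_fraction=0.1, lr=0.01):
        self.model, self.share_fraction, self.lr = model, share_fraction, lr

    def local_step_and_select(self, x, y, loss_fn):
        """Run one local SGD step, then pick the largest-magnitude gradients to share."""
        self.model.zero_grad()
        loss_fn(self.model(x), y).backward()
        grads = torch.cat([p.grad.flatten() for p in self.model.parameters()])
        k = max(1, int(self.share_fraction * grads.numel()))
        _, idx = grads.abs().topk(k)            # share only the top-k gradient entries
        for p in self.model.parameters():       # the local update still uses the full gradient
            p.data -= self.lr * p.grad
        return idx, grads[idx]                  # only indices and selected values leave the device

def server_apply(shared_params: torch.Tensor, idx, values, lr=0.01):
    """Parameter server applies only the selectively shared gradient entries."""
    shared_params[idx] -= lr * values
    return shared_params
```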

Video augmentation technique for human action recognition using genetic algorithm

  • Nida, Nudrat;Yousaf, Muhammad Haroon;Irtaza, Aun;Velastin, Sergio A.
    • ETRI Journal / v.44 no.2 / pp.327-338 / 2022
  • Classification models for human action recognition require robust features and large training sets for good generalization. Data augmentation methods are therefore employed on imbalanced training sets to achieve higher accuracy. However, because samples generated using conventional data augmentation only reflect existing samples within the training set, their feature representations are less diverse and hence contribute to less precise classification. This paper presents new data augmentation and action representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos, which are passed through a convolutional neural network to generate deep features. Furthermore, guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are combined to generate the representations of the virtual videos, which are then classified with an extreme learning machine classifier on the MuHAVi-Uncut, iXMAS, and IAVID-1 datasets.
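
The augmentation idea can be sketched as a genetic-algorithm-style recombination of deep feature vectors from real samples into "virtual" sample representations; the fitness function below is a hypothetical placeholder for the paper's objective, and random vectors stand in for CNN features.

```python
# Sketch: GA-style crossover and mutation of deep features to grow a class's training set.
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Mix two deep feature vectors element-wise with a random binary mask."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(child, rate=0.05, scale=0.01):
    """Perturb a small fraction of the feature dimensions."""
    mask = rng.random(child.shape) < rate
    return child + mask * rng.normal(0.0, scale, child.shape)

def generate_virtual_samples(class_features, n_virtual, fitness):
    """Grow one class's training set with GA-combined virtual representations."""
    virtual = []
    while len(virtual) < n_virtual:
        a, b = class_features[rng.choice(len(class_features), 2, replace=False)]
        child = mutate(crossover(a, b))
        if fitness(child) > 0.5:            # keep only candidates the objective accepts
            virtual.append(child)
    return np.stack(virtual)

# Example with random stand-in CNN features for one action class; fitness is a placeholder.
features = rng.normal(size=(20, 512))
augmented = generate_virtual_samples(features, n_virtual=10, fitness=lambda v: rng.random())
```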