• Title/Summary/Keyword: Pre-training Dataset


An Improved Deep Learning Method for Animal Images (동물 이미지를 위한 향상된 딥러닝 학습)

  • Wang, Guangxing; Shin, Seong-Yoon; Shin, Kwang-Weong; Lee, Hyun-Chang
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.123-124 / 2019
  • This paper proposes an improved deep learning method for animal image classification on small datasets. First, a CNN model is built for the small dataset and data augmentation is used to expand the training samples. Second, a network pre-trained on a large-scale dataset, such as VGG16, extracts the bottleneck features of the small dataset, which are stored in two NumPy files as new training and test sets. Finally, a fully connected network is trained on these new datasets. The experiments use the well-known Kaggle Dogs vs. Cats dataset, a two-class classification dataset.
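
As an illustration of the bottleneck-feature workflow the abstract describes, here is a minimal Keras sketch; the image size, directory layout, sample counts, and classifier head are assumptions for illustration, not details taken from the paper (the paper's data augmentation step is also omitted for brevity).

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

# VGG16 convolutional base pre-trained on ImageNet, without the classifier head.
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
datagen = ImageDataGenerator(rescale=1.0 / 255)

def extract_bottleneck_features(directory, sample_count, batch_size=20):
    """Run images through the frozen VGG16 base and collect its outputs."""
    features = np.zeros((sample_count, 4, 4, 512))
    labels = np.zeros((sample_count,))
    generator = datagen.flow_from_directory(
        directory, target_size=(150, 150), batch_size=batch_size, class_mode="binary")
    i = 0
    for inputs_batch, labels_batch in generator:
        features[i * batch_size:(i + 1) * batch_size] = conv_base.predict(inputs_batch)
        labels[i * batch_size:(i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels

# Hypothetical directories and counts standing in for the Dogs vs. Cats data.
train_features, train_labels = extract_bottleneck_features("data/train", 2000)
test_features, test_labels = extract_bottleneck_features("data/test", 1000)
np.save("train_features.npy", train_features)  # the "two NumPy files"
np.save("test_features.npy", test_features)

# Small fully connected classifier trained on the stored bottleneck features.
model = models.Sequential([
    layers.Flatten(input_shape=(4, 4, 512)),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_features, train_labels, epochs=10, batch_size=20,
          validation_data=(test_features, test_labels))
```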


Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han; Liu, Shuang; Xu, Qingzhen; Liu, Shouqiang; Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, it can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains separate spatial and temporal models of the video and fuses them at the output. The multi-segment two-stream model extracts temporal and spatial features from the video, fuses them, and then determines the category of the video action. This paper adopts the Google Xception model with transfer learning, using an Xception model trained on ImageNet as the initial weights. This largely overcomes the underfitting caused by an insufficient video behavior dataset, effectively reduces the influence of various disturbing factors in the video, improves accuracy, and shortens training time. In addition, to compensate for the shortage of data, the Kinetics-400 dataset was used for pre-training, which further improved the accuracy of the model. The expected goal is essentially achieved, and the design of the original two-stream model is improved.
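
A minimal sketch of the transfer-learning starting point described above, loading an ImageNet-pretrained Xception as the spatial stream in Keras; the class count and fusion-by-averaging are assumptions, and the temporal (optical-flow) stream is only indicated in a comment.

```python
from tensorflow.keras.applications import Xception
from tensorflow.keras import layers, models

NUM_CLASSES = 400  # e.g. Kinetics-400 classes; an assumption for illustration

# Spatial stream: Xception pre-trained on ImageNet as the initial weights,
# with a new softmax head for the action classes. The temporal stream would be
# an analogous network over stacked optical-flow fields, and the two streams'
# class scores are fused (e.g. averaged) at the output end.
base = Xception(weights="imagenet", include_top=False,
                input_shape=(299, 299, 3), pooling="avg")
frame_in = layers.Input(shape=(299, 299, 3))
scores = layers.Dense(NUM_CLASSES, activation="softmax")(base(frame_in))
spatial_stream = models.Model(frame_in, scores, name="spatial_stream")
spatial_stream.compile(optimizer="adam", loss="categorical_crossentropy",
                       metrics=["accuracy"])
spatial_stream.summary()
```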

No-Reference Image Quality Assessment based on Quality Awareness Feature and Multi-task Training

  • Lai, Lijing; Chu, Jun; Leng, Lu
    • Journal of Multimedia Information System / v.9 no.2 / pp.75-86 / 2022
  • Existing image quality assessment (IQA) datasets contain only a small number of samples, and some methods based on transfer learning or data augmentation cannot make good use of quality-related features. A no-reference (NR) IQA method based on multi-task training and quality awareness is therefore proposed. First, single or multiple distortion types and levels are imposed on the original images, and different augmentation strategies are used for the different distortion datasets. Following a weak-supervision idea, full-reference (FR) IQA methods provide pseudo-score labels for the generated images. The distortion-type and distortion-level classification information is then combined with the quality-score information, and a ResNet50 network is trained on the augmented dataset in a pre-training stage to obtain quality-aware weights. Finally, a fine-tuning stage on the target IQA dataset uses the quality-aware weights to predict the final quality score. Experiments on synthetic- and authentic-distortion datasets (LIVE, CSIQ, TID2013, LIVEC, KonIQ-10K) show that the proposed method exploits quality-related features better than single-task training, and the extracted quality-aware features improve the model's accuracy.
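
A rough PyTorch sketch of a quality-aware multi-task head on ResNet50, in the spirit of the pre-training stage described above; the numbers of distortion types and levels, the head names, and the equal loss weighting are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskIQANet(nn.Module):
    """ResNet50 backbone with three heads: distortion type, distortion level,
    and a pseudo quality score (values of the class counts are placeholders)."""
    def __init__(self, num_distortion_types=10, num_distortion_levels=5):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        dim = backbone.fc.in_features
        self.type_head = nn.Linear(dim, num_distortion_types)
        self.level_head = nn.Linear(dim, num_distortion_levels)
        self.score_head = nn.Linear(dim, 1)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.type_head(f), self.level_head(f), self.score_head(f).squeeze(1)

model = MultiTaskIQANet()
type_loss, level_loss, score_loss = nn.CrossEntropyLoss(), nn.CrossEntropyLoss(), nn.L1Loss()

def pretrain_step(images, type_labels, level_labels, pseudo_scores, optimizer):
    """One quality-aware pre-training step combining the three task losses."""
    optimizer.zero_grad()
    t, l, s = model(images)
    loss = type_loss(t, type_labels) + level_loss(l, level_labels) + score_loss(s, pseudo_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```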

A DCT Learning Combined RRU-Net for the Image Splicing Forgery Detection (DCT 학습을 융합한 RRU-Net 기반 이미지 스플라이싱 위조 영역 탐지 모델)

  • Young-min Seo; Jung-woo Han; Hee-jung Kwon; Su-bin Lee; Joongjin Kook
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.11-17 / 2023
  • This paper proposes a lightweight deep learning network for detecting image splicing forgery. Research on image forgery detection with CNNs and on detecting and localizing forgery at the pixel level is ongoing. Among these, CAT-Net, which learns the discrete cosine transform (DCT) coefficients of images together with the images themselves, was released in 2022. In CAT-Net, the DCT coefficients are handled by a JPEG-artifact learning module combined with the backbone model through pre-training, and its weights are then frozen. The dataset used for that pre-training is not publicly available, and the backbone has a relatively large number of parameters, which causes overfitting on small datasets and hinders generalization. In this paper, the learning module is instead designed to learn DCT-domain characteristics in real time during network training, without pre-training. The proposed DCT RRU-Net combines RRU-Net, which detects forgery from images alone, with the JPEG-artifact learning module. Experiments confirm that it has fewer network parameters than CAT-Net, detects forgery better than RRU-Net, and generalizes better across various datasets thanks to its architecture and training method.
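
For context on the DCT-domain input such a module learns from, here is a small sketch of JPEG-style 8x8 block DCT extraction with NumPy/SciPy; how DCT RRU-Net actually consumes the coefficients is not reproduced here, and the example image is random stand-in data.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(gray_image, block=8):
    """Compute JPEG-style 8x8 block DCT coefficients of a grayscale image."""
    h, w = gray_image.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    img = gray_image[:h, :w].astype(np.float32) - 128.0
    coeffs = np.zeros_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs[y:y + block, x:x + block] = dctn(
                img[y:y + block, x:x + block], norm="ortho")
    return coeffs

# Example: random values standing in for a suspect grayscale image.
dct_map = blockwise_dct(np.random.randint(0, 256, (256, 384)).astype(np.float32))
print(dct_map.shape)  # (256, 384): same spatial layout, DCT-domain values
```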


Classification of Gripping Movement in Daily Life Using EMG-based Spider Chart and Deep Learning (근전도 기반의 Spider Chart와 딥러닝을 활용한 일상생활 잡기 손동작 분류)

  • Lee, Seong Mun; Pi, Sheung Hoon; Han, Seung Ho; Jo, Yong Un; Oh, Do Chang
    • Journal of Biomedical Engineering Research / v.43 no.5 / pp.299-307 / 2022
  • In this paper, we propose a pre-processing method that converts EMG (electromyography) sensor data into spider-chart images for classifying everyday grasping hand movements with a convolutional neural network (CNN). First, raw data for six hand gestures are collected from five subjects using an 8-channel armband, converted into octagonal spider-chart images, divided into several sliding windows, and used for training. In classifying the six hand gestures, the performance of the proposed pre-processing method is compared with existing methods. For deep learning, the dataset is split into 70% training, 15% test, and 15% validation; for system evaluation, five-fold cross-validation with an 80% training / 20% test split is applied. The proposed spider-chart pre-processing achieves 97% accuracy in cross-validation and 94.54% in the general test, better than the conventional methods.
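
A minimal sketch of turning one 8-channel EMG window into a spider-chart image with matplotlib; using the per-channel mean absolute value as the radial feature, the figure size, and the window length are assumptions for illustration, not the paper's settings.

```python
import numpy as np
import matplotlib.pyplot as plt

def emg_window_to_spider_chart(window, out_path="spider.png"):
    """Render one 8-channel EMG window as an octagonal spider-chart image."""
    values = np.mean(np.abs(window), axis=0)   # shape (8,): one value per channel
    values = values / (values.max() + 1e-8)    # normalize to [0, 1]
    angles = np.linspace(0, 2 * np.pi, len(values), endpoint=False)
    # Close the polygon by repeating the first point.
    values = np.concatenate([values, values[:1]])
    angles = np.concatenate([angles, angles[:1]])

    fig = plt.figure(figsize=(2, 2), dpi=64)
    ax = fig.add_subplot(111, polar=True)
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.4)
    ax.set_xticks([])
    ax.set_yticks([])
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)

# Example: a 200-sample sliding window from an 8-channel armband (random stand-in).
emg_window_to_spider_chart(np.random.randn(200, 8))
```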

Optimizing Language Models through Dataset-Specific Post-Training: A Focus on Financial Sentiment Analysis (데이터 세트별 Post-Training을 통한 언어 모델 최적화 연구: 금융 감성 분석을 중심으로)

  • Hui Do Jung; Jae Heon Kim; Beakcheol Jang
    • Journal of Internet Computing and Services / v.25 no.1 / pp.57-67 / 2024
  • This research investigates training methods that enable large language models to accurately identify sentiment and comprehend information about rising and falling movements in the financial domain. The main goal is to identify datasets that help these models understand expressions of financial increases and decreases. For this purpose, we selected sentences from the Wall Street Journal containing relevant financial terms, as well as sentences generated by GPT-3.5-turbo-1106, for post-training. We assessed the impact of these datasets on language model performance using Financial PhraseBank, a benchmark dataset for financial sentiment analysis. Our findings show that post-trained FinBERT, a finance-specialized model, outperformed the similarly post-trained BERT, a general-domain model. Moreover, post-training on actual financial news proved more effective than using generated sentences, although in scenarios requiring greater generalization, models trained on generated sentences performed better. This suggests that aligning the model's domain with the target domain and choosing the right dataset are crucial for improving a language model's understanding and sentiment-prediction accuracy. These results offer a methodology for optimizing language model performance in financial sentiment analysis and suggest future research directions for more nuanced language understanding and sentiment analysis, providing insights not only for the financial sector but for language model training across domains.
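
A hedged sketch of dataset-specific post-training with the Hugging Face Trainer; "ProsusAI/finbert" is one public FinBERT checkpoint used only for illustration, the label ids and hyperparameters are placeholders, and the three sentences stand in for the Wall Street Journal or GPT-generated corpora described in the abstract.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

MODEL_NAME = "ProsusAI/finbert"  # illustrative FinBERT checkpoint, not the paper's
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Tiny stand-in corpus; label ids are placeholders -- check the checkpoint's
# id2label mapping before real use.
train_ds = Dataset.from_dict({
    "text": ["Shares surged after the earnings beat.",
             "The index closed flat on Tuesday.",
             "Revenue declined sharply amid weak demand."],
    "label": [2, 1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_ds = train_ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finbert-post-trained",
                           per_device_train_batch_size=8,
                           num_train_epochs=3),
    train_dataset=train_ds,
)
trainer.train()
```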

A Novel Transfer Learning-Based Algorithm for Detecting Violence Images

  • Meng, Yuyan; Yuan, Deyu; Su, Shaofan; Ming, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1818-1832 / 2022
  • Violence in the Internet era poses a new challenge to current counter-riot work, and analysis shows that most violent incidents are related to the dissemination of violence images. Using popular deep learning networks to automatically analyze the massive number of images on the Internet has become one of the important tools in counter-violence work. This paper focuses on transfer learning techniques and the introduction of an attention mechanism into a residual network (ResNet) for classifying and identifying violence images. First, the characteristic elements of violence images are identified and a targeted dataset is constructed; second, because positive samples of violence images are scarce, pre-training and attention mechanisms are introduced to improve the traditional residual network; finally, the improved model is trained and tested on the constructed dedicated dataset. The results show that the improved model can quickly and accurately identify violence images, with an average accuracy of 92.20%, effectively reducing the cost of manual identification and providing decision support for combating rebel organization activities.
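
A PyTorch sketch of adding channel attention to an ImageNet-pretrained residual network; the abstract does not specify the attention mechanism, so squeeze-and-excitation is used here as an assumed example, with ResNet50 and a binary head also assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (an assumption, for illustration)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = x.mean(dim=(2, 3))                     # squeeze: global average pool per channel
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # excite: reweight channels

class ViolenceClassifier(nn.Module):
    """ImageNet-pretrained ResNet50 backbone, channel attention, binary head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature maps
        self.attention = SEBlock(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048, 2)       # violence / non-violence

    def forward(self, x):
        f = self.attention(self.features(x))
        return self.classifier(self.pool(f).flatten(1))

model = ViolenceClassifier()
logits = model(torch.randn(4, 3, 224, 224))        # example batch of random images
print(logits.shape)                                # torch.Size([4, 2])
```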

Aircraft Recognition from Remote Sensing Images Based on Machine Vision

  • Chen, Lu; Zhou, Liming; Liu, Jinming
    • Journal of Information Processing Systems / v.16 no.4 / pp.795-808 / 2020
  • Because the Yolov3 network yields poor evaluation indexes, such as detection accuracy and recall, when detecting aircraft in remote sensing images, this paper proposes a machine-vision-based aircraft detection method for remote sensing imagery. To improve detection, the Inception module is introduced into the Yolov3 network structure, and the dataset is cluster-analyzed with the k-means algorithm. To obtain the best aircraft detection model, the network parameters of the pre-training model are adjusted and the resolution of the input images is increased; finally, a multi-scale training strategy is adopted. Experiments on the RSOD-Dataset remote sensing aircraft data show that the method improves several evaluation indicators and also detects and recognizes other ground objects well.
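
In YOLO-style detectors, k-means clustering of the dataset is usually applied to ground-truth box sizes to choose anchor boxes; the sketch below assumes that interpretation (the abstract does not spell it out) and uses 1 - IoU as the distance, with random (width, height) pairs standing in for RSOD annotations.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors when both are centered at the origin
    (compares only widths and heights)."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    """k-means over ground-truth (width, height) pairs with 1 - IoU as distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)      # nearest anchor by IoU
        new = np.array([boxes_wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]               # sort anchors by area

# Example with random box sizes standing in for aircraft annotations.
boxes = np.abs(np.random.randn(500, 2)) * 50 + 20
print(kmeans_anchors(boxes, k=9))
```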

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin; Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1486-1501 / 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in face recognition; however, despite its importance, a robust solution has not yet been provided. This paper proposes a network and a dataset-construction methodology for effectively removing only the glasses from facial images. To learn, by supervised learning, a mapping from an image with glasses to the same image without glasses, a conversion network and a set of paired training data are required. To this end, we created a large number of synthetic images of glasses being worn using facial-attribute transformation networks, and adopted the conditional GAN (cGAN) framework for training. The trained network converts in-the-wild face images with glasses into images without glasses and operates stably even for faces of diverse races and ages and with different styles of glasses.
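
A toy pix2pix-style cGAN training step on paired (with-glasses, without-glasses) images; the generator and discriminator architectures, loss weights, and image size are placeholders, not the paper's networks, and the example tensors are random stand-ins for a paired batch.

```python
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy encoder-decoder generator standing in for the paper's conversion network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Discriminator conditioned on the input: it sees the glasses image
    concatenated with either the real or the generated glasses-free image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, 1, 1))

    def forward(self, condition, image):
        return self.net(torch.cat([condition, image], dim=1))

G, D = SmallGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(glasses_img, no_glasses_img, l1_weight=100.0):
    """One cGAN step on a paired (with-glasses, without-glasses) batch."""
    fake = G(glasses_img)

    # Discriminator: real pairs vs. generated pairs.
    opt_d.zero_grad()
    d_real, d_fake = D(glasses_img, no_glasses_img), D(glasses_img, fake.detach())
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator and stay close to the paired target.
    opt_g.zero_grad()
    d_fake = D(glasses_img, fake)
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + l1_weight * l1_loss(fake, no_glasses_img)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for a paired batch of 64x64 faces.
train_step(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```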

Korean Machine Reading Comprehension for Patent Consultation Using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok; Park, Jin-Woo; Jo, Yu-Jeong; Lee, Bong-Gun
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.145-152 / 2020
  • Machine reading comprehension (MRC) is the AI/NLP task of predicting the answer to a user's query by understanding the relevant document; it can be used in automated consulting services such as chatbots. BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding), which shows high performance in various fields of natural language processing, works in two phases: pre-training on large corpora of each domain, and fine-tuning the model for each downstream NLP prediction task. In this paper, we build a patent MRC dataset and show how to construct patent-consultation training data for the MRC task. We also propose a method to improve MRC performance using a Patent-BERT model pre-trained on a patent-consultation corpus, together with a language-processing algorithm suited to machine learning of patent counseling data. Experiments show that the proposed method improves the performance of answering patent-counseling queries.
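
A small extractive-QA sketch with Hugging Face Transformers; the paper's Patent-BERT is not public, so a multilingual BERT checkpoint and hypothetical Korean question/context strings are used purely for illustration (an untuned QA head returns arbitrary spans until it is fine-tuned on a patent MRC dataset).

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# "bert-base-multilingual-cased" is only an illustrative substitute for the
# paper's Patent-BERT; its QA head is randomly initialized until fine-tuned.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

question = "출원 수수료는 언제 납부해야 하나요?"                                  # hypothetical consultation query
context = "특허 출원 수수료는 출원일로부터 1개월 이내에 납부하여야 한다."        # hypothetical passage

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start:end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```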