• Title/Summary/Keyword: Transformer Model (트랜스포머 모델)


TeGCN: Transformer-embedded Graph Neural Network for Thin-filer Default Prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.419-437 / 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in enhancing the accuracy of assessing their credit default risk to generate additional revenue. Specifically, researchers are actively pursuing the development of default prediction models using machine learning and deep learning algorithms, in contrast to traditional statistical default prediction methods, which struggle to capture nonlinearity. Among these efforts, the Graph Neural Network (GNN) architecture is noteworthy for predicting default in situations with limited data on thin filers, owing to its ability to incorporate network information between borrowers alongside conventional credit-related data. However, prior research employing graph neural networks has faced limitations in effectively handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer-embedded Graph Convolutional Network (TeGCN), which aims to address these limitations and enable effective default prediction for thin filers. TeGCN combines the TabTransformer, capable of extracting contextual information from categorical variables, with the Graph Convolutional Network, which captures network information between borrowers. Our TeGCN model surpasses the baseline model's performance on both the general borrower dataset and the thin-filer dataset; in particular, it delivers outstanding results in thin-filer default prediction. This study achieves high default prediction accuracy through a model structure tailored to the characteristics of credit information containing numerous categorical variables, especially in the context of thin filers with limited data. Our study can contribute to resolving the financial exclusion issues faced by thin filers and facilitate additional revenue within the financial industry.
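
The graph side of TeGCN aggregates information over the borrower network. A single graph-convolution step (degree-normalized neighborhood averaging, the basic GCN operation; a sketch with toy data, not the paper's implementation) can be written in plain Python:

```python
# Toy sketch of one graph-convolution step over a borrower network:
# each node's new feature vector is the degree-normalized average of
# itself and its neighbors (self-loops included), as in a basic GCN layer.

def gcn_step(adj, features):
    """adj: dict node -> set of neighbor nodes; features: dict node -> list[float]."""
    out = {}
    for node, feats in features.items():
        neigh = set(adj.get(node, set())) | {node}  # add self-loop
        agg = [0.0] * len(feats)
        for n in neigh:
            for i, v in enumerate(features[n]):
                agg[i] += v
        out[node] = [v / len(neigh) for v in agg]  # normalize by neighborhood size
    return out

# Three borrowers: 0 and 1 are connected, 2 is isolated.
adj = {0: {1}, 1: {0}, 2: set()}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
print(gcn_step(adj, feats))  # connected nodes mix features; node 2 keeps its own
```

In the full model, the node features entering this step would be the TabTransformer's contextual embeddings of the categorical credit variables rather than raw numbers.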

A Pilot Study on Outpainting-powered Pet Pose Estimation (아웃페인팅 기반 반려동물 자세 추정에 관한 예비 연구)

  • Gyubin Lee;Youngchan Lee;Wonsang You
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.69-75 / 2023
  • In recent years, there has been growing interest in deep learning-based animal pose estimation, especially in the areas of animal behavior analysis and healthcare. However, existing animal pose estimation techniques do not perform well when body parts are occluded or absent from the image. In particular, occlusion of a dog's tail or ears can significantly degrade the performance of pet behavior and emotion recognition. In this paper, to address this intractable problem, we propose a simple yet novel framework for pet pose estimation in which the pose is predicted on an outpainted image: body parts hidden outside the input image are first reconstructed by an image inpainting network that precedes the pose estimation network. We performed a preliminary study to test the feasibility of the proposed approach, assessing CE-GAN and BAT-Fill for image outpainting and SimpleBaseline for pet pose estimation. Our experimental results show that pet pose estimation on outpainted images generated with BAT-Fill outperforms pose estimation on the original input images without outpainting.
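
The two-stage idea, outpaint first and estimate pose on the extended image, can be sketched with stand-in stubs (neither function is one of the actual networks; edge replication and brightest-pixel detection merely stand in for the outpainting and pose models):

```python
def outpaint(image, pad):
    """Stub outpainting: extend a 2-D image by replicating its edge pixels."""
    rows = [row[:1] * pad + row + row[-1:] * pad for row in image]
    return [rows[0]] * pad + rows + [rows[-1]] * pad

def estimate_pose(image):
    """Stub pose estimator: return the coordinates of the brightest pixel."""
    return max(
        ((r, c) for r in range(len(image)) for c in range(len(image[0]))),
        key=lambda rc: image[rc[0]][rc[1]],
    )

# Pipeline: extend the image border first, then run the estimator on it,
# so "body parts" near the border gain surrounding context.
img = [[0, 1], [2, 9]]
extended = outpaint(img, 1)  # 2x2 input becomes a 4x4 image
print(estimate_pose(extended))
```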

Circuit Model Based Analysis of a Wireless Energy Transfer System via Coupled Magnetic Resonances (결합된 자기공명을 통한 무선에너지 전력 전송 시스템의 회로 해석)

  • Cheon, Sang-Hoon;Kim, Yong-Hae;Lee, Myung-Lae;Kang, Seung-Youl
    • The Transactions of the Korean Institute of Power Electronics / v.16 no.2 / pp.137-144 / 2011
  • A simple equivalent circuit model is developed for a wireless energy transfer system via coupled magnetic resonances, and a practical design method is also provided. Node equations for the resonance system are built with this method, extending the equations for a transformer, and the optimum coil distances in the system are derived analytically from the optimum coupling coefficients for high transfer efficiency. To calculate the frequency characteristics of a lossy system, the equivalent model is implemented in an electronic design automation tool. The model parameters of the actual system are extracted, and the modeling results are compared with measurements. The developed model shows that the system can transfer power over a mid-range of a few meters and that impedance matching is important for achieving high efficiency. The model can also be used for the design and performance prediction of similar systems, such as those with more receiving coils and receiving modules.

Automatic Classification of Academic Articles Using BERT Model Based on Deep Learning (딥러닝 기반의 BERT 모델을 활용한 학술 문헌 자동분류)

  • Kim, In hu;Kim, Seong hee
    • Journal of the Korean Society for Information Management / v.39 no.3 / pp.293-310 / 2022
  • In this study, we analyzed the performance of a BERT-based document classification model by automatically classifying documents in the field of library and information science using KoBERT. For this purpose, abstract data from 5,357 papers in 7 journals in the field of library and information science were analyzed, and differences in automatic classification performance according to the size of the training data were evaluated. Precision, recall, and the F-measure were used as evaluation metrics. The evaluation showed that subject areas with large amounts of high-quality data achieved a high level of performance, with an F-measure of 90% or more. On the other hand, subject areas with low data quality, high similarity to other subject areas, and few clearly distinguishing thematic features did not yield meaningfully high performance. This study is expected to serve as basic data demonstrating the potential of pre-trained language models for the automatic classification of academic documents.
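
The metrics used, precision, recall, and F-measure, can be computed per class as follows (the labels are toy examples, not the study's subject categories):

```python
# Per-class precision, recall, and F1 from true vs. predicted labels.

def prf(true, pred, label):
    tp = sum(1 for t, p in zip(true, pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(true, pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(true, pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

true = ["IR", "IR", "Lib", "Lib", "Lib"]
pred = ["IR", "Lib", "Lib", "Lib", "IR"]
print(prf(true, pred, "Lib"))  # precision, recall, F1 for the "Lib" class
```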

Intrusion Detection System based on Packet Payload Analysis using Transformer

  • Woo-Seung Park;Gun-Nam Kim;Soo-Jin Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.81-87 / 2023
  • Intrusion detection systems that learn the metadata of network packets have been proposed recently. However, these approaches require time to analyze packets and generate metadata for model training, and additional time to pre-process the metadata before training. Moreover, models trained on specific metadata cannot detect intrusions from the original packets flowing into the network as they are. To address this problem, this paper proposes a natural language processing-based intrusion detection system that detects intrusions by learning the packet payload as a single sentence, without an additional conversion process. To verify the performance of our approach, we used the UNSW-NB15 dataset and Transformer models. First, the PCAP files of the dataset were labeled, and then two Transformer models (BERT and DistilBERT) were trained directly on payloads in sentence form to analyze detection performance. The experimental results showed binary classification accuracies of 99.03% and 99.05%, respectively, similar or superior to the detection performance of techniques proposed in previous studies. Multi-class classification likewise showed better performance than previous work, at 86.63% and 86.36%, respectively.
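
The core preprocessing idea, feeding a raw payload to a language model as a sentence, might look like the following sketch (the abstract does not specify the exact tokenization; space-separated hex bytes is one common, illustrative choice):

```python
# Turn a raw packet payload into a "sentence" of hex-byte tokens so a
# Transformer language model can consume it directly, with no handcrafted
# metadata extraction step.

def payload_to_sentence(payload: bytes) -> str:
    return " ".join(f"{b:02x}" for b in payload)

pkt = b"GET /"
print(payload_to_sentence(pkt))  # "47 45 54 20 2f"
```

The resulting string can then be passed to an ordinary subword tokenizer and a sequence classifier, exactly as one would classify a natural-language sentence.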

Performance Analysis of Deep Learning-based Normalization According to Input-output Structure and Neural Network Model (입출력구조와 신경망 모델에 따른 딥러닝 기반 정규화 기법의 성능 분석)

  • Changsoo Ryu;Geunhwan Kim
    • Journal of Korea Society of Industrial Information Systems / v.29 no.4 / pp.13-24 / 2024
  • In this paper, we analyzed the performance of normalization according to various neural network models and input-output structures. For the analysis, a simulation-based dataset of noisy environments, both homogeneous and with up to three interfering signals, was used. As a result, the end-to-end structure that directly outputs the noise variance showed superior performance when using 1-D convolutional neural network and BiLSTM models, and was found to be particularly robust against interfering signals. This is because the 1-D convolutional neural network and bidirectional long short-term memory models have a stronger inductive bias than the multilayer perceptron and Transformer models. The analysis in this paper is expected to serve as a useful reference for future research on deep learning-based normalization.

Assessing Techniques for Advancing Land Cover Classification Accuracy through CNN and Transformer Model Integration (CNN 모델과 Transformer 조합을 통한 토지피복 분류 정확도 개선방안 검토)

  • Woo-Dam SIM;Jung-Soo LEE
    • Journal of the Korean Association of Geographic Information Studies / v.27 no.1 / pp.115-127 / 2024
  • This research aimed to construct models with various structures based on the Transformer module and to perform land cover classification, thereby examining the applicability of the Transformer module. For land cover classification, the Unet model, which has a CNN structure, was selected as the base model, and a total of four deep learning models were constructed by combining the encoder and decoder parts with the Transformer module. During training, each deep learning model was trained 10 times under the same conditions to evaluate generalization performance. The evaluation of classification accuracy showed that Model D, which used the Transformer module in both the encoder and decoder, achieved the highest overall accuracy, averaging approximately 89.4%, with an average Kappa coefficient of about 73.2%. In terms of training time, the CNN-based models were the most efficient; however, the Transformer-based models improved classification accuracy by an average of 0.5% in terms of the Kappa coefficient. It is considered necessary to refine the models by considering variables such as hyperparameters and image patch sizes during integration with the CNN models. A common issue identified in all models during land cover classification was difficulty in detecting small-scale objects. To reduce this misclassification, the use of high-resolution input data and the integration of multidimensional data that include terrain and texture information are deemed worth exploring.
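
The Kappa coefficient reported above (Cohen's kappa, i.e., agreement corrected for chance) can be computed from a confusion matrix as follows; the matrix values here are toy numbers, not the study's results:

```python
# Cohen's kappa from a confusion matrix (rows = true class, cols = predicted):
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed accuracy and p_e is
# the agreement expected by chance from the row/column marginals.

def kappa(cm):
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))) / n ** 2
    return (po - pe) / (1 - pe)

cm = [[50, 10], [5, 35]]  # toy 2-class land-cover confusion matrix
print(round(kappa(cm), 3))
```

Because kappa discounts chance agreement, it is typically lower than overall accuracy, which matches the gap between the ~89.4% accuracy and ~73.2% kappa figures above.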

A Study of Pre-trained Language Models for Korean Language Generation (한국어 자연어생성에 적합한 사전훈련 언어모델 특성 연구)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.309-328 / 2022
  • This study empirically analyzed Korean pre-trained language models (PLMs) designed for natural language generation. The performance of two PLMs - BART and GPT - on the task of abstractive text summarization was compared. To investigate how performance depends on the characteristics of the inference data, ten document types, spanning informational and creative content, were considered. BART (which can both generate and understand natural language) was found to perform better than GPT (which can only generate). Closer examination of the effect of inference data characteristics showed that GPT's performance was proportional to the length of the input text. However, even for the longest documents (where GPT performed best), BART still outperformed GPT, suggesting that the greatest influence on downstream performance is not the size of the training data or of the PLM's parameters, but the structural suitability of the PLM for the applied downstream task. The performance of the PLMs was also compared by analyzing part-of-speech (POS) shares. BART's performance was inversely related to the proportion of prefixes, adjectives, adverbs, and verbs, but positively related to that of nouns. This result emphasizes the importance of taking the inference data's characteristics into account when fine-tuning a PLM for its intended downstream task.

Neural Machine Translation Specialized for Coronavirus Disease-19 (COVID-19) (Coronavirus Disease-19(COVID-19)에 특화된 인공신경망 기계번역기)

  • Park, Chan-Jun;Kim, Kyeong-Hee;Park, Ki-Nam;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.9 / pp.7-13 / 2020
  • With the World Health Organization (WHO)'s recent declaration of Coronavirus Disease-19 (COVID-19) as a pandemic, COVID-19 has become a global concern and deaths continue to mount. To overcome this, there is an increasing need for countries to share information and countermeasures related to COVID-19. However, due to linguistic boundaries, smooth exchange and sharing of information have not been achieved. In this paper, we propose a Neural Machine Translation (NMT) model specialized for the COVID-19 domain. Centering on English, Transformer-based bidirectional models were produced for French, Spanish, German, Italian, Russian, and Chinese. Based on BLEU scores, the experimental results showed significantly high performance in all language pairs compared to commercial systems.
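
BLEU, the evaluation metric used here, can be illustrated with a minimal sentence-level variant (unigram and bigram precision with a brevity penalty; reported corpus-level BLEU normally uses up to 4-grams and aggregates over the whole test set):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(candidate, reference):
    """Toy sentence BLEU: geometric mean of clipped 1/2-gram precision * brevity penalty."""
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        cand, ref = ngrams(c, n), ngrams(r, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())  # clipped matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

print(bleu2("the vaccine is effective", "the vaccine is effective"))  # 1.0
print(bleu2("the vaccine works", "the vaccine is effective"))         # partial credit
```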

Deep Learning-based Korean Dialect Machine Translation Research Considering Linguistic Features and Service (언어적 특성과 서비스를 고려한 딥러닝 기반 한국어 방언 기계번역 연구)

  • Lim, Sangbeom;Park, Chanjun;Yang, Yeongwook
    • Journal of the Korea Convergence Society / v.13 no.2 / pp.21-29 / 2022
  • Given the importance of dialect research, preservation, and communication, this paper studies machine translation of Korean dialects for dialect users who may otherwise be marginalized. AIHUB dialect data, distributed by highest-level administrative district, was used. We propose many-to-one dialect machine translation, which improves the efficiency of model deployment and of modeling research, and apply a copy mechanism to improve translation performance. This paper evaluates the performance of the one-to-one and many-to-one models using BLEU scores, and analyzes the many-to-one model's performance on Korean dialects from a linguistic perspective. Applying the proposed methodology improved the performance of one-to-one machine translation, and the many-to-one machine translation achieved significantly high performance.
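
A copy mechanism, as commonly formulated in pointer-generator networks (the paper's exact formulation may differ), mixes the decoder's vocabulary distribution with the attention distribution over source tokens, which suits dialect translation where many words should be copied verbatim. All numbers below are toy values:

```python
# Final output distribution of a copy mechanism at one decoding step:
# P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on source
# positions holding w. Out-of-vocabulary source words can thus still be
# produced by copying.

def final_distribution(p_gen, p_vocab, attention, source_tokens):
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for token, a in zip(source_tokens, attention):
        out[token] = out.get(token, 0.0) + (1.0 - p_gen) * a
    return out

p_vocab = {"rice": 0.6, "food": 0.4}  # decoder's vocabulary distribution
attention = [0.9, 0.1]                # attention over the source tokens
source = ["halmang", "rice"]          # "halmang" is out-of-vocabulary
dist = final_distribution(0.7, p_vocab, attention, source)
print(dist)  # the OOV source word gains probability mass via copying
```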