• Title/Summary/Keyword: Pretrained model

Recent R&D Trends for Pretrained Language Model (딥러닝 사전학습 언어모델 기술 동향)

  • Lim, J.H.;Kim, H.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends / v.35 no.3 / pp.9-19 / 2020
  • Recently, the approach of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has been widely used in language processing. Pretrained language models show higher performance and better generalization than previous methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivation, model, training method, and results of the BERT language model, which had a significant influence on subsequent studies. We then introduce the language model studies that followed BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which performs well on Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language formed by combining content and functional morphemes, unlike English, an inflectional language in which word endings change with grammatical function.
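As context for the pretrain-then-fine-tune workflow the survey describes, here is a minimal sketch using the Hugging Face transformers library. The multilingual checkpoint and the toy review data are assumptions for illustration; KorBERT itself is distributed separately by ETRI and is not loaded here.

```python
# Minimal sketch: fine-tune a pretrained BERT-style encoder on a downstream
# classification task (pretrain-then-fine-tune, as surveyed in the paper).
# "bert-base-multilingual-cased" is a stand-in for KorBERT (an assumption).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

texts = ["이 영화 정말 재미있다", "서비스가 너무 별로였다"]  # toy examples
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps, just to show the loop
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```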

Korean automatic spacing using pretrained transformer encoder and analysis

  • Hwang, Taewook;Jung, Sangkeun;Roh, Yoon-Hyung
    • ETRI Journal / v.43 no.6 / pp.1049-1057 / 2021
  • Automatic spacing in Korean corrects the spacing units in a given input sentence. Demand for automatic spacing has been increasing owing to frequent incorrect spacing in recent media, such as the Internet and mobile networks. Therefore, herein, we propose a transformer encoder that reads a sentence bidirectionally and can be pretrained on an out-of-task corpus. Notably, our model exhibited the highest character accuracy (98.42%) among existing automatic spacing models for Korean. We experimentally validated the effectiveness of bidirectional encoding and pretraining for automatic spacing in Korean. Moreover, we conclude that pretraining matters more than fine-tuning or data size.
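The architecture described above (a bidirectional transformer encoder making a per-character spacing decision) can be sketched as below; the vocabulary size, layer counts, and the omission of the out-of-task pretraining step are assumptions for brevity, not the paper's configuration.

```python
# Sketch: Korean automatic spacing as per-character binary tagging with a
# bidirectional transformer encoder (hyperparameters are illustrative).
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    def __init__(self, vocab_size=12000, d_model=256, nhead=4, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(d_model, 2)  # per character: space after it or not

    def forward(self, char_ids):
        h = self.encoder(self.embed(char_ids))  # reads the sentence bidirectionally
        return self.head(h)                     # (batch, seq_len, 2) logits

model = SpacingTagger()
char_ids = torch.randint(0, 12000, (1, 32))     # toy character-ID sequence
print(model(char_ids).shape)                    # torch.Size([1, 32, 2])
```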

Wild Image Object Detection using a Pretrained Convolutional Neural Network

  • Park, Sejin;Moon, Young Shik
    • IEIE Transactions on Smart Processing and Computing / v.3 no.6 / pp.366-371 / 2014
  • This paper reports a machine learning approach to image object detection. Object detection and localization in wild images, such as those in the STL-10 dataset, are very difficult to implement with traditional computer vision methods. A convolutional neural network is a good approach to such wild image object detection. This paper presents an object detection application using a convolutional neural network with pretrained feature vectors, yielding a simple and well-organized hierarchical object abstraction model.
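The paper's exact setup is not reproduced here, but the general pattern of detection with pretrained CNN features can be sketched as follows; the ResNet-18 backbone, the sliding-window scan, and the confidence threshold are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch: score sliding-window crops with a frozen pretrained
# backbone plus a small classifier head (pretrained-feature detection).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d pretrained feature vector
backbone.eval()

classifier = nn.Linear(512, 10)      # e.g., the 10 STL-10 classes (untrained here)

def score_crop(crop):                # crop: (3, 224, 224) tensor
    with torch.no_grad():
        feat = backbone(crop.unsqueeze(0))
    return classifier(feat).softmax(-1)

image = torch.rand(3, 480, 640)      # toy "wild" image
for y in range(0, 480 - 224 + 1, 112):
    for x in range(0, 640 - 224 + 1, 112):
        conf, cls = score_crop(image[:, y:y + 224, x:x + 224]).max(-1)
        if conf.item() > 0.9:        # keep confident locations as detections
            print(f"class {cls.item()} near ({x}, {y})")
```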

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.33-39 / 2020
  • In the field of natural language processing, research on tasks such as translation, POS tagging, question answering, and sentiment analysis is being carried out globally. For sentiment analysis, pretrained sentence embedding models achieve high classification performance on English single-domain datasets. In this paper, classification performance is compared on a Korean e-commerce dataset spanning diverse domains, using six neural network models: BOW (bag of words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. We confirm that pretrained sentence embedding models outperform word embedding models. In addition, a practical neural network configuration is proposed after comparing classification performance on a dataset with 17 categories. Finally, compressing the sentence embedding model is identified as future work, weighing inference time against model capacity for real-time service.
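The contrast the paper draws between word-embedding baselines and pretrained sentence-embedding models can be illustrated as below; the checkpoint and the two toy reviews are assumptions (KoBERT proper ships with its own tokenizer and is not loaded here).

```python
# Sketch: bag-of-words baseline vs. pretrained sentence embeddings for
# sentiment classification, in the spirit of the paper's comparison.
import torch
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

reviews = ["배송이 빠르고 좋아요", "품질이 기대 이하네요"]  # toy e-commerce reviews
labels = [1, 0]

# 1) Bag-of-words baseline.
bow = CountVectorizer().fit_transform(reviews)
bow_clf = LogisticRegression().fit(bow, labels)

# 2) Pretrained sentence embeddings (mean-pooled encoder states).
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")
with torch.no_grad():
    batch = tok(reviews, padding=True, return_tensors="pt")
    emb = enc(**batch).last_hidden_state.mean(dim=1)
emb_clf = LogisticRegression().fit(emb.numpy(), labels)
```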

Alzheimer's disease recognition from spontaneous speech using large language models

  • Jeong-Uk Bang;Seung-Hoon Han;Byung-Ok Kang
    • ETRI Journal / v.46 no.1 / pp.96-105 / 2024
  • We propose a method to automatically predict Alzheimer's disease from speech data using the ChatGPT large language model. Alzheimer's disease patients often exhibit distinctive characteristics when describing images, such as difficulty recalling words, grammatical errors, repetitive language, and incoherent narratives. For prediction, we first employ a speech recognition system to transcribe participants' speech into text. We then gather opinions by feeding the transcribed text into ChatGPT together with a prompt designed to solicit fluency evaluations. Subsequently, we extract embeddings from the speech, text, and opinions using pretrained models. Finally, we use a classifier consisting of transformer blocks and linear layers to identify participants with this type of dementia. Experiments are conducted on the widely used ADReSSo dataset. The results yield a maximum accuracy of 87.3% when speech, text, and opinions are used in combination. This finding suggests the potential of leveraging evaluation feedback from language models to address challenges in Alzheimer's disease recognition.
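The final stage of the pipeline (fusing speech, text, and opinion embeddings with transformer blocks and linear layers) might look like the sketch below; the embedding dimension, layer counts, and mean pooling are assumptions, and the ASR and ChatGPT steps are omitted.

```python
# Sketch: fuse three modality embeddings (speech, text, LLM opinion) with
# transformer blocks and a linear head to classify AD vs. control.
import torch
import torch.nn as nn

class DementiaClassifier(nn.Module):
    def __init__(self, d_model=768, nhead=8, layers=2):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, speech_emb, text_emb, opinion_emb):
        seq = torch.stack([speech_emb, text_emb, opinion_emb], dim=1)  # length-3 sequence
        return self.head(self.encoder(seq).mean(dim=1))

model = DementiaClassifier()
toy = lambda: torch.randn(4, 768)        # batch of 4 hypothetical embeddings
print(model(toy(), toy(), toy()).shape)  # torch.Size([4, 2])
```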

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.77-89 / 2022
  • Many previous studies have shown that a language model pretrained on a large corpus improves performance on various natural language processing tasks. However, in language environments where resources are scarce, there is a limit to building the large corpus such training requires. Using the cross-lingual post-training (XPT) method, we analyze its efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English (high-resource) pretrained language model and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, XPT outperforms a language model pretrained directly on the target language in relation extraction. In addition, we analyze the characteristics of the Korean and multilingual language models released by domestic and foreign researchers and companies.
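The core XPT idea (freeze a high-resource pretrained encoder, learn new target-language embeddings plus an adaptation layer) can be sketched as below; the checkpoint, vocabulary size, and the linear form of the adaptation layer are assumptions.

```python
# Sketch: cross-lingual post-training (XPT)-style reuse of an English PLM.
# Only the Korean embeddings and the adaptation layer would be trained.
import torch
import torch.nn as nn
from transformers import AutoModel

src_model = AutoModel.from_pretrained("bert-base-cased")  # high-resource PLM
for p in src_model.parameters():
    p.requires_grad = False                               # reuse, don't retrain

ko_vocab = 32000
d_model = src_model.config.hidden_size
ko_embed = nn.Embedding(ko_vocab, d_model)                # new target-language embeddings
adapt = nn.Linear(d_model, d_model)                       # adaptation layer

def encode(korean_token_ids):
    inputs_embeds = adapt(ko_embed(korean_token_ids))     # map into the PLM's input space
    return src_model(inputs_embeds=inputs_embeds).last_hidden_state

print(encode(torch.randint(0, ko_vocab, (2, 16))).shape)  # torch.Size([2, 16, 768])
```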

Adversarial Wall: Physical Adversarial Attack on Cityscape Pretrained Segmentation Model (도시 환경에서의 이미지 분할 모델 대상 적대적 물리 공격 기법)

  • Suryanto, Naufal;Larasati, Harashta Tatimma;Kim, Yongsu;Kim, Howon
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.402-404 / 2022
  • Recent research has shown that deep learning models are vulnerable to adversarial attacks not only in the digital domain but also in the physical domain. This is critical for applications with high safety requirements, such as self-driving cars. In this study, we propose a physical adversarial attack technique for one of the common tasks in self-driving cars, namely segmentation of the urban scene. Our method creates a texture on a wall so that the wall is misclassified as a road. A demonstration of the technique on a state-of-the-art Cityscapes-pretrained model shows a fairly high success rate, which should raise awareness of further potential attacks on self-driving cars.
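Conceptually, the attack optimizes the wall texture so that masked pixels are segmented as road. The sketch below uses a generic torchvision segmentation model (not a Cityscapes one), a hypothetical wall mask, and an assumed target-class index, and it omits the printability and viewpoint/lighting robustness terms a physical attack needs.

```python
# Conceptual sketch: optimize a texture so a segmentation model labels the
# masked (wall) region as the target class ("road" by assumption).
import torch
import torch.nn.functional as F
from torchvision import models

seg = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

image = torch.rand(1, 3, 256, 512)         # toy street scene
mask = torch.zeros(1, 1, 256, 512)
mask[..., 100:200, 200:350] = 1.0          # hypothetical wall region
texture = torch.rand_like(image, requires_grad=True)
optimizer = torch.optim.Adam([texture], lr=0.01)
ROAD = 0                                   # assumed target class index

for step in range(100):
    adv = image * (1 - mask) + texture.clamp(0, 1) * mask
    logits = seg(adv)["out"]               # (1, num_classes, H, W)
    target = torch.full(logits.shape[-2:], ROAD).unsqueeze(0)
    loss = (F.cross_entropy(logits, target, reduction="none") * mask[:, 0]).mean()
    optimizer.zero_grad()
    loss.backward()                        # push masked pixels toward "road"
    optimizer.step()
```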

Impairments of Learning and Memory Following Intracerebroventricular Administration of AF64A in Rats

  • Lim, Dong-Koo;Oh, Youm-Hee;Kim, Han-Soo
    • Archives of Pharmacal Research / v.24 no.3 / pp.234-239 / 2001
  • Three types of learning and memory tests (Morris water maze, active avoidance, and passive avoidance) were performed in rats following intracerebroventricular infusion of ethylcholine aziridinium (AF64A). In the Morris water maze, AF64A-treated rats showed delayed latencies to find the platform from the 6th day after the infusion. In pretrained rats, AF64A caused a significant delay in latency on the 7th day but not the 8th day. In active avoidance with pretrained rats, the escape latency was significantly delayed by AF64A treatment, and the percentage of avoidance in AF64A-treated rats increased less than in controls. In particular, the percentage of no response in the AF64A-treated rats was markedly increased in the first half of the trials. In passive avoidance, AF64A-treated rats showed a shortened latency 1.5 h after the electric shock, but not at 24 h. AF64A also caused pretrained rats to show a shortened latency on the 7th day after the infusion, but not the 8th day. These results indicate that AF64A impairs learning and memory, but that the memory disturbance induced by AF64A may recover rapidly after the first retraining. Furthermore, these results suggest that AF64A may be a useful agent for animal models of learning and spatial cognition.

Recent Automatic Post Editing Research (최신 기계번역 사후 교정 연구)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of Digital Convergence / v.19 no.7 / pp.199-208 / 2021
  • Automatic post-editing (APE) is the task of automatically correcting errors in machine-translated sentences. The goal of APE is to build error-correcting models that improve translation quality regardless of the underlying translation system. To train these models, triplets of the source sentence, its machine translation, and a post-edit manually produced by a human translator are used. In recent APE research in particular, multilingual pretrained language models are adopted before training on APE data. This study surveys the multilingual pretrained language models adopted in the latest APE research and the specific way each study applies them. Furthermore, based on current research trends, we propose future research directions that utilize translation models or the mBART model.
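The usual PLM-based APE setup concatenates the source sentence and the machine translation as encoder input and trains the model to generate the post-edit. Below is a hedged sketch with mBART; the separator convention and the toy triplet are assumptions, not a setup from the surveyed papers.

```python
# Sketch: one teacher-forced APE training step with mBART.
# Input: source + machine translation; label: the human post-edit.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

name = "facebook/mbart-large-50"
tok = MBart50TokenizerFast.from_pretrained(name, src_lang="en_XX", tgt_lang="ko_KR")
model = MBartForConditionalGeneration.from_pretrained(name)

src = "The model corrects translation errors."   # toy triplet (assumption)
mt = "모델이 번역 오류를 고친다 ."
pe = "모델이 번역 오류를 교정한다."

batch = tok(src + " </s> " + mt, text_target=pe, return_tensors="pt")
loss = model(**batch).loss        # cross-entropy against the post-edit
loss.backward()
print(float(loss))
```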

Contactless Palmprint Identification Using the Pretrained VGGNet Model (사전 학습된 VGGNet 모델을 이용한 비접촉 장문 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1439-1447 / 2018
  • Contactless palm image acquisition offers advantages in user convenience and hygiene, but such images generally display more variation than those acquired with a contact plate or pegs. Therefore, it is necessary to develop a palmprint identification method that is robust to affine variations. This study proposes a deep learning approach that can effectively identify contactless palmprints. Because it is very difficult to collect enough palmprint images to train a deep convolutional neural network (DCNN) from scratch, we adopted an approach based on a pretrained DCNN. We designed two new DCNNs based on VGGNet: one combines VGGNet with an SVM, and the other adds a shallow network at the middle level of VGGNet. Experimental results on two public palmprint databases show that the proposed method performs well not only on contact-based palmprints but also on contactless palmprints.
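The first of the two designs (VGGNet features feeding an SVM) can be sketched as below; the feature-extraction cut point, the toy ROI crops, and the subject labels are assumptions for illustration.

```python
# Sketch: pretrained VGG-16 as a fixed feature extractor, SVM on top.
import torch
from sklearn.svm import SVC
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
vgg.classifier = vgg.classifier[:5]       # stop after fc2: 4096-d features

def palm_features(images):                # images: (N, 3, 224, 224) ROI crops
    with torch.no_grad():
        return vgg(images).numpy()

train_imgs = torch.rand(8, 3, 224, 224)   # toy data: 4 subjects x 2 samples each
train_ids = [0, 0, 1, 1, 2, 2, 3, 3]
svm = SVC(kernel="linear").fit(palm_features(train_imgs), train_ids)

query = torch.rand(1, 3, 224, 224)
print("predicted subject:", svm.predict(palm_features(query))[0])
```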