• Title/Abstract/Keywords: Pretrained model

딥러닝 사전학습 언어모델 기술 동향 (Recent R&D Trends for Pretrained Language Model)

  • 임준호;김현기;김영길
    • 전자통신동향분석, Vol. 35, No. 3, pp. 9-19, 2020
  • Recently, the technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has come into wide use in language processing. Pretrained language models show higher performance than existing methods and generalize well. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivation, model, learning method, and results of the BERT language model, which had a significant influence on subsequent studies. We then introduce post-BERT language model studies, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce KorBERT, a pretrained language model that performs well on Korean. We also introduce techniques for applying pretrained language models to Korean, an agglutinative language built from combinations of content and functional morphemes, unlike English, an inflectional language whose word endings change with usage.
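
The pretrain-then-fine-tune pattern the survey describes can be sketched in a few lines. This is a hedged illustration, not ETRI's KorBERT pipeline: the checkpoint name and the binary task head are illustrative assumptions.

```python
# Hedged sketch of pretrain-then-fine-tune: load pretrained encoder weights,
# attach a fresh classification head, and train it on the downstream task.
# "bert-base-multilingual-cased" is an illustrative stand-in for KorBERT.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

inputs = tokenizer("사전학습 언어모델을 응용 태스크에 미세조정한다.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2]); this head is then fine-tuned
```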

Korean automatic spacing using pretrained transformer encoder and analysis

  • Hwang, Taewook;Jung, Sangkeun;Roh, Yoon-Hyung
    • ETRI Journal, Vol. 43, No. 6, pp. 1049-1057, 2021
  • Automatic spacing in Korean corrects the spacing units of a given input sentence. Demand for automatic spacing has been increasing owing to frequent spacing errors in recent media such as the Internet and mobile networks. Therefore, herein, we propose a transformer encoder that reads a sentence bidirectionally and can be pretrained using an out-of-task corpus. Notably, our model exhibited the highest character accuracy (98.42%) among existing automatic spacing models for Korean. We experimentally validated the effectiveness of bidirectional encoding and pretraining for automatic spacing in Korean, and we conclude that pretraining matters more than fine-tuning and data size.
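
The framing above, spacing as per-character tagging with a bidirectional encoder, can be sketched as follows. Vocabulary size and hyperparameters are illustrative, not the authors' configuration.

```python
# Sketch: predict, for each character, whether a space follows it, using a
# bidirectional (self-attention) transformer encoder. Sizes are illustrative.
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    def __init__(self, vocab_size=2000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)   # space / no-space after each char

    def forward(self, char_ids):                 # (batch, seq_len)
        h = self.encoder(self.embed(char_ids))   # every char sees both directions
        return self.head(h)                      # (batch, seq_len, 2)

logits = SpacingTagger()(torch.randint(0, 2000, (1, 16)))
print(logits.shape)  # torch.Size([1, 16, 2])
```

The same encoder could first be pretrained on an out-of-task corpus (e.g., with masked-character prediction) before the tagging head is trained.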

Wild Image Object Detection using a Pretrained Convolutional Neural Network

  • Park, Sejin;Moon, Young Shik
    • IEIE Transactions on Smart Processing and Computing, Vol. 3, No. 6, pp. 366-371, 2014
  • This paper reports a machine learning approach to image object detection. Detecting and localizing objects in wild images, such as those in the STL-10 dataset, is very difficult with traditional computer vision methods. A convolutional neural network is a good fit for such wild-image object detection. This paper presents an object detection application that uses a convolutional neural network with pretrained feature vectors, yielding a simple and well-organized hierarchical object abstraction model.
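
A hedged sketch of the pretrained-feature idea: embed candidate image windows with a pretrained CNN and classify the resulting vectors. The ResNet-18 backbone and the window interface are illustrative stand-ins, not the paper's exact network.

```python
# Extract a fixed pretrained feature vector per candidate window; a separate
# classifier (e.g., an SVM or logistic regression) then scores each window.
import torch
from torchvision import models, transforms

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()     # drop the ImageNet head, keep 512-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def window_feature(crop):             # crop: a PIL image of one candidate window
    with torch.no_grad():
        return backbone(preprocess(crop).unsqueeze(0)).squeeze(0)  # (512,)
```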

자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축 (E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model)

  • 최준영;임희석
    • 한국융합학회논문지, Vol. 11, No. 11, pp. 33-39, 2020
  • In natural language processing, research is active across areas such as translation, morphological tagging, question answering, and sentiment analysis. In sentiment analysis, transfer learning from pretrained models has shown high classification accuracy on single-domain English datasets. In this study, using Korean e-commerce product review data that spans diverse domains, we implement word-frequency-based BOW (Bag of Words), LSTM [1], Attention, CNN [2], ELMo [3], and KoBERT [4] models and compare their classification performance. We confirm that transfer-learning models, which embed the same word differently depending on context, achieve higher accuracy than models that embed it identically, and by analyzing model performance across 17 product categories we propose a sentiment analysis model configuration applicable to the actual e-commerce industry. We also compare inference speed against model size and suggest a research direction toward models capable of real-time service.
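
The key finding, context-dependent embeddings beating static ones, is easy to observe directly. A hedged check using a multilingual BERT as an illustrative stand-in for KoBERT:

```python
# The same word gets different vectors in different review contexts; a static
# embedding would return identical vectors, losing sentiment-relevant cues.
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-multilingual-cased"   # illustrative stand-in for KoBERT
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name).eval()

def word_vec(sentence, word):
    enc = tok(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    start = sentence.index(word)
    # first subword token whose character span covers the word's start
    idx = next(i for i, (s, e) in enumerate(offsets) if s <= start < e)
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

a = word_vec("배송이 정말 빨라서 좋아요", "배송")
b = word_vec("배송이 늦어서 너무 화가 나요", "배송")
print(torch.cosine_similarity(a, b, dim=0))  # < 1.0: context shifts the vector
```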

Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구 (Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT))

  • 손수현;박찬준;이정섭;심미단;이찬희;박기남;임희석
    • 한국융합학회논문지, Vol. 13, No. 3, pp. 77-89, 2022
  • Many studies have shown that pretrained language models trained on large corpora improve performance on a variety of natural language processing tasks. However, in low-resource language settings, building a corpus large enough to pretrain such a model is difficult. Using the Cross-Lingual Post-Training (XPT) methodology, which can overcome this limitation, we analyze its effectiveness for Korean, a comparatively low-resource language. XPT selectively reuses the parameters of a pretrained language model for resource-rich English as needed, and employs adaptation layers to learn the relationship between the two languages. We show that, on a relation extraction task, this approach outperforms a pretrained model of the source language even with only a small amount of target-language data. In addition, we survey the Korean pretrained language models and the multilingual pretrained models covering Korean released by domestic and international academia and industry, and analyze the characteristics of each model.
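
A conceptual sketch of the XPT recipe described above: freeze the body of a resource-rich English pretrained model, swap in a new target-language embedding, and train small adaptation layers between the two representation spaces. The checkpoint, vocabulary size, and adapter shape are illustrative assumptions.

```python
# Selective reuse: the English transformer body is frozen; only the Korean
# embedding and the adaptation layers are trained on target-language data.
import torch
import torch.nn as nn
from transformers import AutoModel

src = AutoModel.from_pretrained("roberta-base")   # resource-rich source PLM
for p in src.parameters():
    p.requires_grad = False

ko_vocab, d = 32000, src.config.hidden_size
ko_embed = nn.Embedding(ko_vocab, d)              # new target-language embedding
adapter_in, adapter_out = nn.Linear(d, d), nn.Linear(d, d)

def xpt_forward(ko_ids, attention_mask=None):
    h = adapter_in(ko_embed(ko_ids))              # map Korean tokens into the
    h = src(inputs_embeds=h,                      # frozen English encoder's space
            attention_mask=attention_mask).last_hidden_state
    return adapter_out(h)

print(xpt_forward(torch.randint(0, 32000, (1, 8))).shape)  # torch.Size([1, 8, 768])
```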

Alzheimer's disease recognition from spontaneous speech using large language models

  • Jeong-Uk Bang;Seung-Hoon Han;Byung-Ok Kang
    • ETRI Journal, Vol. 46, No. 1, pp. 96-105, 2024
  • We propose a method to automatically predict Alzheimer's disease from speech data using the ChatGPT large language model. Alzheimer's disease patients often exhibit distinctive characteristics when describing images, such as difficulty recalling words, grammatical errors, repetitive language, and incoherent narratives. For prediction, we first employ a speech recognition system to transcribe participants' speech into text. We then gather opinions by inputting the transcribed text into ChatGPT along with a prompt designed to solicit fluency evaluations. Subsequently, we extract embeddings from the speech, text, and opinions using pretrained models. Finally, we use a classifier consisting of transformer blocks and linear layers to identify participants with this type of dementia. Experiments are conducted on the widely used ADReSSo dataset. The results yield a maximum accuracy of 87.3% when speech, text, and opinions are used in conjunction. This finding suggests the potential of leveraging evaluation feedback from language models to address challenges in Alzheimer's disease recognition.
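
The final classification stage, transformer blocks plus a linear layer over the three modality embeddings, can be sketched as below. The embedding dimension and fusion-by-stacking are illustrative assumptions; the speech, text, and opinion vectors would come from the respective pretrained models.

```python
# Fuse speech/text/opinion embeddings as a 3-token sequence, encode with
# transformer blocks, and classify AD vs. control with a linear layer.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, d=768):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d, 2)

    def forward(self, speech_emb, text_emb, opinion_emb):
        x = torch.stack([speech_emb, text_emb, opinion_emb], dim=1)  # (B, 3, d)
        return self.out(self.encoder(x).mean(dim=1))                 # (B, 2)

clf = FusionClassifier()
e = lambda: torch.randn(4, 768)   # stand-ins for the pretrained embeddings
print(clf(e(), e(), e()).shape)   # torch.Size([4, 2])
```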

도시 환경에서의 이미지 분할 모델 대상 적대적 물리 공격 기법 (Adversarial Wall: Physical Adversarial Attack on Cityscape Pretrained Segmentation Model)

  • 수랸토 나우팔;라라사티 하라스타 타티마;김용수;김호원
    • 한국정보처리학회 학술대회논문집, 2022년도 추계학술발표대회, pp. 402-404, 2022
  • Recent research has shown that deep learning models are vulnerable to adversarial attacks not only in the digital domain but also in the physical domain. This is especially critical for applications with high safety requirements, such as self-driving cars. In this study, we propose a physical adversarial attack technique for one of the common tasks in self-driving cars, namely urban scene segmentation. Our method creates a texture on a wall so that the wall is misclassified as road. A demonstration of the technique on a state-of-the-art Cityscapes-pretrained model shows a fairly high success rate, which should raise awareness of further potential attacks on self-driving cars.
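
A toy sketch of the underlying optimization: adjust a texture patch so that a frozen segmentation model assigns the "road" class to the patched pixels. The model, target class index, and patch placement are illustrative assumptions; the paper's physical attack additionally handles printing and viewpoint robustness.

```python
# Optimize a patch so the segmentation model labels its pixels as "road".
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import (deeplabv3_resnet50,
                                             DeepLabV3_ResNet50_Weights)

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()
ROAD = 0                                    # illustrative target class index
scene = torch.rand(1, 3, 224, 224)          # stand-in urban scene
texture = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.01)

for _ in range(10):                         # real attacks run far more steps
    x = scene.clone()
    x[:, :, 80:144, 80:144] = texture.clamp(0, 1)   # paste patch on the "wall"
    logits = model(x)["out"]                # (1, num_classes, H, W)
    loss = F.cross_entropy(logits[:, :, 80:144, 80:144],
                           torch.full((1, 64, 64), ROAD))  # pull toward "road"
    opt.zero_grad(); loss.backward(); opt.step()
```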

Impairments of Learning and Memory Following Intracerebroventricular Administration of AF64A in Rats

  • Lim, Dong-Koo;Oh, Youm-Hee;Kim, Han-Soo
    • Archives of Pharmacal Research, Vol. 24, No. 3, pp. 234-239, 2001
  • Three types of learning and memory tests (Morris water maze, active avoidance, and passive avoidance) were performed in rats following intracerebroventricular infusion of ethylcholine aziridinium (AF64A). In the Morris water maze, AF64A-treated rats showed delayed latencies to find the platform from the 6th day after infusion. In pretrained rats, AF64A caused a significant delay in latency on the 7th day but not the 8th day. In the active avoidance test with pretrained rats, the escape latency was significantly delayed by AF64A treatment. The percentage of avoidance in AF64A-treated rats increased less than in controls. In particular, the percentage of no responses in AF64A-treated rats was markedly increased in the first half of the trials. In the passive avoidance test, AF64A-treated rats showed a shortened latency 1.5 h after the electric shock, but not at 24 h. AF64A also caused pretrained rats to show a shortened latency on the 7th day after infusion, but not the 8th day. These results indicate that AF64A impairs learning and memory, but that the memory disturbance recovers rapidly after the first retraining. Furthermore, they suggest that AF64A may be a useful agent for animal models of learning and spatial cognition.

최신 기계번역 사후 교정 연구 (Recent Automatic Post Editing Research)

  • 문현석;박찬준;어수경;서재형;임희석
    • 디지털융복합연구, Vol. 19, No. 7, pp. 199-208, 2021
  • Automatic post-editing (APE) is a research field proposed to automatically correct errors contained in machine-translated sentences. Its goal is to build an error-correction model that raises translation quality independently of the underlying translation system; training uses the source sentence, the machine translation, and a version of that translation corrected by a human. Notably, recent APE research applies pretrained multilingual language models before training on the post-editing data. This paper therefore introduces the multilingual pretrained language models used in recent studies, along with how each study applies its chosen model. Building on this, we suggest future research directions that exploit translation models and the mBART model.
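
A hedged sketch of the pattern these studies share: concatenate the source sentence and the raw machine translation, feed them to a pretrained multilingual sequence-to-sequence model (mBART here), and fine-tune it to generate the corrected translation. The checkpoint and the separator convention are assumptions for illustration.

```python
# APE as sequence-to-sequence: (source + raw MT) in, post-edited translation
# out. Before APE fine-tuning, the base model will just produce fluent text.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

name = "facebook/mbart-large-50"
tok = MBart50TokenizerFast.from_pretrained(name, src_lang="en_XX", tgt_lang="ko_KR")
model = MBartForConditionalGeneration.from_pretrained(name)

src = "The model corrects translation errors."
mt = "모델이 번역 오류를 수정한다."                      # raw MT output
inputs = tok(src + " </s> " + mt, return_tensors="pt")   # assumed separator
ids = model.generate(**inputs, max_length=64,
                     forced_bos_token_id=tok.lang_code_to_id["ko_KR"])
print(tok.batch_decode(ids, skip_special_tokens=True))
```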

사전 학습된 VGGNet 모델을 이용한 비접촉 장문 인식 (Contactless Palmprint Identification Using the Pretrained VGGNet Model)

  • 김민기
    • 한국멀티미디어학회논문지, Vol. 21, No. 12, pp. 1439-1447, 2018
    • 2018
  • Palm image acquisition without contact has advantages in user convenience and hygienic issues, but such images generally display more image variations than those acquired employing a contact plate or pegs. Therefore, it is necessary to develop a palmprint identification method which is robust to affine variations. This study proposes a deep learning approach which can effectively identify contactless palmprints. In general, it is very difficult to collect enough volume of palmprint images for training a deep convolutional neural network(DCNN). So we adopted an approach to use a pretrained DCNN. We designed two new DCNNs based on the VGGNet. One combines the VGGNet with SVM. The other add a shallow network on the middle-level of the VGGNet. The experimental results with two public palmprint databases show that the proposed method performs well not only contact-based palmprints but also contactless palmprints.