• Title/Abstract/Keywords: pretrained

Search results: 88 items

딥러닝 사전학습 언어모델 기술 동향 (Recent R&D Trends for Pretrained Language Model)

  • 임준호;김현기;김영길
    • Electronics and Telecommunications Trends
    • /
    • Vol. 35, No. 3
    • /
    • pp.9-19
    • /
    • 2020
  • Recently, a technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has been widely used in language processing. Pretrained language models show higher performance and better generalization than previous methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivation, model, learning methods, and results of the BERT language model, which had a significant influence on subsequent studies. We then introduce language model studies after BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which shows satisfactory performance on Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language built from combinations of content and functional morphemes, unlike English, an inflectional language whose word endings change with usage.
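The pretrain-then-fine-tune pattern this survey covers is straightforward to sketch. Below is a minimal example using the Hugging Face transformers library; the checkpoint name, binary label set, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint choice, used only to make the sketch runnable.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# One fine-tuning step on a toy batch (a binary task is assumed for illustration).
batch = tokenizer(["예시 문장입니다."], return_tensors="pt", padding=True)
labels = torch.tensor([1])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy on the classification head
loss.backward()
optimizer.step()
```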

Korean automatic spacing using pretrained transformer encoder and analysis

  • Hwang, Taewook;Jung, Sangkeun;Roh, Yoon-Hyung
    • ETRI Journal
    • /
    • Vol. 43, No. 6
    • /
    • pp.1049-1057
    • /
    • 2021
  • Automatic spacing in Korean is used to correct spacing units in a given input sentence. The demand for automatic spacing has been increasing owing to frequent incorrect spacing in recent media, such as the Internet and mobile networks. Therefore, herein, we propose a transformer encoder that reads a sentence bidirectionally and can be pretrained using an out-of-task corpus. Notably, our model exhibited the highest character accuracy (98.42%) among the existing automatic spacing models for Korean. We experimentally validated the effectiveness of bidirectional encoding and pretraining for automatic spacing in Korean. Moreover, we conclude that pretraining is more important than fine-tuning and data size.
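As a rough illustration of the approach, the sketch below frames Korean spacing as per-character binary tagging with a bidirectional transformer encoder in PyTorch; the vocabulary size, model dimensions, and tagging scheme are assumptions, and the paper's pretraining stage is not reproduced.

```python
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    """Tags each character with 1 ("insert space after") or 0 ("no space")."""
    def __init__(self, vocab_size=2000, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # No causal mask, so every position attends in both directions.
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, char_ids):
        return self.head(self.encoder(self.embed(char_ids)))  # (batch, seq, 2)

model = SpacingTagger()
logits = model(torch.randint(0, 2000, (1, 16)))  # one sentence of 16 characters
```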

Wild Image Object Detection using a Pretrained Convolutional Neural Network

  • Park, Sejin;Moon, Young Shik
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 3, No. 6
    • /
    • pp.366-371
    • /
    • 2014
  • This paper reports a machine learning approach for image object detection. Object detection and localization in wild images, such as the STL-10 image dataset, is very difficult to implement using traditional computer vision methods, and a convolutional neural network is a good approach for such wild image object detection. This paper presents an object detection application using a convolutional neural network with pretrained feature vectors, yielding a simple and well-organized hierarchical object abstraction model.
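The general recipe the paper relies on, using a pretrained network as a feature extractor for detection, can be sketched as follows; the ResNet-18 backbone and the crop-scoring framing are assumptions for illustration, not the paper's exact pipeline.

```python
import torch
from torchvision import models

# Load an ImageNet-pretrained backbone and drop its classification layer.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    crop = torch.rand(1, 3, 224, 224)           # stand-in for one image crop
    feats = feature_extractor(crop).flatten(1)  # 512-d pretrained feature vector

# A lightweight classifier then scores each crop, and crops scoring above a
# threshold are reported as detections with their locations.
```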

딥러닝 모델을 이용한 휴대용 무선 초음파 영상에서의 경동맥 내중막 두께 자동 분할 알고리즘 개발 (Development of Automatic Segmentation Algorithm of Intima-media Thickness of Carotid Artery in Portable Ultrasound Image Based on Deep Learning)

  • 최자영;김영재;유경민;장영우;정욱진;김광기
    • Journal of Biomedical Engineering Research
    • /
    • Vol. 42, No. 3
    • /
    • pp.100-106
    • /
    • 2021
  • Measuring intima-media thickness (IMT) from ultrasound images can help in the early detection of coronary artery disease, so numerous machine learning studies have been conducted to measure IMT. However, most of these studies require several preprocessing steps to extract the boundary, and some require manual intervention, so they are not suitable for on-site use in urgent situations. In this paper, we propose using the deep learning networks U-Net, Attention U-Net, and Pretrained U-Net to automatically segment the intima-media complex. This study also applied the HE, HS, and CLAHE preprocessing techniques to images from a wireless portable ultrasound diagnostic device. The HE-applied models achieved an average Dice coefficient of 71% and the CLAHE-applied models 70%, while the HS-applied models improved to 72%. Among them, Pretrained U-Net showed the highest performance, with an average of 74%. When compared with the mean IMT measured by conventional wired ultrasound equipment, the HS-applied Pretrained U-Net showed the highest correlation coefficient.
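For reference, a minimal sketch of the CLAHE preprocessing step and the Dice metric reported above, using OpenCV and NumPy; the clip limit and tile size are common defaults, not the paper's settings.

```python
import cv2
import numpy as np

def preprocess_clahe(gray_image: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_image)

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Overlap between a predicted mask and a ground-truth mask, in [0, 1]."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```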

Impairments of Learning and Memory Following Intracerebroventricular Administration of AF64A in Rats

  • Lim, Dong-Koo;Oh, Youm-Hee;Kim, Han-Soo
    • Archives of Pharmacal Research
    • /
    • Vol. 24, No. 3
    • /
    • pp.234-239
    • /
    • 2001
  • Three types of learning and memory tests (Morris water maze, active avoidance, and passive avoidance) were performed in rats following intracerebroventricular infusion of ethylcholine aziridinium (AF64A). In the Morris water maze, AF64A-treated rats showed delayed latencies to find the platform from the 6th day after the infusion. In pretrained rats, AF64A caused a significant delay in latency on the 7th day but not the 8th day. In active avoidance with pretrained rats, the escape latency was significantly delayed by AF64A treatment, and the percentage of avoidance in AF64A-treated rats increased less than in controls. In particular, the percentage of no responses in AF64A-treated rats was markedly increased in the first half of the trials. In passive avoidance, AF64A-treated rats showed a shortened latency 1.5 h after the electric shock, but not at 24 h; AF64A also caused pretrained rats to show a shortened latency on the 7th day after the infusion, but not the 8th day. These results indicate that AF64A might impair learning and memory, but that the memory disturbed by AF64A may recover rapidly after the first retraining. Furthermore, these results suggest that AF64A may be a useful agent for animal models of spatial-cognition learning.

사전 학습된 VGGNet 모델을 이용한 비접촉 장문 인식 (Contactless Palmprint Identification Using the Pretrained VGGNet Model)

  • 김민기
    • Journal of Korea Multimedia Society
    • /
    • Vol. 21, No. 12
    • /
    • pp.1439-1447
    • /
    • 2018
  • Palm image acquisition without contact has advantages in user convenience and hygiene, but such images generally display more variation than those acquired using a contact plate or pegs. It is therefore necessary to develop a palmprint identification method that is robust to affine variations. This study proposes a deep learning approach that can effectively identify contactless palmprints. Because it is very difficult to collect a sufficient volume of palmprint images for training a deep convolutional neural network (DCNN), we adopted an approach that uses a pretrained DCNN. We designed two new DCNNs based on the VGGNet: one combines the VGGNet with an SVM, and the other adds a shallow network at the middle level of the VGGNet. Experimental results on two public palmprint databases show that the proposed method performs well not only on contact-based palmprints but also on contactless ones.
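The first of the two designs, VGGNet features fed to an SVM, can be sketched as below; the layer at which features are taken and the SVM settings are assumptions for illustration.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.eval()  # inference mode; the pretrained weights are not updated

def extract(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        h = vgg.features(x)              # convolutional feature maps
        h = vgg.avgpool(h).flatten(1)    # (batch, 25088)
        return vgg.classifier[:4](h)     # stop at an intermediate fc layer (4096-d)

# Hypothetical training data: one feature vector per palm image, labeled by subject.
X = extract(torch.rand(8, 3, 224, 224)).numpy()
y = [0, 0, 1, 1, 2, 2, 3, 3]
clf = SVC(kernel="linear").fit(X, y)
```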

사전학습 모델을 이용한 음식업종 고객 발화 의도 분류 분석 (Analysis of Customer Utterance Intent Classification in the Food Industry Using Pretrained Models)

  • 김준회;임희석
    • Korean Society of Computer Information: Conference Proceedings
    • /
    • Proceedings of the 66th Summer Conference of the Korean Society of Computer Information (2022), Vol. 30, No. 2
    • /
    • pp.43-44
    • /
    • 2022
  • Existing natural language processing models could not handle context-aware word embeddings, whereas recent BERT-based pretrained models can embed at the sentence level and gain dramatic improvements in learning efficiency from pretraining. In this paper, we use pretrained language models to classify the intents of customer utterances arising in food businesses such as restaurants and delivery shops, and we compare the models' performance to propose the best one. We found that the larger a pretrained model's Korean corpus and vocabulary size, the better it predicted customer utterance intent. We divided speaker intent broadly into inquiries and requests; when we removed the question mark, the main surface difference between the two, and compared performance again, the models predicted speaker intent better when the question mark was present. If a system built on these findings is used to predict speaker intent in the food industry and applied to chatbot systems, we expect services matching the speaker's intent to be provided accurately and in a timely manner.
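As an illustration of the setup described above, the sketch below classifies a customer utterance as an inquiry or a request with a pretrained Korean encoder; the checkpoint name is one publicly available option and the classification head here is untrained, so this shows the interface rather than the paper's results.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "klue/bert-base"  # an assumed, publicly available Korean checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

model.eval()
inputs = tokenizer("지금 배달 되나요?", return_tensors="pt")  # "Can you deliver now?"
with torch.no_grad():
    logits = model(**inputs).logits
intent = ["inquiry", "request"][logits.argmax(-1).item()]
```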

도시 환경에서의 이미지 분할 모델 대상 적대적 물리 공격 기법 (Adversarial Wall: Physical Adversarial Attack on Cityscape Pretrained Segmentation Model)

  • 수랸토 나우팔;라라사티 하라스타 타티마;김용수;김호원
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Proceedings of the 2022 Fall Conference of the Korea Information Processing Society
    • /
    • pp.402-404
    • /
    • 2022
  • Recent research has shown that deep learning models are vulnerable to adversarial attacks not only in the digital domain but also in the physical domain. This is critical for applications with very high safety requirements, such as self-driving cars. In this study, we propose a physical adversarial attack technique for one of the common tasks in self-driving, namely segmentation of the urban scene. Our method can create a texture on a wall so that the wall is misclassified as a road. A demonstration of the technique on a state-of-the-art model pretrained on a cityscape dataset shows a fairly high success rate, which should raise awareness of further potential attacks on self-driving cars.
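A conceptual sketch of one optimization step for such an attack appears below; seg_model, the mask and class indices, and the plain gradient update are hypothetical stand-ins, and a real physical attack adds printability and multi-viewpoint robustness constraints not shown here.

```python
import torch
import torch.nn.functional as F

def attack_step(seg_model, image, wall_mask, road_class, texture, lr=0.01):
    """One gradient step pushing wall pixels toward the 'road' class."""
    texture = texture.detach().requires_grad_(True)
    adv = image * (1 - wall_mask) + texture * wall_mask  # paste texture onto the wall
    logits = seg_model(adv)                              # (1, num_classes, H, W)
    target = torch.full(logits.shape[-2:], road_class).unsqueeze(0)
    # Penalize only the wall region so exactly those pixels flip to "road".
    loss = (F.cross_entropy(logits, target, reduction="none") * wall_mask[0, 0]).mean()
    loss.backward()
    return (texture - lr * texture.grad).clamp(0, 1)
```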

사전학습모델을 활용한 수학학습 도구 자동 생성 시스템 (Automatic Generation System of Mathematical Learning Tools Using Pretrained Models)

  • 노명성
    • 한국컴퓨터정보학회:학술대회논문집
    • /
    • Proceedings of the 68th Summer Conference of the Korean Society of Computer Information (2023), Vol. 31, No. 2
    • /
    • pp.713-714
    • /
    • 2023
  • This paper proposes a system that automatically generates mathematical learning tools using pretrained models. The system uses a pretrained model to automatically generate learning tools diversified by curriculum, unit, and problem type, and fine-tunes a pretrained model on a self-built dataset to classify generated tools as appropriate or inappropriate for students, raising the quality of the tools. The system lays a foundation for supplying students with a large volume of high-quality mathematical learning tools, and it opens the possibility of future convergence research with AI textbooks.
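The generate-then-filter pattern described above might look roughly like the sketch below; both checkpoint names and the "appropriate" label are placeholders, not the system's actual models.

```python
from transformers import pipeline

# Hypothetical components: a Korean text generator and a fine-tuned quality filter.
generator = pipeline("text-generation", model="skt/kogpt2-base-v2")
quality_filter = pipeline("text-classification", model="path/to/finetuned-filter")

candidates = generator("일차방정식 연습 문제: ",  # "Linear equation practice problem: "
                       max_new_tokens=64, do_sample=True, num_return_sequences=3)
kept = [c["generated_text"] for c in candidates
        if quality_filter(c["generated_text"])[0]["label"] == "appropriate"]
```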

Integration of Multi-scale CAM and Attention for Weakly Supervised Defects Localization on Surface Defective Apple

  • Nguyen Bui Ngoc Han;Ju Hwan Lee;Jin Young Kim
    • Smart Media Journal
    • /
    • Vol. 12, No. 9
    • /
    • pp.45-59
    • /
    • 2023
  • Weakly supervised object localization (WSOL) is the task of localizing an object in an image using only image-level labels. Previous studies have followed the conventional class activation mapping (CAM) pipeline; however, we show that the current CAM approach suffers from problems that prevent the original CAM from capturing complete defect features. This work uses a convolutional neural network (CNN) pretrained on image-level labels to generate class activation maps in a multi-scale manner and highlight discriminative regions. Additionally, a pretrained vision transformer (ViT) is used to produce multi-head attention maps as an auxiliary detector. By integrating the CNN-based CAMs and the attention maps, our approach localizes defective regions without requiring bounding-box or pixel-level supervision during training. We evaluate the approach on a dataset of apple images with only image-level labels of defect categories. Experiments demonstrate that the proposed method matches the performance of several object detection models, holding promise for improved localization.
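A minimal sketch of the multi-scale CAM component is given below; feature_maps and class_weights stand in for the CNN's last convolutional stage and its classifier weights, and the ViT attention branch and fusion details are not reproduced.

```python
import torch
import torch.nn.functional as F

def multi_scale_cam(feature_maps, class_weights, image, class_idx,
                    scales=(0.75, 1.0, 1.25)):
    """Average class activation maps computed over several input scales."""
    h, w = image.shape[-2:]
    cams = []
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode="bilinear", align_corners=False)
        fmap = feature_maps(x)                                  # (1, C, h', w')
        cam = torch.einsum("c,bchw->bhw", class_weights[class_idx], fmap)
        cam = F.relu(cam).unsqueeze(1)                          # keep positive evidence
        cams.append(F.interpolate(cam, size=(h, w), mode="bilinear",
                                  align_corners=False))
    fused = torch.stack(cams).mean(0)                           # fuse scales
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-7)
```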