• Title/Summary/Keyword: 트랜스포머 모델 (Transformer Model)


How are they layerwisely 'surprised', KoBERT and KR-BERT? (KoBERT와 KR-BERT의 은닉층별 통사 및 의미 처리 성능 평가)

  • Choi, Sunjoo; Park, Myung-Kwan; Kim, Euhee
    • Annual Conference on Human and Language Technology / 2021.10a / pp.340-345 / 2021
  • Recently, many studies have reported that BERT can be used to recognize and locate words that are linguistically or grammatically inappropriate in a given context. However, from a general deep-learning perspective, the negative log-likelihood (NLL) technique has been noted to have difficulty identifying the precise nature of a linguistic anomaly in a given context. To overcome this limitation, Li et al. (2021) applied a Gaussian Mixture Model that exploits Gaussian probability distributions obtained through layer-wise density estimation of a Transformer language model. They reported that the Transformer language model handles different kinds of linguistically anomalous sentences through different mechanisms. Building on this prior work, the present study investigates whether the Korean language models KoBERT and KR-BERT likewise process different types of Korean anomalous sentences in different ways. To this end, we constructed Korean morphosyntactically and semantically anomalous sentences and evaluated the performance of the Korean-based models on them by computing surprisal-gap scores. We found that the Korean-based models also show relatively higher surprisal-gap scores when processing morphosyntactic anomalies than when processing semantic anomalies; that is, they employ different mechanisms to handle different kinds of linguistic anomalies.

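The surprisal-gap evaluation described in the abstract above can be illustrated with a minimal sketch: fit a Gaussian Mixture Model on hidden states taken from one layer of a BERT-family encoder over well-formed sentences, then score anomalous and control sentences by their mean negative log-likelihood; the difference is that layer's surprisal gap. The checkpoint, layer index, and toy sentences below are placeholders, not the paper's actual models or stimuli.

```python
# Hedged sketch of a layer-wise surprisal gap via a GMM over encoder hidden states.
import torch
from sklearn.mixture import GaussianMixture
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"   # stand-in; the paper uses KoBERT / KR-BERT
LAYER = 8                                # arbitrary hidden layer chosen for illustration

tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL, output_hidden_states=True).eval()

def layer_states(sentences, layer):
    """Token-level hidden states (n_tokens, hidden_size) from one encoder layer."""
    vecs = []
    with torch.no_grad():
        for s in sentences:
            out = enc(**tok(s, return_tensors="pt"))
            vecs.append(out.hidden_states[layer][0])
    return torch.cat(vecs).numpy()

well_formed = ["나는 어제 책을 읽었다.", "그는 오늘 학교에 갔다."]   # toy training sentences
control     = ["아이가 사과를 먹었다."]                              # well-formed test item
anomalous   = ["아이가 사과를 먹었는다."]                            # toy morphosyntactic anomaly

gmm = GaussianMixture(n_components=2, covariance_type="diag")
gmm.fit(layer_states(well_formed, LAYER))

surprisal = lambda sents: -gmm.score(layer_states(sents, LAYER))   # mean NLL per token
print(f"layer {LAYER} surprisal gap: {surprisal(anomalous) - surprisal(control):.3f}")
```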

Text Style Transfer of Non-parallel Data using Transformer and Discriminator (트랜스포머와 판별기를 이용한 비병렬 데이터의 텍스트 스타일 변환)

  • Park, Da-Sol; Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2020.10a / pp.64-68 / 2020
  • Text style transfer changes the style of a sentence while preserving its content. Because the definition of style is ambiguous, most research on text style transfer has been conducted with supervised learning. In this paper, we attempt style transfer using non-parallel data in order to learn from data for which no parallel corpus has been constructed. We propose a model consisting of a Transformer-based sentence generator and a discriminator that classifies the style of the generated sentence. With the proposed model, sentiment transfer achieved an accuracy of 56.9%, self-BLEU of 0.393 (positive→negative) and 0.366 (negative→positive), and fluency of 798.23 (positive→negative) and 1381.05 (negative→positive). By applying style transfer to non-parallel data, this work should be applicable to a variety of domains for which no parallel data are available.

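As a rough illustration of the generator-plus-discriminator setup in the abstract above, the sketch below pairs a Transformer seq2seq generator with a Transformer-encoder style classifier and combines a content (reconstruction) loss with a style loss. The vocabulary size, dimensions, and the argmax step are simplifying assumptions; this is not the paper's implementation.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 8000, 256

class Generator(nn.Module):
    """Transformer seq2seq that rewrites a sentence (content kept, style changed)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.seq2seq = nn.Transformer(d_model=DIM, nhead=4, num_encoder_layers=2,
                                      num_decoder_layers=2, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, src_ids, tgt_ids):
        return self.out(self.seq2seq(self.emb(src_ids), self.emb(tgt_ids)))

class StyleDiscriminator(nn.Module):
    """Transformer encoder that classifies the style (e.g. positive/negative)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True), 2)
        self.cls = nn.Linear(DIM, 2)

    def forward(self, ids):
        return self.cls(self.enc(self.emb(ids)).mean(dim=1))

gen, disc = Generator(), StyleDiscriminator()
src = torch.randint(0, VOCAB, (4, 20))                  # toy batch of token ids
logits = gen(src, src)                                  # teacher-forced pass, (4, 20, VOCAB)
recon_loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), src.reshape(-1))
# argmax is non-differentiable; a real system would feed soft embeddings or use RL-style training.
style_loss = nn.CrossEntropyLoss()(disc(logits.argmax(-1)),
                                   torch.ones(4, dtype=torch.long))  # target style = 1
print(recon_loss.item(), style_loss.item())
```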

Multi-View 3D Human Pose Estimation Based on Transformer (트랜스포머 기반의 다중 시점 3차원 인체자세추정)

  • Seoung Wook Choi; Jin Young Lee; Gye Young Kim
    • Smart Media Journal / v.12 no.11 / pp.48-56 / 2023
  • Three-dimensional human pose estimation is used in sports, motion recognition, and special effects in video media. Among the various approaches, multi-view 3D human pose estimation is essential for precise estimation even in complex real-world environments. However, existing models for multi-view 3D human pose estimation have the disadvantage of high time complexity because they use 3D feature maps. This paper proposes a method that extends an existing Transformer-based monocular multi-frame model with lower time complexity to multi-view 3D human pose estimation. To extend to multiple viewpoints, the proposed method first forms an 8-dimensional joint coordinate by concatenating the 2D coordinates of each of the 17 joints across 4 viewpoints, obtained with the 2D human pose detector CPN (Cascaded Pyramid Network). These are then converted into 17×32 data through patch embedding and finally fed into a Transformer model. Consequently, the MLP (Multi-Layer Perceptron) block that outputs the 3D pose simultaneously updates the 3D pose estimates for the 4 viewpoints at every iteration. Compared with Zheng[5]'s method, the proposed method uses 48.9% as many model parameters, reduces MPJPE (Mean Per Joint Position Error) by 20.6 mm (43.8%), and trains more than 20 times faster per epoch on average.

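The data path described in the abstract above can be sketched minimally: 2D joints from 4 views are concatenated into an 8-dimensional vector per joint, patch-embedded to 17×32, passed through a Transformer encoder, and an MLP head regresses 3D joints for all views. Depth, heads, and the head layout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

N_JOINTS, N_VIEWS, EMB = 17, 4, 32

class MultiViewPoseSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.patch_embed = nn.Linear(N_VIEWS * 2, EMB)        # 8-dim joint vector -> 32
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=EMB, nhead=4, batch_first=True), 4)
        self.head = nn.Sequential(                             # MLP regression head
            nn.Linear(EMB, 64), nn.GELU(), nn.Linear(64, N_VIEWS * 3))

    def forward(self, joints_2d):
        # joints_2d: (batch, views, joints, 2), e.g. from a 2D detector such as CPN
        b = joints_2d.shape[0]
        tokens = joints_2d.permute(0, 2, 1, 3).reshape(b, N_JOINTS, N_VIEWS * 2)
        feats = self.encoder(self.patch_embed(tokens))         # (batch, 17, 32)
        out = self.head(feats)                                 # (batch, 17, 12)
        return out.reshape(b, N_JOINTS, N_VIEWS, 3).permute(0, 2, 1, 3)

model = MultiViewPoseSketch()
print(model(torch.randn(2, N_VIEWS, N_JOINTS, 2)).shape)       # torch.Size([2, 4, 17, 3])
```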

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.15-30 / 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers) sentence generation model. In this paper, a method is proposed to create a binary text classification model that determines whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT sentence generation model is fine-tuned and transfer-learned using a new Korean training dataset. Specifically, a pre-trained multilingual BERT-Base model covering 104 languages, with 12 layers, 768 hidden units, 12 attention heads, and 110M parameters, is used. To turn the pre-trained BERT-Base model into a text classification model, its input and output layers are fine-tuned, producing a new model with 178 million parameters. Using the fine-tuned model with a maximum word count of 128, a batch size of 16, and 5 epochs, transfer learning is conducted with 10,000 training samples and 5,000 test samples. The resulting binary sentiment classification model for Korean movie reviews achieves an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Transfer learning with a dataset five times larger produces a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86.
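
The fine-tuning configuration stated in the abstract above (multilingual BERT-Base, 2-label head, maximum length 128, batch size 16, 5 epochs) can be sketched roughly with the Hugging Face Transformers API as below. The two-example placeholder dataset stands in for the paper's Korean movie-review corpus, which is not shown.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import datasets

MODEL = "bert-base-multilingual-cased"        # 104 languages, 12 layers, ~110M parameters
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Placeholder corpus ({"text": ..., "label": 0/1}); the paper's review data goes here.
raw = datasets.Dataset.from_dict({"text": ["재미있어요", "별로였어요"], "label": [1, 0]})
encoded = raw.map(lambda ex: tok(ex["text"], truncation=True,
                                 padding="max_length", max_length=128))

args = TrainingArguments(output_dir="out",
                         per_device_train_batch_size=16,
                         num_train_epochs=5)
Trainer(model=model, args=args, train_dataset=encoded).train()
```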

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan; Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.515-522 / 2021
  • It is hard to prepare sufficient training data for speech emotion recognition due to the difficulty of emotion labeling. In this paper, we apply transfer learning from large-scale speech recognition training data to a transformer-based model in order to improve speech emotion recognition performance. In addition, we propose a method that exploits context information without decoding, through multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, which shows that the proposed method effectively improves speech emotion recognition performance.
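
A minimal sketch of the multi-task shape described above: a shared Transformer encoder (in the paper, initialized by transfer from a speech recognition model) feeds both an emotion-classification head and an auxiliary CTC head for speech recognition, and the two losses are summed. The feature dimensions, number of emotion classes, and loss weight are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

DIM, N_EMOTIONS, VOCAB = 256, 4, 50

class MultiTaskSER(nn.Module):
    def __init__(self):
        super().__init__()
        self.frontend = nn.Linear(80, DIM)                 # e.g. 80-dim filterbank frames
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True), 4)
        self.emotion_head = nn.Linear(DIM, N_EMOTIONS)
        self.ctc_head = nn.Linear(DIM, VOCAB)              # auxiliary speech recognition task

    def forward(self, feats):
        h = self.encoder(self.frontend(feats))             # (batch, frames, DIM)
        return self.emotion_head(h.mean(dim=1)), self.ctc_head(h).log_softmax(-1)

model = MultiTaskSER()
feats = torch.randn(2, 120, 80)                            # toy 2-utterance batch
emo_logits, ctc_logp = model(feats)
emo_loss = nn.CrossEntropyLoss()(emo_logits, torch.tensor([0, 2]))
ctc_loss = nn.CTCLoss()(ctc_logp.transpose(0, 1),          # CTCLoss expects (T, N, C)
                        torch.randint(1, VOCAB, (2, 10)),
                        torch.full((2,), 120), torch.full((2,), 10))
loss = emo_loss + 0.3 * ctc_loss                           # 0.3 is an arbitrary weight
print(loss.item())
```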

Cross-Domain Recommendation based on K-Means Clustering and Transformer (K-means 클러스터링과 트랜스포머 기반의 교차 도메인 추천)

  • Tae-Hoon Kim; Young-Gon Kim; Jeong-Min Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.5 / pp.1-8 / 2023
  • Cross-domain recommendation is a method that shares related user information and item data across different domains. It is mainly used for online shopping malls with many users and for multimedia services such as YouTube or Netflix. In the proposed method, K-means clustering is performed on user data and ratings to create embeddings. After learning these results with a transformer network, user satisfaction is predicted, and items suitable for the user are then recommended with the transformer-based recommendation model. Experiments in this study show that the approach can handle cold-start predictions at a lower time cost and increase user satisfaction.
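
The pipeline above can be sketched roughly as follows: K-means groups users by their rating vectors, and a Transformer encoder over item embeddings, offset by the user's cluster embedding, predicts a satisfaction score for recommendation. The cluster count, embedding sizes, and prediction head are placeholders, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

N_USERS, N_ITEMS, N_CLUSTERS, DIM = 100, 50, 8, 32

ratings = np.random.rand(N_USERS, N_ITEMS)                     # toy user-item rating matrix
user_cluster = KMeans(n_clusters=N_CLUSTERS, n_init=10).fit_predict(ratings)

class RecSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(N_ITEMS, DIM)
        self.cluster_emb = nn.Embedding(N_CLUSTERS, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True), 2)
        self.head = nn.Linear(DIM, 1)                          # predicted satisfaction score

    def forward(self, item_seq, cluster_id):
        tokens = self.item_emb(item_seq) + self.cluster_emb(cluster_id)[:, None, :]
        return self.head(self.encoder(tokens).mean(dim=1)).squeeze(-1)

model = RecSketch()
item_seq = torch.randint(0, N_ITEMS, (4, 10))                  # 4 users, 10 interactions each
cluster_id = torch.as_tensor(user_cluster[:4], dtype=torch.long)
print(model(item_seq, cluster_id).shape)                       # torch.Size([4])
```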

Knowledge Distillation Based on Internal/External Correlation Learning

  • Hun-Beom Bak; Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.31-39 / 2023
  • In this paper, we propose an Internal/External Knowledge Distillation (IEKD), which utilizes both external correlations between feature maps of heterogeneous models and internal correlations between feature maps of the same model for transferring knowledge from a teacher model to a student model. To achieve this, we transform feature maps into a sequence format and extract new feature maps suitable for knowledge distillation by considering internal and external correlations through a transformer. We can learn both internal and external correlations by distilling the extracted feature maps and improve the accuracy of the student model by utilizing the extracted feature maps with feature matching. To demonstrate the effectiveness of our proposed knowledge distillation method, we achieved 76.23% Top-1 image classification accuracy on the CIFAR-100 dataset with the "ResNet-32×4/VGG-8" teacher and student combination and outperformed the state-of-the-art KD methods.
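
A minimal sketch of the general mechanism described above, not the paper's IEKD implementation: teacher and student feature maps are projected to a common width, flattened into token sequences, attended over jointly by a small Transformer (so both within-model and cross-model correlations can mix), and a feature-matching MSE loss trains the student side.

```python
import torch
import torch.nn as nn

DIM = 64

class CorrelationDistiller(nn.Module):
    def __init__(self, t_ch, s_ch):
        super().__init__()
        self.proj_t = nn.Conv2d(t_ch, DIM, 1)                 # align channel widths
        self.proj_s = nn.Conv2d(s_ch, DIM, 1)
        self.mixer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True), 2)

    def to_tokens(self, fmap):
        return fmap.flatten(2).transpose(1, 2)                # (B, H*W, DIM)

    def forward(self, f_teacher, f_student):
        t = self.to_tokens(self.proj_t(f_teacher))
        s = self.to_tokens(self.proj_s(f_student))
        mixed = self.mixer(torch.cat([t, s], dim=1))           # joint correlation modelling
        t_ref, s_ref = mixed.split([t.size(1), s.size(1)], dim=1)
        return nn.functional.mse_loss(s_ref, t_ref.detach())   # feature-matching loss

kd = CorrelationDistiller(t_ch=256, s_ch=128)
loss = kd(torch.randn(2, 256, 8, 8), torch.randn(2, 128, 8, 8))
print(loss.item())
```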

Design of Isolation-Type Matching Network for Underwater Acoustic Piezoelectric Transducer Using Chebyshev Filter Function (체비셰프 필터함수를 이용한 수중 음향 압전 트랜스듀서의 절연형 정합회로 설계)

  • Lee, Jeong-Min; Lee, Byung-Hwa; Baek, Kwang-Ryul
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.491-498 / 2009
  • This paper presents a method for designing an impedance matching network using an isolation transformer and the Chebyshev filter function for the high-efficiency, flat-power driving of an underwater acoustic piezoelectric transducer. The proposed matching network is designed to minimize the reactance component of the transducer and to have a flat power response over a wide frequency range. A ladder-type low-pass filter is designed using the Chebyshev function as the standard prototype filter function. Then, through the bandpass frequency transformation, an impedance matching network is designed that suits the equivalent circuit of the transducer and the turns ratio of the transformer. The proposed method is applied to a simulated dummy load of a tonpilz-type transducer operating in the mid-frequency range. The simulation results are compared with the measured characteristics, and the validity of the proposed method is verified.
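
The design procedure starts from the standard Chebyshev equal-ripple low-pass prototype; the sketch below computes those prototype element values (g-values) with the textbook recursion. The subsequent bandpass frequency transformation and the isolation-transformer turns ratio matched to the transducer's equivalent circuit, which are the paper's main contribution, are not reproduced here.

```python
import math

def chebyshev_g_values(order: int, ripple_db: float):
    """Return [g1 .. g_{n+1}] of an equal-ripple low-pass prototype (g0 = 1)."""
    beta = math.log(1 / math.tanh(ripple_db / 17.37))
    gamma = math.sinh(beta / (2 * order))
    a = [math.sin((2 * k - 1) * math.pi / (2 * order)) for k in range(1, order + 1)]
    b = [gamma ** 2 + math.sin(k * math.pi / order) ** 2 for k in range(1, order + 1)]
    g = [2 * a[0] / gamma]
    for k in range(2, order + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[-1]))
    # Load resistance: 1 for odd order, coth^2(beta/4) for even order.
    g.append(1.0 if order % 2 else 1 / math.tanh(beta / 4) ** 2)
    return g

print(chebyshev_g_values(order=3, ripple_db=0.5))   # ~[1.596, 1.097, 1.596, 1.0]
```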

A study on the aspect-based sentiment analysis of multilingual customer reviews (다국어 사용자 후기에 대한 속성기반 감성분석 연구)

  • Sungyoung Ji; Siyoon Lee; Daewoo Choi; Kee-Hoon Kang
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.515-528 / 2023
  • With the growth of the e-commerce market, consumers increasingly rely on user reviews to make purchasing decisions, and researchers are actively studying how to analyze these reviews effectively. Among the various approaches to sentiment analysis, aspect-based sentiment analysis, which examines user reviews from multiple angles rather than relying only on a simple positive or negative polarity, is drawing wide attention. One line of work on aspect-based sentiment analysis uses transformer-based models, the latest natural language processing technology. In this paper, we conduct aspect-based sentiment analysis on multilingual user reviews using two real datasets: the restaurant data from the SemEval 2016 public dataset and multilingual user review data from the cosmetics domain. We compare the performance of transformer-based models for aspect-based sentiment analysis and apply various methods to improve it. Models trained on multilingual data are expected to be highly useful in that a single model can analyze multiple languages without building separate models for each language.
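
One common way to realize transformer-based aspect-based sentiment analysis is to frame it as sentence-pair classification: the review and an aspect are encoded together and the head predicts the aspect-level polarity. The sketch below shows that shape with a stand-in multilingual checkpoint and an untrained head; it is not the paper's model or data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "xlm-roberta-base"                       # stand-in multilingual encoder
LABELS = ["negative", "neutral", "positive"]

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(LABELS))

review = "The food was great but the service was painfully slow."
for aspect in ["food", "service"]:
    inputs = tok(review, aspect, return_tensors="pt", truncation=True)   # sentence pair
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    print(aspect, "->", LABELS[pred])            # head is untrained here, so outputs are arbitrary
```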

A Study on Loss Landscape Affecting the Performance Generalization of Transformer (트랜스포머의 일반화 성능에 영향을 주는 로스 랜드스케이프 연구)

  • Choi, MinGi; Lee, So-Eun; Hou, Jong-Uk
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.517-519 / 2022
  • The goal of a neural network is to improve generalization performance by optimizing its parameters for a given problem. Prior studies have explored ways to visualize the high-dimensional loss landscape and examined how it affects model generalization. However, it is still not well understood how the loss landscape fundamentally influences generalization performance, and opinions differ on whether a flat or a sharp loss landscape is more effective for generalization. We therefore verify experimentally that the loss landscape is related to generalization performance. Furthermore, for vision tasks we use a Transformer architecture built on multi-head self-attention (MSA) layers, which has a small inductive bias, and compensate for its weakness in the small-dataset regime. In conclusion, we observe that a flat loss landscape has a positive effect on generalization performance.
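
The loss-landscape visualization this abstract builds on is commonly done by perturbing the trained weights along two random directions (with a normalization similar to the filter normalization of Li et al., 2018) and evaluating the loss on a grid; flat and sharp regions can then be compared. The tiny model, random data, and simplified per-tensor normalization below are illustrative assumptions, not the paper's setup.

```python
import copy
import torch
import torch.nn as nn

# Toy model and data; in practice this would be the trained Transformer and its dataset.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
criterion = nn.CrossEntropyLoss()

def random_direction(net):
    """Random direction, rescaled per parameter tensor to match the weight norm
    (a simplified stand-in for filter normalization)."""
    dirs = []
    for p in net.parameters():
        d = torch.randn_like(p)
        dirs.append(d * p.norm() / (d.norm() + 1e-10))
    return dirs

d1, d2 = random_direction(model), random_direction(model)
theta = [p.detach().clone() for p in model.parameters()]

grid = torch.linspace(-1.0, 1.0, 5)
for a in grid:                                   # loss surface sampled on a 5x5 grid
    row = []
    for b in grid:
        probe = copy.deepcopy(model)
        with torch.no_grad():
            for p, t, du, dv in zip(probe.parameters(), theta, d1, d2):
                p.copy_(t + a * du + b * dv)
            row.append(criterion(probe(x), y).item())
    print(["%.3f" % v for v in row])
```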