• Title/Summary/Keyword: Transformer Encoder

Semantic Similarity Calculation based on Siamese TRAT (트랜스포머 인코더와 시암넷 결합한 시맨틱 유사도 알고리즘)

  • Lu, Xing-Cen; Joe, Inwhee
    • Annual Conference of KIPS / 2021.05a / pp.397-400 / 2021
  • To address the problem that existing methods cannot adequately represent the semantic features of sentences, Siamese TRAT, a semantic feature extraction model based on the Transformer encoder, is proposed. The Transformer encoder fully extracts the semantic information within each sentence and performs deep semantic encoding. In addition, an interactive attention mechanism is introduced to extract the features shared between the two sentences, which makes the model better at capturing the important semantic information inside each sentence and improves its semantic understanding and generalization ability. Experimental results show that the proposed model significantly improves accuracy on English and Chinese semantic similarity tasks and is more effective than existing methods.
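
A minimal PyTorch sketch of the idea this abstract describes: a shared Transformer encoder embeds both sentences, and a cross-attention step models their interaction before cosine scoring. Layer sizes and the pooling/interaction details are illustrative assumptions, not the authors' exact Siamese TRAT.

```python
# Illustrative Siamese Transformer-encoder similarity model (not the exact TRAT).
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, vocab_size=30000, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def encode(self, ids):
        return self.encoder(self.embed(ids))            # (B, L, d_model)

    def forward(self, ids_a, ids_b):
        a, b = self.encode(ids_a), self.encode(ids_b)
        a_ctx, _ = self.cross_attn(a, b, b)              # sentence A attends to B
        b_ctx, _ = self.cross_attn(b, a, a)              # sentence B attends to A
        va, vb = a_ctx.mean(dim=1), b_ctx.mean(dim=1)    # mean-pool sentence vectors
        return nn.functional.cosine_similarity(va, vb)

model = SiameseEncoder()
sim = model(torch.randint(0, 30000, (2, 12)), torch.randint(0, 30000, (2, 12)))
print(sim.shape)  # torch.Size([2]): one similarity score per sentence pair
```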

The implementation of the color component 2-D DWT Processor for the JPEG 2000 hard-wired encoder (JPEG 2000 Hard-wired Encoder를 위한 칼라 2-D DWT Processor의 구현)

  • Lee, Sung-Mok; Cho, Sung-Dae; Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing / v.9 no.4 / pp.321-328 / 2008
  • In this paper, we propose a hardware architecture for the two-dimensional discrete wavelet transform (2-D DWT) and quantization in JPEG 2000. A color 2-D DWT processor is proposed for application in a JPEG 2000 hard-wired encoder. The processor uses the Daubechies (9,7) bi-orthogonal filter, and the DWT stage is designed so that its error during compression and decompression stays within ±1 LSB. The DWT filters are built from shift-and-add structures instead of multipliers, which would raise hardware complexity; this improves the operating speed of the filters while reducing complexity. The proposed system is designed in the hardware description language Verilog-HDL and verified with Synopsys Design Analyzer using a TSMC 0.25 µm ASIC library.
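
The shift-and-add design can be illustrated in a few lines: a constant filter coefficient is approximated by a short sum of signed powers of two, so each multiplication becomes shifts and additions. The five-term approximation below is an assumption for illustration, not the paper's exact fixed-point scheme.

```python
# Multiplierless evaluation of a Daubechies (9,7) lifting coefficient.
ALPHA = -1.586134342  # first lifting coefficient of the (9,7) bi-orthogonal filter

def mul_alpha_shift_add(x: int) -> int:
    """x * ALPHA via shifts/adds: -1.586134 ~ -(1 + 2^-1 + 2^-4 + 2^-5 - 2^-7)."""
    return -(x + (x >> 1) + (x >> 4) + (x >> 5) - (x >> 7))

x = 4096
print(mul_alpha_shift_add(x), round(x * ALPHA))  # -6496 vs -6497: error of 1 unit
```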

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.15-30 / 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has also been applied to many other neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers) language model. In this paper, a method is proposed to build a binary text classification model that determines whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT model is fine-tuned and transfer-learned on a new Korean training dataset. The pre-trained multilingual BERT-Base model covers 104 languages and has 12 layers, 768 hidden units, 12 attention heads, and 110M parameters. To turn it into a text classifier, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model with a maximum word count of 128, a batch size of 16, and 5 epochs, transfer learning is conducted with 10,000 training and 5,000 test examples. The resulting binary sentiment classification model for Korean movie reviews achieves an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Transfer learning with a dataset five times larger yields a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86.
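
A hedged Hugging Face sketch of the setup described above, mirroring the stated hyperparameters (max length 128, batch size 16, 5 epochs). The NSMC movie-review dataset and the trainer configuration are assumptions standing in for the authors' pipeline.

```python
# Fine-tune multilingual BERT-Base for binary Korean sentiment classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # new binary classification head

ds = load_dataset("nsmc")  # assumed stand-in Korean movie-review corpus
ds = ds.map(lambda b: tok(b["document"], truncation=True,
                          padding="max_length", max_length=128), batched=True)

args = TrainingArguments(output_dir="out",
                         per_device_train_batch_size=16, num_train_epochs=5)
Trainer(model=model, args=args,
        train_dataset=ds["train"].select(range(10_000)),
        eval_dataset=ds["test"].select(range(5_000))).train()
```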

Unsupervised Abstractive Summarization Method that Suitable for Documents with Flows (흐름이 있는 문서에 적합한 비지도학습 추상 요약 방법)

  • Lee, Hoon-suk; An, Soon-hong; Kim, Seung-hoon
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.501-512 / 2021
  • Recently, encoder-decoder Transformer techniques have produced breakthroughs in NLP. However, they can only be used for mainstream languages such as English and Chinese, where datasets of millions of examples are available, and not for non-mainstream languages where such datasets have not been established. In addition, mechanical summarization suffers from a position bias: summaries concentrate on the beginning of the document. These methods are therefore not suitable for documents with narrative flow, such as fairy tales and novels. In this paper, we propose a hybrid summarization method that requires no dataset and mitigates the bias problem using a GAN with two adaptive discriminators. We evaluate our model on the CNN/Daily Mail dataset to verify its objective validity, and we show that the model also performs well in Korean, one of the non-mainstream languages.
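
The adversarial setup can be sketched abstractly: a summarizer is trained against two discriminators, one judging whether the summary looks plausible and one judging how well it covers the source. Every module and the embedding-level training signal below are illustrative assumptions, not the paper's architecture.

```python
# Toy generator-vs-two-discriminators step over document embeddings.
import torch
import torch.nn as nn

d = 128
summarizer = nn.GRU(d, d, batch_first=True)       # stand-in summary generator
disc_real  = nn.Linear(d, 1)                      # "is this a plausible summary?"
disc_recon = nn.Linear(2 * d, 1)                  # "does it cover the source?"
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(summarizer.parameters(), lr=1e-3)

doc = torch.randn(4, 50, d)                       # toy batch of document embeddings
summary, _ = summarizer(doc)
s, v = summary.mean(dim=1), doc.mean(dim=1)       # pooled summary / source vectors

# Generator objective: fool both discriminators (target label 1 for each).
loss = bce(disc_real(s), torch.ones(4, 1)) \
     + bce(disc_recon(torch.cat([s, v], dim=1)), torch.ones(4, 1))
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```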

LabVIEW Based Laboratory Typed Test Setup for the Determination of Induction Motor Performance Characteristics

  • Calis, Hakan; Caki, Eyup
    • Journal of Electrical Engineering and Technology / v.9 no.6 / pp.1928-1934 / 2014
  • Induction motors are widely used because they are rugged, robust, and easy to maintain. Since they are used heavily in industry, testing three-phase induction motors plays a vital role. To determine the motor equivalent-circuit parameters and efficiency, a laboratory-sized test setup for squirrel-cage induction motors is prepared, suitable for motors with frame sizes 100 and 112. The LabVIEW virtual-instrumentation software package is used as the graphical user interface. Motor input power is measured from the input voltage, current, and power factor with Hall-effect voltage and current transformers, and output power is measured from the speed and torque with an encoder and a torque sensor. All outputs of the voltage and current transformers, encoder, temperature sensor, and torque sensor are fed to a data acquisition card (DAQ), and the equivalent-circuit parameters, efficiency, performance, and loading characteristics are then derived in the LabVIEW-based user interface. The test rig is suggested for quality control of produced motors in industry and as an educational experiment setup in school laboratories.
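
The core calculation the rig automates reduces to two formulas: electrical input power P_in = √3·V·I·cos φ from the voltage and current transformers, and mechanical output power P_out = T·ω from the torque sensor and encoder. The sample numbers below are illustrative, not measurements from the paper.

```python
# Efficiency from the electrical input and mechanical output of a 3-phase motor.
import math

def input_power(v_line: float, i_line: float, pf: float) -> float:
    """Three-phase input power in W: P_in = sqrt(3) * V_line * I_line * cos(phi)."""
    return math.sqrt(3) * v_line * i_line * pf

def output_power(torque_nm: float, speed_rpm: float) -> float:
    """Mechanical output power in W: P_out = T * omega, omega = 2*pi*rpm/60."""
    return torque_nm * speed_rpm * 2 * math.pi / 60

p_in = input_power(400.0, 7.5, 0.85)       # e.g. 400 V line, 7.5 A, power factor 0.85
p_out = output_power(24.0, 1440.0)         # e.g. 24 N*m at 1440 rpm
print(f"efficiency = {p_out / p_in:.1%}")  # ~81.9% for these sample numbers
```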

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae; Ra, Dongyul; Lim, Soojong
    • ETRI Journal / v.43 no.2 / pp.299-312 / 2021
  • High performance on the task of zero-anaphora resolution (ZAR) is necessary for fully understanding texts in Korean, Japanese, Chinese, and various other languages. Owing to the success of deep learning in recent years, deep-learning-based models are being employed to build ZAR systems. However, even with these models, the objective of building a high-quality ZAR system is far from achieved. To enhance current ZAR techniques, we fine-tuned pretrained bidirectional encoder representations from transformers (BERT). Notably, BERT is a general language representation model that lets systems exploit deep bidirectional contextual information in natural language text; it relies extensively on the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input sequence to decide whether each word can be an antecedent. We pursue end-to-end learning by disallowing any hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach significantly improves ZAR performance.
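
The per-word classification step can be sketched as token classification over BERT: the fine-tuned model scores every token as antecedent or not in a single pass. The checkpoint and two-label scheme are assumptions; the authors' Korean ZAR model itself is not reproduced here.

```python
# Score every token as antecedent (1) / not antecedent (0) in one forward pass.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # untrained head, for shape only

enc = tok("철수는 밥을 먹었다", return_tensors="pt")  # "Cheolsu ate the meal"
with torch.no_grad():
    logits = model(**enc).logits     # (1, seq_len, 2): a decision per token
print(logits.argmax(dim=-1))         # per-token antecedent predictions
```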

A Transformer-Based Emotion Classification Model Using Transfer Learning and SHAP Analysis (전이 학습 및 SHAP 분석을 활용한 트랜스포머 기반 감정 분류 모델)

  • Subeen Leem; Byeongcheon Lee; Insu Jeon; Jihoon Moon
    • Annual Conference of KIPS / 2023.05a / pp.706-708 / 2023
  • In this study, we apply transfer learning to three pre-trained Transformer models to classify five emotions. Among them, the KLUE (Korean Language Understanding Evaluation)-BERT (Bidirectional Encoder Representations from Transformers) model performs best: its F1 scores indicate superior learning and generalization on the experimental data. To explain its behavior, we apply the SHAP (Shapley Additive Explanations) method to the KLUE-BERT model and present the findings with a text plot visualization. This approach reveals the impact of individual tokens on emotion classification and provides clear visual evidence supporting the model's predictions.
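
A sketch of the SHAP analysis described above: wrap a text-classification pipeline in a SHAP explainer and render the token-level text plot. The checkpoint name points at the base KLUE-BERT model as an assumption; the authors' fine-tuned five-emotion classifier is not public here.

```python
# Token-level SHAP text plot for a Korean BERT classification pipeline.
import shap
from transformers import pipeline

clf = pipeline("text-classification", model="klue/bert-base", top_k=None)
explainer = shap.Explainer(clf)
shap_values = explainer(["이 영화 정말 최고였어요"])  # "This movie was really great"
shap.plots.text(shap_values)  # per-token contributions to each predicted class
```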

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho; Jin-Xia Huang; Oh-Woog Kwon
    • ETRI Journal / v.46 no.1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, trait-specific assessment remains a challenge because earlier methods model the dual holistic and multi-trait tasks only at limited depth. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models with the same MTL setting, ours showed a 2.0% increase in holistic score; compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
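
The multi-task idea can be sketched as one shared document vector feeding a holistic head plus per-trait heads trained jointly. The trait list, layer sizes, and pooling are illustrative assumptions, not the DualBERT-Trans-CNN architecture.

```python
# Joint holistic + trait scoring heads over a pooled essay representation.
import torch
import torch.nn as nn

class MultiTraitScorer(nn.Module):
    def __init__(self, d_model=768, traits=("content", "organization", "style")):
        super().__init__()
        self.holistic = nn.Linear(d_model, 1)
        self.traits = nn.ModuleDict({t: nn.Linear(d_model, 1) for t in traits})

    def forward(self, doc_vec):                      # doc_vec: (B, d_model)
        holistic = self.holistic(doc_vec)
        trait_scores = {t: head(doc_vec) for t, head in self.traits.items()}
        return holistic, trait_scores

model = MultiTraitScorer()
h, t = model(torch.randn(2, 768))                   # toy pooled essay vectors
print(h.shape, {k: v.shape for k, v in t.items()})  # one holistic + 3 trait scores
```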

Style-Based Transformer for Time Series Forecasting (시계열 예측을 위한 스타일 기반 트랜스포머)

  • Kim, Dong-Keon; Kim, Kwangsu
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.579-586 / 2021
  • Time series forecasting predicts future values from past observations. Accurate prediction is crucial because forecasts inform strategy and policy decisions in many fields. Recently, Transformer models have been the main focus of time series forecasting research. However, the existing Transformer has an auto-regressive structure in which each output step is fed back as input when generating the prediction sequence, which lowers accuracy when predicting distant time points. This paper proposes a sequential decoding model built around a style-transformation technique to handle this problem and produce more precise forecasts. In the proposed model, the content of past data is extracted by the Transformer encoder and injected into a style-based decoder that generates the predictive sequence. Unlike the decoder of the conventional auto-regressive Transformer, this structure outputs the entire prediction sequence at once, so it can predict distant time points more accurately. Forecasting experiments on several time series datasets with different characteristics show that the proposed model achieves better prediction accuracy than existing time series models.
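
The one-shot decoding idea can be sketched as follows: the encoder's content vector modulates a learned query sequence, and the whole forecast horizon is emitted in a single pass instead of step-by-step feedback. The modulation scheme and dimensions are illustrative assumptions, not the paper's exact decoder.

```python
# Non-autoregressive, style-modulated decoder: whole horizon in one pass.
import torch
import torch.nn as nn

class OneShotStyleDecoder(nn.Module):
    def __init__(self, d_model=64, horizon=24):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(horizon, d_model))  # one per step
        self.scale = nn.Linear(d_model, d_model)  # content -> per-channel scale
        self.shift = nn.Linear(d_model, d_model)  # content -> per-channel shift
        self.out = nn.Linear(d_model, 1)

    def forward(self, content):                   # content: (B, d_model) from encoder
        q = self.queries.unsqueeze(0)             # (1, horizon, d_model)
        h = q * self.scale(content).unsqueeze(1) + self.shift(content).unsqueeze(1)
        return self.out(h).squeeze(-1)            # (B, horizon): full forecast at once

dec = OneShotStyleDecoder()
print(dec(torch.randn(8, 64)).shape)  # torch.Size([8, 24])
```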

Research on Pairwise Attention Reinforcement Model Using Feature Matching (특징 매칭을 이용한 페어와이즈 어텐션 강화 모델에 대한 연구)

  • Joon-Shik Lim; Yeong-Seok Ju
    • Journal of IKEEE / v.28 no.3 / pp.390-396 / 2024
  • Vision Transformer (ViT) learns relationships between patches but may overlook important features such as color, texture, and boundaries, which limits performance in fields like medical imaging and facial recognition. To address this issue, this study proposes the Pairwise Attention Reinforcement (PAR) model. PAR feeds both the training image and a reference image into the encoder, calculates the similarity between the two images, and matches the attention score maps of highly similar images, reinforcing the matching regions of the training image. This process emphasizes important features shared between images and allows even subtle differences to be distinguished. In experiments on clock-drawing test data, the PAR model achieved a precision of 0.9516, recall of 0.8883, F1 score of 0.9166, and accuracy of 92.93%. The proposed model showed a 12% improvement over API-Net, which also uses a pairwise attention approach, and a 2% improvement over the ViT model.
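
The reinforcement step reads as: compare the attention maps of the training image and a reference image, and boost the regions where similar pairs agree. The similarity gate and blending weight below are illustrative assumptions, not the PAR model's exact formulation.

```python
# Reinforce a training image's attention map where a similar reference agrees.
import torch
import torch.nn.functional as F

def reinforce_attention(train_attn, ref_attn, threshold=0.7, boost=0.5):
    """train_attn, ref_attn: (B, H, W) attention score maps from the encoder."""
    sim = F.cosine_similarity(train_attn.flatten(1), ref_attn.flatten(1), dim=1)
    agreement = torch.minimum(train_attn, ref_attn)   # regions both maps attend to
    gate = (sim > threshold).float().view(-1, 1, 1)   # apply only to similar pairs
    return train_attn + boost * gate * agreement

a, b = torch.rand(4, 14, 14), torch.rand(4, 14, 14)
print(reinforce_attention(a, b).shape)  # torch.Size([4, 14, 14])
```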