• Title/Abstract/Keyword: Pretraining-based method

7 search results

Reinforcement learning-based control with application to the once-through steam generator system

  • Cheng Li;Ren Yu;Wenmin Yu;Tianshu Wang
    • Nuclear Engineering and Technology
    • /
    • Vol. 55, No. 10
    • /
    • pp.3515-3524
    • /
    • 2023
  • A reinforcement learning framework is proposed in this paper for the control problem of the outlet steam pressure of the once-through steam generator (OTSG). A double-layer controller using the Proximal Policy Optimization (PPO) algorithm is applied in the control structure of the OTSG. The PPO algorithm trains the neural networks continuously through interaction with the environment, and the trained controller can then achieve better control of the OTSG. Meanwhile, reinforcement learning is notoriously difficult to apply to real-world plants, and this paper proposes an innovative pretraining method to solve this problem. The difficulty in applying reinforcement learning lies in training: the optimal strategy at each step is found through trial and error, so the training cost is very high. In this paper, an LSTM model is adopted as the training environment for pretraining, which saves training time and improves efficiency. The experimental results show that this method can realize self-adjustment of the control parameters under various working conditions, and the control effect has the advantages of small overshoot, fast stabilization, and strong adaptive ability.
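The abstract gives no implementation details; the following is a minimal sketch, assuming PyTorch and illustrative dimensions, of the core idea: an LSTM surrogate of the OTSG dynamics serves as a fast pretraining environment that a PPO controller can interact with instead of the real plant. All names here (state size, pressure index, setpoint) are assumptions, not the authors' code.

```python
# Sketch (not the authors' code): an LSTM surrogate of OTSG dynamics used as a
# fast pretraining environment for a PPO controller.
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    """Predicts the next plant state from the (state, control action) input."""
    def __init__(self, state_dim=4, action_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, x, hc=None):
        out, hc = self.lstm(x, hc)
        return self.head(out), hc

class SurrogateEnv:
    """Gym-like wrapper: PPO interacts with the LSTM instead of the real OTSG."""
    def __init__(self, model, setpoint=1.0, pressure_idx=0):
        self.model, self.setpoint, self.idx = model, setpoint, pressure_idx

    def reset(self):
        self.state = torch.zeros(1, 1, 4)   # assumed 4-dim plant state
        self.hc = None
        return self.state.squeeze()

    def step(self, action):
        inp = torch.cat([self.state, action.view(1, 1, 1)], dim=-1)
        with torch.no_grad():
            self.state, self.hc = self.model(inp, self.hc)
        pressure = self.state[0, 0, self.idx].item()
        reward = -abs(pressure - self.setpoint)  # track the steam-pressure setpoint
        return self.state.squeeze(), reward, False, {}

# Usage: pretrain LSTMSurrogate on logged plant data (supervised next-state
# prediction), then run any PPO implementation against SurrogateEnv.
env = SurrogateEnv(LSTMSurrogate())
obs = env.reset()
obs, r, done, _ = env.step(torch.tensor(0.1))
```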

PC-SAN: Pretraining-Based Contextual Self-Attention Model for Topic Essay Generation

  • Lin, Fuqiang;Ma, Xingkong;Chen, Yaofeng;Zhou, Jiajun;Liu, Bo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 8
    • /
    • pp.3168-3186
    • /
    • 2020
  • Automatic topic essay generation (TEG) is a controllable text generation task that aims to generate informative, diverse, and topic-consistent essays based on multiple topics. To make the generated essays of high quality, a reasonable method should consider both diversity and topic-consistency. Another essential issue is the intrinsic link among the topics, which helps the essays closely follow the semantics of the provided topics. However, it remains challenging for TEG to fill the semantic gap between the source topic words and the target output, and a more powerful model is needed to capture the semantics of the given topics. To this end, we propose a pretraining-based contextual self-attention (PC-SAN) model built upon the seq2seq framework. For the encoder of our model, we employ a dynamic weighted sum of layers from BERT to fully utilize the semantics of the topics, which is of great help in filling the gap and improving the quality of the generated essays. In the decoding phase, we also incorporate the target-side contextual history information into the query layers to alleviate the lack of context in typical self-attention networks (SANs). Experimental results on large-scale paragraph-level Chinese corpora verify that our model is capable of generating diverse, topic-consistent text and makes substantial improvements compared to strong baselines. Furthermore, extensive analysis validates the effectiveness of the contextual embeddings from BERT and of the contextual history information in SANs.
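As a rough illustration of the encoder idea, here is a sketch of an ELMo-style dynamic weighted sum over BERT's hidden layers, using the Hugging Face transformers library; this is not the authors' PC-SAN implementation, and the checkpoint name and topic words are placeholders.

```python
# Illustrative sketch of a learned weighted sum over all BERT layers.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertLayerMix(nn.Module):
    def __init__(self, name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name, output_hidden_states=True)
        n_layers = self.bert.config.num_hidden_layers + 1   # embeddings + 12 layers
        self.weights = nn.Parameter(torch.zeros(n_layers))  # learned mixing weights
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids, attention_mask=attention_mask).hidden_states
        w = torch.softmax(self.weights, dim=0)
        # Weighted sum over all layers -> topic representation for the decoder.
        return self.gamma * sum(wi * h for wi, h in zip(w, hidden))

tok = BertTokenizer.from_pretrained("bert-base-chinese")
enc = BertLayerMix()
batch = tok(["春天 友情"], return_tensors="pt")   # example topic words
topic_repr = enc(batch["input_ids"], batch["attention_mask"])
```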

Layer-wise hint-based training for knowledge transfer in a teacher-student framework

  • Bae, Ji-Hoon;Yim, Junho;Kim, Nae-Soo;Pyo, Cheol-Sig;Kim, Junmo
    • ETRI Journal
    • /
    • Vol. 41, No. 2
    • /
    • pp.242-253
    • /
    • 2019
  • We devise a layer-wise hint training method to improve the existing hint-based knowledge distillation (KD) training approach, which is employed for knowledge transfer in a teacher-student framework using a residual network (ResNet). To achieve this objective, the proposed method first iteratively trains the student ResNet while incrementally employing hint-based information extracted from the pretrained teacher ResNet through several hint and guided layers. Next, typical softening-factor-based KD training is performed using the previously estimated hint-based information. We compare the recognition accuracy of the proposed approach with that of KD training without hints, hint-based KD training, and ResNet-based layer-wise pretraining on well-established datasets, including CIFAR-10, CIFAR-100, and MNIST. When the selected multiple hint-based information items are transferred layer-wise in the proposed method, the trained student ResNet reflects the pretrained teacher ResNet's rich information more accurately than the baseline training methods, for all the benchmark datasets considered in this study.
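A hedged sketch of the two loss terms such a pipeline typically combines: a FitNets-style hint loss between intermediate features, and softening-factor KD on the logits. The feature shapes, regressor, and hyperparameters below are illustrative assumptions, not the paper's ResNet configuration.

```python
# Minimal sketch of hint-based KD losses used layer-wise; placeholder shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

def hint_loss(student_feat, teacher_feat, regressor):
    """L2 between a student 'guided' layer and a teacher 'hint' layer."""
    return F.mse_loss(regressor(student_feat), teacher_feat)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Softening-factor KD: soft targets at temperature T plus hard-label CE."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Stage 1 (repeated layer-wise): match intermediate features.
regressor = nn.Conv2d(16, 32, kernel_size=1)   # maps student width -> teacher width
s_feat, t_feat = torch.randn(8, 16, 32, 32), torch.randn(8, 32, 32, 32)
stage1 = hint_loss(s_feat, t_feat, regressor)

# Stage 2: standard KD on logits using the hint-trained student.
s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
stage2 = kd_loss(s_logits, t_logits, torch.randint(0, 10, (8,)))
```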

Pig Face Recognition Using Deep Learning

  • 마리한;김상철
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022 Fall Conference of the Korea Information Processing Society
    • /
    • pp.493-494
    • /
    • 2022
  • The development of intensive livestock farming has created a rising need to recognize individual animals such as cows and pigs, which is closely tied to traceability. In this paper, we present a non-invasive, biometric, systematic approach to pig face identification based on a deep-learning classification model. Firstly, in our systematic method, we build an ROS data collection block to collect face images of 10 pigs. Secondly, we propose a preprocessing block in which we utilize the SSIM method to filter out collected images with high mutual similarity. Thirdly, we employ an improved image classification model, ViT, which uses pretraining and fine-tuning techniques to recognize individual pig faces. Finally, our proposed method achieves an accuracy of about 98.66%.
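A brief sketch of the two reusable pieces of such a pipeline, under assumed thresholds and inputs: SSIM-based near-duplicate filtering (scikit-image) and a pretrained ViT with its head replaced for 10 pig identities (torchvision). This is not the authors' exact model.

```python
# Sketch: drop near-duplicate frames by SSIM, then fine-tune a pretrained ViT.
import torch.nn as nn
import torchvision
from skimage.metrics import structural_similarity as ssim

def filter_by_ssim(gray_images, threshold=0.9):
    """Keep an image only if it differs enough from the last kept one."""
    kept = [gray_images[0]]
    for img in gray_images[1:]:
        if ssim(kept[-1], img, data_range=img.max() - img.min()) < threshold:
            kept.append(img)
    return kept

# Pretrained ViT; the classifier head is replaced for 10 pigs, then fine-tuned.
model = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
model.heads = nn.Linear(model.hidden_dim, 10)
```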

Prediction of multipurpose dam inflow utilizing catchment attributes with LSTM and transformer models

  • 김형주;송영훈;정은성
    • Journal of Korea Water Resources Association
    • /
    • Vol. 57, No. 7
    • /
    • pp.437-449
    • /
    • 2024
  • Research on streamflow prediction that reflects catchment characteristics using deep learning has been drawing attention. This study selected two models, a Transformer, whose self-attention mechanism makes it well suited to training on large datasets, and an LSTM-based multi-state-vector sequence-to-sequence (LSTM-MSV-S2S) model with an encoder-decoder structure, built them to incorporate catchment attributes, and used them to predict the inflow of 10 multipurpose dam catchments in South Korea. The experimental design used three training methods: single-basin training (ST), multi-basin pretraining (PT), and pretraining-finetuning (PT-FT). The model inputs were meteorological data together with 10 selected catchment attributes, and inflow prediction performance was compared across the training methods. As a result, the Transformer outperformed LSTM-MSV-S2S under the PT and PT-FT methods, with the highest performance when PT-FT was applied. LSTM-MSV-S2S performed better than the Transformer under ST but worse under PT and PT-FT. In addition, the embedding-layer activations and the raw catchment attributes were clustered to analyze whether the models learned similarity between catchments. The Transformer's performance improved for catchments with similar activation vectors, demonstrating that it leveraged information learned from other catchments during pretraining. This study compared suitable models and training methods for each multipurpose dam and showed the need to build deep learning models that apply the PT and PT-FT methods to Korean catchments. Moreover, when the PT and PT-FT methods were applied, the Transformer outperformed LSTM-MSV-S2S.
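The following is a minimal sketch, in PyTorch, of the three training regimes (ST, PT, PT-FT) described above; the simple LSTM regressor, loader shapes, and hyperparameters are illustrative assumptions, not the paper's LSTM-MSV-S2S or Transformer implementations.

```python
# Sketch of ST / PT / PT-FT training regimes for daily dam-inflow prediction.
import torch
import torch.nn as nn

class InflowLSTM(nn.Module):
    def __init__(self, n_met=5, n_attrs=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_met + n_attrs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, met, attrs):
        # Static catchment attributes are repeated along the time axis.
        attrs = attrs.unsqueeze(1).expand(-1, met.shape[1], -1)
        out, _ = self.lstm(torch.cat([met, attrs], dim=-1))
        return self.head(out[:, -1])            # next-day inflow

def train(model, loader, epochs=30, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for met, attrs, inflow in loader:       # meteorology, attributes, target
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(met, attrs), inflow)
            loss.backward()
            opt.step()
    return model

# ST: a fresh model per basin.  PT: one model on all basins pooled.
# PT-FT: the pooled model fine-tuned on the target basin with a smaller lr.
# model = train(InflowLSTM(), all_basins_loader)          # PT
# model = train(model, target_basin_loader, lr=1e-4)      # FT step of PT-FT
```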

A WWMBERT-based Method for Improving Chinese Text Classification Task

  • 왕흠원;조인휘
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021 Spring Conference of the Korea Information Processing Society
    • /
    • pp.408-410
    • /
    • 2021
  • In the NLP field, the pre-training model BERT, released by the Google team in 2018, has shown remarkable results on a wide range of NLP tasks. Subsequently, many variant models have been derived from the original BERT, such as RoBERTa and ERNIE. In this paper, the WWMBERT (Whole Word Masking BERT) model, which is well suited to Chinese text tasks, was used as the baseline model of our experiment. The experiments target the improvement of text-level Chinese text classification tasks, mainly by combining TAPT (Task-Adaptive Pretraining) with the Multi-Sample Dropout method; the experimental datasets and the scoring standard (Accuracy) are kept consistent with the official WWMBERT model, which reports the maximum and average of multiple runs as its scores. The official baseline achieved 97.70% (97.50%) on the development set and 97.70% (97.50%) on the test set of the text-level Chinese text classification task. Compared with these results, our method improved the development set by 0.35% (0.5%) and the test set by 0.31% (0.48%), a significant improvement over the original baseline model.
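As a sketch of the Multi-Sample Dropout idea on top of a whole-word-masking BERT: several dropout masks are applied to the same pooled representation and the resulting logits are averaged. The checkpoint name "hfl/chinese-bert-wwm" and the class count are assumptions, not the paper's exact setup.

```python
# Sketch of a Multi-Sample Dropout classification head over WWM BERT.
import torch
import torch.nn as nn
from transformers import BertModel

class MultiSampleDropoutClassifier(nn.Module):
    def __init__(self, name="hfl/chinese-bert-wwm", n_classes=10, n_samples=5, p=0.3):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.dropouts = nn.ModuleList(nn.Dropout(p) for _ in range(n_samples))
        self.fc = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        # Average logits over several dropout masks of the same pooled vector.
        return torch.stack([self.fc(d(pooled)) for d in self.dropouts]).mean(0)

# TAPT: before this fine-tuning, one would continue masked-language-model
# pretraining of the same checkpoint on the unlabeled task corpus, e.g. with
# transformers' BertForMaskedLM and DataCollatorForLanguageModeling.
```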

Malware Detection Using Deep Recurrent Neural Networks with no Random Initialization

  • Amir Namavar Jahromi;Sattar Hashemi
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 8
    • /
    • pp.177-189
    • /
    • 2023
  • Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques in automating and enhancing the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution, a stacked Long Short-Term Memory (LSTM) network with pre-training as a regularization method that avoids random network initialization. In our proposal, we use both global and short-term dependencies of the inputs. With pre-training, we avoid random initialization and are able to improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of malware OpCode or bytecode sequences, thereby reducing the complexity of the final method. This leads to better accuracy, a higher Matthews Correlation Coefficient (MCC), and a higher Area Under the Curve (AUC) in comparison to a standard LSTM with similar detection time. Our proposed method can be applied to real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods on these key metrics.
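A minimal sketch of the "no random initialization" idea, assuming PyTorch and an illustrative OpCode vocabulary: an embedding-plus-LSTM language model is pretrained on next-OpCode prediction, and its weights initialize the first layer of a stacked-LSTM classifier. This is not the authors' exact architecture.

```python
# Sketch: pretrain on next-OpCode prediction, then reuse the weights to
# initialize a stacked-LSTM malware classifier (avoids random initialization).
import torch
import torch.nn as nn

VOCAB, EMB, HID = 256, 64, 128   # assumed OpCode vocabulary / layer sizes

class OpCodeLM(nn.Module):
    """Unsupervised pretraining: predict the next OpCode in a sequence."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

class MalwareClassifier(nn.Module):
    """Stacked LSTM whose first layer is initialized from the pretrained LM."""
    def __init__(self, pretrained: OpCodeLM):
        super().__init__()
        self.emb = pretrained.emb                 # reuse pretrained weights
        self.lstm1 = pretrained.lstm
        self.lstm2 = nn.LSTM(HID, HID, batch_first=True)
        self.head = nn.Linear(HID, 2)             # benign vs. malware

    def forward(self, x):
        h, _ = self.lstm1(self.emb(x))
        h, _ = self.lstm2(h)
        return self.head(h[:, -1])

lm = OpCodeLM()            # train with cross-entropy on shifted OpCode sequences
clf = MalwareClassifier(lm)
logits = clf(torch.randint(0, VOCAB, (4, 100)))   # batch of OpCode sequences
```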