• Title/Summary/Keyword: Model Tuning

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong; Namgyu Kim
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.165-184 / 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when using these language models: data privacy may be violated when the data owner provides large amounts of data to the model owner to fine-tune the language model, and conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are exposed, which may violate the privacy of the model. The concept of offsite tuning has recently been proposed to fine-tune language models while protecting privacy in such situations, but that study does not provide a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class classification fine-tuning on Korean documents. To evaluate the performance of the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms the zero-shot model and the offsite model in terms of classification accuracy.
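The offsite-tuning flow summarized above can be sketched end to end with a toy network: the model owner shares only a lossy emulator of its private middle layers, the data owner trains an additional classifier head against that emulator locally, and the trained head is finally plugged back into the true model. Everything below (layer sizes, the emulator's noise level, the toy task) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Model owner side -------------------------------------------------
W_in = rng.normal(scale=0.5, size=(8, 8))        # shared input adapter
W_middle = rng.normal(scale=0.5, size=(8, 8))    # private weights, never shared
W_emulator = W_middle + rng.normal(scale=0.2, size=(8, 8))  # lossy stand-in sent offsite

def features(X, W_core):
    return np.tanh(np.tanh(X @ W_in) @ W_core)

# --- Data owner side --------------------------------------------------
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)                  # private labels, never shared

H = features(X, W_emulator)                      # only the emulator is available offsite
w_head = np.zeros(8)                             # additional classifier, trained locally

def bce(p):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

loss0 = bce(1 / (1 + np.exp(-H @ w_head)))
for _ in range(500):                             # plain gradient descent on the head
    p = 1 / (1 + np.exp(-H @ w_head))
    w_head -= 0.5 * H.T @ (p - y) / len(y)
loss1 = bce(1 / (1 + np.exp(-H @ w_head)))

# --- Plug-in: trained head returned and attached to the true middle ---
H_true = features(X, W_middle)
plug_in_acc = np.mean(((H_true @ w_head) > 0) == y)
print(loss0, loss1, plug_in_acc)
```

Neither side ever sees the other's private asset: the raw data stays with the data owner, and only the emulator (not `W_middle`) leaves the model owner.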

A Study on Web Service Performance Enhancement Using Tuning Model (튜닝 모델을 이용한 웹 서비스 성능 향상에 관한 연구)

  • Oh, Kie-Sung
    • Journal of Information Technology Services / v.4 no.2 / pp.125-133 / 2005
  • With the paradigm shift to web services, numerous institutions have suggested solutions supporting web services and actively developed systems using them, but it is hard to find a systematic study of web service performance enhancement. In general, web service performance can be enhanced by improving SOAP message processing or by optimizing the server configuration. Enhancement through improved SOAP message processing has been covered in related research, but server configuration optimization still lacks a systematic tuning model and performance criteria. In this paper, we suggest a performance-testing-based tuning model and criteria for server configuration optimization. We carried out a practical analysis of web services on the Internet using the tuning model. This paper shows that the proposed tuning model and performance criteria are applicable to web service performance enhancement.

Automatic Tuning of Multi-Loop PID Controller (다중루프 PID 제어기의 자동 동조)

  • Zeungnam Bien
    • The Transactions of the Korean Institute of Electrical Engineers / v.39 no.5 / pp.478-484 / 1990
  • An automatic tuning method for a PID controller used with single-input single-output processes is proposed. In the proposed tuning method, a frequency response data model is adopted, along with a performance index defined as the integral of the time-weighted squared error between the reference model and the process frequency response data model. With this method it is easy to retune when either the process dynamics or the reference model changes. Finally, an example is provided to show the usefulness of the method.
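As a rough, dependency-free illustration of the tuning criterion only (not the paper's frequency-domain procedure), a time-weighted squared-error index between a reference-model response and the closed-loop response can be minimized by a coarse grid search over PI gains; the first-order plant, reference time constant, and gain grid are all assumptions.

```python
import numpy as np

dt, T = 0.01, 5.0
t = np.arange(0, T, dt)

def plant_step(y, u, tau=1.0):
    return y + dt * (-y + u) / tau           # first-order process, Euler step

y_ref = 1 - np.exp(-t / 0.5)                 # reference-model step response

def closed_loop(Kp, Ki):
    y, integ, out = 0.0, 0.0, []
    for _ in t:
        e = 1.0 - y                          # unit-step setpoint error
        integ += e * dt
        u = Kp * e + Ki * integ              # PI control law (D term omitted for brevity)
        y = plant_step(y, u)
        out.append(y)
    return np.array(out)

def J(Kp, Ki):
    e = y_ref - closed_loop(Kp, Ki)
    return np.sum(t * e**2) * dt             # integral of time-weighted squared error

# Coarse grid search stands in for the paper's optimization
# over the frequency response data model.
grid = [(Kp, Ki) for Kp in (0.5, 1, 2, 4, 8) for Ki in (0.5, 1, 2, 4, 8)]
best = min(grid, key=lambda g: J(*g))
print(best, J(*best))
```

Retuning after a change in the plant or the reference model is just a re-run of the same search, which is the convenience the abstract points out.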

Tuning of Dual-input PSS and Its Application to 612 MVA Thermal Plant: Part 1-Tuning Methodology of IEEE Type PSS2A Model (다중입력 PSS 튜닝 방법과 612 MVA 화력기 적용: Part 1-IEEE PSS2A 튜닝 방법)

  • Kim, Dong-Joon; Moon, Young-Hwan; Kim, Sung-Min; Kim, Jin-Yi; Hwang, Bong-Hwan; Cho, Jong-Man
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.4 / pp.655-664 / 2009
  • This paper, Part 1, describes an effective dual-input PSS parameter design procedure for the IEEE Type PSS2A applied to the EX2000 excitation system of the Dangjin 612 MVA thermal plant. The suggested tuning technique uses a model-based PSS tuning method and consists of three steps: 1) generation system modeling; 2) determination of the PSS2A model parameters using linear, time-domain transient, and 3-phase simultaneous analyses; and 3) field testing and verification, which are described in Part 2. The effective PSS2A model parameters of the EX2000 system in Dangjin T/P #4 were designed according to the suggested procedure and verified using the three analyses.

Auto-Tuning Of Reference Model Based PID Controller Using Immune Algorithm

  • Kim, Dong-Hwa
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.102.5-102 / 2002
  • In this paper, an auto-tuning scheme for a PID controller based on a reference model is studied using an immune algorithm for a process. Many sophisticated tuning algorithms have been tried in order to improve PID controller performance under difficult conditions. In actual plants, however, controllers are tuned manually through a trial-and-error procedure, and the derivative action is often switched off; tuning is therefore difficult. Simulation results with immune-based tuning reveal that the approach suggested in this paper is effective in searching for optimal or near-optimal process control.

Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer (Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망)

  • Kwang-Yeob Lee; Ji-Won Lee; Tae-Ryong Park
    • Journal of IKEEE / v.27 no.3 / pp.313-318 / 2023
  • This paper proposes a neural network that applies fine-tuning to improve the performance of video classification based on the Vision Transformer. Recently, the need for real-time video analysis based on deep learning has emerged. Due to the characteristics of existing CNN models used for image classification, it is difficult to analyze the association between consecutive frames. We seek the optimal model by comparing and analyzing the Vision Transformer and Non-local neural network models, both based on the attention mechanism. In addition, we propose an optimal fine-tuned neural network model by applying various fine-tuning methods as a transfer learning approach. In the experiments, we trained the model on the UCF101 dataset and then verified its performance by applying transfer learning to the UTA-RLDD dataset.
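The contrast drawn above (CNNs struggle to relate consecutive frames; attention-based models do not) comes down to scaled dot-product attention, which both the Vision Transformer and Non-local blocks rely on: every frame attends to every other frame in one step. A minimal sketch over made-up frame embeddings, with all sizes and weights purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, d = 6, 16

# Per-frame embeddings, e.g. produced by a frozen image backbone.
frames = rng.normal(size=(n_frames, d))
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))

Q, K, V = frames @ Wq, frames @ Wk, frames @ Wv
scores = Q @ K.T / np.sqrt(d)                 # frame-to-frame similarity
A = np.exp(scores - scores.max(1, keepdims=True))
A /= A.sum(1, keepdims=True)                  # softmax: each frame attends to all frames
out = A @ V                                   # temporally mixed frame representations
print(A.shape, out.shape)
```

A convolution with a fixed kernel only mixes neighboring frames; the attention matrix `A` lets frame 0 and frame 5 interact directly, which is the property fine-tuning exploits for video classification.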

A Study on the Thermal Behaviors of Disk Brake and Pad by Friction Heat (디스크 브레이크와 패드의 마찰열에 의한 열적거동에 관한 연구)

  • Han, Seung-chul
    • Journal of the Korea Convergence Society / v.10 no.12 / pp.287-292 / 2019
  • This paper analyzes, through FEM analysis, the thermal behavior of genuine discs used in automobiles and of discs from tuning products. The genuine disc and the tuning discs Model-1, Model-2, and Model-3 were modeled, and the disc rotation speed was set to 1000 rpm. When the brake is operated, the thermal behavior of the disc surface was analyzed: the operating temperature caused by disc-pad contact, the friction surface temperature after the disc stops, and the thermal deformation. While the brake was applied (0-4.5 seconds), the tuning disc was 34℃ hotter than the genuine disc; after the disc stopped (40.5 seconds), the tuning disc was 18℃ cooler, and thermal deformation of the tuning disc reached 0.3 mm. Although the thermal behavior of the genuine and tuning discs helps reduce the fading phenomenon, no significant change in thermal behavior was observed from the hole machining and surface machining of the tuning disc.

A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon; Lee, Yeon Ji; Lee, Dong Hee
    • Journal of Information Technology Services / v.19 no.5 / pp.83-91 / 2020
  • Language models such as BERT have become an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and broad layers using an attention mechanism and require a huge amount of training data. Hence, it has become standard practice to fine-tune large pre-trained language models trained by Google or other companies that can afford the resources and cost. There are various techniques for fine-tuning language models, and this paper examines three: data augmentation, hyperparameter tuning, and partly reconstructing the neural network. For data augmentation, we use no-answer augmentation and a back-translation method. Useful combinations of hyperparameters are found by conducting a number of experiments. Finally, we add GRU and LSTM networks on top of the pre-trained BERT model to boost performance. By fine-tuning the pre-trained Korean language model with the methods above, we push the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point the way for further work.
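Of the three techniques, the network-reconstruction step (stacking a recurrent layer on the frozen encoder output before the span-prediction head) can be sketched as follows; the dimensions, the random stand-in for BERT hidden states, and the single GRU cell are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_hid, seq_len = 16, 8, 10

# Frozen encoder output stands in for pre-trained Korean BERT hidden states.
encoder_out = rng.normal(size=(seq_len, d_model))

# Minimal GRU cell; in the paper a recurrent layer like this is stacked
# on BERT before the span head and trained during fine-tuning.
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(d_model, d_hid)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(scale=0.1, size=(d_hid, d_hid)) for _ in range(3))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru(seq):
    h, states = np.zeros(d_hid), []
    for x in seq:
        z = sigmoid(x @ Wz + h @ Uz)              # update gate
        r = sigmoid(x @ Wr + h @ Ur)              # reset gate
        h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
        h = (1 - z) * h + z * h_tilde
        states.append(h)
    return np.stack(states)

# Span-prediction head: per-token start/end logits, as in extractive QA.
W_span = rng.normal(scale=0.1, size=(d_hid, 2))
logits = gru(encoder_out) @ W_span
start, end = logits[:, 0].argmax(), logits[:, 1].argmax()
print(logits.shape, start, end)
```

The recurrent state gives the span head a left-to-right summary of the passage on top of BERT's contextual embeddings, which is the intuition behind adding GRU/LSTM layers here.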

Tuning the Architecture of Neural Networks for Multi-Class Classification (다집단 분류 인공신경망 모형의 아키텍쳐 튜닝)

  • Jeong, Chulwoo; Min, Jae H.
    • Journal of the Korean Operations Research and Management Science Society / v.38 no.1 / pp.139-152 / 2013
  • The purpose of this study is to establish the validity of tuning the architecture of neural network models for multi-class classification. A neural network model for multi-class classification is basically constructed by building a series of neural network models for binary classification. When building a neural network model, we must set the values of parameters such as the number of hidden nodes and the weight decay parameter in advance, and these deserve special attention because model performance can differ considerably depending on their values. For better performance, it is necessary to tune the parameters every time a neural network model is built. Nonetheless, previous studies have not mentioned the necessity of this tuning process or proved its validity. In this study, we claim that the parameters should be tuned every time a neural network model for multi-class classification is built. Through an empirical analysis using wine data, we show that the performance of the model with tuned parameters is superior to that of untuned models.
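The tuning process argued for above amounts to a small search loop over the two parameters named in the abstract: number of hidden nodes and weight decay, scored on held-out data. A dependency-free sketch, using a random-hidden-layer network (an extreme-learning-machine stand-in for backprop, so the weight decay becomes a closed-form ridge penalty) and synthetic data rather than the paper's wine dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-class data stands in for the paper's wine dataset.
centers = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([c + rng.normal(size=(60, 2)) for c in centers])
y = np.repeat(np.arange(3), 60)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
Xtr, Xva, ytr, yva = X[:120], X[120:], y[:120], y[120:]
Y = np.eye(3)[ytr]                         # one-hot targets

def fit_eval(n_hidden, decay):
    """One-hidden-layer net: random hidden layer + ridge-regressed output,
    so 'weight decay' is the ridge penalty and training is closed-form."""
    W = rng.normal(size=(2, n_hidden))
    H = np.tanh(Xtr @ W)
    B = np.linalg.solve(H.T @ H + decay * np.eye(n_hidden), H.T @ Y)
    pred = np.tanh(Xva @ W) @ B
    return np.mean(pred.argmax(1) == yva)   # validation accuracy

# Tune the two parameters the paper highlights: hidden nodes and weight decay.
grid = [(h, d) for h in (2, 8, 32) for d in (1e-3, 1e-1, 1.0)]
scores = {g: fit_eval(*g) for g in grid}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In a one-vs-rest multi-class setup, this loop would run once per binary sub-model, which is why skipping it compounds across the whole classifier.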

Self-Organizing Fuzzy Modeling Based on Hyperplane-Shaped Clusters (다차원 평면 클러스터를 이용한 자기 구성 퍼지 모델링)

  • Koh, Taek-Beom
    • Journal of Institute of Control, Robotics and Systems / v.7 no.12 / pp.985-992 / 2001
  • This paper proposes a self-organizing fuzzy modeling (SOFUM) method that can create a new hyperplane-shaped cluster and iteratively adjust the parameters of the fuzzy model. The suggested algorithm is composed of four steps: coarse tuning, fine tuning, cluster creation, and optimization of learning rates. In the coarse tuning, fuzzy C-regression model (FCRM) clustering and the weighted recursive least squares (WRLS) algorithm are used; in the fine tuning, a gradient descent algorithm is used to adjust the parameters of the fuzzy model precisely. In cluster creation, a new hyperplane-shaped cluster is created by applying multiple regression to input/output data with relatively large fuzzy entropy, based on the parameter tuning of the fuzzy model. The learning rates are optimized using a meiosis-genetic algorithm. To check the effectiveness of the suggested algorithm, two examples are examined and the performance of the identified fuzzy model is demonstrated via computer simulation.
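The WRLS building block of the coarse-tuning step can be sketched in a few lines: each new sample recursively updates one hyperplane's consequent parameters, with the update weighted here by a stand-in fuzzy membership. The dimensions, noise level, and membership weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])       # hyperplane consequent to recover

theta = np.zeros(3)                           # running parameter estimate
P = np.eye(3) * 1e4                           # large P0 = uninformative prior

for _ in range(300):
    x = np.append(rng.normal(size=2), 1.0)    # regressor with bias term
    y = x @ theta_true + rng.normal(scale=0.05)
    w = rng.uniform(0.2, 1.0)                 # fuzzy membership acts as the weight
    k = w * P @ x / (1 + w * x @ P @ x)       # weighted RLS gain
    theta = theta + k * (y - x @ theta)       # correct by weighted prediction error
    P = P - np.outer(k, x @ P)                # shrink the covariance
print(np.round(theta, 2))
```

Samples with low membership in the cluster barely move `theta`, which is how one hyperplane can be fitted per fuzzy rule from shared input/output data.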
