• Title/Summary/Keyword: Fine Tuning

Deep compression of convolutional neural networks with low-rank approximation

  • Astrid, Marcella; Lee, Seung-Ik
    • ETRI Journal / v.40 no.4 / pp.421-434 / 2018
  • The application of deep neural networks (DNNs) to connect the world with cyber-physical systems (CPSs) has attracted much attention. However, DNNs require a large amount of memory and computation, which hinders their use in the relatively low-end smart devices that are widely used in CPSs. In this paper, we aim to determine whether DNNs can be efficiently deployed and operated on low-end smart devices. To do this, we develop a method that reduces the memory requirement of DNNs and increases their inference speed while keeping the performance (for example, accuracy) close to the original level. The parameters of DNNs are decomposed using a hybrid of canonical polyadic and singular value decompositions, approximated using a tensor power method, and fine-tuned by performing iterative one-shot hybrid fine-tuning to recover from the decreased accuracy. We evaluate our method on frequently used networks and also present results from extensive experiments on the effects of several fine-tuning methods, the importance of iterative fine-tuning, and decomposition techniques. We demonstrate the effectiveness of the proposed method by deploying the compressed networks on smartphones. (A toy low-rank factorization sketch follows this entry.)
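
The core compression step, low-rank factorization of layer weights, can be illustrated with plain truncated SVD. The sketch below (PyTorch, with illustrative layer shapes and rank) is deliberately much simpler than the paper's hybrid CP-SVD scheme with the tensor power method; in the paper, the accuracy lost here would be recovered by the iterative fine-tuning the abstract describes.

```python
# Toy sketch: compress one conv layer by truncated SVD of its flattened kernel.
# Shapes and rank are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)   # layer to compress
rank = 32                                             # assumed target rank

# Flatten the 4-D kernel (out, in, kh, kw) into a 2-D matrix and factorize it.
W = conv.weight.data                                  # (128, 64, 3, 3)
W2d = W.reshape(W.shape[0], -1)                       # (128, 64*3*3)
U, S, Vh = torch.linalg.svd(W2d, full_matrices=False)

# Keep the top-`rank` singular triplets: W2d ~= (U*S)[:, :rank] @ Vh[:rank, :]
A = U[:, :rank] * S[:rank]                            # (128, rank)
B = Vh[:rank, :]                                      # (rank, 64*3*3)

# Re-express the layer as a rank-limited pair: a 3x3 conv into `rank`
# channels followed by a 1x1 conv back up to 128 channels.
first = nn.Conv2d(64, rank, kernel_size=3, padding=1, bias=False)
second = nn.Conv2d(rank, 128, kernel_size=1, bias=True)
first.weight.data = B.reshape(rank, 64, 3, 3)
second.weight.data = A.reshape(128, rank, 1, 1)
second.bias.data = conv.bias.data
compressed = nn.Sequential(first, second)

# Sanity check: outputs should already be close before any fine-tuning.
x = torch.randn(1, 64, 16, 16)
print((conv(x) - compressed(x)).abs().max())
```

The parameter count drops from 128·64·9 to rank·(64·9 + 128); fine-tuning the two small layers then recovers most of the truncation error.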

A 1 GHz Tuning Range VCO with a Sigma-Delta Modulator for a UWB Frequency Synthesizer

  • Nam, Chul; Park, An-Su; Park, Joon-Sung; Pu, Young-Gun; Hur, Jeong; Lee, Kang-Yoon
    • Journal of the Institute of Electronics Engineers of Korea SD / v.47 no.8 / pp.64-72 / 2010
  • This paper presents a wide-range VCO with a fine coarse-tuning step, using a sigma-delta modulation technique, for a UWB frequency synthesizer. The proposed coarse-tuning scheme provides a fine effective frequency resolution without any degradation of the phase-noise performance. With three-step coarse tuning, the VCO achieves a wide tuning range and a fine tuning step simultaneously. The frequency synthesizer with the VCO was implemented in 0.13 μm CMOS technology. The tuning range of the VCO is 5.8 GHz to 6.8 GHz, with an effective frequency resolution of 3.9 kHz. It achieves a measured phase noise of -108 dBc/Hz at a 1 MHz offset and a tuning range of 16.8% while consuming 5.9 mW. The figure-of-merit including the tuning range is -181.5 dBc/Hz (a commonly used definition of this figure is sketched below).
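
For reference, a commonly used definition of the tuning-range figure-of-merit is the standard VCO FoM extended with a tuning-range term; this is the general textbook form, not necessarily the authors' exact expression:

$$ \mathrm{FoM}_T = L(\Delta f) - 20\log_{10}\!\left(\frac{f_0}{\Delta f}\cdot\frac{TR}{10\%}\right) + 10\log_{10}\!\left(\frac{P_{DC}}{1\,\mathrm{mW}}\right) $$

Plugging in the reported numbers ($L = -108$ dBc/Hz at $\Delta f = 1$ MHz, $f_0 \approx 6.8$ GHz, $TR = 16.8\%$, $P_{DC} = 5.9$ mW) gives about $-181.5$ dBc/Hz, consistent with the quoted value.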

A Small-Area Solenoid Inductor Based Digitally Controlled Oscillator

  • Park, Hyung-Gu; Kim, SoYoung; Lee, Kang-Yoon
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.3 / pp.198-206 / 2013
  • This paper presents a wide-band, fine-resolution digitally controlled oscillator (DCO) with an on-chip 3-D solenoid inductor in a 0.13 μm digital CMOS process. The on-chip solenoid inductor is constructed vertically using metal and via layers, with horizontal scalability. Compared to a spiral inductor, it occupies a small area owing to its 3-D structure. To control the frequency of the DCO, an active capacitor and an active inductor are tuned digitally. To cover the wide tuning range, a three-step coarse-tuning scheme is used. In addition, the DCO gain is calibrated digitally to compensate for gain variations. The DCO with the solenoid inductor is fabricated in the 0.13 μm process, and the die area of the solenoid inductor is 0.013 mm². The DCO tuning range is about 54% at 4.1 GHz, and the power consumption is 6.6 mW from a 1.2 V supply voltage. The effective frequency resolution is 0.14 kHz. The measured phase noise of the DCO output at 5.195 GHz is -110.61 dBc/Hz at a 1 MHz offset. (A rough consistency check on these numbers follows this entry.)
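
As a rough consistency check (our arithmetic, not the paper's): a 54% tuning range centered near 4.1 GHz spans about $0.54 \times 4.1\,\mathrm{GHz} \approx 2.2\,\mathrm{GHz}$, so a 0.14 kHz effective resolution corresponds to roughly $2.2\times10^{9}/140 \approx 1.6\times10^{7}$ distinguishable frequency steps, on the order of 24 bits of effective tuning control. This is why a three-step coarse-tuning scheme plus digital gain calibration is needed rather than a single flat tuning bank.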

A Study on the Construction and the Performance Evaluation of a Littman-Type Tunable Diode Laser System

  • 조재헌; 박준구; 백운식
    • Korean Journal of Optics and Photonics / v.12 no.4 / pp.257-262 / 2001
  • A Littman-type tunable external-cavity diode laser system was developed. The laser output, which is the 0th-order diffracted beam from a diffraction grating in the external cavity, is a single longitudinal mode; its FWHM was measured to be less than 9 MHz. With a diode driving current of 140 mA and an operating temperature of 25°C, a coarse tuning range of 3.475 nm was measured. In a fine-tuning experiment, in which the external mirror was rotated by a PZT driven by a sawtooth wave, a tuning range of 0.042 nm was measured. (The underlying grating relation is sketched below.)
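
For context, in the standard Littman-Metcalf geometry (textbook theory, not quoted from this paper) the selected wavelength obeys the double-pass grating equation

$$ m\lambda = d\,(\sin\theta_i + \sin\theta_m), $$

where $d$ is the groove spacing, $\theta_i$ the fixed incidence angle on the grating, and $\theta_m$ the retro-diffraction angle set by the tuning mirror. Coarse tuning rotates the mirror mechanically; the PZT mentioned above applies the small $\Delta\theta_m$ used for fine tuning.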

Enhancing LoRA Fine-tuning Performance Using Curriculum Learning

  • Daegeon Kim; Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.43-54 / 2024
  • Recently, there has been a great deal of research on utilizing language models, and large language models have achieved innovative results in various tasks. However, their practical application is limited by the constrained resources and costs required to run them, so methods for using models effectively within a given resource budget have attracted attention. Curriculum learning, a methodology that categorizes training data by difficulty and learns from it sequentially, is one such method, but existing ways of measuring difficulty are complex or not universal. Therefore, in this study, we propose a curriculum-learning methodology based on data heterogeneity that measures the difficulty of data using reliable prior information and is easy to apply across various tasks. To evaluate the proposed methodology, experiments were conducted using 5,000 specialized documents in the field of information and communication technology and 4,917 documents in the field of healthcare. The results confirm that the proposed methodology outperforms traditional fine-tuning in classification accuracy under both LoRA fine-tuning and full fine-tuning. (A minimal sketch of combining curriculum ordering with LoRA follows this entry.)
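
The recipe combines two independent ideas, an easy-to-hard ordering of the data and parameter-efficient LoRA training. Below is a minimal, hypothetical sketch of how they compose using Hugging Face `transformers` and `peft`; the difficulty score here is a toy proxy (token length), not the paper's heterogeneity-based measure, and the model name and LoRA hyperparameters are illustrative assumptions.

```python
# Sketch: curriculum-ordered LoRA fine-tuning of a small text classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # illustrative model
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Wrap the model with LoRA adapters; only the adapter weights are trained.
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                  lora_dropout=0.05, target_modules=["query", "value"])
model = get_peft_model(model, lora)

texts = ["great phone",
         "the battery life is disappointing and support was unhelpful",
         "works fine overall"]
labels = [1, 0, 1]

# Curriculum: order samples from "easy" (short) to "hard" (long).
order = sorted(range(len(texts)), key=lambda i: len(tok.tokenize(texts[i])))

opt = torch.optim.AdamW(model.parameters(), lr=2e-4)
model.train()
for i in order:                        # one pass, easiest samples first
    batch = tok(texts[i], return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([labels[i]])).loss
    loss.backward(); opt.step(); opt.zero_grad()
```

In practice the ordering would come from the paper's prior-information difficulty measure, and the loop would run over shuffled *within-difficulty* batches rather than single samples.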

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters

  • Jae Heon Kim; Hui Do Jung; Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.127-135 / 2023
  • This paper explores the application of FinBERT, a BERT-based model pre-trained on financial-domain text, to sentiment analysis in the financial domain, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using FinBERT effectively for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed fine-tuning approach, emphasizing how different datasets and hyperparameters affect sentiment-analysis performance. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment-labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination, a learning rate of 5e-5 and a batch size of 64, performs consistently well across all datasets. Furthermore, because FinBERT performs significantly better on our general-domain Twitter data than on our general-domain news data, we question whether the model was further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to FinBERT and provide guidelines for selecting additional training datasets and hyperparameters when fine-tuning financial sentiment-analysis models. (A sketch using the reported optimum follows this entry.)
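
As a concrete illustration of the reported optimum (learning rate 5e-5, batch size 64), here is a hedged sketch using the `transformers` Trainer. The checkpoint `ProsusAI/finbert` is one public FinBERT and may differ from the one used in the paper, and the two-example dataset is a placeholder.

```python
# Sketch: fine-tuning a FinBERT checkpoint for 3-class financial sentiment
# with the hyperparameters the paper found to work consistently well.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "ProsusAI/finbert"            # assumed public FinBERT checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id,
                                                           num_labels=3)

class SentimentSet(Dataset):             # toy in-memory dataset
    def __init__(self, texts, labels):
        self.enc = tok(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train = SentimentSet(["Shares rallied after the earnings beat.",
                      "The firm warned of weaker margins."], [2, 0])

args = TrainingArguments(output_dir="finbert-sentiment",
                         learning_rate=5e-5,             # reported optimum
                         per_device_train_batch_size=64, # reported optimum
                         num_train_epochs=3)
Trainer(model=model, args=args, train_dataset=train).train()
```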

Low-Power Wide-Tuning Range Differential LC-tuned VCO Design in Standard CMOS

  • Kim, Jong-Min; Woong Jung
    • Proceedings of the Korea Electromagnetic Engineering Society Conference / 2002.11a / pp.21-24 / 2002
  • This paper presents a fully integrated, wide-tuning-range differential CMOS voltage-controlled oscillator tuned by pMOS varactors, using a novel tuning scheme. Both coarse digital tuning and fine analog tuning are achieved with the pMOS varactors. The VCO was implemented in a 0.18-μm standard CMOS process and tunes from 1.8 GHz to 2.55 GHz through a 2-bit digital input and an analog input. At a 1.8 V supply voltage and a total power dissipation of 8 mW, the VCO achieves a phase noise of -126 dBc/Hz at a 3 MHz frequency offset. (The tank relation implied by this tuning range is sketched below.)
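
For context, the standard LC-tank relation (general theory, not taken from the paper) links the reported tuning range to the required varactor capacitance swing:

$$ f_{osc} = \frac{1}{2\pi\sqrt{LC}} \quad\Rightarrow\quad \frac{C_{max}}{C_{min}} = \left(\frac{f_{max}}{f_{min}}\right)^{2} = \left(\frac{2.55}{1.8}\right)^{2} \approx 2.0, $$

so the combined digital and analog pMOS-varactor banks must swing the tank capacitance by roughly a factor of two.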

VCO Design using NAND Gate for Low Power Application

  • Kumar, Manoj
    • JSTS: Journal of Semiconductor Technology and Science / v.16 no.5 / pp.650-656 / 2016
  • A voltage-controlled oscillator (VCO) is a widely used circuit component in high-performance microprocessors and modern communication systems, where it serves as a frequency source. In the present work, VCO designs using different combinations of NAND gates with three transistors and CMOS inverters are reported. Three-, five-, and seven-stage ring-VCO circuits are designed. Coarse and fine tuning are performed using two separate supply sources. With coarse tuning, the frequency varies from 3.31 GHz to 5.60 GHz in the three-stage, from 1.77 GHz to 3.26 GHz in the five-stage, and from 1.27 GHz to 2.32 GHz in the seven-stage VCO. With fine tuning, the frequency varies from 3.70 GHz to 3.94 GHz, from 2.04 GHz to 2.18 GHz, and from 1.43 GHz to 1.58 GHz, respectively. Power-consumption and phase-noise results for the VCO circuits are also reported; compared with previously reported circuits, the present approach shows significant improvement. (The stage-count scaling behind these ranges is sketched below.)
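
For context, the standard ring-oscillator relation (general theory, not from the paper) explains why the reported ranges fall with stage count: with $N$ identical stages of per-stage delay $t_d$,

$$ f_{osc} = \frac{1}{2\,N\,t_d}, $$

so frequency scales roughly as $1/N$. Assuming comparable per-stage delay across the designs, scaling the three-stage maximum of 5.60 GHz by $3/5$ and $3/7$ predicts about 3.36 GHz and 2.40 GHz, close to the reported five-stage (3.26 GHz) and seven-stage (2.32 GHz) maxima.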

A Multi-task Self-attention Model Using Pre-trained Language Models on Universal Dependency Annotations

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.39-46 / 2022
  • In this paper, we propose a multi-task model that can simultaneously predict general-purpose tasks such as part-of-speech tagging, lemmatization, and dependency parsing using the UD Korean Kaist v2.3 corpus. The proposed model applies the self-attention technique of the BERT model and a graph-based biaffine attention technique, fine-tuning multilingual BERT and two Korean-specific BERTs, KR-BERT and KoBERT. The performance of the proposed model is compared and analyzed across the multilingual BERT and the two Korean-specific BERT language models. (The standard biaffine scoring form is sketched below.)
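
For reference, graph-based biaffine attention for dependency parsing typically scores head-dependent pairs as in Dozat and Manning's deep biaffine parser; this is the standard formulation, assumed here rather than quoted from the paper:

$$ s_{ij} = \mathbf{h}_i^{(dep)\,\top}\,\mathbf{U}\,\mathbf{h}_j^{(head)} + \mathbf{u}^{\top}\mathbf{h}_j^{(head)} + b, $$

where $s_{ij}$ scores token $j$ as the syntactic head of token $i$, and $\mathbf{h}^{(dep)}$, $\mathbf{h}^{(head)}$ are task-specific projections of the BERT encoder states.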

Generating Sponsored Blog Texts through Fine-Tuning of Korean LLMs

  • Bo Kyeong Kim; Jae Yeon Byun; Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.1-12 / 2024
  • In this paper, we fine-tuned KoAlpaca, a large-scale Korean language model, and implemented a blog-text generation system based on it. Blogs on social-media platforms are widely used as a marketing tool for businesses. We constructed training data of positive reviews through sentiment analysis and refinement of collected sponsored blog texts, and we applied QLoRA for lightweight training of KoAlpaca. QLoRA is a fine-tuning approach that significantly reduces the memory required for training; in our experiments with a 12.8B-parameter model, it decreased memory usage by up to 58.8% compared to LoRA. To evaluate the generative performance of the fine-tuned model, texts generated from 100 inputs not included in the training data contained, on average, more than twice as many words as those of the pre-trained model, and texts with positive sentiment likewise appeared more than twice as often. In a survey conducted for qualitative evaluation, respondents judged the fine-tuned model's outputs to be more relevant to the given topics on average 77.5% of the time. This demonstrates that the positive-review generation model presented here can make content creation more time-efficient and deliver consistent marketing effects. However, to reduce the generation of content that strays from the positive-review category because of the pre-trained model's biases, we plan to continue fine-tuning with augmented training data. (A minimal QLoRA setup is sketched after this entry.)
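
QLoRA keeps the base model frozen in 4-bit precision and trains only small LoRA adapters on top, which is where the reported memory saving over plain LoRA comes from. Below is a minimal, hypothetical setup with `transformers`, `bitsandbytes`, and `peft`; the checkpoint name `beomi/KoAlpaca-Polyglot-12.8B` and the attention module name are assumptions for a 12.8B Polyglot-based KoAlpaca, not confirmed by the paper.

```python
# Sketch: QLoRA = NF4-quantized frozen base model + trainable LoRA adapters.
# Requires a GPU and the bitsandbytes package.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig)
from peft import (LoraConfig, TaskType, get_peft_model,
                  prepare_model_for_kbit_training)

model_id = "beomi/KoAlpaca-Polyglot-12.8B"    # assumed 12.8B checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",       # NF4 quantization
                         bnb_4bit_use_double_quant=True,  # double quantization
                         bnb_4bit_compute_dtype=torch.bfloat16)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)  # enables grad checkpointing etc.

lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32,
                  lora_dropout=0.05,
                  target_modules=["query_key_value"])     # GPT-NeoX-style attn
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the adapters are trainable
```

From here, training proceeds as ordinary causal-LM fine-tuning on the curated positive-review texts; only the adapter weights (a fraction of a percent of the 12.8B parameters) receive gradients.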