• Title/Summary/Keyword: fine-tuning


A Small-Area Solenoid Inductor Based Digitally Controlled Oscillator

  • Park, Hyung-Gu;Kim, SoYoung;Lee, Kang-Yoon
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.3 / pp.198-206 / 2013
  • This paper presents a wide-band, fine-resolution digitally controlled oscillator (DCO) with an on-chip 3-D solenoid inductor in a 0.13 µm digital CMOS process. The on-chip solenoid inductor is constructed vertically from metal and via layers and is horizontally scalable. Compared to a spiral inductor, its 3-D structure lets it occupy a small area. To control the frequency of the DCO, an active capacitor and an active inductor are tuned digitally, and a three-step coarse tuning scheme covers the wide tuning range. In addition, the DCO gain is calibrated digitally to compensate for gain variations. The DCO with the solenoid inductor is fabricated in the 0.13 µm process, and the die area of the solenoid inductor is 0.013 mm². The DCO tuning range is about 54% at 4.1 GHz, and the power consumption is 6.6 mW from a 1.2 V supply. The effective frequency resolution is 0.14 kHz, and the measured phase noise of the DCO output at 5.195 GHz is -110.61 dBc/Hz at 1 MHz offset.
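
As a quick sanity check on these figures, the sketch below works out the frequency span implied by a 54% tuning range around 4.1 GHz and the step count implied by the 0.14 kHz resolution. Treating the tuning range as a symmetric fraction of the center frequency is our assumption, not a detail stated in the abstract:

```python
# Back-of-envelope check of the reported DCO figures. Assumes the
# tuning range is quoted as a symmetric fraction of the 4.1 GHz center.
f_center = 4.1e9        # Hz, reported center frequency
tuning_range = 0.54     # 54 % reported tuning range
resolution = 0.14e3     # Hz, reported effective frequency resolution

f_min = f_center * (1 - tuning_range / 2)   # ~3.0 GHz
f_max = f_center * (1 + tuning_range / 2)   # ~5.2 GHz (the 5.195 GHz
                                            # phase-noise point fits here)
steps = (f_max - f_min) / resolution        # distinct frequency settings

print(f"span: {f_min/1e9:.2f}-{f_max/1e9:.2f} GHz, ~{steps:.1e} steps")
```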

A study on the construction and the performance evaluation of Littman type tunable diode laser system (Littman형 파장가변 다이오드 레이저 시스템의 설계·제작 및 성능평가)

  • 조재헌;박준구;백운식
    • Korean Journal of Optics and Photonics / v.12 no.4 / pp.257-262 / 2001
  • A Littman type tunable external-cavity diode laser system was developed. The laser output, which is the 0th-order diffracted beam from the diffraction grating in the external cavity, is a single longitudinal mode; its FWHM was measured to be less than 9 MHz. With a diode driving current of 140 mA and an operating temperature of 25 °C, a coarse tuning range of 3.475 nm was measured. A fine tuning experiment in which the external mirror was rotated by a PZT driven by a sawtooth wave was performed, and a tuning range of 0.042 nm was measured.
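
For readers unfamiliar with the Littman geometry, the sketch below evaluates the standard first-order grating equation that determines which wavelength the tuning mirror feeds back into the diode. The groove density and angles are illustrative values, not taken from the paper:

```python
import math

def littman_wavelength(d_um, theta_i_deg, theta_m_deg):
    """First-order grating equation, lambda = d*(sin(theta_i) + sin(theta_d)),
    with the tuning mirror selecting the fed-back diffraction angle theta_d."""
    return d_um * (math.sin(math.radians(theta_i_deg)) +
                   math.sin(math.radians(theta_m_deg)))

# Illustrative numbers only: 1800 line/mm grating, 85 deg grazing incidence.
d = 1.0 / 1.8                          # groove spacing in micrometres
for theta_m in (44.9, 45.0, 45.1):     # small mirror rotations
    lam_nm = littman_wavelength(d, 85.0, theta_m) * 1e3
    print(f"mirror at {theta_m:.1f} deg -> lambda = {lam_nm:.3f} nm")
```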


FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim;Hui Do Jung;Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.127-135 / 2023
  • This research paper explores the application of FinBERT, a BERT-based model pre-trained on the financial domain, to sentiment analysis of financial text, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using the FinBERT model effectively for accurate sentiment analysis by evaluating various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning FinBERT, reporting the performance of various datasets and hyperparameters on sentiment analysis tasks. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling. Our results show that the fine-tuned FinBERT model performs well across a range of datasets, and that the optimal combination, a learning rate of 5e-5 and a batch size of 64, performs consistently well on all of them. Furthermore, because FinBERT improves significantly more on our general-domain Twitter data than on our general-domain news data, we question whether the model should be further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to FinBERT and provide guidelines for selecting additional training datasets and hyperparameters when fine-tuning financial sentiment analysis models.
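
A minimal sketch of the reported best recipe (learning rate 5e-5, batch size 64) using the Hugging Face Trainer. The FinBERT checkpoint name, the dataset file, and its column names are assumptions for illustration, not details given by the paper:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

ckpt = "ProsusAI/finbert"   # one public FinBERT; the paper's may differ
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=3)

# Hypothetical CSV with "text" and "label" columns.
raw = load_dataset("csv", data_files={"train": "train.csv"})["train"]

def tokenize(batch):
    return tok(batch["text"], truncation=True, padding="max_length",
               max_length=128)

train_ds = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finbert-sentiment",
    learning_rate=5e-5,               # reported best learning rate
    per_device_train_batch_size=64,   # reported best batch size
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```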

Low-Power Wide-Tuning Range Differential LC-tuned VCO Design in Standard CMOS

  • Kim, Jong-Min;Woong Jung
    • Proceedings of the Korea Electromagnetic Engineering Society Conference / 2002.11a / pp.21-24 / 2002
  • This paper presents a fully integrated, wide-tuning-range differential CMOS voltage-controlled oscillator (VCO) tuned by pMOS varactors, using a novel tuning scheme: both coarse digital tuning and fine analog tuning are achieved with the pMOS varactors. The VCO was implemented in a 0.18-µm standard CMOS process and tunes from 1.8 GHz to 2.55 GHz through a 2-bit digital input and an analog input. At a 1.8 V supply voltage and a total power dissipation of 8 mW, the VCO features a phase noise of -126 dBc/Hz at 3 MHz frequency offset.
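
The coarse/fine split can be pictured with a toy LC-tank model: a 2-bit switched capacitance for the coarse steps plus a small continuously variable varactor term for fine tuning. All element values below are illustrative, not taken from the paper:

```python
import math

def lc_freq(L, C):
    """Ideal LC-tank oscillation frequency, f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 3e-9           # tank inductance (H), illustrative
C_fix = 0.9e-12    # fixed tank capacitance (F)
C_step = 0.35e-12  # capacitance added per coarse (2-bit) code step
C_fine = 0.10e-12  # varactor contribution at mid fine-tuning voltage

for code in range(4):                     # 2-bit coarse code: 00..11
    f = lc_freq(L, C_fix + code * C_step + C_fine)
    print(f"coarse code {code:02b}: f = {f/1e9:.2f} GHz")
```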


VCO Design using NAND Gate for Low Power Application

  • Kumar, Manoj
    • JSTS: Journal of Semiconductor Technology and Science / v.16 no.5 / pp.650-656 / 2016
  • A voltage-controlled oscillator (VCO) is a widely used circuit component in high-performance microprocessors and modern communication systems, where it serves as a frequency source. In the present work, VCO designs using different combinations of three-transistor NAND gates and CMOS inverters are reported. Three-, five-, and seven-stage ring VCO circuits are designed, with coarse and fine tuning performed through two separate supply sources. With coarse tuning, the frequency varies from 3.31 GHz to 5.60 GHz in the three-stage VCO, 1.77 GHz to 3.26 GHz in the five-stage VCO, and 1.27 GHz to 2.32 GHz in the seven-stage VCO. With fine tuning, the frequency varies from 3.70 GHz to 3.94 GHz, 2.04 GHz to 2.18 GHz, and 1.43 GHz to 1.58 GHz, respectively. Power consumption and phase noise results for the VCO circuits are also reported, and comparison with previously reported circuits shows significant improvement.
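
The stage counts relate to frequency through the standard ring-oscillator relation f = 1/(2·N·t_d), where N is the number of stages and t_d the per-stage delay. The sketch below back-solves the per-stage delay from the reported coarse-tuning maxima; reading the three designs through this textbook formula is our own illustration, not an analysis from the paper:

```python
# f = 1 / (2 * N * t_d): N = number of stages, t_d = per-stage delay.
max_freq = {3: 5.60e9, 5: 3.26e9, 7: 2.32e9}   # reported coarse maxima (Hz)

for n, f in max_freq.items():
    t_d = 1.0 / (2 * n * f)
    print(f"{n}-stage ring: implied per-stage delay = {t_d*1e12:.1f} ps")
# The three delays come out nearly equal (~30 ps), consistent with the
# same delay cell being reused across the three ring lengths.
```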

A Multi-task Self-attention Model Using Pre-trained Language Models on Universal Dependency Annotations

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.39-46 / 2022
  • In this paper, we propose a multi-task model that can simultaneously predict general-purpose tasks such as part-of-speech tagging, lemmatization, and dependency parsing on the UD Korean Kaist v2.3 corpus. The proposed model applies the self-attention technique of the BERT model and a graph-based biaffine attention technique, fine-tuning multilingual BERT and two Korean-specific BERTs, KR-BERT and KoBERT. The performance of the proposed model is compared and analyzed across the multilingual BERT and the two Korean-specific BERT language models.
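
A minimal sketch of the shared-encoder layout described above: one pre-trained BERT body feeding a token-level POS head and a graph-based biaffine head that scores dependency arcs. The lemmatization head is omitted, and the dimensions, names, and mBERT checkpoint are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskUDModel(nn.Module):
    """Shared BERT encoder with a POS-tagging head and a biaffine
    arc-scoring head; the lemmatization head is omitted for brevity."""
    def __init__(self, encoder_name="bert-base-multilingual-cased",
                 n_pos_tags=17, arc_dim=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.pos_head = nn.Linear(hidden, n_pos_tags)
        self.head_mlp = nn.Linear(hidden, arc_dim)   # token as head
        self.dep_mlp = nn.Linear(hidden, arc_dim)    # token as dependent
        self.biaffine = nn.Parameter(torch.randn(arc_dim, arc_dim) * 0.01)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids,
                         attention_mask=attention_mask).last_hidden_state
        pos_logits = self.pos_head(h)                # (B, T, n_pos_tags)
        heads, deps = self.head_mlp(h), self.dep_mlp(h)
        # arc_scores[b, i, j]: score of token j being the head of token i
        arc_scores = torch.einsum("bid,de,bje->bij",
                                  deps, self.biaffine, heads)
        return pos_logits, arc_scores
```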

Genetic Algorithm with the Local Fine-Tuning Mechanism (유전자 알고리즘을 위한 지역적 미세 조정 메카니즘)

  • 임영희
    • Korean Journal of Cognitive Science / v.4 no.2 / pp.181-200 / 1994
  • In the learning phase of a multilayer feedforward neural network trained with the backpropagation algorithm, there are problems such as local minima, learning paralysis, and slow learning speed. To overcome these problems, the genetic algorithm has been used in place of backpropagation as the learning method for multilayer feedforward neural networks. However, because the genetic algorithm has no mechanism for the fine-tuned local search that backpropagation provides, it takes longer for the genetic algorithm to converge to a global optimal solution. In this paper, we suggest a new GA-BP method that adds fine-tuned local search to the genetic algorithm: GA-BP uses gradient descent as one of the genetic algorithm's operators, alongside mutation and crossover. To show the efficiency of the developed method, we applied it to the 3-bit parity problem and analyzed the results.
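
A toy sketch of the GA-BP idea: a plain genetic algorithm whose operator set includes a gradient-descent "local fine-tuning" step alongside mutation, shown on a stand-in quadratic loss rather than a real network. All names, rates, and population sizes are illustrative:

```python
import random

def loss(w):                      # toy stand-in for the network error
    return sum((x - 0.5) ** 2 for x in w)

def gd_step(w, lr=0.1):           # the BP-like local fine-tuning operator
    return [x - lr * 2 * (x - 0.5) for x in w]   # analytic gradient

def mutate(w, scale=0.3):
    return [x + random.gauss(0, scale) for x in w]

pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for gen in range(50):
    pop.sort(key=loss)
    parents = pop[:10]                       # truncation selection
    children = [mutate(random.choice(parents)) for _ in range(10)]
    # gradient descent applied as just another genetic operator
    children = [gd_step(c) if random.random() < 0.5 else c for c in children]
    pop = parents + children

print("best loss:", loss(min(pop, key=loss)))
```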

Fuzzy neural network modeling using hyper elliptic gaussian membership functions (초타원 가우시안 소속함수를 사용한 퍼지신경망 모델링)

  • 권오국;주영훈;박진배
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1997.10a / pp.442-445 / 1997
  • We present a hybrid self-tuning method for fuzzy inference systems with hyper-elliptic Gaussian membership functions using a genetic algorithm (GA) and the back-propagation algorithm. The proposed self-tuning method has two phases: a coarse tuning process based on the GA and a fine tuning process based on back-propagation. Because the parameters obtained by the GA are only near-optimal solutions, the method applies the back-propagation algorithm, a neural-network learning algorithm, to finely tune the parameters the GA produces. We use the Box-Jenkins time series to evaluate the advantages and effectiveness of the proposed approach and compare it with the conventional method.
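
One plausible reading of a hyper-elliptic Gaussian membership function is sketched below: an axis-aligned multivariate Gaussian whose per-axis widths trace out an ellipsoid in the input space. The exact form used in the paper may differ; under this assumption, the GA would coarse-tune `center` and `widths`, and back-propagation would then fine-tune the same parameters by gradient:

```python
import math

def membership(x, center, widths):
    """mu(x) = exp(-sum(((x_i - c_i) / sigma_i)^2)) -- an assumed
    axis-aligned elliptic Gaussian, not the paper's exact definition."""
    return math.exp(-sum(((xi - ci) / si) ** 2
                         for xi, ci, si in zip(x, center, widths)))

# Coarse tuning (GA) searches over center/widths; fine tuning (BP)
# then nudges the same parameters along the error gradient.
print(membership([0.4, 0.6], center=[0.5, 0.5], widths=[0.2, 0.3]))
```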


On Designing A Fuzzy-Neural Network Control System Combined with Genetic Algorithm (유전알고리듬을 결합한 퍼지-신경망 제어 시스템 설계)

  • 김용호;김성현;전홍태;이홍기
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.8 / pp.1119-1126 / 1995
  • The construction of a rule-base for a nonlinear time-varying system becomes much more complicated because of model uncertainty and parameter variations. Furthermore, a fuzzy logic controller (FLC) has no ability to adjust its rule-base in response to sudden changes in the control environment. To cope with these problems, an auto-tuning method for the fuzzy rule-base is required. In this paper, a GA-based fuzzy-neural control system is proposed, combining fuzzy-neural control theory with the genetic algorithm (GA), which is known to be very effective for optimization problems. The tuning of the proposed system is performed in two processes: a coarse tuning process and a fine tuning/adaptive learning process. The effectiveness of the proposed control system is demonstrated by computer simulations of a two-degree-of-freedom robot manipulator.


Instruction Tuning for Controlled Text Generation in Korean Language Model (Instruction Tuning을 통한 한국어 언어 모델 문장 생성 제어)

  • Jinhee Jang;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.289-294 / 2023
  • Large language models achieve high performance in contextual understanding thanks to vast data and parameters, but controlling sentence generation for human alignment remains an active challenge. In this paper, we conduct controlled-generation experiments via instruction tuning. Using natural language processing tools, we automatically construct an instruction dataset containing single or multiple constraints, fine-tune the Korean language model Polyglot-Ko on it, and verify whether the model's generations satisfy the constraints. Experimental results show an average accuracy of 0.88 over the four constraint types, confirming that effective controlled sentence generation is possible.
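
A minimal sketch of the instruction-tuning setup: causal-LM fine-tuning of a Polyglot-Ko checkpoint on (instruction, output) pairs. The checkpoint size, data file, field names, and prompt template are assumptions for illustration, not details from the paper:

```python
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          TrainingArguments, Trainer,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset

ckpt = "EleutherAI/polyglot-ko-1.3b"    # one public Polyglot-Ko size
tok = AutoTokenizer.from_pretrained(ckpt)
tok.pad_token = tok.eos_token           # GPT-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(ckpt)

# Hypothetical JSON lines with "instruction" and "output" fields.
train = load_dataset("json", data_files="instructions.jsonl")["train"]

def to_ids(ex):
    text = f"### 지시문:\n{ex['instruction']}\n### 응답:\n{ex['output']}"
    return tok(text, truncation=True, max_length=512)

train = train.map(to_ids, remove_columns=train.column_names)
collator = DataCollatorForLanguageModeling(tok, mlm=False)  # labels = inputs

args = TrainingArguments(output_dir="polyglot-ko-instruct",
                         per_device_train_batch_size=4, num_train_epochs=3)
Trainer(model=model, args=args, train_dataset=train,
        data_collator=collator).train()
```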
