• Title/Summary/Keyword: ELECTRA

Evaluation on Long-Term Reliability of HVDC Submarine Cable (HVDC 해저케이블 장기과통전 신뢰성 시험평가 방법)

  • Yang, B.M.;Park, J.W.;Moon, K.H.;Noh, T.H.;Kim, Y.S.;Kang, J.W.
    • Proceedings of the KIEE Conference / 2009.07a / pp.377-378 / 2009
  • HVDC cables are widely used worldwide for long-distance power transmission, interconnection between national grids, interconnection of asynchronous power systems, and the power-supply control required by liberalized electricity markets. In Korea, the 180 kV Jeju-Haenam HVDC submarine cable is currently the only one in operation, and a 250 kV Jeju-mainland HVDC submarine cable is scheduled to be added in 2011 to ensure a stable power supply to Jeju Island. Prompt, accurate, stable, and reliable operation therefore requires an evaluation method and procedure for the long-term loading (over-current) test of the domestically developed HVDC submarine cable. This paper describes the long-term loading test evaluation of HVDC submarine cables, with the electrical tests based on CIGRE recommendations Electra 189 and 219 and the mechanical tests based on IEC 60055-1 and Electra 171.

Electra-Optic and Ionic Properties of Twisted Nematic Cells With Different Chiral Pitch

  • Kim, Sung-Woon;Park, Hee-Do;Kim, Hee-Cheol;Park, Young-Il;Suh, Dong-Hae;Lee, Won-Geon;Park, Hae-Sung
    • Proceedings of the Korean Information Display Society Conference / 2002.08a / pp.504-507 / 2002
  • We investigated the electro-optic and ionic properties of twisted nematic (TN) cells by controlling the chiral pitch. These properties were examined in both experiments and simulations. C-V and V-T characteristics were obtained from three types of cells with different d/p ratios. The short-pitch cells showed a response time about 20% faster than the normal cell, and the inter-gray response improved in both rise and decay times. The saturation voltage increased because the twist angle changes only slightly from its initial state at high voltages near 5 V. To compensate for the longer black-level tail, the gamma curve index was varied from g = 2.2 to g = 2.7 at the module level. In addition, adding a chiral dopant to the TN cells improved the ionic characteristics: the voltage holding ratio (VHR) increased while the ion density and DC hysteresis decreased.

Liquid Crystal Aligning Capabilities on the Photopolymer Based Maleimide (Maleimide계 폴리머를 이용한 액정배향특성)

  • Lee, Yun-Gun;Hwang, Jeoung-Yeon;Seo, Dae-Shik;Kim, Jun-Young;Lee, Jae-Ho;Kim, Tae-Ho
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2001.11b / pp.358-361 / 2001
  • A new photoalignment material, PMI5CA (poly{N-(phenyl)maleimide-co-3-[4-(pentyloxy)cinnamate]propyl-2-hydroxy-1-methacrylate}), was synthesized, and the electro-optical (EO) characteristics of vertically aligned (VA) liquid crystal displays (LCDs) photo-aligned on the photopolymer surface were studied. Excellent voltage-transmittance (V-T) characteristics were achieved in VA-LCDs photo-aligned by 1 min of obliquely incident polarized UV exposure on the photopolymer surface. The transmittance of the photo-aligned VA-LCD decreased with increasing UV exposure time. We suggest that this decrease in transmittance is attributable to dissociation of the ester linkage in the photodimerized cinnamate structure as the UV exposure time increases.

Detecting and Interpreting Terms: Focusing Korean Medical Terms (전문용어 탐지와 해석 모델: 한국어 의학용어 중심으로 )

  • Haram-Yeom;Jae-Hoon Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.407-411 / 2022
  • Public interest in the medical field has grown recently because of COVID-19. Most medical documents are written in specialized medical terminology, which makes them difficult for the general public to understand. A model that paraphrases medical terms into plain language would help lay readers understand medical documents. To alleviate this problem, this paper proposes a medical term detection and interpretation model based on a Transformer translation model. A parallel corpus is needed to train the translation model, and it is constructed as follows: 1) build a medical term dictionary; 2) find medical terms in the subtitles of medical dramas and replace them with their dictionary definitions; 3) align the original subtitles with the definition-substituted subtitles side by side. Using this parallel corpus, a Transformer translation model is trained to detect and interpret specialized terms. Each sentence is split into syllable units and embedded with the pretrained KoCharELECTRA. The proposed model achieved a word-level (eojeol) BLEU score of about 69.3%. The proposed medical term interpreter should make medical documents more accessible to the general public. (An illustrative sketch of the syllable-level embedding and translation step follows this entry.)

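The syllable-level embedding and seq2seq step above can be pictured with a small, self-contained sketch. This is not the authors' code: the toy subtitle pair, the dictionary gloss for "빈맥", and the randomly initialized embedding (standing in for the pretrained KoCharELECTRA encoder) are assumptions made for illustration.

```python
# Hedged sketch: syllable (character) tokenization feeding a tiny Transformer
# "translation" model that rewrites a medical term into its plain-language gloss.
# A randomly initialized embedding stands in for pretrained KoCharELECTRA.
import torch
import torch.nn as nn

def syllable_tokenize(sentence):
    """Split a Korean sentence into syllable/character units, keeping spaces."""
    return list(sentence)

# Toy parallel pair: source subtitle with a medical term, target with its gloss.
src = "환자는 빈맥 증상을 보였다"           # contains the term "빈맥" (tachycardia)
tgt = "환자는 맥박이 빠른 증상을 보였다"     # term replaced by a dictionary-style gloss

PAD, BOS, EOS = 0, 1, 2
vocab = {ch: i + 4 for i, ch in enumerate(sorted(set(src + tgt)))}
encode = lambda s: [BOS] + [vocab[ch] for ch in syllable_tokenize(s)] + [EOS]

emb = nn.Embedding(len(vocab) + 4, 128, padding_idx=PAD)
model = nn.Transformer(d_model=128, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
out_proj = nn.Linear(128, len(vocab) + 4)

src_ids = torch.tensor([encode(src)])
tgt_ids = torch.tensor([encode(tgt)])
hidden = model(emb(src_ids), emb(tgt_ids[:, :-1]))    # teacher forcing
logits = out_proj(hidden)                              # next-syllable predictions
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt_ids[:, 1:].reshape(-1))
print(float(loss))                                     # training would minimize this
```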

A Study on the Construction of Specialized NER Dataset for Personal Information Detection (개인정보 탐지를 위한 특화 개체명 주석 데이터셋 구축 및 분류 실험)

  • Hyerin Kang;Li Fei;Yejee Kang;Seoyoon Park;Yeseul Cho;Hyeonmin Seong;Sungsoon Jang;Hansaem Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.185-191 / 2022
  • As awareness of the importance of personal information grows, the task of detecting personal information in text is attracting attention. In this study, we designed a set of seven named-entity tags specialized for personal information detection and de-identification, and built a personal-information-specific named-entity dataset by substituting synthetic values into de-identified source data and annotating the entities. KR-ELECTRA was used for the personal information classification experiments, and the results confirmed that deep-learning-based detection using the specialized entity tags outperforms rule-based detection that relies on general named entities and regular expressions. (A sketch of ELECTRA-based token classification follows this entry.)

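A minimal sketch of the ELECTRA-based token classification described above. The Hub id, the seven personal-information tags, and the BIO scheme are illustrative assumptions rather than the authors' exact setup, and the classification head is untrained here.

```python
# Hedged sketch: token classification over an ELECTRA encoder for personal-info NER.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

PI_TYPES = ["NAME", "PHONE", "ADDRESS", "EMAIL", "ID_NUM", "ACCOUNT", "AFFILIATION"]  # hypothetical 7 tags
labels = ["O"] + [f"{p}-{t}" for t in PI_TYPES for p in ("B", "I")]

model_id = "snunlp/KR-ELECTRA-discriminator"   # assumed checkpoint id for KR-ELECTRA
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)

enc = tok("홍길동의 연락처는 010-1234-5678입니다.", return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(-1)[0]
print([model.config.id2label[int(i)] for i in pred])   # random until fine-tuned on the annotated corpus
```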

Reading Comprehension requiring Discrete Reasoning Over Paragraphs for Korean (단락에 대한 이산 추론을 요구하는 한국어 기계 독해)

  • Kim, Gyeong-min;Seo, Jaehyung;Lee, Soomin;Lim, Heui-seok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.439-443 / 2021
  • Machine reading comprehension (MRC) is the natural language processing task of finding the answer within a paragraph given the paragraph and a query. Built on pretrained language models, recent benchmark results have improved rapidly and even surpass human performance on certain datasets. However, those results concern information extracted as spans within the paragraph, and models remain limited in answering queries that require actual computation. This paper examines the effectiveness of an MRC model that can answer not only span-extraction questions but also paragraphs and queries that require discrete reasoning over numbers. To this end, 1,794 question-answer pairs from the English DROP (Discrete Reasoning Over the content of Paragraphs) dataset were translated into Korean with the Google Translator API v2 and cleaned, producing the KoDROP (Korean DROP) dataset. Semantic tags for performing computation with reference to the paragraph and query were attached to KoBERT and KoELECTRA, yielding the number-aware KoNABERT and KoNAELECTRA models. Experiments showed that KoDROP requires more comprehensive paragraph understanding and arithmetic information than existing MRC datasets, and the best-performing model, KoNAELECTRA, achieved a substantial improvement of 19.20 points in both F1 and EM over KoBERT. (A sketch of the arithmetic-over-passage-numbers idea follows this entry.)

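The "discrete reasoning" requirement can be illustrated with the arithmetic-over-passage-numbers idea commonly used for DROP-style readers: every number found in the paragraph gets a sign in {-1, 0, +1} and the answer is the signed sum. The abstract does not spell out the exact head used in KoNABERT/KoNAELECTRA, so the passage, question, and stand-in encoder output below are illustrative assumptions.

```python
# Hedged sketch of a DROP-style arithmetic head over numbers extracted from a paragraph.
import re
import torch
import torch.nn as nn

passage = "팀 A는 전반에 14점, 후반에 21점을 얻었고 팀 B는 총 27점을 얻었다."
question = "팀 A는 팀 B보다 몇 점을 더 얻었는가?"

numbers = [float(m.group()) for m in re.finditer(r"\d+(?:\.\d+)?", passage)]  # [14.0, 21.0, 27.0]

# Stand-in for encoder output: one vector per extracted number (a real model would
# take the contextual embedding at each number's token position).
hidden = torch.randn(len(numbers), 256)
sign_head = nn.Linear(256, 3)                 # logits for the classes {-1, 0, +1}
signs = sign_head(hidden).argmax(-1) - 1      # map class index {0,1,2} to {-1,0,+1}

answer = sum(int(s) * n for s, n in zip(signs, numbers))
print(numbers, signs.tolist(), answer)        # a trained head would pick [+1, +1, -1] -> 8.0
```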

CORRECT? CORECT!: Classification of ESG Ratings with Earnings Call Transcript

  • Haein Lee;Hae Sun Jung;Heungju Park;Jang Hyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.1090-1100 / 2024
  • While incorporating ESG indicators is recognized as crucial for sustainability and increased firm value, inconsistent disclosure of ESG data and vague assessment standards have been key challenges. To address these issues, this study proposes an automated ESG rating strategy based on ambiguous text. Earnings call transcript data were classified as E, S, or G using the more than 450 metrics of the Refinitiv Sustainable Leadership Monitor. The study employed advanced natural language processing models, namely BERT, RoBERTa, ALBERT, FinBERT, and ELECTRA, to classify ESG documents precisely. In addition, the authors computed the average predicted probability for each label, providing a means to identify the relative significance of the different ESG factors (a sketch of this step follows this entry). The experimental results demonstrate that the proposed methodology can enhance the ESG assessment criteria established by various rating agencies and highlight that companies primarily focus on governance factors; in other words, companies are making efforts to strengthen their governance frameworks. In conclusion, this framework enables sustainable and responsible business by providing insight into the ESG information contained in earnings call transcript data.
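
A minimal sketch of the label-probability averaging mentioned above, assuming an ELECTRA sequence classifier over the three pillars. The checkpoint id and the toy sentences are placeholders; the study fine-tunes on labeled transcript data before averaging.

```python
# Hedged sketch: classify transcript sentences as E/S/G and average probabilities per label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["E", "S", "G"]
model_id = "google/electra-base-discriminator"        # placeholder English ELECTRA checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

docs = [
    "We cut scope 1 emissions by 12% this quarter.",
    "The board approved a new independent audit committee.",
    "Employee safety training hours doubled year over year.",
]
enc = tok(docs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(-1)           # shape: (num_docs, 3)

avg = probs.mean(dim=0)                               # average predicted probability per ESG label
print({l: round(float(p), 3) for l, p in zip(labels, avg)})
```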

Detects depression-related emotions in user input sentences (사용자 입력 문장에서 우울 관련 감정 탐지)

  • Oh, Jaedong;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1759-1768 / 2022
  • This paper proposes a model that detects depression-related emotions in a user's utterances, using the wellness dialogue scripts provided by AI Hub, topic-specific daily conversation datasets, and chatbot datasets published on GitHub. There are 18 depression-related emotions, including depression and lethargy, and the emotion classification task is performed with KoBERT and KoELECTRA, which show strong performance among language models. For model-specific performance comparison, we build diverse datasets and compare classification results while adjusting batch sizes and learning rates for the better-performing models. Furthermore, to reflect that a person may feel several emotions at once, the task is treated as multi-label classification: every label whose output value exceeds a specific threshold is accepted as an answer (see the sketch below). The best-performing model derived through this process is called the Depression model and is then used to classify depression-related emotions in user utterances.
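
A minimal sketch of the thresholded multi-label step, assuming a KoELECTRA checkpoint, a shortened emotion list, and a 0.5 threshold; none of these come from the paper, and the classification head is untrained here.

```python
# Hedged sketch: multi-label emotion detection by thresholding sigmoid outputs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["우울", "무기력", "불안", "분노", "슬픔", "외로움"]   # 6 of the 18 tags, for brevity
model_id = "monologg/koelectra-base-v3-discriminator"            # assumed KoELECTRA checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=len(EMOTIONS), problem_type="multi_label_classification")

enc = tok("요즘은 아무것도 하기 싫고 계속 피곤해요.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**enc).logits)[0]

THRESHOLD = 0.5
detected = [e for e, p in zip(EMOTIONS, probs) if p > THRESHOLD]   # keep every label above the threshold
print(detected)    # meaningful only after fine-tuning on the depression-emotion corpus
```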

Topic Modeling Insomnia Social Media Corpus using BERTopic and Building Automatic Deep Learning Classification Model (BERTopic을 활용한 불면증 소셜 데이터 토픽 모델링 및 불면증 경향 문헌 딥러닝 자동분류 모델 구축)

  • Ko, Young Soo;Lee, Soobin;Cha, Minjung;Kim, Seongdeok;Lee, Juhee;Han, Ji Yeong;Song, Min
    • Journal of the Korean Society for Information Management / v.39 no.2 / pp.111-129 / 2022
  • Insomnia is a chronic disease of modern society, with the number of new patients increasing by more than 20% over the last five years. It is a serious condition that requires diagnosis and treatment, because the individual and social problems caused by lack of sleep are severe and the triggers of insomnia are complex. This study collected 5,699 posts from 'insomnia', a community on the social media platform Reddit where users freely express their opinions. Based on the International Classification of Sleep Disorders (ICSD-3) and guidelines prepared with the help of experts, an insomnia corpus was constructed by tagging each document as showing an insomnia tendency or not. Five deep learning language models (BERT, RoBERTa, ALBERT, ELECTRA, XLNet) were trained on the constructed insomnia corpus; in the performance evaluation, RoBERTa showed the highest accuracy at 81.33%. For an in-depth analysis of the insomnia social data, topic modeling was performed with the recently introduced BERTopic method, which supplements the weaknesses of the widely used LDA (a sketch of this step follows this entry). The analysis identified eight topic groups ('Negative emotions', 'Advice, help, and gratitude', 'Insomnia-related diseases', 'Sleeping pills', 'Exercise and eating habits', 'Physical characteristics', 'Activity characteristics', 'Environmental characteristics'). Users expressed negative emotions and sought help and advice from the Reddit insomnia community. They also mentioned diseases related to insomnia, shared discourse on the use of sleeping pills, and expressed interest in exercise and eating habits. Among insomnia-related characteristics, we found physical characteristics such as breathing, pregnancy, and the heart; activity characteristics such as feeling like a zombie, hypnic jerks, and grogginess; and environmental characteristics such as sunlight, blankets, temperature, and naps.
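
A minimal sketch of the BERTopic step, assuming the r/insomnia posts have already been collected into a list of strings; the configuration shown is generic, not the study's exact settings.

```python
# Hedged sketch: fit BERTopic over insomnia posts and inspect the topic keywords.
from bertopic import BERTopic

def model_insomnia_topics(posts):
    """posts: list of collected r/insomnia submissions (the study works with 5,699 of them)."""
    topic_model = BERTopic(language="english", calculate_probabilities=False)
    topics, _ = topic_model.fit_transform(posts)        # assigns one topic per document
    print(topic_model.get_topic_info().head(10))        # largest topics with their top keywords
    return topic_model
```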

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.191-206 / 2022
  • Recently, utilizing a pre-trained language model (PLM) has become the de facto approach for achieving state-of-the-art performance on various natural language tasks (downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-domain PLM for Korean and its applications. Our finance-domain PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles, and it outperforms the compared models on finance-domain datasets that require finance-specific knowledge (a sketch of the comparison setup follows this entry).
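
A minimal sketch of the comparison setup, not KB's code: the KB-BERT path is a placeholder (no public checkpoint id is given here), the baseline ids are the commonly used Hub names, and data loading and fine-tuning are left abstract.

```python
# Hedged sketch: load the same classification head over several Korean PLMs for comparison.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = {
    "KB-BERT": "path/to/kb-bert",                              # placeholder, not a Hub id
    "KoELECTRA": "monologg/koelectra-base-v3-discriminator",   # assumed baseline id
    "KLUE-RoBERTa": "klue/roberta-base",                       # assumed baseline id
}

def build_classifier(name, num_labels=2):
    """Load one checkpoint with a fresh head (e.g., for finance sentiment analysis)."""
    tok = AutoTokenizer.from_pretrained(CHECKPOINTS[name])
    model = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINTS[name], num_labels=num_labels)
    return tok, model

# Each model would then be fine-tuned and scored on the same splits for
# topic classification, sentiment analysis, and question answering.
```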