• Title/Summary/Keyword: Natural Language Processing Techniques (자연어 처리 기법)

Search Results: 218

A Semi-automatic Construction method of a Named Entity Dictionary Based on Wikipedia (위키피디아 기반 개체명 사전 반자동 구축 방법)

  • Song, Yeongkil; Jeong, Seokwon; Kim, Harksoo
    • Journal of KIISE / v.42 no.11 / pp.1397-1403 / 2015
  • A named entity (NE) dictionary is an important resource for NE recognition performance. However, it is not easy to construct an NE dictionary manually, since human annotation is time-consuming and labor-intensive. To save construction time and reduce human labor, we propose a semi-automatic system for the construction of an NE dictionary. The proposed system constructs a pseudo-document from Wiki-categories for each NE class by using an active learning technique. Then, it calculates similarities between Wiki entries and pseudo-documents using BM25, a well-known information retrieval model. Finally, it classifies each Wiki entry into NE classes based on these similarities. In experiments with three different types of NE class sets, the proposed system showed high performance (macro-average F1-score of 0.9028 and micro-average F1-score of 0.9554).
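
As a concrete illustration of the classification step, the following is a minimal sketch (not the authors' code): each Wiki entry is scored against one pseudo-document per NE class with the standard BM25 formula, and the entry is assigned the highest-scoring class. The class names and pseudo-document contents are invented.

```python
# Minimal BM25-based NE-class assignment sketch; pseudo-documents are hypothetical.
import math
from collections import Counter

def bm25_score(query_tokens, doc_tokens, corpus, k1=1.2, b=0.75):
    """Score doc_tokens against query_tokens with the standard BM25 formula."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_tokens)
    score = 0.0
    for term in query_tokens:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)  # BM25 idf
        denom = tf[term] + k1 * (1 - b + b * len(doc_tokens) / avgdl)
        score += idf * (tf[term] * (k1 + 1)) / denom if denom else 0.0
    return score

# Hypothetical pseudo-documents built from Wiki-categories, one per NE class.
pseudo_docs = {
    "PERSON": "actor singer born politician player".split(),
    "LOCATION": "city river country province island".split(),
    "ORGANIZATION": "company university agency founded party".split(),
}
corpus = list(pseudo_docs.values())

def classify_wiki_entry(entry_tokens):
    """Assign the NE class whose pseudo-document is most similar to the entry."""
    scores = {c: bm25_score(entry_tokens, d, corpus) for c, d in pseudo_docs.items()}
    return max(scores, key=scores.get)

print(classify_wiki_entry("a singer and actor born in seoul".split()))  # -> PERSON
```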

Text Mining Analysis on the Research Field of the Coastal and Ocean Engineering Based on the SCOPUS Bibliographic Information (해안해양공학 연구 분야의 SCOPUS 서지정보 Text Mining 분석)

  • Lee, Gi Seop; Cho, Hong Yeon; Han, Jae Rim
    • Journal of Korean Society of Coastal and Ocean Engineers / v.30 no.1 / pp.19-28 / 2018
  • The growth and computerization of scholarly publishing have produced an enormous accumulation of research papers, making it difficult to review all related papers published worldwide when conducting a study. However, advances in natural language processing techniques have made trend analysis of published research papers much easier. In this study, a text mining analysis using the statistical computing language R was carried out on the bibliographic information of the SCOPUS database in the field of coastal and ocean engineering. As expected, the term 'wave' predominates, and the terms 'numerical model', 'numerical simulation', and 'experimental study' confirm that numerical analysis and hydraulic experiments are still dominant. In addition, recent use of the term 'wave energy', related to marine energy, was recognized. It was also quantitatively confirmed that connections between 'wave' and 'height' or 'energy' occur most frequently, suggesting the possibility of higher-resolution analysis by detailed field and period in the future.
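
The paper performs its analysis in R; as a rough, language-neutral illustration of the same term-frequency and co-occurrence counting over bibliographic records, here is a short Python sketch. The titles are invented placeholders, not SCOPUS records.

```python
# Term frequency and term co-occurrence counts over bibliographic titles (toy data).
from collections import Counter
from itertools import combinations

titles = [
    "numerical model of wave energy converter",
    "experimental study on wave height distribution",
    "numerical simulation of coastal wave transformation",
]

stopwords = {"of", "on", "the", "a", "an"}
docs = [[w for w in t.split() if w not in stopwords] for t in titles]

# Single-term frequencies across the records.
term_freq = Counter(w for d in docs for w in set(d))
print(term_freq.most_common(5))

# Pairwise co-occurrence counts, e.g. how often 'wave' appears with 'energy'.
pair_freq = Counter(p for d in docs for p in combinations(sorted(set(d)), 2))
print(pair_freq[("energy", "wave")])
```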

Verification of educational goal of reading area in Korean SAT through natural language processing techniques (대학수학능력시험 독서 영역의 교육 목표를 위한 자연어처리 기법을 통한 검증)

  • Lee, Soomin; Kim, Gyeongmin; Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.1 / pp.81-88 / 2022
  • The major educational goal of the reading section, which occupies an important portion of the Korean language part of the Korean SAT, is to evaluate whether a given text has been fully understood. The questions in the exam must therefore be solvable from the given passage alone. In this paper, we developed a dataset based on the reading section of the Korean SAT in order to evaluate whether a deep learning language model can classify a given question as true or false, a binary classification task in NLP. As a result, by applying language models to the passages in the dataset alone, most models exceeded the human-performance baseline of 59.2% F1 score; KoELECTRA scored 62.49% in our experiment. We also showed that the structural limitations of language models can be eased by adjusting the data preprocessing.
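
A hedged sketch of the task setup described above: a binary classifier over (passage, question) pairs built on a public KoELECTRA checkpoint. The checkpoint name, label convention, and example texts are assumptions, not the authors' exact configuration, and the classifier head would still need fine-tuning on the dataset.

```python
# Binary true/false classification of a statement against a passage with KoELECTRA.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "monologg/koelectra-base-v3-discriminator"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

passage = "지문 예시 ..."     # reading passage from the Korean SAT
question = "선택지 예시 ..."  # a statement to be judged true/false against the passage

# Encode the pair; the model judges whether the statement follows from the passage.
inputs = tokenizer(passage, question, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # 0 = false, 1 = true (label convention assumed)
print(pred)
```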

Graph-Based Word Sense Disambiguation Using Iterative Approach (반복적 기법을 사용한 그래프 기반 단어 모호성 해소)

  • Kang, Sangwoo
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.2 / pp.102-110 / 2017
  • Current word sense disambiguation techniques employ various machine learning-based methods. Various approaches have been proposed to address this problem, including the knowledge-base approach, which determines the sense of an ambiguous word from knowledge base information without a training corpus. Among unsupervised techniques that use a knowledge base, graph-based and similarity-based methods have been the main research areas. The graph-based method has the advantage of constructing a semantic graph that delineates all paths between the different senses an ambiguous word may have; however, unnecessary semantic paths may be introduced, increasing the risk of errors. To solve this problem and construct a fine-grained graph, this paper proposes a model that iteratively constructs the graph while eliminating unnecessary nodes and edges, i.e., senses and semantic paths. A hybrid similarity estimation model is then applied to estimate a more accurate sense in the constructed semantic graph. Because the proposed model uses BabelNet, a multilingual lexical knowledge base, it is not limited to a specific language.
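
The iterative refinement idea can be sketched with a toy sense graph: weakly connected sense nodes are removed repeatedly until only plausible senses and semantic paths remain. The nodes, edges, and weights below are invented; the paper itself builds its graph from BabelNet.

```python
# Toy iterative pruning of a word-sense graph with networkx.
import networkx as nx

G = nx.Graph()
# Candidate senses for the ambiguous words "bank" and "deposit" (hypothetical).
G.add_weighted_edges_from([
    ("bank#finance", "deposit#money", 0.9),
    ("bank#finance", "loan#finance", 0.8),
    ("bank#river", "deposit#geology", 0.3),
    ("bank#river", "shore#geography", 0.2),
    ("deposit#money", "loan#finance", 0.7),
])

def prune_iteratively(graph, threshold=0.5):
    """Repeatedly remove senses whose total connection strength is too weak."""
    g = graph.copy()
    changed = True
    while changed:
        weak = [n for n in g if g.degree(n, weight="weight") < threshold]
        changed = bool(weak)
        g.remove_nodes_from(weak)
    return g

refined = prune_iteratively(G)
print(sorted(refined.nodes()))  # surviving senses, e.g. the finance reading of "bank"
```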

Comparison of Fault Diagnosis Accuracy Between XGBoost and Conv1D Using Long-Term Operation Data of Ship Fuel Supply Instruments (선박 연료 공급 기기류의 장시간 운전 데이터의 고장 진단에 있어서 XGBoost 및 Conv1D의 예측 정확성 비교)

  • Hyung-Jin Kim; Kwang-Sik Kim; Se-Yun Hwang; Jang-Hyun Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2022.06a / pp.110-110 / 2022
  • This study was conducted as part of the development of a remote fault diagnosis technique for autonomous ships. In particular, it presents the implementation of algorithms for condition diagnosis based on time-series data measured from engine fuel system equipment. Vibration time-series data were measured on an onshore test rig equipped with an engine fuel pump and a purifier, and deep learning and machine learning algorithms capable of anomaly detection, fault classification, and fault prediction were implemented. Artificial faults of each fault type were induced on the test rig to obtain characteristic vibration signals, which were used for training. Because the measured signals have the property that preceding events influence subsequent ones, learning algorithms that can reflect the temporal dependency contained in the time series were adopted. To capture this time dependency of fault events, prediction accuracy was compared among recurrent models, RNN (Recurrent Neural Network) and LSTM (Long Short-Term Memory), and a Conv1D model based on convolution operations (Convolutional Neural Network). Noting that such sequence models perform well on high-dimensional sequential data such as natural language, the convolution-based Conv1D algorithm, which can also reflect the time dependency of the signals during learning, was used for fault prediction. In addition, considering the efficiency of machine learning models, XGBoost was additionally applied for fault prediction. Finally, the fault prediction performance of the Conv1D model and the XGBoost model was compared based on the vibration signals of the fuel pump and the purifier.
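
A compact sketch of the two model families compared in the abstract: a Conv1D network and an XGBoost classifier trained on windowed vibration signals. The data below are random placeholders, not the shipboard measurements, and the layer sizes are illustrative assumptions.

```python
# Conv1D vs. XGBoost fault classification on windowed vibration signals (toy data).
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

n_samples, window, n_classes = 200, 256, 3           # assumed window length / fault classes
X = np.random.randn(n_samples, window).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

# Conv1D model: convolutions capture local temporal patterns in each window.
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.fit(X[..., None], y, epochs=3, verbose=0)

# XGBoost model: gradient-boosted trees on the same (flattened) windows.
xgb = XGBClassifier(n_estimators=100, max_depth=4)
xgb.fit(X, y)

print(cnn.evaluate(X[..., None], y, verbose=0), xgb.score(X, y))
```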


Media-based Analysis of Gasoline Inventory with Korean Text Summarization (한국어 문서 요약 기법을 활용한 휘발유 재고량에 대한 미디어 분석)

  • Sungyeon Yoon; Minseo Park
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.509-515 / 2023
  • Despite the continued development of alternative energies, fuel consumption is increasing. In particular, the price of gasoline fluctuates greatly in response to fluctuations in international oil prices, and gas stations adjust their gasoline inventory to respond to these price changes. In this study, news datasets are used to analyze gasoline consumption patterns through fluctuations in gasoline inventory. First, news datasets are collected with web crawling. Second, the news datasets are summarized using KoBART, a summarization model for Korean text. Finally, the summaries are preprocessed and the fluctuation factors are derived through an N-gram language model and TF-IDF. Through this study, it is possible to analyze and predict gasoline consumption patterns.
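
A sketch of the pipeline described above: crawled news is summarized with a Korean BART summarizer, then salient terms are surfaced with an n-gram TF-IDF model. The checkpoint name and example sentences are assumptions for illustration, not the study's data.

```python
# KoBART summarization followed by n-gram TF-IDF term extraction (toy inputs).
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

summarizer = pipeline("summarization", model="gogamza/kobart-summarization")  # assumed checkpoint

articles = [
    "국제 유가 상승으로 주유소들이 휘발유 재고를 늘리고 있다 ...",
    "정제마진 하락에 따라 휘발유 공급이 줄어들 것으로 전망된다 ...",
]
summaries = [summarizer(a, max_length=64, min_length=8)[0]["summary_text"] for a in articles]

# Unigram-to-trigram TF-IDF over the summaries to surface inventory-related factors.
vectorizer = TfidfVectorizer(ngram_range=(1, 3))
tfidf = vectorizer.fit_transform(summaries)
top = sorted(zip(vectorizer.get_feature_names_out(), tfidf.sum(axis=0).A1),
             key=lambda t: -t[1])[:10]
print(top)
```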

Development of a Fake News Detection Model Using Text Mining and Deep Learning Algorithms (텍스트 마이닝과 딥러닝 알고리즘을 이용한 가짜 뉴스 탐지 모델 개발)

  • Dong-Hoon Lim; Gunwoo Kim; Keunho Choi
    • Information Systems Review / v.23 no.4 / pp.127-146 / 2021
  • Fake news spreads and is reproduced rapidly, regardless of its authenticity, owing to the characteristics of modern society, the so-called information age. Assuming that 1% of all news is fake, the economic cost is reported to be about 30 trillion Korean won, which shows that fake news is a very important social and economic issue. Therefore, this study aims to develop an automated detection model to quickly and accurately verify the authenticity of news. To this end, we crawled news data whose authenticity had been verified and developed fake news prediction models using word embeddings (Word2Vec, FastText) and deep learning algorithms (LSTM, BiLSTM). Experimental results show that the prediction model using BiLSTM with Word2Vec achieved the best accuracy, 84%.
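
A condensed sketch of the best-performing configuration reported above: Word2Vec embeddings feeding a BiLSTM binary classifier. The toy corpus, labels, and sequence length are placeholders, not the crawled news data.

```python
# Word2Vec embeddings + BiLSTM binary classifier for fake-news detection (toy data).
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

corpus = [["breaking", "news", "event"], ["celebrity", "hoax", "claim"]] * 50
labels = np.array([0, 1] * 50)            # 0 = real, 1 = fake (toy labels)

w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)
maxlen = 3
X = np.array([[w2v.wv[w] for w in doc[:maxlen]] for doc in corpus])  # (N, maxlen, 100)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(maxlen, 100)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, verbose=0)
print(model.evaluate(X, labels, verbose=0))
```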

Study on the Implementation of SBOM(Software Bill Of Materials) in Operational Nuclear Facilities (가동 중 원자력시설의 SBOM(Software Bill Of Materials)구현방안 연구)

  • Do-yeon Kim; Seong-su Yoon; Ieck-chae Euom
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.2 / pp.229-244 / 2024
  • Recently, supply chain attacks against nuclear facilities, such as "Evil PLC", have been increasing with the application of digital technology in nuclear power plants such as the APR1400 reactor. Owing to the nature of the industry, nuclear supply chain security requires an asset management system that can systematically manage a large number of providers. However, because of the nature of control systems, attribute information is managed inconsistently over the long lifecycle of software assets. In addition, because availability takes priority in operational technology, the introduction of automated configuration management is insufficient and limitations such as input errors exist. This study proposes a systematic asset management system using an SBOM (Software Bill of Materials) and an improvement for input errors using natural language processing techniques.
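
A small sketch of the two ideas in the abstract: an SBOM-style component record for a software asset, and correction of manually entered attribute values by matching free-text supplier names against a controlled vocabulary. Simple string similarity stands in here for the paper's NLP-based correction, and the component data and vocabulary are invented examples.

```python
# SBOM-style component record plus fuzzy normalization of mistyped supplier names.
import json
from difflib import get_close_matches

component = {  # CycloneDX-like component record (illustrative fields only)
    "type": "application",
    "name": "turbine-control-hmi",
    "version": "2.4.1",
    "supplier": {"name": "Acme Controls"},
}
print(json.dumps(component, indent=2))

known_suppliers = ["Acme Controls", "Doosan Enerbility", "Westinghouse"]

def normalize_supplier(raw_name, vocabulary=known_suppliers, cutoff=0.6):
    """Map a possibly mistyped supplier name to the closest controlled term."""
    match = get_close_matches(raw_name, vocabulary, n=1, cutoff=cutoff)
    return match[0] if match else raw_name

print(normalize_supplier("Acme Contrls"))  # typo resolves to "Acme Controls"
```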

A Study on Environmental research Trends by Information and Communications Technologies using Text-mining Technology (텍스트 마이닝 기법을 이용한 환경 분야의 ICT 활용 연구 동향 분석)

  • Park, Boyoung; Oh, Kwan-Young; Lee, Jung-Ho; Yoon, Jung-Ho; Lee, Seung Kuk; Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.33 no.2 / pp.189-199 / 2017
  • This study quantitatively analyzed research trends in the use of ICT in the environmental field using text mining techniques. To that end, the study collected 359 papers published over the past two decades (1996-2015) from the National Digital Science Library (NDSL) using 38 environment-related keywords and 16 ICT-related keywords. It processed the natural language of the environment and ICT fields in the papers and reorganized the classification system at the corpus level. It then applied the text mining techniques of frequency analysis, keyword analysis, and keyword association rule analysis based on the keywords of the classification system. As a result, the keywords 'general environment' and 'climate' accounted for 77% of the total, and the keywords 'public convergence service' and 'industrial convergence service' in the ICT field took up approximately 30% of the total. According to the time series analysis, research using ICT in the environmental field increased rapidly over the most recent five years (2011-2015), more than doubling compared with the earlier period (1996-2010). Based on the association rules generated among the keywords of the environmental field, the keyword 'general environment' was associated with 16 ICT-based technologies and 'climate' with 14 ICT-based technologies.
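
A minimal illustration of the keyword association-rule step: support and confidence are computed for keyword pairs over papers represented as keyword sets. The transactions below are invented placeholders, not the NDSL data.

```python
# Support/confidence of keyword pair rules over papers represented as keyword sets.
from itertools import combinations
from collections import Counter

papers = [  # each paper as a set of tagged keywords (hypothetical)
    {"general environment", "GIS", "remote sensing"},
    {"climate", "big data", "remote sensing"},
    {"general environment", "big data"},
    {"climate", "GIS"},
]

n = len(papers)
item_count = Counter(k for p in papers for k in p)
pair_count = Counter(frozenset(c) for p in papers for c in combinations(sorted(p), 2))

for pair, cnt in pair_count.items():
    a, b = sorted(pair)
    support = cnt / n
    confidence = cnt / item_count[a]  # confidence of the rule a -> b
    if support >= 0.25:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```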

Analysis of ICT Education Trends using Keyword Occurrence Frequency Analysis and CONCOR Technique (키워드 출현 빈도 분석과 CONCOR 기법을 이용한 ICT 교육 동향 분석)

  • Youngseok Lee
    • Journal of Industrial Convergence / v.21 no.1 / pp.187-192 / 2023
  • In this study, trends in ICT education were investigated by analyzing the frequency of appearance of machine-learning-related keywords and by using the CONCOR (CONvergence of iterated CORrelations) technique. A total of 304 papers published from 2018 to the present in registered journals were retrieved from Google Scholar using "ICT education" as the keyword, and 60 papers pertaining to ICT education were selected based on a systematic literature review. Keywords were then extracted from the title and abstract of each paper. For word frequency and indicator data, 49 high-frequency keywords were extracted by analyzing frequency via the term frequency-inverse document frequency (TF-IDF) technique in natural language processing, together with co-occurrence frequency. The degree of relationship was verified by analyzing the connection structure and degree centrality between words, and clusters composed of similar words were derived via CONCOR analysis. First, "education," "research," "result," "utilization," and "analysis" were identified as the main keywords. Second, an N-gram network graph with "education" as the keyword showed that "curriculum" and "utilization" exhibited the highest correlation. Third, a cluster analysis with "education" as the keyword yielded five groups: "curriculum," "programming," "student," "improvement," and "information." These results indicate that practical research necessary for ICT education can be conducted by analyzing ICT education trends and identifying future directions.
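
A sketch of the keyword-network step described above: TF-IDF keywords are extracted from titles/abstracts, a co-occurrence network is built from them, and terms are ranked by degree centrality. CONCOR clustering itself is typically run in dedicated network-analysis tools and is omitted here; the abstract texts are invented placeholders.

```python
# TF-IDF keyword extraction, co-occurrence network, and degree centrality (toy data).
from itertools import combinations
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "ICT education curriculum improvement for programming students",
    "analysis of ICT utilization in education and curriculum research",
    "programming education research results and student improvement",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(abstracts)
terms = vec.get_feature_names_out()

# Keep the highest-weighted terms per document; link terms co-occurring in a document.
G = nx.Graph()
for row in tfidf.toarray():
    top_terms = [terms[i] for i in row.argsort()[-5:] if row[i] > 0]
    G.add_edges_from(combinations(sorted(top_terms), 2))

centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```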