• Title/Summary/Keyword: natural language processing (NLP)

Search Results: 158

Korean Natural Language Processing Platform for Linked Data (Linked Data를 위한 한국어 자연언어처리 플랫폼)

  • Hahm, YoungGyun;Lim, Kyungtae;Rezk, Martin;Park, Jungyeul;Yoon, Yongun;Choi, Key-Sun
    • Annual Conference on Human and Language Technology / 2012.10a / pp.16-20 / 2012
  • In this paper, we provide a single platform for Korean natural language processing that integrates a morphological analyzer, a phrase-structure parser, and a dependency parser, and we present a system that converts the output into RDF for Linked Data, enabling international interoperability with the results of various foreign natural language processing tools.

  • PDF
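
As a rough illustration of the Linked Data conversion step this entry describes, the sketch below turns hypothetical morphological-analysis output into RDF triples with rdflib; the namespace, predicate names, and token structure are invented for the example and are not the platform's actual schema.

```python
# Illustrative only: convert (hypothetical) morphological-analysis output into RDF
# triples so it can be published as Linked Data. Namespace and predicates are assumptions.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/nlp/")          # hypothetical namespace

# Hypothetical analyzer output: (surface form, lemma, POS tag) per token
tokens = [("자연언어처리를", "자연언어처리", "NNG"),
          ("제시한다", "제시하다", "VV")]

g = Graph()
g.bind("ex", EX)
for i, (surface, lemma, pos) in enumerate(tokens):
    tok = EX[f"sentence1/token{i}"]
    g.add((tok, RDF.type, EX.Token))
    g.add((tok, EX.surfaceForm, Literal(surface, lang="ko")))
    g.add((tok, EX.lemma, Literal(lemma, lang="ko")))
    g.add((tok, EX.posTag, Literal(pos)))

print(g.serialize(format="turtle"))
```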

Acquisition of Named-Entity-Related Relations for Searching

  • Nguyen, Tri-Thanh;Shimazu, Akira
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.349-357 / 2007
  • Named entities (NEs) are important in many Natural Language Processing (NLP) applications, and discovering NE-related relations in texts may be beneficial for these applications. This paper proposes a method to extract the ISA relation between a named entity and its category, and an IS-RELATED-TO relation between the category and its related object. We extend the "Person Category Extraction" (PCE) pattern extraction algorithm to solve this problem. Our experiments on the Wall Street Journal (WSJ) corpus show promising results. We also demonstrate a possible application of these relations by using them for semantic search.

  • PDF
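
The sketch below illustrates the general idea of pattern-based ISA extraction between a named entity and a category; the surface patterns and sample sentences are invented for demonstration and are not the paper's PCE rules or WSJ data.

```python
import re

NE = r"[A-Z][\w.&-]*(?:\s+[A-Z][\w.&-]*)*"        # one or more capitalized tokens
CATEGORY = r"[A-Za-z][a-z-]*(?:\s+[a-z-]+)*"       # rough lowercase noun phrase

PATTERNS = [
    # "<NE>, a/an <category>," appositive pattern
    re.compile(rf"(?P<ne>{NE}),\s+an?\s+(?P<cat>{CATEGORY})[,.]"),
    # "<category> such as <NE>" Hearst-style pattern
    re.compile(rf"(?P<cat>{CATEGORY})\s+such\s+as\s+(?P<ne>{NE})"),
]

def extract_isa(text):
    """Return (named_entity, category) pairs matched by the surface patterns."""
    pairs = []
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            pairs.append((match.group("ne").strip(), match.group("cat").strip()))
    return pairs

sample = ("Alan Greenspan, a central banker, spoke to reporters. "
          "Banks such as Citibank declined.")
print(extract_isa(sample))   # e.g. [('Alan Greenspan', 'central banker'), ('Citibank', 'Banks')]
```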

A Comparative Analysis of Research Trends in the Information and Communication Technology Field of South and North Korea Using Data Mining

  • Jiwan Kim;Hyunkyoo Choi;Jeonghoon Mo
    • Journal of Information Science Theory and Practice / v.11 no.1 / pp.14-30 / 2023
  • The purpose of this study is to compare research trends in the information and communication technology (ICT) field between North and South Korea and to analyze the differences using data mining. Frequency analysis, clustering, and network analysis were performed on keywords from seven South Korean and two North Korean ICT academic journals published over five years (2015-2019). In South Korea (S. Korea), research on image processing and wireless communication was most frequent, at 16.7% and 16.3%, respectively. North Korea (N. Korea) showed high research frequencies of 18.2% for image processing, 16.9% for computer/Internet applications/security, and 16.4% for industrial technology. N. Korea's natural language processing (NLP) share was 11.9%, far higher than S. Korea's 0.7%. Student education emerged as a distinct cluster only in N. Korea. To promote exchanges between the two Koreas in the ICT field, the following specific policies are proposed. Joint research would be most feasible in image processing, which has the highest research rate in both Koreas, and technical cooperation on medical imaging is required; if S. Korea provided its high-quality image sources to N. Korea free of charge, research materials could be enriched. In the field of NLP, exchanges such as holding a Korean language information conference and developing a Korean computer operating system are proposed. In student education, support for remote education content and management know-how, as well as joint research on remote student evaluation, is encouraged.
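
A minimal sketch of the kind of keyword frequency and clustering analysis this study describes, using scikit-learn; the keyword lists and cluster count are placeholders, not the journals' data or the paper's settings.

```python
# Frequency analysis and k-means clustering over per-paper keyword strings (toy data).
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical per-paper keyword strings (one string of keywords per paper)
papers = [
    "image processing segmentation deep learning",
    "wireless communication 5g antenna",
    "natural language processing korean morphology",
    "image processing object detection",
]

# Frequency analysis over all keywords
freq = Counter(word for doc in papers for word in doc.split())
print(freq.most_common(5))

# Cluster papers by their TF-IDF keyword vectors
X = TfidfVectorizer().fit_transform(papers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```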

CNN-based Skip-Gram Method for Improving Classification Accuracy of Chinese Text

  • Xu, Wenhua;Huang, Hao;Zhang, Jie;Gu, Hao;Yang, Jie;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6080-6096 / 2019
  • Text classification is one of the fundamental techniques in natural language processing. Numerous studies build on text classification, such as news subject classification, question answering system classification, and movie review classification. Traditional text classification methods extract features and then classify them; however, they are complex to operate and their accuracy is not sufficiently high. Recently, a convolutional neural network (CNN)-based one-hot method has been proposed for text classification to address this problem. In this paper, we propose an improved CNN-based skip-gram method for Chinese text classification and evaluate it on the Sogou news corpus. Experimental results indicate that the CNN with the skip-gram model performs more efficiently than the CNN-based one-hot method.
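
The sketch below outlines the general pipeline named in this abstract: skip-gram word vectors (gensim, sg=1) feeding a small 1D convolutional classifier in PyTorch. The corpus, dimensions, and hyperparameters are illustrative stand-ins rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Tiny placeholder corpus of pre-tokenized (word-segmented) Chinese sentences
corpus = [["体育", "新闻", "报道"], ["财经", "市场", "分析"]]
w2v = Word2Vec(corpus, vector_size=50, sg=1, min_count=1, window=2)  # sg=1 -> skip-gram

class TextCNN(nn.Module):
    def __init__(self, weights, num_classes=2, num_filters=16, kernel_size=2):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(torch.tensor(weights), freeze=False)
        self.conv = nn.Conv1d(weights.shape[1], num_filters, kernel_size)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # -> (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling
        return self.fc(x)

vocab = w2v.wv.key_to_index                        # word -> integer id
model = TextCNN(w2v.wv.vectors)
ids = torch.tensor([[vocab["体育"], vocab["新闻"], vocab["报道"]]])
print(model(ids).shape)                            # (1, num_classes)
```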

MSFM: Multi-view Semantic Feature Fusion Model for Chinese Named Entity Recognition

  • Liu, Jingxin;Cheng, Jieren;Peng, Xin;Zhao, Zeli;Tang, Xiangyan;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1833-1848 / 2022
  • Named entity recognition (NER) is an important basic task in the field of Natural Language Processing (NLP). Recently, deep learning approaches that extract word segmentation or character features have proved effective for Chinese Named Entity Recognition (CNER). However, because these approaches focus on only a subset of the available features, they lack textual information mining from multiple perspectives and dimensions, so the model cannot fully capture semantic features. To tackle this problem, we propose a novel Multi-view Semantic Feature Fusion Model (MSFM). The proposed model consists of two core components: a Multi-view Semantic Feature Fusion Embedding Module (MFEM) and a Multi-head Self-Attention Mechanism Module (MSAM). Specifically, the MFEM extracts character features, word boundary features, radical features, and pinyin features of Chinese characters. The acquired character-form, character-sound, and character-meaning features are fused to enhance the semantic information of Chinese characters at different granularities. Moreover, the MSAM captures the dependencies between characters in a multi-dimensional subspace to better model the semantic features of the context. Extensive experimental results on four benchmark datasets show that our method improves the overall performance of the CNER model.
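
A simplified sketch of the multi-view idea: separate character, radical, and pinyin embeddings are concatenated, projected, and passed through multi-head self-attention. Vocabulary sizes, dimensions, and the fusion rule below are assumptions for illustration, not the MSFM architecture itself.

```python
import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    def __init__(self, char_vocab=5000, radical_vocab=300, pinyin_vocab=500,
                 dim=64, heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, dim)
        self.radical_emb = nn.Embedding(radical_vocab, dim)
        self.pinyin_emb = nn.Embedding(pinyin_vocab, dim)
        self.fuse = nn.Linear(3 * dim, dim)        # fuse the three views
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, chars, radicals, pinyins):   # each: (batch, seq_len)
        views = torch.cat([self.char_emb(chars),
                           self.radical_emb(radicals),
                           self.pinyin_emb(pinyins)], dim=-1)
        fused = self.fuse(views)                   # (batch, seq_len, dim)
        out, _ = self.attn(fused, fused, fused)    # contextualized representations
        return out                                 # would feed a CRF/softmax NER head

batch = torch.randint(0, 300, (2, 8))              # toy indices, reused for every view
enc = MultiViewEncoder()
print(enc(batch, batch, batch).shape)              # (2, 8, 64)
```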

AI-Based Project Similarity Evaluation Model Using Project Scope Statements

  • Ko, Taewoo;Jeong, H. David;Lee, JeeHee
    • International conference on construction engineering and project management / 2022.06a / pp.284-291 / 2022
  • Historical data from comparable projects can serve as benchmarking data for planning an ongoing project during the project scoping phase. Because project owners typically store substantial amounts of data generated throughout project life cycles in digitized databases, they can retrieve appropriate data to support various project planning activities. One of the most important tasks in this process is identifying one or more past projects comparable to a new project. The uniqueness and complexity of construction projects, along with unorganized data, impede the reliable identification of comparable past projects. A project scope document provides a preliminary overview of a project in terms of its extent and requirements. However, narrative, free-format descriptions of project scopes are a significant and time-consuming barrier when a human must review them to determine similar projects. This study proposes an artificial intelligence-driven model for analyzing project scope descriptions and evaluating project similarity using natural language processing (NLP) techniques. The proposed algorithm can intelligently (a) extract major work activities from unstructured descriptions held in a database and (b) quantify similarities by considering the semantic features of the texts representing work activities. The proposed model enhances the identification of comparable historical projects by systematically analyzing project scopes.

  • PDF
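
As a loose illustration of scope-based similarity scoring (not the paper's full activity-extraction pipeline), the sketch below compares a new scope statement against historical ones using TF-IDF vectors and cosine similarity; the scope texts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

historical_scopes = [
    "Replace bridge deck and repair expansion joints on interstate overpass",
    "Resurface two-lane rural highway and upgrade drainage culverts",
]
new_scope = ["Bridge deck replacement with joint repair and barrier upgrades"]

vectorizer = TfidfVectorizer(stop_words="english")
hist_vecs = vectorizer.fit_transform(historical_scopes)
new_vec = vectorizer.transform(new_scope)

scores = cosine_similarity(new_vec, hist_vecs)[0]   # similarity to each past project
best = scores.argmax()
print(f"Most similar past project: #{best} (score={scores[best]:.2f})")
```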

AUTOMATED HAZARD IDENTIFICATION FRAMEWORK FOR THE PROACTIVE CONSIDERATION OF CONSTRUCTION SAFETY

  • JunHyuk Kwon;Byungil Kim;SangHyun Lee;Hyoungkwan Kim
    • International conference on construction engineering and project management / 2013.01a / pp.60-65 / 2013
  • Introducing the concept of construction safety in the design/engineering phase can improve the efficiency and effectiveness of safety management on construction sites. In this sense, further improvements in safety can be made in the design/engineering phase through the development of (1) an automated hazard identification process that depends little on user knowledge, (2) automated construction schedule generation that accommodates hazard information varying over time, and (3) a visual representation of the results that is easy to understand. In this paper, we formulate an automated hazard identification framework for construction safety that extracts hazard information from related regulations to eliminate human intervention and uses a visualization technique to enhance users' understanding of hazard information. First, hazard information is automatically extracted from textual safety and health regulations (i.e., Occupational Safety and Health Administration (OSHA) standards) using natural language processing (NLP) techniques, without users' interpretation. Next, the scheduling and sequencing of construction activities are automatically generated with respect to the 3D building model. Then, the extracted hazard information is integrated into the geometry data of construction elements in the Industry Foundation Classes (IFC) building model using a conformity-checking algorithm within open-source 3D computer graphics software. Preliminary results demonstrate that this approach can be used in the design/engineering phases of construction without the manual interpretation of safety experts, facilitating designers' and engineers' proactive consideration of safety management.

  • PDF
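
A heavily simplified stand-in for the regulation-mining step described here: scan OSHA-style clauses for hazard keywords and return the associated hazard labels. The keywords, clauses, and matching rule are assumptions for demonstration only, not the framework's actual extraction method.

```python
import re

# Hypothetical mapping from trigger keywords to hazard labels
HAZARD_KEYWORDS = {"fall": "fall hazard", "excavation": "cave-in hazard",
                   "scaffold": "collapse hazard", "electrical": "electrocution hazard"}

clauses = [
    "Guardrail systems shall be installed where employees are exposed to fall distances of 6 feet or more.",
    "Daily inspections of excavations shall be made by a competent person.",
]

def extract_hazards(text):
    """Return hazard labels whose trigger keywords appear in the clause."""
    found = []
    for keyword, hazard in HAZARD_KEYWORDS.items():
        if re.search(rf"\b{keyword}s?\b", text, flags=re.IGNORECASE):
            found.append(hazard)
    return found

for clause in clauses:
    print(extract_hazards(clause), "<-", clause[:60], "...")
```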

CORRECT? CORECT!: Classification of ESG Ratings with Earnings Call Transcript

  • Haein Lee;Hae Sun Jung;Heungju Park;Jang Hyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.1090-1100 / 2024
  • While incorporating ESG indicators is recognized as crucial for sustainability and increased firm value, inconsistent disclosure of ESG data and vague assessment standards have been key challenges. To address these issues, this study proposes an automated ESG rating strategy based on ambiguous text. Earnings Call Transcript data were classified as E, S, or G using the Refinitiv-Sustainable Leadership Monitor's more than 450 metrics. The study employed advanced natural language processing techniques, namely BERT, RoBERTa, ALBERT, FinBERT, and ELECTRA models, to precisely classify ESG documents. In addition, the authors computed the average predicted probability for each label, providing a means to identify the relative significance of different ESG factors. The experimental results demonstrate the capability of the proposed methodology to enhance the ESG assessment criteria established by various rating agencies and highlight that companies primarily focus on governance factors; in other words, companies were making efforts to strengthen their governance framework. In conclusion, this framework enables sustainable and responsible business by providing insight into the ESG information contained in Earnings Call Transcript data.
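
A bare-bones sketch of the scoring step this abstract mentions: run transcript sentences through a BERT-style sequence classifier and average the predicted probabilities per ESG label. The classification head below is untrained (the study fine-tunes BERT, RoBERTa, ALBERT, FinBERT, and ELECTRA on labeled data), and the sentences and label order are illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["E", "S", "G"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=len(LABELS))

sentences = [
    "We reduced scope-one emissions by twelve percent this quarter.",
    "The board approved a new independent audit committee charter.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)   # (num_sentences, 3)

avg = probs.mean(dim=0)                                      # average probability per label
for label, p in zip(LABELS, avg.tolist()):
    print(f"{label}: {p:.3f}")
```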

An Artificial Intelligence Approach for Word Semantic Similarity Measure of Hindi Language

  • Younas, Farah;Nadir, Jumana;Usman, Muhammad;Khan, Muhammad Attique;Khan, Sajid Ali;Kadry, Seifedine;Nam, Yunyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2049-2068 / 2021
  • AI combined with NLP techniques has promoted the use of virtual assistants and has made people rely on them for many diverse uses. Conversational agents are among the most promising technologies for assisting computer users in their tasks. An important challenge in developing conversational agents globally is transferring the groundbreaking expertise obtained in English to other languages, and AI is making this transfer possible. There is a dire need to develop systems that understand secular languages. One such difficult language is Hindi, the fourth most spoken language in the world. Semantic similarity is an important part of natural language processing, with applications such as ontology learning and information extraction, and it is needed for developing conversational agents. Most research has concentrated on English and other European languages. This paper presents a corpus-based word semantic similarity measure for Hindi. An experiment translating an English benchmark dataset into Hindi is performed, investigating the incorporation of the corpus with human and machine similarity ratings. A significant correlation between the algorithm's ratings and human intuition was calculated to analyze the accuracy of the proposed similarity measure. The method can be adapted to various word semantic similarity applications or used as a module for any other language.
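
A small sketch of the corpus-based measure in outline: train word vectors on a toy Hindi corpus, score word pairs by cosine similarity, and correlate the scores with human ratings. The corpus, word pairs, and ratings below are invented placeholders, not the translated benchmark used in the paper.

```python
from gensim.models import Word2Vec
from scipy.stats import spearmanr

# Toy pre-tokenized Hindi corpus
corpus = [["राजा", "महल", "में", "रहता", "है"],
          ["रानी", "महल", "में", "रहती", "है"],
          ["किसान", "खेत", "में", "काम", "करता", "है"]]
model = Word2Vec(corpus, vector_size=50, min_count=1, sg=1, window=3, epochs=50)

pairs = [("राजा", "रानी"), ("राजा", "किसान"), ("महल", "खेत")]
human_ratings = [3.8, 1.5, 1.2]                      # hypothetical ratings on a 0-4 scale

model_scores = [model.wv.similarity(a, b) for a, b in pairs]
rho, _ = spearmanr(model_scores, human_ratings)      # agreement with human intuition
print(model_scores, f"Spearman rho = {rho:.2f}")
```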

A Design of Stress Measurement System using Facial and Verbal Sentiment Analysis (표정과 언어 감성 분석을 통한 스트레스 측정시스템 설계)

  • Yuw, Suhwa;Chun, Jiwon;Lee, Aejin;Kim, Yoonhee
    • KNOM Review / v.24 no.2 / pp.35-47 / 2021
  • Modern society, which demands constant competition and self-improvement, exposes people to various forms of stress. A person under stress often reveals it in facial expressions and language, so stress can be measured through facial expression and language analysis. This paper proposes a stress measurement system using facial expression and verbal sentiment analysis. The method analyzes a person's facial expressions and verbal sentiment to derive a stress index based on the dominant emotion value, and derives an integrated stress index based on the consistency between facial expression and language. Quantifying and generalizing stress measurement in this way enables researchers to evaluate the stress index objectively.
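
A toy sketch of the combination step described above: per-channel negative-emotion scores are averaged into a base stress index and then weighted by how consistent the facial and verbal channels are. The formula and weights are assumptions for illustration, not the system's actual scoring rule.

```python
def stress_index(facial_negative: float, verbal_negative: float) -> float:
    """facial_negative / verbal_negative: negative-emotion scores in [0, 1]."""
    base = (facial_negative + verbal_negative) / 2               # average channel stress
    consistency = 1.0 - abs(facial_negative - verbal_negative)   # 1 = channels agree
    return round(base * (0.5 + 0.5 * consistency), 3)            # discount inconsistent signals

print(stress_index(0.8, 0.7))   # both channels indicate stress -> high index
print(stress_index(0.8, 0.1))   # channels disagree -> discounted index
```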