Title/Summary/Keyword: Natural Language Understanding

Automated Construction Activities Extraction from Accident Reports Using Deep Neural Network and Natural Language Processing Techniques

  • Do, Quan; Le, Tuyen; Le, Chau
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.744-751 / 2022
  • Construction is among the most dangerous industries, with numerous accidents occurring at job sites. Following an accident, an investigation report is issued containing all of the specifics. Analyzing the text in construction accident reports can deepen our understanding of historical data and be used for accident prevention. However, the conventional method requires a significant amount of time and effort to read the reports and identify crucial information. Previous studies primarily focused on analyzing the objects and causes involved in accidents rather than the construction activities. This study aims to extract the construction activities workers were performing when accidents occurred by presenting an automated framework that adopts a deep learning approach and natural language processing (NLP) techniques to classify sentences from past construction accident reports into predefined categories: TRADE (a construction activity before an accident), EVENT (an accident), and CONSEQUENCE (the outcome of an accident). The classification model, built on a Convolutional Neural Network (CNN), achieved a robust accuracy of 88.7%, indicating that it can investigate the occurrence of accidents with minimal manual involvement and engineering effort. The study is also expected to support safety assessments and the construction of risk management systems.
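
The abstract does not include architecture details; below is a minimal sketch of a CNN sentence classifier of the kind described, using Keras. The vocabulary size, layer widths, and training data are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a CNN sentence classifier for the three labels the
# paper names (TRADE / EVENT / CONSEQUENCE). All hyperparameters are
# illustrative assumptions, not the authors' published configuration.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed maximum sentence length (tokens)
NUM_CLASSES = 3      # TRADE, EVENT, CONSEQUENCE

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(128, 5, activation="relu"),   # n-gram feature detectors
    layers.GlobalMaxPooling1D(),                # keep strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for integer-encoded report sentences and labels.
x = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x, y, epochs=1, batch_size=8)
```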

Korean Machine Reading Comprehension for Patent Consultation Using BERT (BERT를 이용한 한국어 특허상담 기계독해)

  • Min, Jae-Ok; Park, Jin-Woo; Jo, Yu-Jeong; Lee, Bong-Gun
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.145-152 / 2020
  • Machine reading comprehension (MRC) is an AI NLP task that predicts the answer to a user's query by understanding a relevant document, and it can be used in automated consultation services such as chatbots. The BERT model (Pre-training of Deep Bidirectional Transformers for Language Understanding), which has recently shown high performance across many areas of natural language processing, works in two phases: first, pre-training on large corpora from each domain, and second, fine-tuning the model to solve a specific NLP task as a prediction problem. In this paper, we build a patent MRC dataset and show how to construct patent-consultation training data for the MRC task. We also propose a method to improve MRC performance by using a Patent-BERT model pre-trained on a patent consultation corpus, together with a language processing algorithm suited to machine learning on patent counseling data. Experimental results show that the proposed method improves the quality of answers to patent counseling queries.
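
As a concrete illustration of the fine-tuning phase, here is a minimal sketch of extractive MRC with a BERT-style model via the Hugging Face transformers pipeline. The checkpoint is a public English QA model used as a placeholder for the authors' (Korean) Patent-BERT, and the texts are invented.

```python
# Minimal sketch of the extractive-MRC setup the paper fine-tunes:
# a BERT-style model answers a question by selecting a span from the
# document. The checkpoint is a public English QA model standing in
# for the authors' Patent-BERT; the example texts are invented.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = ("A patent application is filed by submitting the application "
           "documents to the patent office.")
question = "Where are patent application documents submitted?"

result = qa(question=question, context=context)
print(result["answer"], round(result["score"], 3))  # predicted span + confidence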

Natural Language Queries for Music Information Retrieval (음악정보 검색에서 이용자 자연어 질의의 정확성 연구)

  • Lee, Jin-Ha
    • Journal of the Korean Society for Information Management / v.25 no.4 / pp.149-164 / 2008
  • Our limited understanding of real-life music information queries is an impediment to developing music information retrieval (MIR) systems that meet the needs of real users. This study aims to contribute to a theorized understanding of how people seek music information through an empirical investigation of real-life queries, focusing in particular on the accuracy of user-provided information and users' expressions of uncertainty. The study found that much of the information users provide is inaccurate: users made various syntactic and semantic errors in supplying it. Despite these inaccuracies and uncertainties, many queries succeeded in eliciting correct answers. A theory from pragmatics is suggested as a partial explanation for the unexpected success of inaccurate queries.

Chinese Multi-domain Task-oriented Dialogue System based on Paddle (Paddle 기반의 중국어 Multi-domain Task-oriented 대화 시스템)

  • Deng, Yuchen; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.308-310 / 2022
  • With the rise of the AI wave, task-oriented dialogue systems have become a popular research direction in academia and industry. Currently, task-oriented dialogue systems mainly adopt a pipelined form, which typically includes natural language understanding, dialogue state tracking, dialogue decision making, and natural language generation. However, pipelines are prone to error propagation, so many task-oriented dialogue systems on the market handle only single-round dialogues. Single-domain dialogues usually achieve relatively accurate semantic understanding, but such systems tend to perform poorly on multi-domain, multi-round dialogue datasets. To solve these issues, we developed a Paddle-based multi-domain task-oriented Chinese dialogue system. It is built on the NEZHA-base pre-trained model and the CrossWOZ dataset, and uses an intent recognition module, a dichotomous (binary) slot recognition module, and an NER module to perform dialogue state tracking (DST) and to generate replies based on rules. Experiments show that the dialogue system not only makes good use of context but also effectively handles long-term dependencies. Our approach improves DST: it can identify the multiple slot key-value pairs involved in an utterance, which eliminates the need for manual tagging and thus greatly saves manpower.
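
To make the pipelined form concrete, here is a toy sketch of the flow the abstract describes (intent recognition, slot recognition, DST, rule-based reply). The recognizers are keyword stubs standing in for the paper's NEZHA-based modules; all names and rules are invented for illustration.

```python
# Toy illustration of the pipeline: intent recognition -> slot
# recognition/NER -> dialogue state tracking -> rule-based reply.
# The two recognizers are stubs for the paper's NEZHA-based modules.
from typing import Dict

def recognize_intent(utterance: str) -> str:
    # Stub: the paper uses a NEZHA-based intent classifier.
    return "find_hotel" if "hotel" in utterance else "chitchat"

def recognize_slots(utterance: str) -> Dict[str, str]:
    # Stub: the paper combines dichotomous slot recognition with NER.
    slots = {}
    if "cheap" in utterance:
        slots["price_range"] = "cheap"
    if "Beijing" in utterance:
        slots["area"] = "Beijing"
    return slots

def track_state(state: Dict[str, str], slots: Dict[str, str]) -> Dict[str, str]:
    # DST: accumulate slot key-value pairs across turns.
    state.update(slots)
    return state

def generate_reply(intent: str, state: Dict[str, str]) -> str:
    # Rule-based reply generation, as in the paper.
    if intent == "find_hotel" and "price_range" in state:
        return f"Looking for a {state['price_range']} hotel in {state.get('area', 'any area')}."
    return "Could you tell me more?"

state: Dict[str, str] = {}
for turn in ["I need a hotel in Beijing", "a cheap hotel please"]:
    intent = recognize_intent(turn)
    state = track_state(state, recognize_slots(turn))
    print(generate_reply(intent, state))
# Turn 2 reuses the area slot from turn 1: multi-round state tracking.
```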

PASS: A Parallel Speech Understanding System

  • Chung, Sang-Hwa
    • Journal of Electrical Engineering and Information Science / v.1 no.1 / pp.1-9 / 1996
  • A key issue in spoken language processing has become the integration of speech understanding and natural language processing (NLP). This paper presents a parallel computational model for this integration. The model adopts a hierarchically structured knowledge base and memory-based parsing techniques. Processing is carried out by passing multiple markers in parallel through the knowledge base. Speech-specific problems such as insertion, deletion, and substitution have been analyzed, and parallel solutions for them are provided. The complete system has been implemented on the Semantic Network Array Processor (SNAP) and is operational. Results show an 80% sentence recognition rate for the Air Traffic Control domain. Moreover, a 15-fold speed-up is obtained over an identical sequential implementation, with the speed advantage increasing as the knowledge base grows.
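
For readers unfamiliar with marker passing, here is a conceptual sketch of the mechanism over a toy semantic network: markers launched from two concepts propagate along links, and their intersection signals a semantic connection. The network and hop limit are invented, and this sequential simulation only illustrates the idea; the paper's contribution is executing it in parallel on SNAP.

```python
# Conceptual sketch of marker passing over a semantic network. Markers
# from two concepts spread along links; shared nodes mark a semantic
# connection. Sequential simulation only; the paper runs this in
# parallel on the SNAP array processor.
from collections import deque

network = {  # toy knowledge base (concept -> linked concepts)
    "flight": ["aircraft", "air-traffic-control"],
    "aircraft": ["runway"],
    "clearance": ["air-traffic-control"],
    "air-traffic-control": ["runway"],
}

def propagate(source: str, max_hops: int = 3) -> set:
    """Return all nodes a marker from `source` reaches within max_hops."""
    reached, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in network.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return reached

# Markers from two words in an utterance intersect at shared concepts.
print(propagate("flight") & propagate("clearance"))
# -> {'air-traffic-control', 'runway'}
```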

Using Syntax and Shallow Semantic Analysis for Vietnamese Question Generation

  • Phuoc Tran; Duy Khanh Nguyen; Tram Tran; Bay Vo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2718-2731 / 2023
  • This paper presents a method of using syntax and shallow semantic analysis for Vietnamese question generation (QG). Specifically, the proposed technique investigates both the syntactic and the shallow semantic structure of each sentence. The main goal is to generate questions from a single sentence; these are factoid questions requiring short, fact-based answers. Syntax-based analysis is one of the most popular approaches in the QG field, but it requires expert linguistic knowledge and a deep understanding of Vietnamese syntax rules, making it a high-cost and inefficient solution because of the significant human effort needed to obtain qualified syntax rules. To deal with this problem, we collected the syntax rules of Vietnamese from a Vietnamese language textbook. We also used various natural language processing (NLP) techniques to analyze Vietnamese shallow syntax and semantics for the QG task: sentence segmentation, word segmentation, part-of-speech tagging, chunking, dependency parsing, and named entity recognition. We used human evaluation to assess the credibility of our model: we manually created questions from the corpus and then compared them with the automatically generated ones. The empirical evidence demonstrates that the proposed technique performs well, with the generated questions being very similar to those created by humans.
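
As a flavor of the approach, here is a toy sketch of a single syntax-rule transformation over a shallow parse. The rule, the parse representation, and the example are invented for illustration; the paper's 11 rules target Vietnamese parses.

```python
# Toy sketch of syntax-rule-based question generation: given a shallow
# parse (subject / verb / object chunks plus NER tags), one rule
# rewrites the sentence into a factoid question. The rule and parse
# format are invented; the paper applies 11 such rules to Vietnamese.
from typing import Dict, Optional

def rule_who_question(parse: Dict[str, str]) -> Optional[str]:
    # Rule: if the subject chunk is a PERSON named entity,
    # replace it with "Who" to ask about the agent.
    if parse.get("subject_ner") == "PERSON":
        return f"Who {parse['verb']} {parse['object']}?"
    return None

parse = {
    "subject": "Nguyen Du",
    "subject_ner": "PERSON",   # from a named-entity recognizer
    "verb": "wrote",
    "object": "The Tale of Kieu",
}
print(rule_who_question(parse))  # -> Who wrote The Tale of Kieu?
```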

Data Augmentation using Large Language Model for English Education (영어 교육을 위한 거대 언어 모델 활용 말뭉치 확장 프레임워크)

  • Jinwoo Jung; Sangkeun Jung
    • Annual Conference on Human and Language Technology / 2023.10a / pp.698-703 / 2023
  • Recently, pre-trained generative models such as ChatGPT have shown strong performance in natural language understanding. They are also being used in a variety of areas, such as assisting with coding tasks and solving or helping with problems at the level of the Korean college entrance exam and middle and high school. This paper presents a framework that uses a pre-trained generative model to expand a corpus for English education. We expand the corpus with ChatGPT and then verify the educational effectiveness of the generated sentences using semantic similarity, situational similarity, and sentence-level educational difficulty.
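
A minimal sketch of the expand-then-verify loop described above, assuming the OpenAI Python client for generation and a sentence-transformers model for semantic similarity; the libraries, model names, and similarity threshold are stand-ins for whatever the authors actually used.

```python
# Minimal sketch of corpus expansion plus a semantic-similarity check:
# generate paraphrase candidates with a chat model, keep those whose
# similarity to the seed sentence clears a threshold. Model names and
# the threshold are assumptions, not the authors' configuration.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def expand(seed: str, n: int = 5) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Write {n} sentences for English learners "
                              f"similar in meaning to: {seed}"}],
    )
    return [line.strip("- ").strip()
            for line in resp.choices[0].message.content.splitlines()
            if line.strip()]

def keep_similar(seed: str, candidates: list[str], threshold: float = 0.7) -> list[str]:
    seed_vec = embedder.encode(seed, convert_to_tensor=True)
    cand_vecs = embedder.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(seed_vec, cand_vecs)[0]
    return [c for c, s in zip(candidates, scores) if float(s) >= threshold]

seed = "She goes to school by bus every day."
print(keep_similar(seed, expand(seed)))
```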

Grammatical Structure Oriented Automated Approach for Surface Knowledge Extraction from Open Domain Unstructured Text

  • Tissera, Muditha; Weerasinghe, Ruvan
    • Journal of Information and Communication Convergence Engineering / v.20 no.2 / pp.113-124 / 2022
  • News in the form of web data generates increasingly large amounts of information as unstructured text. The ability to understand the meaning of news is limited to humans, which causes information overload and hinders effective use of the knowledge embedded in such texts. Automatic Knowledge Extraction (AKE) has therefore become an integral part of the Semantic Web and Natural Language Processing (NLP). Although recent literature shows that AKE has progressed, the results still fall short of expectations. This study proposes a method to automatically extract surface knowledge from English news into a machine-interpretable semantic format (triples). The technique was designed around the grammatical structure of the sentence, and 11 original rules were discovered. An initial experiment extracted triples from a Sri Lankan news corpus, of which 83.5% were meaningful. The experiment was extended to a British Broadcasting Corporation (BBC) news dataset to demonstrate its generic nature, yielding an even higher meaningful-triple extraction rate of 92.6%. These results were validated using the inter-rater agreement method, which confirmed their high reliability.
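
As an illustration of grammar-driven triple extraction, here is a minimal sketch of a single subject-verb-object rule over spaCy dependency labels. It shows the flavor of such rules only; it is not the paper's rule set, and the example sentence is invented.

```python
# Minimal sketch of grammatical-structure-driven triple extraction:
# one subject-verb-object rule over spaCy dependency labels. The paper
# defines 11 rules; this shows the general idea, not their rule set.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_triples(text: str):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("The government opened a new hospital in Colombo."))
# -> [('government', 'open', 'hospital')]
```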

A global epidemic notification site using natural language processing (자연어 처리를 활용한 전세계 전염병 알림 사이트)

  • Gwak, Chan-Woo; Kim, Ye-Chan; Choi, Jin-Hwang
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.905-908 / 2020
  • Recognizing the growing importance of worldwide disaster alert systems as globalization progresses, this paper develops a notification site focused on the currently circulating coronavirus. To differentiate it from existing information sites, existing information is analyzed and reclassified, giving the site a new form. To this end, natural language processing, a branch of artificial intelligence, is used to collect and process existing information and to publish more transparent, efficient, and valuable information. For information accuracy and data reduction, the existing information is reclassified under several conditions and then analyzed with Watson NLU (Natural Language Understanding), and the necessary information is posted on each dashboard. Each dashboard is built from the information obtained through NLU analysis, making a notification site where information can be checked concisely and at a glance.
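
A minimal sketch of the Watson NLU analysis step the site relies on, using the ibm-watson Python SDK; the API key, service URL, version date, and example text are placeholders, and the feature selection is an assumption.

```python
# Minimal sketch of the Watson NLU analysis step: extract entities and
# keywords from collected epidemic news text. API key, service URL,
# version date, and text are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, KeywordsOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01",
                                     authenticator=authenticator)
nlu.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")  # placeholder

response = nlu.analyze(
    text="Health authorities reported 120 new coronavirus cases in Seoul today.",
    features=Features(entities=EntitiesOptions(limit=5),
                      keywords=KeywordsOptions(limit=5)),
).get_result()

for entity in response["entities"]:
    print(entity["type"], entity["text"])  # e.g. Location Seoul
```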

Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol; Kim, Jin-Hwa; Kim, Kyung-Min; Chang, Jeong-Ho; Eom, Jae-Hong; Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in the field of natural language processing has been studied for a long time. Continuing our previous research, which classified large-scale text using Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experimental results revealed that the classification algorithms ranked, in order of performance, Multinomial Naïve Bayes classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The results can be interpreted as follows. First, CNN outperformed LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, GRU performed feature extraction better than LSTM. Finally, the finding that GRU outperformed CNN implies that text classification algorithms should consider both feature extraction and sequential information. We present the results of fine-tuning deep neural networks to provide future researchers with some intuition regarding natural language processing.
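
As a concrete reference point, here is a minimal sketch of a GRU text classifier, the best performer in the comparison, in Keras; vocabulary size, layer widths, class count, and data are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a GRU text classifier, the best performer in the
# paper's comparison. All sizes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, NUM_CLASSES, MAX_LEN = 20000, 10, 200  # assumed

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.GRU(128),                 # sequential feature extraction
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for integer-encoded documents and labels.
x = np.random.randint(1, VOCAB_SIZE, size=(16, MAX_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(16,))
model.fit(x, y, epochs=1)
```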