• Title/Summary/Keyword: natural language processing (NLP)

The Influence and Impact of Syntactic-Grammatical Knowledge on the Phonetic Outputs of a 'Reading Machine' (통사문법적 지식이 '독서기계'의 음성출력에 미치는 영향과 중요성)

  • Hong, Sungshim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.225-230
    • /
    • 2020
  • This paper highlights the influence and importance of syntactic-grammatical knowledge on "the reading machine" discussed in Jackendoff (1999). Because Jackendoff's study lacked detailed testing and implementation, this paper tests an extensive array of data using a component of Google Translate, currently the most widely and freely available such system on the internet. Although dated, Jackendoff's paper, "Why can't Computers use English?", argues that syntactic-grammatical knowledge plays a key role in the outputs of computers and computer-based reading machines. The current research re-tests his thought-provoking examples to find out whether Google Translate can handle the same problems some two decades later. As a result, it is argued that in the field of NLP, I-language in the sense of Chomsky (1986, 1995, etc.) is real, and that syntactic, grammatical, and categorial knowledge is essential to the faculty of language. This paper therefore reaffirms that, when it comes to human language, even the most advanced "machine" is still no match for the human faculty of language and its syntactic-grammatical knowledge.
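
As a rough illustration of this kind of probe, the sketch below synthesizes a few classically ambiguous English sentences through a text-to-speech front end, so a listener can judge whether the prosody reflects the intended parse. It uses gTTS, an unofficial Python wrapper around the Google Translate speech endpoint; this tooling is an assumption (the paper only says "a component of Google Translate"), and the sentence list is illustrative, not the paper's data array.

```python
# Hypothetical probe in the spirit of the paper: synthesize ambiguous
# sentences and let a listener judge whether the prosody matches the parse.
# gTTS is an assumption here, not the paper's named tooling.
from gtts import gTTS

sentences = [
    "The old man the boats.",             # garden path: "man" is the verb
    "They are hunting dogs.",             # copula + gerund vs. noun compound
    "I saw the man with the telescope.",  # PP-attachment ambiguity
]

for i, sentence in enumerate(sentences):
    # one audio file per sentence; requires network access
    gTTS(text=sentence, lang="en").save(f"reading_machine_test_{i}.mp3")
```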

An Application of RASA Technology to Design an AI Virtual Assistant: A Case of Learning Finance and Banking Terms in Vietnamese

  • PHAM, Thi My Ni;PHAM, Thi Ngoc Thao;NGUYEN, Ha Phuong Truc;LY, Bao Tuyen;NGUYEN, Truc Linh;LE, Hoanh Su
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.9 no.5
    • /
    • pp.273-283
    • /
    • 2022
  • Banking and finance is a broad term that encompasses a variety of smaller, more specialized subjects such as corporate finance, tax finance, and insurance finance. A virtual assistant that helps users search for information about banking and finance terms can be an extremely beneficial tool. In this study, we explored the information-search process and developed a virtual assistant for the early stages of learning and understanding Vietnamese banking and finance terms, in order to increase effectiveness and save time; this is also an innovative business practice in Vietnam. We built the FIBA2020 dataset and proposed a pipeline that uses Natural Language Processing (NLP), including Natural Language Understanding (NLU) algorithms, to build chatbot applications. The open-source framework RASA is used to implement the system in our study. We aim to improve model performance by replacing parts of RASA's default tokenizers with Vietnamese tokenizers and by experimenting with various language models. The best accuracy we achieved is 86.48% under the ideal condition and 70.04% under the worst condition. Finally, we put our findings into practice by creating an Android virtual assistant application using the model trained with the Whitespace tokenizer and the pre-trained language model m-BERT.
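
As a stand-in for the NLU step described above, the sketch below trains a tiny intent classifier with scikit-learn. It is not the authors' RASA pipeline, and the utterances and intent labels are invented for illustration (the FIBA2020 dataset is not reproduced here); it only shows the shape of the task that tokenizer choices feed into.

```python
# Invented mini-dataset and a plain scikit-learn classifier standing in for
# the RASA NLU step; the real system classifies intents over FIBA2020.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_utterances = [
    "what is compound interest",
    "explain compound interest to me",
    "how do I open a savings account",
    "steps to open a new bank account",
]
train_intents = ["ask_term", "ask_term", "open_account", "open_account"]

# default (whitespace-like) tokenization; the paper's experiment swaps in
# Vietnamese word segmenters at exactly this point
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_utterances, train_intents)

print(clf.predict(["can you explain compound interest"]))  # -> ['ask_term']
```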

An Algorithm for Predicting the Relationship between Lemmas and Corpus Size

  • Yang, Dan-Hee;Gomez, Pascual Cantos;Song, Man-Suk
    • ETRI Journal
    • /
    • v.22 no.2
    • /
    • pp.20-31
    • /
    • 2000
  • Much research on natural language processing (NLP), computational linguistics, and lexicography has relied on linguistic corpora. In recent years, many organizations around the world have been constructing their own large corpora to achieve corpus representativeness and/or linguistic comprehensiveness. However, there is no reliable guideline as to how large machine-readable corpora should be to support the development of practical NLP software and/or complete dictionaries for human and computational use. To shed new light on this issue, we reveal the flaws of several previous studies that aimed to predict corpus size, especially those using pure regression or curve-fitting methods. To overcome these flaws, we devise a new mathematical tool, a piecewise curve-fitting algorithm, and suggest how to determine the algorithm's tolerance error for good prediction using a specific corpus. Finally, we illustrate experimentally that the algorithm presented is valid, accurate, and reliable. We are confident that this study can contribute to solving some inherent problems of corpus linguistics, such as corpus predictability, compiling methodology, corpus representativeness, and linguistic comprehensiveness.
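
The paper's algorithm itself is not reproduced in the abstract, but the sketch below shows one way a piecewise curve-fitting scheme with a tolerance error can be set up: fit a Heaps'-law-style curve y = a·x^b over a growing window of (corpus size, lemma count) points and start a new piece whenever the relative error exceeds the tolerance. The data and the tolerance value are synthetic, for illustration only.

```python
# Minimal sketch of a piecewise curve fit with a tolerance error; the
# observations and tolerance are invented, not the paper's corpus or settings.
import numpy as np

def fit_piece(xs, ys):
    # fit y = a * x**b by linear regression in log-log space
    b, log_a = np.polyfit(np.log(xs), np.log(ys), 1)
    return np.exp(log_a), b

def max_rel_error(xs, ys, a, b):
    return np.max(np.abs(a * xs**b - ys) / ys)

# synthetic (corpus size in tokens, distinct lemma count) observations
x = np.array([1e4, 2e4, 4e4, 8e4, 1.6e5, 3.2e5, 6.4e5])
y = np.array([2100, 3500, 5800, 9400, 15000, 23000, 34000])

tolerance = 0.05           # maximum relative error allowed within one piece
pieces, start = [], 0
while start < len(x) - 1:
    params, end = None, start + 2
    while end <= len(x):
        a, b = fit_piece(x[start:end], y[start:end])
        if max_rel_error(x[start:end], y[start:end], a, b) > tolerance:
            break          # this point no longer fits: close the piece here
        params, end = (a, b), end + 1
    pieces.append((start, end - 2, params))   # indices covered by this piece
    start = end - 2                           # adjacent pieces share a knot

# the last piece's (a, b) can extrapolate lemma growth to larger corpora
print(pieces)
```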

Towards cross-platform interoperability for machine-assisted text annotation

  • de Castilho, Richard Eckart;Ide, Nancy;Kim, Jin-Dong;Klie, Jan-Christoph;Suderman, Keith
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.19.1-19.10
    • /
    • 2019
  • In this paper, we investigate cross-platform interoperability for natural language processing (NLP) and, in particular, annotation of textual resources, with an eye toward identifying the design elements of annotation models and processes that are particularly problematic for, or amenable to, enabling seamless communication across different platforms. The study is conducted in the context of a specific annotation methodology, namely machine-assisted interactive annotation (also known as human-in-the-loop annotation). This methodology requires the ability to freely combine resources from different document repositories, access a wide array of NLP tools that automatically annotate corpora for various linguistic phenomena, and use a sophisticated annotation editor that enables interactive manual annotation coupled with on-the-fly machine learning. We consider three independently developed platforms, each of which utilizes a different model for representing annotations over text, and each of which performs a different role in the process.
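
As a toy illustration of the kind of model mismatch the paper analyzes, the sketch below converts a character-offset annotation (one invented platform schema) into a token-index annotation (another invented schema). Both schemas are simplifications for illustration, not the actual models of the three platforms studied.

```python
# Invented schemas: platform A stores annotations as character offsets,
# platform B as token indices. Real platforms have much richer models.
text = "Aspirin inhibits platelet aggregation."
tokens = text.split()  # naive whitespace tokenization

def char_span_to_token_span(text, tokens, start, end):
    """Map a character-offset span to a (first_token, last_token) span."""
    spans, pos = [], 0
    for tok in tokens:
        pos = text.index(tok, pos)
        spans.append((pos, pos + len(tok)))
        pos += len(tok)
    covered = [i for i, (s, e) in enumerate(spans) if s < end and e > start]
    return covered[0], covered[-1]  # lossy if the span cuts inside a token

# Platform A annotation: {"start": 0, "end": 7, "label": "Drug"}
print(char_span_to_token_span(text, tokens, 0, 7))  # -> (0, 0): "Aspirin"
```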

Generative Linguistic Steganography: A Comprehensive Review

  • Xiang, Lingyun;Wang, Rong;Yang, Zhongliang;Liu, Yuling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.986-1005
    • /
    • 2022
  • Text steganography is one of the most prominent and promising research interests in the information security field. With the unprecedented success of neural networks and natural language processing (NLP), recent years have seen a surge of research on generative linguistic steganography (GLS). This paper provides a thorough and comprehensive review that summarizes the existing key contributions and creates a novel taxonomy for GLS according to NLP techniques and steganographic encoding algorithms, then summarizes the characteristics of generative linguistic steganographic methods to analyze the relationships and differences among the types. This paper also comprehensively introduces and analyzes several evaluation metrics to assess the performance of GLS from diverse perspectives. Finally, it outlines directions for future work to support follow-up research and innovation.
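
The core GLS idea can be shown in a few lines: hide secret bits in the choices a text generator makes among equally plausible next words. The sketch below uses a hand-written bigram table as the "language model"; real systems rank candidates with a neural language model and apply coding schemes such as Huffman or arithmetic coding over the ranks.

```python
# Toy generative steganography: each secret bit picks candidate 0 or 1 at
# every generation step. The bigram table is hand-written for illustration.
bigram = {  # each context offers exactly two plausible continuations
    "<s>":     ["the", "a"],
    "the":     ["weather", "market"],
    "a":       ["storm", "rally"],
    "weather": ["improved", "worsened"],
    "market":  ["improved", "worsened"],
    "storm":   ["passed", "arrived"],
    "rally":   ["passed", "arrived"],
}

def encode(bits):
    """Each bit selects which ranked candidate continues the text."""
    word, words = "<s>", []
    for bit in bits:
        word = bigram[word][bit]
        words.append(word)
    return " ".join(words)

def decode(stego_text):
    """Replay generation and record which rank was chosen at each step."""
    word, bits = "<s>", []
    for nxt in stego_text.split():
        bits.append(bigram[word].index(nxt))
        word = nxt
    return bits

secret = [1, 0, 1]
cover = encode(secret)          # -> "a storm arrived"
assert decode(cover) == secret
```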

A Study on Korean Language Processing Using TF-IDF (TF-IDF를 활용한 한글 자연어 처리 연구)

  • Lee, Jong-Hwa;Lee, MoonBong;Kim, Jong-Weon
    • The Journal of Information Systems
    • /
    • v.28 no.3
    • /
    • pp.105-121
    • /
    • 2019
  • Purpose: One of the reasons for the expansion of information systems in the enterprise is the increased efficiency of data analysis, in particular of the rapidly growing volume of complex, unstructured data such as video, voice, images, and conversations in and out of social networks. The purpose of this study is to analyze customer needs from customer voices, i.e., text data, in the web environment. Design/methodology/approach: Previous studies have shown that extracting the words that interpret a sentence works better than simple frequency analysis. In this study, we applied to Korean language research the TF-IDF method, which extracts the important keywords of actual sentences, rather than the TF method, which represents a sentence by simple word frequency alone. We visualized the two techniques by cluster analysis and described the differences. Findings: Applying both the TF and TF-IDF techniques to Korean natural language processing, the study showed the gain in moving from frequency analysis to semantic analysis, and Korean language processing researchers are expected to adopt the latter technique.
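
The TF vs. TF-IDF contrast the paper draws can be reproduced with scikit-learn, as sketched below. The documents are illustrative English stand-ins; for Korean text one would pass a morphological analyzer (for example, a KoNLPy tagger) as the vectorizer's tokenizer, since whitespace alone does not separate Korean morphemes. That substitution is an assumption, not this paper's exact setup.

```python
# Illustrative documents contrasting raw TF with TF-IDF weighting.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "the delivery was fast and the delivery driver was kind",
    "the product broke after one day",
    "fast delivery but the product quality is poor",
]

tf = CountVectorizer().fit(docs)      # raw term frequency (the TF method)
tfidf = TfidfVectorizer().fit(docs)   # down-weights terms shared by all docs

counts, weights = tf.transform(docs), tfidf.transform(docs)

# In document 0, "the" and "delivery" both occur twice, so raw TF cannot
# tell them apart; TF-IDF ranks the discriminative "delivery" higher.
print(counts[0, tf.vocabulary_["the"]], counts[0, tf.vocabulary_["delivery"]])
print(weights[0, tfidf.vocabulary_["the"]],
      weights[0, tfidf.vocabulary_["delivery"]])
```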

Enhancing the Text Mining Process by Implementation of Average-Stochastic Gradient Descent Weight Dropped Long Short-Term Memory

  • Annaluri, Sreenivasa Rao;Attili, Venkata Ramana
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.352-358
    • /
    • 2022
  • Text mining is an important process for analyzing data collected from different sources such as videos, audio, and social media. Tools based on Natural Language Processing (NLP) are widely used in real-time applications. In earlier research, text mining approaches were implemented using long short-term memory (LSTM) networks. In this paper, text mining is performed using average stochastic gradient descent weight-dropped LSTM (AWD-LSTM) techniques to obtain better accuracy and performance. The proposed model is demonstrated on the Internet Movie Database (IMDB) reviews. The model was implemented in Python because of the language's adaptability and flexibility in dealing with massive datasets. The results show that the proposed LSTM-plus-weight-drop-plus-embedding model achieved an accuracy of 88.36%, compared with 85.64% for the previous AWD-LSTM model and 85.16% for the plain LSTM model. Finally, the loss decreased from 0.341 to 0.299 with the proposed model.
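
The weight-drop idea at the core of AWD-LSTM can be sketched in PyTorch, as below: apply DropConnect to the recurrent (hidden-to-hidden) weight matrix on each forward pass. This is a minimal single-cell sketch, not the paper's full model, which also uses embedding dropout and averaged SGD.

```python
# Minimal DropConnect-on-recurrent-weights sketch; omits embedding dropout
# and ASGD, which the full AWD-LSTM recipe also uses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, weight_p=0.5):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.weight_p = weight_p

    def forward(self, x, state):
        h, c = state
        # DropConnect: zero a random subset of recurrent connections
        w_hh = F.dropout(self.cell.weight_hh, p=self.weight_p,
                         training=self.training)
        gates = (x @ self.cell.weight_ih.t() + self.cell.bias_ih
                 + h @ w_hh.t() + self.cell.bias_hh)
        i, f, g, o = gates.chunk(4, dim=1)   # PyTorch gate order: i, f, g, o
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = WeightDropLSTMCell(input_size=8, hidden_size=16)
x = torch.randn(4, 8)                        # batch of 4 inputs
h = c = torch.zeros(4, 16)
h, c = cell(x, (h, c))                       # one recurrent step
```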

Trends in Deep Learning-based Medical Optical Character Recognition (딥러닝 기반의 의료 OCR 기술 동향)

  • Sungyeon Yoon;Arin Choi;Chaewon Kim;Sumin Oh;Seoyoung Sohn;Jiyeon Kim;Hyunhee Lee;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.453-458
    • /
    • 2024
  • Optical character recognition (OCR) is a technology that recognizes text in images and converts it into a digital format. Deep learning-based OCR is used in many industries with large quantities of recorded data because of its high recognition performance. To improve medical services, deep learning-based OCR has been actively adopted by the medical industry. In this paper, we discuss trends in OCR engines and medical OCR and provide a roadmap for the development of medical OCR. By applying natural language processing to detected text data, current medical OCR has improved its recognition performance. However, recognition performance remains limited, especially for non-standard handwriting and modified text. To develop advanced medical OCR, building databases of medical data, image pre-processing, and natural language processing are necessary.
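
One simple form of the NLP post-processing step mentioned above is snapping noisy OCR tokens to a domain lexicon, as sketched below with the standard library's difflib. The lexicon, the OCR string, and the cutoff are illustrative; production systems use language models rather than plain edit-distance matching.

```python
# Illustrative OCR post-correction against a hand-written medical lexicon.
import difflib

medical_lexicon = ["metformin", "amoxicillin", "hypertension", "500mg", "daily"]

def correct(ocr_tokens, lexicon, cutoff=0.6):
    fixed = []
    for tok in ocr_tokens:
        # snap each token to its closest lexicon entry, if one is close enough
        match = difflib.get_close_matches(tok.lower(), lexicon, n=1,
                                          cutoff=cutoff)
        fixed.append(match[0] if match else tok)
    return fixed

noisy = "metf0rmin 500rng dai1y".split()   # typical OCR confusions: 0/o, rn/m, 1/l
print(correct(noisy, medical_lexicon))     # -> ['metformin', '500mg', 'daily']
```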

Utilizing Social Network Analysis in the Evaluation of National R&D Programs (국가연구개발사업 평가에서 사회연결망 분석 활용 방안)

  • Gi, Ji-Hun
    • Proceedings of the Korea Technology Innovation Society Conference
    • /
    • 2017.11a
    • /
    • pp.129-129
    • /
    • 2017
  • In planning and evaluating government R&D programs, one of the first steps is to understand the government's current R&D investment portfolio, that is, which fields or topics the government is now investing in. Traditional methods for analyzing the government R&D investment portfolio rely on keyword searches or ad hoc two-dimensional classifications. The main drawback of these approaches is their limited ability to account for the characteristics of the government's R&D investment as a whole and the role of each individual R&D program within it, which tends to depend on its relationships with other programs. This paper suggests a new method for mapping and analyzing government investment in R&D that combines natural language processing (NLP) and network analysis. NLP enables us to build a network of government R&D programs whose links are defined by similarity in R&D topics. Methods from network analysis then reveal the characteristics of government investment in R&D, including major investment fields, unexplored topics, and key R&D programs that act as hubs or bridges in the network, all of which are difficult to identify with conventional methods. These insights can be utilized in planning a new R&D program, reviewing its proposal, or evaluating the performance of R&D programs. The (filtered) Korean text corpus used consists of hundreds of R&D program descriptions from the budget requests for fiscal year 2017 submitted by government departments to the Korean Ministry of Strategy and Finance.
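
A minimal version of the proposed method can be sketched as below: vectorize program descriptions with TF-IDF, link programs whose cosine similarity passes a threshold, and use centrality to surface hub or bridge programs. The four descriptions and the 0.1 threshold are invented for illustration, not the paper's corpus or settings.

```python
# Toy program-similarity network; isolated nodes hint at unexplored topics,
# high-betweenness nodes at bridge-like programs.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

programs = {
    "P1": "artificial intelligence semiconductor design research",
    "P2": "semiconductor materials and process development",
    "P3": "artificial intelligence medical diagnosis platform",
    "P4": "crop genome analysis for climate adaptation",
}

names = list(programs)
sim = cosine_similarity(
    TfidfVectorizer(stop_words="english").fit_transform(programs.values()))

G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.1:                  # link topically similar programs
            G.add_edge(names[i], names[j], weight=sim[i, j])

print(nx.betweenness_centrality(G))          # bridge-like programs score high
```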

Korean Natural Language Processing Platform for Linked Data (Linked Data를 위한 한국어 자연언어처리 플랫폼)

  • Hahm, YoungGyun;Lim, Kyungtae;Rezk, Martin;Park, Jungyeul;Yoon, Yongun;Choi, Key-Sun
    • Annual Conference on Human and Language Technology
    • /
    • 2012.10a
    • /
    • pp.16-20
    • /
    • 2012
  • This paper presents a single platform for Korean natural language processing that integrates a morphological analyzer, a phrase-structure parser, and a dependency parser, and proposes a system that converts the results into RDF for Linked Data and for international interoperability with the outputs of various foreign natural language processing tools.
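
A minimal sketch of the final conversion step is shown below: emitting morphological analysis results as RDF with rdflib. The namespace, property names, and the hand-written analysis are all illustrative; the platform described in the paper defines its own schema.

```python
# Hypothetical schema: turn a morpheme analysis into RDF triples.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/nlp/")   # invented namespace, not the paper's
g = Graph()
g.bind("ex", EX)

sentence = EX["sentence/1"]
g.add((sentence, EX.text, Literal("학교에 간다", lang="ko")))

# hand-written stand-in for a morphological analyzer's output (Sejong tags)
for i, (form, tag) in enumerate([("학교", "NNG"), ("에", "JKB"),
                                 ("가", "VV"), ("ㄴ다", "EF")]):
    m = EX[f"sentence/1/morpheme/{i}"]
    g.add((sentence, EX.hasMorpheme, m))
    g.add((m, EX.form, Literal(form, lang="ko")))
    g.add((m, EX.pos, Literal(tag)))

print(g.serialize(format="turtle"))
```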
