• Title/Summary/Keyword: NLP application


An Application of RASA Technology to Design an AI Virtual Assistant: A Case of Learning Finance and Banking Terms in Vietnamese

  • PHAM, Thi My Ni; PHAM, Thi Ngoc Thao; NGUYEN, Ha Phuong Truc; LY, Bao Tuyen; NGUYEN, Truc Linh; LE, Hoanh Su
    • The Journal of Asian Finance, Economics and Business / v.9 no.5 / pp.273-283 / 2022
  • Banking and finance is a broad term that incorporates a variety of smaller, more specialized subjects such as corporate finance, tax finance, and insurance finance. A virtual assistant that helps users search for information about banking and finance terms can be an extremely beneficial tool. In this study, we explored the process of searching for information, identified opportunities, and developed a virtual assistant for the early stages of learning and understanding these terms in Vietnamese, in order to increase effectiveness and save time; this is also an innovative business practice use case in Vietnam. We built the FIBA2020 dataset and proposed a pipeline that uses Natural Language Processing (NLP), including Natural Language Understanding (NLU) algorithms, to build chatbot applications. The open-source framework RASA is used to implement the system in our study. We aim to improve model performance by replacing parts of RASA's default tokenizers with Vietnamese tokenizers and experimenting with various language models. The best accuracy we achieved is 86.48% in the ideal condition and 70.04% in the worst condition. Finally, we put our findings into practice by creating an Android virtual assistant application using the model trained with the Whitespace tokenizer and the pre-trained language model m-BERT.
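
A rough sketch of the intent-classification step such a chatbot pipeline performs is shown below. This is not the authors' actual RASA configuration; the component choices (TF-IDF plus logistic regression), the toy Vietnamese utterances, and the intent labels are illustrative assumptions only.

```python
# Minimal sketch of intent classification for finance/banking questions.
# NOT the authors' RASA pipeline; it only illustrates mapping tokenized
# user utterances to intents. Utterances and intent labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (utterance, intent) pairs with hypothetical labels.
utterances = [
    "lai suat tiet kiem la gi",        # "what is a savings interest rate"
    "the tin dung hoat dong the nao",  # "how does a credit card work"
    "lam phat nghia la gi",            # "what does inflation mean"
]
intents = ["ask_interest_rate", "ask_credit_card", "ask_inflation"]

# Whitespace-style tokenization is the default here; a Vietnamese word
# segmenter could be plugged in via TfidfVectorizer's `tokenizer=` argument.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

print(clf.predict(["lai suat la gi"]))  # likely: ask_interest_rate
```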

A Web-Based Multimedia Dictionary System Supporting Media Synchronization (미디어 동기화를 지원하는 웹기반 멀티미디어 전자사전 시스템)

  • Choi, Yong-Jun; Hwang, Do-Sam
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1145-1161 / 2004
  • The purpose of this research is to establish a method for constructing a multimedia electronic dictionary system by integrating the media data available from linguistic resources on the Internet. As a result of this study, existing text-oriented electronic dictionary systems can be developed into multimedia lexical systems with greater efficiency and effectiveness. A method is proposed in which a web browser integrates the media data of linguistic resources on the Internet. In the proposed method, the web browser carries out all the work related to integrating the media data, so no dedicated server system is needed. The system built in our web-browser environment integrates text, image, and voice sources, and can also produce moving pictures. Each medium is associated with the meaning of its data so that data integration and movement can be specified through these associations. SMIL documents are generated by analyzing the meaning of each data unit and are executed in a web browser. The proposed system can be operated without a dedicated server system; it also saves storage space by sharing the media data distributed on the Internet and makes it easier to update the data.
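
As a loose illustration of the kind of SMIL output described above, the sketch below generates a minimal SMIL document that synchronizes an image, an audio clip, and a text caption for one dictionary entry. It is not the paper's actual generator; the element layout, durations, and media file names are assumptions.

```python
# Minimal sketch: build a SMIL document that plays an image, an audio clip,
# and a text caption in parallel (media synchronization) for one entry.
# Not the paper's actual generator; structure and URLs are invented.
import xml.etree.ElementTree as ET

def build_smil(image_url: str, audio_url: str, text_url: str) -> bytes:
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    # <par> plays its children at the same time.
    par = ET.SubElement(body, "par")
    ET.SubElement(par, "img", src=image_url, dur="5s")
    ET.SubElement(par, "audio", src=audio_url)
    ET.SubElement(par, "text", src=text_url, dur="5s")
    return ET.tostring(smil, encoding="utf-8")

print(build_smil("apple.jpg", "apple.mp3", "apple.txt").decode("utf-8"))
```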


Semantic analysis via application of deep learning using Naver movie review data (네이버 영화 리뷰 데이터를 이용한 의미 분석(semantic analysis))

  • Kim, Sojin; Song, Jongwoo
    • The Korean Journal of Applied Statistics / v.35 no.1 / pp.19-33 / 2022
  • With the explosive growth of social media, the abundant text data generated by web users has become an important source for data analysis. For example, online movie reviews on 'Naver Movie' often influence whether the general public decides to watch a movie. This study analyzes Naver Movie's text-based review data to predict the actual ratings. After examining the distribution of movie ratings, we performed semantic analysis using Korean natural language processing. This research sought to find the best review-rating prediction model by comparing machine learning and deep learning models. We also compared various regression and classification models in the 2-class and multi-class cases. Lastly, we explain the causes of review misclassification in relation to the characteristics of movie review data.
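
As a minimal sketch of the 2-class rating-prediction setup described above (not the authors' actual models), one could binarize the ratings and train a simple text classifier; the toy reviews, the rating threshold, and the character n-gram tokenization below are illustrative assumptions.

```python
# Sketch of the 2-class rating-prediction setup: binarize 1-10 ratings
# into negative/positive and fit a simple classifier on review text.
# Toy reviews and ratings are invented; real experiments would use the
# Naver Movie corpus and a Korean morphological analyzer for tokenization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["정말 재미있어요", "최악의 영화", "그냥 그래요", "배우 연기가 훌륭합니다"]
ratings = [10, 1, 5, 9]  # 1-10 scale, as in Naver Movie reviews

# 2-class case: negative (<=5) vs positive (>5).
labels_2class = [int(r > 5) for r in ratings]

# Character n-grams stand in for proper Korean morpheme tokenization here.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(),
)
model.fit(reviews, labels_2class)
print(model.predict(["연기가 정말 훌륭해요"]))  # likely predicts positive (1)
```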

Improving a Korean Spell/Grammar Checker for the Web-Based Language Learning System (웹기반 언어 학습시스템을 위한 한국어 철자/문법 검사기의 성능 향상)

  • 남현숙; 김광영; 권혁철
    • Korean Journal of Cognitive Science / v.12 no.3 / pp.1-18 / 2001
  • The goal of this paper is the pedagogical application of a Korean spell/grammar checker to a web-based language learning system for Korean writing. To maximize the instructional efficiency of our learning system 'Urimal Baeumteo', we had to improve our Korean spell/grammar checker. Today, an NLP system's performance depends on its semantic processing capability. In our Korean spell/grammar checker, the tasks accomplished at the semantic level are the detection and correction of misused derived and compound nouns in the spell-checking device, and the detection and correction of syntactic and semantic errors in the grammar-checking device. We describe a common approach to partial parsing using collocation rules based on dependency grammar. To provide more detailed semantic rules, we classified nouns according to their concepts and subcategorized verbs by their syntactic and semantic features. Improving the Korean spell/grammar checker makes our learning system active and intelligent in a web-based environment. We acknowledge the flaws in our system: the classification of nouns based on their meanings and concepts is a time-consuming task, and the analytic unit of this study is limited mainly to phrases within a sentence, so accurate parsing of embedded sentences remains a difficult problem to solve. Concerning the web-based language learning system, it is critically important to consider its interface design and the structure of its contents.
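
A highly simplified sketch of collocation-rule checking in the spirit of the dependency-grammar approach described above is shown below; the noun classes, verb rules, and example words are invented for illustration and are not the checker's actual linguistic resources.

```python
# Highly simplified sketch of collocation-rule checking based on
# (verb, required noun class) pairs. The noun classes, rules, and example
# words are hypothetical, not the checker's actual resources.

# Hypothetical noun-concept classification.
NOUN_CLASS = {"물": "liquid", "밥": "food", "책": "artifact"}

# Hypothetical collocation rules: verb -> noun classes it selects for as
# an object (e.g., "마시다" (to drink) expects a liquid object).
COLLOCATION_RULES = {"마시다": {"liquid"}, "먹다": {"food", "liquid"}}

def check_object_collocation(verb: str, obj_noun: str) -> bool:
    """Return True if the object's noun class satisfies the verb's rule."""
    allowed = COLLOCATION_RULES.get(verb)
    noun_class = NOUN_CLASS.get(obj_noun)
    if allowed is None or noun_class is None:
        return True  # no rule or unknown noun: do not flag an error
    return noun_class in allowed

print(check_object_collocation("마시다", "물"))  # True: "drink water" is fine
print(check_object_collocation("마시다", "책"))  # False: "drink a book" flagged
```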


Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul; Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought about various kinds of online news media, the amount of news about events occurring in society has increased greatly, so automatically summarizing key events from massive amounts of news data would help users survey many events at a glance. In addition, an event network built on the relevance between events can greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing using NPMI and Word2Vec, kept only meaningful words and merged synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, and events were detected by finding the peaks of each topic's distribution. A total of 32 topics were extracted from the topic modeling, and the time of each event was inferred from the point at which its topic distribution surged. As a result, a total of 85 events were detected, and a final 16 events were retained after filtering with Gaussian smoothing. We then calculated a relevance score between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected related events. Finally, we built the event network by setting each event as a vertex and the relevance score between events as the edge connecting those vertices. The event network constructed with our method sorted the major events in Korean politics and society over the past year in chronological order and, at the same time, identified which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to analyze large amounts of data easily and to identify relationships between events that were difficult to detect with existing event detection. In text preprocessing, we applied various text mining techniques and Word2Vec to improve the accuracy of extracting proper nouns and compound nouns, which have been difficult to handle in analyzing Korean text. The event detection and network construction techniques of this study have the following practical advantages. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores from the co-occurrence of topics, which is difficult to capture with existing event detection, and constructing an event network, the connections between events can be presented in a summarized form. This is supported by the fact that the relevance-based event network proposed in this study was in fact constructed in order of occurrence time; the network also makes it possible to identify which event served as the starting point for a series of events.
The limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, because each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events that are not covered in this study or that belong to the same topic.
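
A minimal sketch of the detect-and-connect pipeline described above (LDA topics over dated documents, event detection at topic-share peaks, and a cosine-similarity event network) is shown below; it is an illustration only, and the toy documents, dates, and similarity threshold are assumptions.

```python
# Minimal sketch: LDA topics over dated documents, "event detection" at the
# peak of each topic's share, and an "event network" weighted by cosine
# similarity. Not the authors' implementation; toy data and the 0.8
# threshold are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "election campaign candidate vote",
    "vote election result president",
    "earthquake damage rescue relief",
    "rescue team earthquake survivors",
]
dates = ["2016-04-01", "2016-04-02", "2016-09-12", "2016-09-13"]

# Topic modeling (2 topics for the toy corpus).
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)           # per-document topic distribution

# "Event detection": for each topic, take the date where its share peaks.
for k in range(doc_topic.shape[1]):
    peak = int(np.argmax(doc_topic[:, k]))
    print(f"topic {k}: peak on {dates[peak]}")

# "Event network": connect documents whose topic profiles are similar.
sim = cosine_similarity(doc_topic)          # relevance score between documents
edges = [(i, j, sim[i, j]) for i in range(len(docs))
         for j in range(i + 1, len(docs)) if sim[i, j] > 0.8]
print(edges)
```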