• Title/Summary/Keyword: NLP application


Data Analysis Web Application Based on Text Mining (텍스트 마이닝 기반의 데이터 분석 웹 애플리케이션)

  • Gil, Wan-Je;Kim, Jae-Woong;Park, Koo-Rack;Lee, Yun-Yeol
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.103-104 / 2021
  • This paper proposes a topic modeling web application model based on text mining. The goal is to design and implement a web application in which a user enters a keyword, summarized paper information gathered by web crawling can be saved to a file, and research trends can be examined easily through keyword frequency analysis and topic modeling. Through the proposed web application, users can experience collecting and storing papers and analyzing text even without much knowledge of programming languages or data analysis techniques. Furthermore, because the web system is built with Python libraries rather than relying on languages such as HTML, CSS, and JavaScript, it is expected to be suitable as a project-based curriculum when teaching Python-based data analysis and machine learning.

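The abstract does not name the specific libraries beyond noting that the system is Python-based; purely as an illustration, the keyword-frequency and topic-modeling steps it describes could be sketched with scikit-learn roughly as follows (the toy abstracts and the number of topics are placeholder assumptions, not the paper's setup).

```python
# Hypothetical sketch of the keyword-frequency + topic-modeling pipeline;
# library choice (scikit-learn) and all parameters are assumptions.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "topic modeling of research trends with text mining",
    "web crawling collects paper metadata for analysis",
    "keyword frequency analysis summarizes research trends",
]  # in the paper, these would come from the crawled paper summaries

# Keyword frequency analysis
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)
freq = Counter(dict(zip(vectorizer.get_feature_names_out(),
                        doc_term.sum(axis=0).A1)))
print(freq.most_common(5))

# Topic modeling with LDA (the paper does not specify the algorithm)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```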

Incorporating BERT-based NLP and Transformer for An Ensemble Model and its Application to Personal Credit Prediction

  • Sophot Ky;Ju-Hong Lee;Kwangtek Na
    • Smart Media Journal / v.13 no.4 / pp.9-15 / 2024
  • Tree-based algorithms have been the dominant methods for building prediction models on tabular data, including personal credit data. However, they are compatible only with categorical and numerical features and do not capture relationships between features. In this work, we propose an ensemble model based on the Transformer architecture that incorporates text features and harnesses the self-attention mechanism to address this limitation. We describe a text formatter module that converts the original tabular data into sentences, which are fed into FinBERT along with other text features. We also employ an FT-Transformer trained on the original tabular data. We evaluate this multi-modal approach against two popular tree-based algorithms, Random Forest and Extreme Gradient Boosting (XGBoost), as well as TabTransformer. Our proposed method shows superior default recall, F1 score, and AUC on two public datasets. These results can help financial institutions reduce the risk of financial loss from defaulters.
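The text formatter module is not specified in detail in the abstract; a minimal sketch of the general idea of verbalizing a tabular credit record into a sentence for a BERT-style encoder might look like the following. The field names and the FinBERT checkpoint mentioned in the comment are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a "text formatter" that turns a tabular credit
# record into a sentence for a BERT-style model; field names are invented.
def row_to_sentence(row: dict) -> str:
    parts = [f"{key.replace('_', ' ')} is {value}" for key, value in row.items()]
    return "The applicant's " + ", ".join(parts) + "."

record = {"age": 35, "annual_income": 42000, "loan_amount": 10000,
          "employment_status": "employed"}
sentence = row_to_sentence(record)
print(sentence)

# The sentence could then be tokenized for a FinBERT-style encoder, e.g.:
# from transformers import AutoTokenizer
# tok = AutoTokenizer.from_pretrained("ProsusAI/finbert")  # assumed checkpoint
# inputs = tok(sentence, return_tensors="pt", truncation=True)
```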

KOREAN TOPIC MODELING USING MATRIX DECOMPOSITION

  • June-Ho Lee;Hyun-Min Kim
    • East Asian mathematical journal / v.40 no.3 / pp.307-318 / 2024
  • This paper explores the application of matrix factorization, specifically CUR decomposition, in the clustering of Korean language documents by topic. It addresses the unique challenges of Natural Language Processing (NLP) in dealing with the Korean language's distinctive features, such as agglutinative words and morphological ambiguity. The study compares the effectiveness of Latent Semantic Analysis (LSA) using CUR decomposition with the classical Singular Value Decomposition (SVD) method in the context of Korean text. Experiments are conducted using Korean Wikipedia documents and newspaper data, providing insight into the accuracy and efficiency of these techniques. The findings demonstrate the potential of CUR decomposition to improve the accuracy of document clustering in Korean, offering a valuable approach to text mining and information retrieval in agglutinative languages.
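As a rough numerical illustration of the CUR-based LSA the paper compares with SVD, one can approximate a term-document matrix A by selecting actual columns C and rows R of A and solving for a small linking matrix U from their pseudo-inverses. The random column/row selection below is a simplification; practical CUR implementations usually sample by leverage scores, and the matrix here is a stand-in rather than real Korean text data.

```python
# Minimal CUR approximation of a term-document matrix, for illustration only;
# real CUR implementations typically sample columns/rows by leverage scores.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 6))          # stand-in for a TF-IDF term-document matrix
k = 3                           # number of sampled columns / rows

cols = rng.choice(A.shape[1], size=k, replace=False)
rows = rng.choice(A.shape[0], size=k, replace=False)
C, R = A[:, cols], A[rows, :]
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # U = C^+ A R^+
A_cur = C @ U @ R

# Compare with a rank-k truncated SVD
Uk, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = Uk[:, :k] * s[:k] @ Vt[:k, :]

print("CUR error:", np.linalg.norm(A - A_cur))
print("SVD error:", np.linalg.norm(A - A_svd))
```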

Multi Parameter Design in AIML Framework for Balinese Calendar Knowledge Access

  • Sukarsa, I Made;Buana, Putu Wira;Yogantara, Urip
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.114-130 / 2020
  • The Balinese calendar is a unique calendar system that combines solar-based and lunar-based systems with local conventions. It serves as a guide for managing the activities of Balinese society, from meeting arrangements and wedding ceremonies to religious ceremonies. In practice, it has been produced as a printed calendar and as electronic calendars, either web or mobile applications. The core function is to look up a day and its various characteristics in the Balinese calendar; traditionally, people ask a religious leader to determine such days in detail. NLP technology combined with pattern-discovery models supports an interaction model for searching for auspicious days in the Balinese calendar, complementing the conventional search systems of previous applications. This study designs a dialog model with the AIML method on a multi-parameter basis, so that users can dynamically search the content in various ways by chatting, much as they would when consulting a religious leader. The model is deployed as a chatbot service on Telegram. Adding a context-recognition section to four pattern types successfully improved AIML's ability to recognize input patterns with multiple criteria. Testing with 50 random input patterns yielded a success rate of 92.5%.
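The paper's actual AIML categories and Telegram integration are not reproduced here; purely as an illustration of a multi-parameter pattern, a small category with two wildcards could be written and loaded with the python-aiml package roughly as follows. The pattern text, template, and package choice are assumptions, not the authors' knowledge base.

```python
# Illustrative multi-parameter AIML category loaded with python-aiml;
# the pattern, template and package choice are assumptions, not the paper's.
import aiml

AIML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0">
  <category>
    <pattern>GOOD DAY FOR * IN *</pattern>
    <template>Searching auspicious days for <star index="1"/> in <star index="2"/>.</template>
  </category>
</aiml>"""

with open("balinese_calendar.aiml", "w", encoding="utf-8") as f:
    f.write(AIML_DOC)

kernel = aiml.Kernel()
kernel.learn("balinese_calendar.aiml")
# Both wildcards are captured as separate parameters of the query
print(kernel.respond("good day for a wedding in july"))
```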

SOPPY : A sentiment detection tool for personal online retailing

  • Sidek, Nurliyana Jaafar;Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication / v.9 no.3 / pp.59-69 / 2017
  • Social media is one of the best hubs for communicating with customers and marketing a business. However, personal online retailers face several issues, the most critical of which is capital: most automatic sentiment detection tools for Facebook and other social media are expensive, and retailers often lack the technical skills needed to operate them. As a result, they face obstacles in obtaining fast product feedback from customers and must struggle to stay in the market while competing with successful online companies such as G-market. Sentiment analysis is also known as opinion mining. The aim of this research is to develop a tool that allows users to automatically detect the sentiment of comments on their social media accounts. The RAD model methodology was chosen because its phases produce more activities and outputs, and the Soppy tool is developed using Microsoft Visual. Functionality testing is used to evaluate the effectiveness of the Soppy tool and the accuracy of its sentiment detection. The proposed automated Soppy tool provides a platform for measuring the impact of customer sentiment on the postings on a retailer's social media site; the results and findings from this impact measurement can then serve as recommendations for development or review to enhance the capability and profit of a personal online retailing company.
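The abstract does not disclose the tool's detection algorithm; as a generic illustration of automatic sentiment detection over customer comments, a lexicon-based pass with NLTK's VADER analyzer (an assumption, not the paper's method or platform) could look like this.

```python
# Illustrative lexicon-based sentiment pass over customer comments;
# VADER/NLTK is an assumption and not the tool described in the paper.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

comments = [
    "Fast delivery and the product quality is great!",
    "Item arrived broken, very disappointed with the seller.",
]
for comment in comments:
    score = analyzer.polarity_scores(comment)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8s} {score:+.2f}  {comment}")
```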

OryzaGP: rice gene and protein dataset for named-entity recognition

  • Larmande, Pierre;Do, Huy;Wang, Yue
    • Genomics & Informatics / v.17 no.2 / pp.17.1-17.3 / 2019
  • Text mining has become an important research method in biology, its original purpose being to extract biological entities, such as genes, proteins, and phenotypic traits, and thereby extend knowledge from scientific papers. However, few thorough studies on text mining and application development for plant molecular biology data have been performed, especially for rice, resulting in a lack of datasets available for named-entity recognition tasks in this species. Because benchmarks for rice are rare, we faced various difficulties in exploiting advanced machine learning methods for accurate analysis of the rice literature. To evaluate several approaches to automatically extracting information on gene/protein entities, we built a new benchmark dataset for rice. The dataset is composed of titles and abstracts extracted from scientific papers focusing on rice, downloaded from PubMed. During the 5th Biomedical Linked Annotation Hackathon, a portion of the dataset was uploaded to PubAnnotation for sharing. Our ultimate goal is to offer a shared task on rice gene/protein name recognition through the BioNLP Open Shared Tasks framework using this dataset, to facilitate open comparison and evaluation of different approaches to the task.
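Since the dataset is shared through PubAnnotation, a minimal sketch of reading one PubAnnotation-style document and recovering its annotated gene mentions might look as follows; the example text and entity spans below are invented for illustration, not actual OryzaGP data.

```python
# Reading a PubAnnotation-style annotation record; the example document and
# spans are invented, not actual OryzaGP data.
import json

record_json = """{
  "text": "OsMADS1 interacts with OsMADS6 during rice floral development.",
  "denotations": [
    {"id": "T1", "span": {"begin": 0, "end": 7},  "obj": "Gene"},
    {"id": "T2", "span": {"begin": 23, "end": 30}, "obj": "Gene"}
  ]
}"""

record = json.loads(record_json)
for d in record["denotations"]:
    begin, end = d["span"]["begin"], d["span"]["end"]
    print(d["obj"], record["text"][begin:end])
```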

SimKoR: A Sentence Similarity Dataset based on Korean Review Data and Its Application to Contrastive Learning for NLP (SimKoR: 한국어 리뷰 데이터를 활용한 문장 유사도 데이터셋 제안 및 대조학습에서의 활용 방안 )

  • Jaemin Kim;Yohan Na;Kangmin Kim;Sang Rak Lee;Dong-Kyu Chae
    • Annual Conference on Human and Language Technology / 2022.10a / pp.245-248 / 2022
  • Recently, contrastive learning for reflecting contextual meaning has been actively studied in natural language processing. For contrastive learning, it is important to use high-quality training and validation data. For Korean, however, most existing datasets are produced by machine-translating English data into Korean and then reviewing the result, which has the drawback of depending on machine translation quality. This paper proposes a simple method for constructing, from Korean review data, a validation dataset that can measure how well embeddings reflect meaning, and introduces SimKoR (Similarity Korean Review dataset), a dataset built with this method. We perform contrastive learning using the proposed validation dataset and demonstrate its effectiveness.

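The abstract does not spell out the evaluation procedure; a common way to use such a similarity-labelled validation set, sketched here as an assumption with made-up embeddings and labels rather than SimKoR data, is to correlate the cosine similarity of sentence embeddings with the gold similarity labels.

```python
# Hypothetical sketch of validating sentence embeddings against a
# similarity-labelled pair set (all values below are invented, not SimKoR data).
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
emb_a = rng.random((5, 8))      # embeddings of the first sentence of each pair
emb_b = rng.random((5, 8))      # embeddings of the second sentence of each pair
gold = np.array([4.5, 1.0, 3.0, 0.5, 2.0])   # gold similarity label per pair

pred = np.array([cosine(a, b) for a, b in zip(emb_a, emb_b)])
rho, _ = spearmanr(pred, gold)
print(f"Spearman correlation: {rho:.3f}")
```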

Chatbot Design Method Using Hybrid Word Vector Expression Model Based on Real Telemarketing Data

  • Zhang, Jie;Zhang, Jianing;Ma, Shuhao;Yang, Jie;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1400-1418 / 2020
  • In commercial promotion, the chatbot is a significant application of natural language processing (NLP). Conventional design methods use the bag-of-words (BOW) model alone, based on Google databases and other online corpora. First, in the bag-of-words model the vectors are unrelated to one another; although this method handles discrete features well, it does not help the machine understand continuous statements, because the connections between words are lost in the encoded word vectors. Second, existing methods are tested on state-of-the-art online corpora but are hard to apply to real-world data such as telemarketing data. In this paper, we propose an improved chatbot design that uses a hybrid of the bag-of-words model and the skip-gram model based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and classification on the constructed corpus. Second, the word representation adopts the hybrid bag-of-words and skip-gram model: the skip-gram model maps synonyms close together in the vector space, expressing correlations between words and increasing the amount of information contained in the word vectors, which compensates for the shortcomings of using the bag-of-words model alone. Third, we use term frequency-inverse document frequency (TF-IDF) weighting to increase the weight of keywords and output the final word representation. Finally, the answer is produced using a hybrid of a retrieval model and a generative model: the retrieval model accurately answers in-domain questions, while the generative model supplements open-domain questions, with the final reply produced by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector representation model improves response accuracy and that the whole system can communicate with humans.
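As a rough, assumed sketch of the hybrid representation the abstract describes: train a skip-gram model on the corpus, weight each word vector by a TF-IDF-derived weight, and average the weighted vectors into a sentence vector used for retrieval. The library choices (gensim, scikit-learn), the toy corpus, and the use of IDF values as the per-word weights are simplifying assumptions, not the paper's exact setup.

```python
# Hypothetical TF-IDF-weighted skip-gram sentence representation for retrieval;
# gensim/scikit-learn and the toy corpus are assumptions, not the paper's setup.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "what is the monthly fee of this plan",
    "how do i cancel my subscription plan",
    "the monthly fee is ten dollars",
]
tokenized = [s.split() for s in corpus]

# Skip-gram word vectors (sg=1 selects skip-gram in gensim)
w2v = Word2Vec(tokenized, vector_size=50, sg=1, min_count=1, seed=0)

# IDF weights per word, standing in for the paper's TF-IDF keyword weighting
tfidf = TfidfVectorizer()
tfidf.fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def sentence_vector(sentence: str) -> np.ndarray:
    words = sentence.split()
    vecs = [w2v.wv[w] * idf.get(w, 1.0) for w in words if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# Retrieval: pick the corpus sentence whose vector is closest to the query
query = sentence_vector("what is the monthly fee")
docs = [sentence_vector(s) for s in corpus]
sims = [float(query @ d / (np.linalg.norm(query) * np.linalg.norm(d))) for d in docs]
print("best match:", corpus[int(np.argmax(sims))])
```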

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.199-206 / 2023
  • This study utilizes deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared for their performance in classifying construction waste: VGG-16, a convolutional neural network image classification algorithm, and ViT (Vision Transformer), an NLP-derived model that processes an image as a sequence of patches. Image data for construction waste was collected by crawling images from search engines worldwide, and 3,000 images, 1,000 per category, were obtained after excluding images that were difficult to distinguish with the naked eye or that were duplicated and would interfere with the experiment. In addition, to improve the accuracy of the models, data augmentation was performed during training, for a total of 30,000 images. Despite the unstructured nature of the collected image data, the experimental results showed that VGG-16 achieved an accuracy of 91.5% and ViT achieved an accuracy of 92.7%. This suggests the possibility of practical application in actual construction waste data management work. If object detection or semantic segmentation techniques are applied on the basis of this study, more precise classification will be possible even within a single image, resulting in more accurate waste classification.
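For reference, loading the two architectures compared in the study and replacing their classification heads with a three-class output (wood/plastic/concrete) can be sketched with torchvision; the weights arguments and head layouts below reflect current torchvision conventions and are an assumption, not the authors' training code.

```python
# Hypothetical setup of the two compared architectures for 3-class waste
# classification; torchvision usage is an assumption, not the paper's code.
import torch.nn as nn
from torchvision import models

num_classes = 3  # wood, plastic, concrete waste

# VGG-16: replace the last fully connected layer of the classifier
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

# ViT-B/16: replace the classification head
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit.heads.head = nn.Linear(vit.heads.head.in_features, num_classes)

print(vgg.classifier[6], vit.heads.head)
```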

Examining the Disparity between Court's Assessment of Cognitive Impairment and Online Public Perception through Natural Language Processing (NLP): An Empirical Investigation (Natural Language Processing(NLP)를 활용한 법원의 판결과 온라인상 대중 인식간 괴리에 관한 실증 연구)

  • Seungkook Roh
    • The Journal of Bigdata / v.8 no.1 / pp.11-22 / 2023
  • This research aimed to examine the public's perception of the "rate of sentence reduction for reasons of mental and physical weakness" and investigate whether it aligns with actual practice. Various sources, such as the Supreme Court's Courtnet search system, the number of mental evaluation requests, and the number of articles and comments related to "mental weakness" on Naver News, were utilized for the analysis. The findings indicate that the public has a negative opinion of reducing sentences due to mental and physical weakness and is dissatisfied with the vagueness of the standards. However, based on the analysis of actual judgments and the number of requests for psychiatric evaluation, this study also confirms that the court strictly applies the reduction of responsibility for individuals with mental disabilities specified in Article 10 of the Criminal Act. In other words, even though recognition of perpetrators' mental disorders is declining, the public does not seem to recognize this trend, which negatively affects public trust in state institutions. Therefore, law enforcement agencies, such as the police and prosecutors, need to enforce the law according to clear standards to gain public trust. The judiciary also needs to make firm decisions on commuting sentences for mentally and physically infirm individuals and inform the public of the outcomes of their application.