• Title/Summary/Keyword: NLTK


KoNLTK: Korean Natural Language Toolkit (KoNLTK: Korean Language Processing Tools)

  • Nam, Gyu-Hyeon;Lee, Hyun-Young;Kang, Seung-Shik
    • Annual Conference on Human and Language Technology / 2018.10a / pp.611-613 / 2018
  • KoNLTK is a language processing platform that provides various Korean language resources and language processing tools through a single interface on the Python platform. It offers basic analysis tools such as a morphological analyzer, a named-entity recognizer, and a dependency parser, along with application tools such as word vectors and sentiment analysis, improving convenience for researchers who need to analyze Korean text.


Natural Language Toolkit _ Korean (NLTKo 1.0: Korean Language Processing Toolkit)

  • Hong, Seong-Tae;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2021.10a / pp.554-557 / 2021
  • NLTKo is a toolkit that integrates Korean analysis tools into NLTK. It additionally provides preprocessing tools, tokenizers, a morphological analyzer, the Sejong semantic dictionary, and evaluation tools for classification and machine translation, all implemented so that they can be used in the same way as existing NLTK functions. Through the Sejong semantic dictionary it also provides Korean synonyms/antonyms, hypernyms/hyponyms, and more. We believe NLTKo will be helpful for education in Korean natural language processing.


Analysis of the Korean Tokenizing Library Module (Analysis of Hangul Tokenizing Library Modules)

  • Lee, Jae-kyung;Seo, Jin-beom;Cho, Young-bok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.78-80 / 2021
  • Research on natural language processing (NLP) is currently evolving rapidly. Natural language processing is a technology that allows computers to analyze the meaning of the language used in everyday life, and it is applied in various fields such as speech recognition, spell checking, and text classification. The most commonly used natural language processing library today is the English-based NLTK, which is at a disadvantage for Korean language processing. Therefore, after introducing the Korean tokenizing libraries KoNLPy and Soynlp, we analyze their morphological analysis and processing techniques, compare the Soynlp modules that complement KoNLPy's shortcomings, and discuss their use in natural language processing models.

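The gap between plain whitespace splitting and the morphological analysis that motivates KoNLPy and Soynlp can be sketched in a few lines. The fallback tokenizer below is a toy stand-in for illustration only; the KoNLPy `Okt` call shown in the comment is how the real analyzer would be invoked.

```python
# Illustrative sketch: why simple whitespace tokenization falls short for
# Korean, which agglutinates particles (josa) onto nouns. The splitter
# here is a toy stand-in, not a real morphological analyzer.
import re

def naive_tokenize(text: str) -> list[str]:
    """Split on whitespace and strip punctuation -- no morpheme analysis."""
    return [t for t in re.split(r"[\s.,!?]+", text) if t]

sent = "자연어 처리는 재미있다"
print(naive_tokenize(sent))   # ['자연어', '처리는', '재미있다']
# '처리는' keeps the topic particle '는' attached to the noun; a
# morphological analyzer such as KoNLPy's Okt would separate it:
#   from konlpy.tag import Okt
#   Okt().morphs(sent)        # e.g. ['자연어', '처리', '는', '재미있다']
```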

Stock prediction using combination of BERT sentiment Analysis and Macro economy index

  • Jang, Euna;Choi, HoeRyeon;Lee, HongChul
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.47-56 / 2020
  • The stock index is used not only as an economic indicator for a country but also as a basis for investment decisions, which is why research on predicting it is ongoing. Predicting a stock price index involves technical, fundamental, and psychological factors, and prediction accuracy requires considering these complex factors together. It is therefore necessary to build prediction models that select and reflect the technical and auxiliary factors that drive stock price fluctuations. Most existing studies either forecast using news information or the macroeconomic indicators that create market fluctuations, or reflect only a few combinations of indicators. In this paper, we propose an effective combination of news sentiment analysis and various macroeconomic indicators to predict the US Dow Jones Index. After crawling more than 93,000 business news articles from the New York Times over two years, we analyzed their sentiment using the recent natural language processing techniques BERT and NLTK. These sentiment results, combined with macroeconomic indicators (gold prices, oil prices, and five foreign exchange rates affecting the US economy), were fed into LSTM, a prediction algorithm known to be well suited to combining numeric and text information. Experiments with various combinations showed that the combination of DJI, NLTK, BERT, OIL, GOLD, and EURUSD yielded the smallest MSE for DJI index prediction.
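The data-preparation stage of the pipeline this abstract describes (daily sentiment scores joined with macro indicators and cut into sliding windows for an LSTM) can be sketched as follows. Every field name and value below is a hypothetical stand-in, not data from the paper.

```python
# Hypothetical sketch of the input-assembly step: daily news-sentiment
# scores are joined with macro indicators (oil, gold, EUR/USD) into
# feature vectors, then cut into sliding windows of the kind an LSTM
# consumes. All numbers are illustrative.

def make_windows(features, window=3):
    """Return (input_window, next_day_target) training pairs."""
    pairs = []
    for i in range(len(features) - window):
        x = [row["vector"] for row in features[i:i + window]]
        y = features[i + window]["dji"]          # next day's index value
        pairs.append((x, y))
    return pairs

days = [
    {"dji": 26000 + d * 10,
     "vector": [0.1 * d,      # BERT/NLTK sentiment score (stand-in)
                50 + d,       # oil price
                1500 + d,     # gold price
                1.10]}        # EURUSD rate
    for d in range(6)
]

pairs = make_windows(days, window=3)
print(len(pairs))             # 3 windows from 6 days
print(pairs[0][1])            # target = day 3's DJI value: 26030
```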

Analysis of YouTube's role as a new platform between media and consumers

  • Hur, Tai-Sung;Im, Jung-ju;Song, Da-hye
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.53-60 / 2022
  • YouTube realistically presents fake news and biased content based on unverified claims, owing to its low entry barriers and ambiguous video regulation standards. This study therefore analyzes the influence of the media and YouTube on individual behavior and the relationship between them. Data from YouTube and Twitter were collected at random using Selenium, Beautiful Soup, and the Twitter API to identify the 31 most frequently mentioned keywords. Based on these 31 keywords, data were collected from YouTube, Twitter, and Naver News; positive, negative, and neutral sentiment was then classified and quantified with NLTK's VADER model and used as the analysis data. Correlation analysis confirmed that the more negative the news, the more positive the content on YouTube, and that the positivity index of YouTube content is proportional to the positive and negative values on Twitter. The results show that YouTube, owing to its secondary processing and derivative character, does not track the sentiment index shown in the news. In other words, processed YouTube content directly affects Twitter's positive and negative figures, which serve as channels of communication. This study concludes that YouTube helps individuals discriminate among information at a time when accurate judgment has become difficult due to the rise of yellow media that stimulate people's interests and instincts.
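The correlation step this abstract relies on reduces to Pearson's r over per-keyword sentiment series. A minimal self-contained sketch, with made-up sample values (the study's actual data are not reproduced here):

```python
# Minimal sketch of the correlation analysis: given per-keyword sentiment
# scores from two sources, compute Pearson's r. The sample values are
# invented for illustration.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

news_negativity    = [0.8, 0.6, 0.4, 0.2]   # hypothetical VADER scores
youtube_positivity = [0.7, 0.5, 0.4, 0.1]

r = pearson(news_negativity, youtube_positivity)
print(round(r, 3))   # ≈ 0.981: these two made-up series move together
```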

Emotion and Sentiment Analysis from a Film Script: A Case Study

  • Yu, Hye-Yeon;Kim, Moon-Hyun;Bae, Byung-Chull
    • Journal of Digital Contents Society / v.18 no.8 / pp.1537-1542 / 2017
  • Emotion plays a key role in both generating and understanding narrative. In this article we analyzed the emotions represented in a movie script, based on the 8 emotion types from Plutchik's wheel of emotions. First we conducted manual emotion tagging scene by scene. The most dominant emotions from manual tagging were anger, fear, and surprise, which makes sense given that the film script we analyzed is in the thriller genre. We assumed that the emotions around the climax of the story would be heightened as the tension grew. From manual tagging we identified three such durations where the tension is high. Next we analyzed the emotions in the same script using the Python-based NLTK VADERSentiment tool. The results showed that the emotions of anger and fear matched best, while surprise, anticipation, and disgust scored lower.
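The lexicon-based tagging idea behind this comparison can be illustrated with a toy scorer. The lexicon and scene text below are invented; note that the real VADER tool (`nltk.sentiment.vader.SentimentIntensityAnalyzer`) outputs positive/negative/neutral/compound scores rather than discrete Plutchik emotions, so mapping to emotion types is an extra step.

```python
# Toy illustration of lexicon-based emotion tagging: each word carries
# scores for a few Plutchik emotion types, and a scene's tag is the
# emotion with the highest total. Lexicon entries are invented.

LEXICON = {
    "scream":  {"fear": 0.9, "surprise": 0.4},
    "knife":   {"fear": 0.7, "anger": 0.5},
    "furious": {"anger": 1.0},
}

def tag_scene(text):
    totals = {}
    for word in text.lower().split():
        for emotion, score in LEXICON.get(word, {}).items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    return max(totals, key=totals.get) if totals else "neutral"

print(tag_scene("She saw the knife and let out a scream"))  # fear
```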

Construction of Korean FrameNet through Manual Translation of English FrameNet

  • Nam, Sejin;Kim, Youngsik;Park, Jungyeul;Hahm, Younggyun;Hwang, Dosam;Choi, Key-Sun
    • Annual Conference on Human and Language Technology / 2014.10a / pp.38-43 / 2014
  • Based on the existing English FrameNet data, this paper presents a process for manually constructing a Korean FrameNet that can be carried out by translators without expert knowledge of FrameNet. From the 5,945 sentences that make up the Full Text of English FrameNet version 1.5 as provided by NLTK, our team extracted the 4,025 sentences that carry frame data and had them manually translated into Korean, laying a meaningful foundation for building a Korean FrameNet. Research results demonstrating the effectiveness of the proposed method have also been published on the web.

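The extraction step described above (keeping only frame-bearing sentences before handing them to translators) can be sketched as a simple filter. The record layout here is invented for the example; with NLTK's actual FrameNet reader the full-text sentences come from `nltk.corpus.framenet` (`fn.docs()`), each carrying an `annotationSet` list.

```python
# Sketch of filtering frame-annotated sentences out of a full-text set.
# The dicts below are mock records standing in for NLTK FrameNet 1.5
# full-text sentences; only those with frame data would be translated.

sentences = [
    {"text": "He bought a car.", "frames": ["Commerce_buy"]},
    {"text": "Hello there.",     "frames": []},
    {"text": "She walked home.", "frames": ["Self_motion"]},
]

to_translate = [s for s in sentences if s["frames"]]
print(len(to_translate))   # 2 of the 3 sentences carry frame data
```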

GNI Corpus Version 1.0: Annotated Full-Text Corpus of Genomics & Informatics to Support Biomedical Information Extraction

  • Oh, So-Yeon;Kim, Ji-Hyeon;Kim, Seo-Jin;Nam, Hee-Jo;Park, Hyun-Seok
    • Genomics & Informatics / v.16 no.3 / pp.75-77 / 2018
  • Genomics & Informatics (NLM title abbreviation: Genomics Inform) is the official journal of the Korea Genome Organization. A text corpus for this journal, annotated with various levels of linguistic information, would be a valuable resource, as information extraction requires syntactic, semantic, and higher levels of natural language processing. In this study, we publish our new corpus, GNI Corpus version 1.0, extracted and annotated from the full texts of Genomics & Informatics with an NLTK (Natural Language ToolKit)-based text-mining script. This preliminary version of the corpus can be used as a training and testing set for a system serving a variety of functions in future biomedical text mining.

Extracting and Clustering of Story Events from a Story Corpus

  • Yu, Hye-Yeon;Cheong, Yun-Gyung;Bae, Byung-Chull
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3498-3512 / 2021
  • This article describes how events that make up text stories can be represented and extracted. We also address the results from our simple experiment on extracting and clustering events in terms of emotions, under the assumption that different emotional events can be associated with the classified clusters. Each emotion cluster is based on Plutchik's eight basic emotion model, and the attributes of the NLTK-VADER are used for the classification criterion. While comparisons of the results with human raters show less accuracy for certain emotion types, emotion types such as joy and sadness show relatively high accuracy. The evaluation results with NRC Word Emotion Association Lexicon (aka EmoLex) show high accuracy values (more than 90% accuracy in anger, disgust, fear, and surprise), though precision and recall values are relatively low.
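The clustering idea in this abstract (assigning each extracted event to an emotion cluster based on its sentiment attributes) can be sketched with a nearest-centroid rule. The centroids and event scores below are invented for illustration; the (neg, neu, pos) triple mimics the attribute shape NLTK-VADER returns.

```python
# Minimal sketch: each event carries a (neg, neu, pos) attribute triple
# of the kind NLTK-VADER produces, and is assigned to the closest
# Plutchik-emotion centroid. All values here are made up.

CENTROIDS = {
    "joy":     (0.05, 0.25, 0.70),
    "sadness": (0.65, 0.25, 0.10),
    "anger":   (0.75, 0.15, 0.10),
}

def nearest_emotion(scores):
    """Return the emotion whose centroid is closest in squared distance."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(scores, CENTROIDS[label]))
    return min(CENTROIDS, key=dist)

event = (0.10, 0.20, 0.70)    # a mostly positive event
print(nearest_emotion(event)) # joy
```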

Design of Image Generation System for DCGAN-Based Kids' Book Text

  • Cho, Jaehyeon;Moon, Nammee
    • Journal of Information Processing Systems / v.16 no.6 / pp.1437-1446 / 2020
  • For the last few years, smart devices have begun to occupy an essential place in children's lives by giving them access to a variety of language activities and books, and various studies are being conducted on using smart devices for education. Our study extracts images and text from kids' books with smart devices and matches the extracted images and text to create new images that are not represented in these books; the proposed system will enable the use of smart devices as educational media for children. A deep convolutional generative adversarial network (DCGAN) is used to generate the new images. Training the DCGAN involves three steps. First, 1,164 ImageNet images across 11 titles are learned. Second, Tesseract, an optical character recognition engine, is used to extract images and text from kids' books, and the text is classified using a morpheme analyzer. Third, the classified word classes are matched with the latent vector of the image. The trained DCGAN then creates an image associated with the text.