• Title/Summary/Keyword: Text-based classification

Search Results: 461

Academic Conference Categorization According to Subjects Using Topical Information Extraction from Conference Websites (학회 웹사이트의 토픽 정보추출을 이용한 주제에 따른 학회 자동분류 기법)

  • Lee, Sue Kyoung;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.22 no.2 / pp.61-77 / 2017
  • Recently, the amount of academic conference information on the Internet has rapidly increased, and automatically classifying this information by research subject enables researchers to find related conferences efficiently. The information provided by most conference listing services is limited to the title, date, location, and website URL. Among these features, only the title contains topical words, which causes an information insufficiency problem. We therefore propose methods that resolve this problem by utilizing web contents. Specifically, the proposed methods extract the main contents from an HTML document collected using the website URL. Based on the similarity between a conference's title and its main contents, topical keywords are selected to reinforce the important keywords among the main contents. Experiments conducted on a real-world dataset showed that using this additional information extracted from conference websites successfully improves conference classification performance. We plan to further improve classification accuracy by considering the structure of websites.
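
The title-to-content keyword selection described above can be sketched roughly as follows. The tokenizer, the frequency scoring, and the title-overlap boost are illustrative assumptions, not the authors' actual method.

```python
from collections import Counter

def tokenize(text):
    # naive lowercase/whitespace tokenizer (illustrative assumption)
    return [w.strip(".,()").lower() for w in text.split() if len(w) > 2]

def select_topical_keywords(title, content, k=5, title_boost=2.0):
    """Rank content terms by frequency, boosting terms shared with the title.

    A minimal stand-in for the paper's title/content similarity step."""
    title_terms = set(tokenize(title))
    counts = Counter(tokenize(content))
    scored = {t: c * (title_boost if t in title_terms else 1.0)
              for t, c in counts.items()}
    return [t for t, _ in sorted(scored.items(), key=lambda x: (-x[1], x[0]))[:k]]

title = "International Conference on Machine Learning"
content = ("The conference invites papers on machine learning, deep learning, "
           "and learning theory. Machine learning applications are welcome.")
print(select_topical_keywords(title, content, k=3))
# → ['learning', 'machine', 'conference']
```

Terms that occur often in the page body and also appear in the conference title rise to the top, which is the intuition behind reinforcing title-related keywords.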

A Study on Hangul Handwriting Generation and Classification Model for Intelligent OCR System (지능형 OCR 시스템을 위한 한글 필기체 생성 및 분류 모델에 관한 연구)

  • Jin-Seong Baek;Ji-Yun Seo;Sang-Joong Jung;Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.222-227 / 2022
  • In this paper, we implemented Korean handwriting generation and classification models based on deep learning algorithms that can be applied to various industries. The implementation consists of two models: a GAN-based Korean handwriting generation model and a CNN-based Korean handwriting classification model. The GAN model consists of a generator that produces fake Korean handwriting data and a discriminator that distinguishes the fake handwritten data. The CNN model was trained on the 'PHD08' dataset and classified Korean handwriting with 92.45% accuracy. When the Korean handwriting data generated by the implemented GAN model was combined with the existing CNN training dataset, the classification performance reached 96.86%, superior to the previous result.

Technology Convergence & Trend Analysis of Biohealth Industry in 5 Countries : Using patent co-classification analysis and text mining (5개국 바이오헬스 산업의 기술융합과 트렌드 분석 : 특허 동시분류분석과 텍스트마이닝을 활용하여)

  • Park, Soo-Hyun;Yun, Young-Mi;Kim, Ho-Yong;Kim, Jae-Soo
    • Journal of the Korea Convergence Society / v.12 no.4 / pp.9-21 / 2021
  • The study aims to identify technology convergence and trends in patent data for the biohealth sector in the IP5 countries (KR, EP, JP, US, CN) and to present the direction of development for that industry. We used patent co-classification-based network analysis and TF-IDF-based text mining as the principal methodologies to understand the current state of technology convergence. As a result, three technology convergence clusters were derived for the biohealth industry: (A) medical devices for treatment, (B) medical data processing, and (C) medical devices for biometrics. In addition, the trend analysis based on the convergence results suggests that Korea, derived as a market leader in (B) medical data processing, is likely to dominate the market with commercially valuable patents in the future. In particular, the field is expected to require policies that activate technology convergence and R&D support strategies for the technology, as the possibility of medical data utilization by domestic biohealth companies expands along with the policy shift of the "Data 3 Act" passed by the National Assembly in January 2019.
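
The patent co-classification analysis mentioned above builds a network whose edge weights count how often two classification codes are assigned to the same patent. A minimal sketch (the IPC-style codes are hypothetical):

```python
from itertools import combinations
from collections import Counter

def co_classification_edges(patents):
    """Count how often two classification codes co-occur on the same patent.

    `patents` is a list of code lists; returns {(code_a, code_b): weight}.
    A minimal sketch of patent co-classification network construction."""
    edges = Counter()
    for codes in patents:
        # each unordered pair of distinct codes on one patent adds one edge
        for a, b in combinations(sorted(set(codes)), 2):
            edges[(a, b)] += 1
    return dict(edges)

# hypothetical classification codes for three patents
patents = [["A61B", "G16H"], ["A61B", "G16H", "G06F"], ["G06F", "G16H"]]
print(co_classification_edges(patents))
```

The resulting weighted edge list is exactly what network-analysis tools take as input for clustering, which is how convergence clusters such as (A)-(C) above can be derived.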

CNN Architecture Predicting Movie Rating from Audience's Reviews Written in Korean (한국어 관객 평가기반 영화 평점 예측 CNN 구조)

  • Kim, Hyungchan;Oh, Heung-Seon;Kim, Duksu
    • KIPS Transactions on Computer and Communication Systems / v.9 no.1 / pp.17-24 / 2020
  • In this paper, we present a movie rating prediction architecture based on a convolutional neural network (CNN). Our architecture extends TextCNN, a popular CNN-based architecture for sentence classification, in three ways. First, character embeddings are used to cover the many variants of words, since reviews are short and not linguistically well-written. Second, an attention mechanism (squeeze-and-excitation) is adopted to focus on important features. Third, a scoring function is proposed to convert the output of an activation function to a review score in a fixed range (1-10). We evaluated the architecture on a movie review dataset and achieved a lower MSE (3.3841) than an existing method, demonstrating the superiority of our movie rating prediction architecture.
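
The scoring function mentioned in the abstract maps an unbounded activation output into the 1-10 rating range. The paper's exact form is not given here, so the sigmoid scaling below is an assumption:

```python
import math

def to_rating(logit, lo=1.0, hi=10.0):
    """Map an unbounded network output to a rating in [lo, hi]
    via sigmoid squashing; the exact form in the paper may differ."""
    return lo + (hi - lo) / (1.0 + math.exp(-logit))

print(round(to_rating(0.0), 2))  # → 5.5, the midpoint of the range
```

Large negative logits approach 1 and large positive logits approach 10, so the network can be trained with a regression loss against the observed ratings.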

Estimating Media Environments of Fashion Contents through Semantic Network Analysis from Social Network Service of Global SPA Brands (패션콘텐츠 미디어 환경 예측을 위한 해외 SPA 브랜드의 SNS 언어 네트워크 분석)

  • Jun, Yuhsun
    • Journal of the Korean Society of Clothing and Textiles / v.43 no.3 / pp.427-439 / 2019
  • This study investigated the semantic network of the fashion images and SNS text used by global SPA brands over the last seven years, in terms of the quantity and quality of data generated by fast-changing fashion trends and the fashion content-based media environment. The research method analyzed frequency, density, and recurring keywords, visualized the networks using the UCINET 6.347 program, and produced an overall classification of the text related to fashion images on the social networks used by global SPA brands. The conclusions of the study are as follows. A common aspect of the global SPA brands is that, judging from the text extracted from SNS, exposure through product images is considered important for sales. The distinctive aspects of the brands are as follows. First, ZARA consistently exposes marketing featuring a variety of professions and nationalities on SNS. Second, UNIQLO steadily exposes basic items on SNS while promoting its collaborations. Third, in the case of H&M, the connectivity among its cluster categories differed from that of the other brands, showing remarkably independent results.

EFTG: Efficient and Flexible Top-K Geo-textual Publish/Subscribe

  • Zhu, Hong;Li, Hongbo;Cui, Zongmin;Cao, Zhongsheng;Xie, Meiyi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.5877-5897 / 2018
  • With the popularity of mobile networks and smartphones, geo-textual publish/subscribe messaging has attracted wide attention. Unlike the traditional publish/subscribe format, geo-textual data is published and subscribed to as a dynamic data stream in the mobile network. This difference creates stronger requirements for efficiency and flexibility. However, most existing Top-k geo-textual publish/subscribe schemes have the following deficiencies: (1) every publication has to be scored for each subscription, which is not efficient enough; and (2) a user has to take the time to set a threshold for each subscription, which is not flexible enough. Therefore, we propose an efficient and flexible Top-k geo-textual publish/subscribe scheme. First, our scheme groups publications and subscriptions based on text classification, so only a small portion of related publications needs to be scored for each subscription, which significantly improves efficiency. Second, our scheme introduces an adaptive publish/subscribe matching algorithm that requires no user-set threshold and adaptively returns the Top-k results for each subscription, which significantly improves flexibility. Finally, theoretical analysis and experimental evaluation verify the efficiency and effectiveness of our scheme.
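
The scheme's two ideas, routing by text class and thresholdless Top-k matching via a bounded heap, can be sketched as follows. The toy classifier and the precomputed scores are illustrative assumptions, not the paper's algorithm.

```python
import heapq
from collections import defaultdict

def classify(text):
    # toy text classifier: route by a single keyword (illustrative assumption)
    return "food" if "pizza" in text else "other"

class TopKSubscription:
    """Keep the k best-scoring publications seen so far; no user threshold."""
    def __init__(self, query, k):
        self.cls, self.k, self.query, self.heap = classify(query), k, query, []

    def feed(self, score, pub):
        # min-heap of size k: the root is the current k-th best score
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (score, pub))
        elif score > self.heap[0][0]:
            heapq.heapreplace(self.heap, (score, pub))

    def results(self):
        return [p for _, p in sorted(self.heap, reverse=True)]

# publications are routed only to subscriptions in the same text class,
# so unrelated subscriptions never score them
subs = defaultdict(list)
s = TopKSubscription("pizza near me", k=2)
subs[s.cls].append(s)
for score, pub in [(0.9, "pizza place A"), (0.4, "pizza stand B"), (0.7, "pizza bar C")]:
    for sub in subs[classify(pub)]:
        sub.feed(score, pub)
print(s.results())
# → ['pizza place A', 'pizza bar C']
```

The heap keeps the answer adaptive: as better publications stream in, weaker ones fall out of the Top-k without the user ever specifying a score cutoff.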

A Study on Effective Sentiment Analysis through News Classification in Bankruptcy Prediction Model (부도예측 모형에서 뉴스 분류를 통한 효과적인 감성분석에 관한 연구)

  • Kim, Chansong;Shin, Minsoo
    • Journal of Information Technology Services / v.18 no.1 / pp.187-200 / 2019
  • Bankruptcy prediction models have consistently attracted interest across various fields. Recently, as technology for dealing with unstructured data has developed, research applying text mining to business prediction has become active, and such studies are also increasing in bankruptcy prediction. In particular, researchers are actively trying to improve bankruptcy prediction by analyzing news data covering a corporation's external environment. However, there has been little study of which news, among the mass of news produced in real time, is effective for bankruptcy prediction. The purpose of this study was to identify high-impact news for bankruptcy prediction. We therefore classified news by type and collection period and analyzed its impact on bankruptcy prediction based on sentiment analysis. As a result, the artificial neural network was the most effective of the algorithms used, and commentary was the most effective news type for bankruptcy prediction. The column and straight types were also significant, but the photo type was not. By collection period, news from the four months before bankruptcy was the most effective. Based on these results, we propose a news classification method for sentiment analysis that is effective in bankruptcy prediction models.

Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli;Man, Zhibo;Yu, Zhengtao;Wu, Xia;Liang, Haoyuan
    • Journal of Information Processing Systems / v.18 no.4 / pp.535-548 / 2022
  • Using a rich-resource language to classify sentiment in a language with few resources is a popular subject of research in natural language processing. Burmese is a low-resource language. In light of the scarcity of labeled training data for sentiment classification in Burmese, in this study we propose a transfer learning method for Burmese sentiment analysis that transfers sentiment features from English. The method generates a cross-language word-embedding representation of Burmese vocabulary to map Burmese text into the semantic space of English text. A model to classify sentiment in English is then pre-trained using a convolutional neural network and an attention mechanism, and the parameters of its network layers, which capture cross-language sentiment features, are transferred to the model that classifies sentiment in Burmese. Finally, the model is fine-tuned using the labeled Burmese data. The experimental results show that the proposed method significantly improves Burmese sentiment classification compared to a model trained using only a Burmese corpus.

Classification Techniques for XML Document Using Text Mining (텍스트 마이닝을 이용한 XML 문서 분류 기술)

  • Kim Cheon-Shik;Hong You-Sik
    • Journal of the Korea Society of Computer and Information / v.11 no.2 s.40 / pp.15-23 / 2006
  • Millions of documents are already on the Internet, and new documents are created all the time. This makes classifying documents by the most suitable means an important problem for managing and querying documents on the Internet. However, most users rely on keyword-based document classification. This method does not classify documents efficiently and is weak at categorizing documents by meaning. Classification by a person can be very accurate, but much time is often required. Therefore, in this paper, we classify documents using a neural network algorithm and the C4.5 algorithm. We used resume data formatted in XML for the document classification experiment. The results showed excellent potential for document categorization, and we expect this approach to be applicable to various document classification problems.
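
The experiment above classifies XML-formatted resumes. As a toy stand-in for the paper's neural-network and C4.5 classifiers, the sketch below extracts fields from an XML resume and applies simple keyword rules; the schema, category names, and rules are all assumptions.

```python
import xml.etree.ElementTree as ET

# hypothetical categories and the skill keywords that indicate them
RULES = {"developer": {"python", "java"}, "designer": {"photoshop", "figma"}}

def classify_resume(xml_text):
    """Extract <skill> elements from an XML resume and classify the resume
    by keyword overlap (a toy stand-in for the paper's neural-network /
    C4.5 classifiers)."""
    root = ET.fromstring(xml_text)
    skills = {s.text.lower() for s in root.iter("skill") if s.text}
    best = max(RULES, key=lambda c: len(RULES[c] & skills))
    return best if RULES[best] & skills else "unknown"

resume = """<resume><name>Kim</name>
<skills><skill>Python</skill><skill>SQL</skill></skills></resume>"""
print(classify_resume(resume))
# → developer
```

The point of using XML rather than plain text is visible even in this sketch: the classifier reads structured fields directly instead of guessing which words in a flat document are skills.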

An effective approach to generate Wikipedia infobox of movie domain using semi-structured data

  • Bhuiyan, Hanif;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Internet Computing and Services / v.18 no.3 / pp.49-61 / 2017
  • Wikipedia infoboxes have emerged as an important structured information source on the web. To compose infobox for an article, considerable amount of manual effort is required from an author. Due to this manual involvement, infobox suffers from inconsistency, data heterogeneity, incompleteness, schema drift etc. Prior works attempted to solve those problems by generating infobox automatically based on the corresponding article text. However, there are many articles in Wikipedia that do not have enough text content to generate infobox. In this paper, we present an automated approach to generate infobox for movie domain of Wikipedia by extracting information from several sources of the web instead of relying on article text only. The proposed methodology has been developed using semantic relations of article content and available semi-structured information of the web. It processes the article text through some classification processes to identify the template from the large pool of template list. Finally, it extracts the information for the corresponding template attributes from web and thus generates infobox. Through a comprehensive experimental evaluation the proposed scheme was demonstrated as an effective and efficient approach to generate Wikipedia infobox.