• Title/Abstract/Keywords: text features

Search results: 577 (processing time: 0.026 seconds)

Text Classification for Patents: Experiments with Unigrams, Bigrams and Different Weighting Methods

  • Im, ChanJong;Kim, DoWan;Mandl, Thomas
    • International Journal of Contents / Vol. 13, No. 2 / pp.66-74 / 2017
  • Patent classification is becoming more critical as patent filings increase year after year. Despite comprehensive studies in the area, several issues remain in classifying patents at the IPC hierarchical levels: not only the structural complexity but also the shortage of patents at the lower levels of the hierarchy degrades classification performance. We therefore propose a new classification method based on different criteria, namely categories defined by domain experts in trend analysis reports such as the Patent Landscape Report (PLR). Several experiments were conducted to identify the types of features and weighting methods that yield the best classification performance with a Support Vector Machine (SVM). Two types of features (nouns and noun phrases) and five weighting schemes (TF-idf, TF-rf, TF-icf, TF-icf-based, and TF-idcef-based) were evaluated.
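  As an illustration only (not the paper's exact setup), the sketch below builds unigram + bigram TF-idf features and trains a linear SVM with scikit-learn; the texts and category labels are placeholders, and the paper's other weighting schemes (TF-rf, TF-icf, and so on) are not implemented here.

    # Illustrative sketch: unigram + bigram TF-idf features with a linear SVM.
    # Placeholder texts/labels stand in for patent abstracts and PLR categories.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = ["a rotor blade with cooling channels", "an electrode coating for lithium cells"]
    labels = ["turbines", "batteries"]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # unigrams and bigrams, TF-idf weights
        LinearSVC(),                                             # linear-kernel SVM
    )
    model.fit(texts, labels)
    print(model.predict(["an electrolyte additive for lithium batteries"]))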

Text-independent Speaker Identification Using Soft Bag-of-Words Feature Representation

  • Jiang, Shuangshuang;Frigui, Hichem;Calhoun, Aaron W.
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 14, No. 4 / pp.240-248 / 2014
  • We present a robust speaker identification algorithm that uses novel features based on a soft bag-of-words representation and a simple Naive Bayes classifier. The bag-of-words (BoW) histogram feature descriptor is typically constructed by summarizing and identifying representative prototypes from low-level spectral features extracted from training data. In this paper, we define a generalization of the standard BoW. In particular, we define three types of BoW based on crisp voting, fuzzy memberships, and possibilistic memberships. We analyze our mapping with three common classifiers: the Naive Bayes classifier (NB), the K-nearest neighbor classifier (KNN), and support vector machines (SVM). The proposed algorithms are evaluated using large datasets that simulate medical crises. We show that the proposed soft bag-of-words feature representation achieves a significant improvement when compared to state-of-the-art methods.
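  For illustration, the sketch below contrasts a crisp (hard-voting) BoW histogram with a fuzzy-membership (soft) one over synthetic frame-level features and a k-means codebook; it is a simplified stand-in for the paper's crisp/fuzzy/possibilistic representations, not their implementation.

    # Illustrative crisp vs. soft bag-of-words histograms over frame-level features.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(500, 13))          # placeholder MFCC-like frames (500 frames x 13 dims)
    codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(frames)

    def crisp_bow(frames, codebook):
        """Hard-voting histogram: each frame counts only for its nearest prototype."""
        words = codebook.predict(frames)
        return np.bincount(words, minlength=codebook.n_clusters) / len(frames)

    def soft_bow(frames, codebook, m=2.0):
        """Fuzzy-membership histogram: each frame spreads its vote over all prototypes."""
        d = np.linalg.norm(frames[:, None, :] - codebook.cluster_centers_[None, :, :], axis=2) + 1e-9
        u = (1.0 / d) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)        # fuzzy c-means style memberships
        return u.mean(axis=0)

    print(crisp_bow(frames, codebook), soft_bow(frames, codebook))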

Document Classification Model Using Web Documents for Balancing Training Corpus Size per Category

  • Park, So-Young;Chang, Juno;Kihl, Taesuk
    • Journal of information and communication convergence engineering / Vol. 11, No. 4 / pp.268-273 / 2013
  • In this paper, we propose a document classification model using Web documents as a part of the training corpus in order to resolve the imbalance of the training corpus size per category. For the purpose of retrieving the Web documents closely related to each category, the proposed document classification model calculates the matching score between word features and each category, and generates a Web search query by combining the higher-ranked word features and the category title. Then, the proposed document classification model sends each combined query to the open application programming interface of the Web search engine, and receives the snippet results retrieved from the Web search engine. Finally, the proposed document classification model adds these snippet results as Web documents to the training corpus. Experimental results show that the method that considers the balance of the training corpus size per category exhibits better performance in some categories with small training sets.
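  A minimal sketch of the query-generation step is shown below; the frequency-based feature scoring and the `search_snippets` call are placeholders (the paper's actual matching score and Web search API are not specified here).

    # Sketch of building a Web search query from top word features plus the category title.
    from collections import Counter

    def top_features(docs, k=3):
        """Rank word features for a category by simple frequency (a stand-in for the matching score)."""
        counts = Counter(w for d in docs for w in d.lower().split())
        return [w for w, _ in counts.most_common(k)]

    def build_query(category_title, docs, k=3):
        return " ".join([category_title] + top_features(docs, k))

    small_category = {"title": "economy", "docs": ["stock market index fell", "interest rate decision"]}
    query = build_query(small_category["title"], small_category["docs"])
    print(query)                                  # e.g. "economy stock market index"

    # snippets = search_snippets(query)           # placeholder: call to the Web search engine API
    # training_corpus["economy"] += snippets      # add retrieved snippets as extra training documents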

문서 입출력 시스템의 구성에 관한 연구 (A Study on the Construction of a Document Input/Output system)

  • 함영국;도상윤;정홍규;김우성;박래홍;이창범;김상중
    • 전자공학회논문지B / Vol. 29B, No. 10 / pp.100-112 / 1992
  • In this paper, an integrated document input/output system is developed that constructs a graphic document from a text file, converts the document into encoded facsimile data, and recognizes printed/handwritten alphanumerics and Korean characters in a facsimile or graphic document. For the output system, we develop a method that generates bit-map patterns from documents consisting of KSC5601 and ASCII codes; when necessary, the binary graphic image is encoded with the G3 coding scheme for facsimile transmission. For a user-friendly input system handling documents of alphanumerics and Korean characters obtained from a facsimile or scanner, we propose a document recognition algorithm that uses several special features (partial projection, cross point, and distance features) together with the membership function of fuzzy set theory. In summary, we develop an integrated document input/output system and demonstrate its performance via computer simulation.
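  The sketch below is not the original 1992 system; it only illustrates, with NumPy on a toy binary bitmap, two of the character features named in the abstract (partial projection and crossing counts).

    # Toy illustration of partial-projection and crossing-count features on a binary glyph.
    import numpy as np

    glyph = np.zeros((16, 16), dtype=np.uint8)
    glyph[2:14, 7:9] = 1                          # a crude vertical stroke as a placeholder "character"

    def partial_projection(img, bands=4):
        """Row-wise pixel counts summed within horizontal bands of the bitmap."""
        rows = img.sum(axis=1)
        return [int(b.sum()) for b in np.array_split(rows, bands)]

    def cross_points(img, row):
        """Number of 0->1 transitions along one scan line (a crossing-count feature)."""
        line = img[row]
        return int(((line[1:] == 1) & (line[:-1] == 0)).sum())

    print(partial_projection(glyph), cross_points(glyph, 8))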


Implementation of Multi-Precision Multiplication over Sensor Networks with Efficient Instructions

  • Seo, Hwajeong;Kim, Howon
    • Journal of information and communication convergence engineering / Vol. 11, No. 1 / pp.12-16 / 2013
  • Sensor networks are among the strongest technologies for applications including home automation, surveillance systems, and monitoring systems. To ensure secure and robust communication between sensor nodes, plaintext should be encrypted. However, because of their limited computational power and storage, it is difficult to implement public-key cryptography, including elliptic curve cryptography, RSA, and pairing-based cryptography, on sensor networks. Recent work has nevertheless shown that public-key cryptography can be made available in a sensor network environment by introducing an efficient multi-precision multiplication method. The previous method suggested a broad multiplication rule to enhance performance, but the particular features of individual sensor motes were not considered; for an optimized implementation, these unique features should be exploited. In this paper, we propose a fully optimized multiplication method tailored to the specifications of different sensor motes. The method improves performance by using more efficient instructions and general-purpose registers.
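  As a language-agnostic illustration of the underlying routine (real sensor-node code would be C or assembly tuned to the target MCU), the sketch below performs schoolbook, operand-scanning multi-precision multiplication over 8-bit limbs.

    # Schoolbook (operand-scanning) multi-precision multiplication over 8-bit limbs.
    def mp_mul(a, b):
        """Multiply two little-endian lists of 8-bit limbs, returning limbs of the product."""
        result = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                t = result[i + j] + ai * bj + carry
                result[i + j] = t & 0xFF          # low byte stays in place
                carry = t >> 8                    # high byte propagates to the next limb
            result[i + len(b)] += carry
        return result

    a = [0x34, 0x12]                              # 0x1234, little-endian limbs
    b = [0x78, 0x56]                              # 0x5678
    limbs = mp_mul(a, b)
    value = sum(l << (8 * i) for i, l in enumerate(limbs))
    assert value == 0x1234 * 0x5678
    print([hex(l) for l in limbs])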

Towards Effective Entity Extraction of Scientific Documents using Discriminative Linguistic Features

  • Hwang, Sangwon;Hong, Jang-Eui;Nam, Young-Kwang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 3 / pp.1639-1658 / 2019
  • Named entity recognition (NER) is an important technique for improving the performance of data mining and big data analytics. In previous studies, NER systems identified named entities using statistical methods based on prior information or linguistic features; however, such methods cannot recognize unregistered or unlearned objects. In this paper, a method is proposed to extract objects such as technologies, theories, or person names by analyzing the collocation relationships of words that appear together around specific words in the abstracts of academic journals. The method proceeds as follows. First, the data are preprocessed using data cleaning and sentence detection to separate the text into single sentences. Then, part-of-speech (POS) tagging is applied to the individual sentences. Next, the appearance and collocation information of the surrounding POS tags is analyzed, excluding the entity candidates such as nouns. Finally, an entity recognition model is created by analyzing and classifying the information in the sentences.
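  A rough sketch of the sentence-detection, POS-tagging, and collocation-collection steps, assuming NLTK (the paper's actual tools and entity model are not reproduced); newer NLTK releases may require the punkt_tab / averaged_perceptron_tagger_eng resources instead of the names used here.

    # Sketch: sentence detection, POS tagging, and POS context around noun candidates.
    import nltk
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    abstract = "Latent Dirichlet Allocation was proposed by Blei. The model infers topics from documents."

    contexts = {}
    for sent in nltk.sent_tokenize(abstract):
        tagged = nltk.pos_tag(nltk.word_tokenize(sent))
        for i, (word, tag) in enumerate(tagged):
            if tag.startswith("NN"):                              # noun candidates as potential entities
                window = [t for _, t in tagged[max(0, i - 2):i] + tagged[i + 1:i + 3]]
                contexts.setdefault(word, []).append(window)      # surrounding POS pattern

    print(contexts)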

Investigating Predictive Features for Authorship Verification of Arabic Tweets

  • Alqahtani, Fatimah;Dohler, Mischa
    • International Journal of Computer Science & Network Security / Vol. 22, No. 6 / pp.115-126 / 2022
  • The goal of this research is to investigate techniques for authorship verification of short Arabic texts. Despite the widespread use of Twitter among Arabs, short-text research has so far focused on authorship verification in languages other than Arabic, such as English, Spanish, and Greek; to the best of the researchers' knowledge, no study has examined the task of verifying Arabic-language Twitter texts. This study explores the impact of stylometric and TF-IDF features of very short texts (Arabic Twitter posts) on user verification. In addition, an analysis was conducted to see how metadata from tweets, such as time and source, can further improve verification performance. This research is significant for cyber security in Arab countries.
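  Purely as an illustration of combining TF-IDF and stylometric signals for verification (not the study's feature set, data, or thresholds), with placeholder tweets:

    # Character n-gram TF-IDF plus simple stylometric features for same-author verification.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known = ["هذا مثال قصير لتغريدة", "تغريدة أخرى من نفس الكاتب"]   # placeholder tweets by the known author
    unknown = "نص قصير مجهول الكاتب"                                  # placeholder questioned tweet

    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    X = vec.fit_transform(known + [unknown])

    def stylometry(text):
        words = text.split()
        return np.array([len(text), len(words), np.mean([len(w) for w in words])])

    sim = cosine_similarity(X[-1], X[:-1]).mean()                    # TF-IDF similarity to the known profile
    style_gap = np.linalg.norm(stylometry(unknown) - np.mean([stylometry(t) for t in known], axis=0))
    print(sim, style_gap)                                            # thresholds on these would decide "same author"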

CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석 (Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model)

  • 박호연;김경재
    • 지능정보연구 / Vol. 25, No. 4 / pp.141-154 / 2019
  • With the rapid growth of Internet technology and social media, mining techniques have advanced to the point where unstructured document representations can be used in a wide range of applications. Among them, sentiment analysis has drawn much attention in recent years because it can detect the sentiment users hold toward products or services. Sentiment analysis typically assigns people's sentiments expressed in text data to predefined positive and negative categories, and because it relies on such predefined labels, research has branched out in many directions. Early sentiment analysis research centered on reviews of shopping-mall products, but it has recently been applied to diverse domains such as blogs, news articles, weather forecasts, movie reviews, social media, and stock-market trends. Although many prior studies exist, most attempted sentiment classification with a single traditional machine-learning technique and were therefore limited in classification accuracy. Instead of traditional machine learning, this study seeks to improve the classification accuracy of sentiment analysis by using deep learning, which handles large-scale data well, and in particular a combined CNN-LSTM model. Using the IMDB movie-review dataset, a representative benchmark, polarity is classified into positive and negative categories, and the goal is to improve the prediction accuracy of this polarity analysis with deep learning and the proposed combined model. Because several hyperparameters are involved, the relationship between their values and precision is examined to find an optimal combination and improve performance measures such as accuracy. The results show that deep-learning-based classifiers achieved good classification performance, and the proposed CNN-LSTM combined model performed best of all.
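  A minimal Keras sketch of a combined CNN-LSTM polarity classifier on the IMDB dataset; the layer sizes and training settings are illustrative, not the configuration reported in the paper.

    # Illustrative CNN-LSTM sentiment model on IMDB (TensorFlow/Keras assumed).
    from tensorflow.keras import layers, models
    from tensorflow.keras.datasets import imdb
    from tensorflow.keras.utils import pad_sequences

    vocab, maxlen = 10000, 200
    (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab)
    x_train, x_test = pad_sequences(x_train, maxlen=maxlen), pad_sequences(x_test, maxlen=maxlen)

    model = models.Sequential([
        layers.Embedding(vocab, 128),
        layers.Conv1D(64, 5, activation="relu"),   # CNN captures local n-gram patterns
        layers.MaxPooling1D(4),
        layers.LSTM(64),                           # LSTM models the pooled feature sequence
        layers.Dense(1, activation="sigmoid"),     # positive / negative polarity
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, batch_size=128, validation_data=(x_test, y_test))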

언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델 (Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model)

  • 김동환
    • 지능정보연구 / Vol. 29, No. 1 / pp.27-40 / 2023
  • With the rising penetration of smart devices and the impact of COVID-19, media-content consumption through smart devices has grown sharply. Along with this trend, both viewing on OTT platforms and the volume of available content are increasing, so content recommendation on these platforms is becoming more important. Most previous studies on content-based recommendation rely on metadata describing content attributes, and few exploit metadata that describes the content itself. Accordingly, this paper recommends similar content based on various text data, including the title and synopsis, that describe what the content is about. KLUE-RoBERTa-large, a high-performing Korean language model, was used to learn from the text data. The training data consisted of roughly 20,000 content metadata records containing title, synopsis, composite genre, director, actor, and hashtag information; to feed the several text features stored as structured fields into the language model, the features were concatenated with special tokens indicating each feature. Three-way comparisons between contents and multiple rounds of review for test-set labeling were applied so that the test set used to check the model's similarity-classification ability would be relative and objective. To fine-tune the embeddings of the content meta-text data, genre-classification and hashtag-classification prediction tasks were evaluated. As a result, the hashtag-classification model achieved over 90% accuracy on the similarity test set, an improvement of more than 9% over the base language model. This shows that hashtag-classification training improves the language model's ability to group similar content and demonstrates the value of language models for content-based filtering.
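  A rough sketch (assumptions only) of encoding concatenated metadata fields with klue/roberta-large via Hugging Face transformers and comparing two items by cosine similarity; the paper additionally uses feature-specific special tokens and fine-tunes on a hashtag-classification task, which is not reproduced here, and the items below are made up.

    # Sketch: concatenate metadata text fields and compare contents by embedding similarity.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("klue/roberta-large")
    enc = AutoModel.from_pretrained("klue/roberta-large")

    def meta_text(item):
        # join title / synopsis / genre / hashtags with the tokenizer's separator token
        return f" {tok.sep_token} ".join([item["title"], item["synopsis"], item["genre"], item["hashtags"]])

    items = [
        {"title": "우주 탐사", "synopsis": "달 기지를 배경으로 한 드라마", "genre": "SF", "hashtags": "#우주 #드라마"},
        {"title": "달 기지", "synopsis": "우주 비행사들의 생존기", "genre": "SF", "hashtags": "#우주"},
    ]

    with torch.no_grad():
        batch = tok([meta_text(i) for i in items], padding=True, truncation=True, return_tensors="pt")
        emb = enc(**batch).last_hidden_state.mean(dim=1)          # mean-pooled content embeddings

    print(float(torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)))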

효율적인 문서 자동 분류를 위한 대표 색인어 추출 기법 (A Feature Selection Technique for an Efficient Document Automatic Classification)

  • 김지숙;김영지;문현정;우용태
    • 정보기술과데이타베이스저널 / Vol. 8, No. 1 / pp.117-128 / 2001
  • Recently, there has been much text mining research on finding interesting patterns or association rules in large collections of textual documents. However, the words extracted from informal documents tend to be irregular and include many overly general terms, so existing methods have difficulty retrieving knowledge effectively. In this paper, we propose a new feature extraction method for classifying large document collections using association rules based on an unsupervised learning technique. In experiments, we show the efficiency of the suggested method by extracting features and classifying documents.
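  In the spirit of the proposed association-based selection (not its actual algorithm), the sketch below keeps words whose pairwise co-occurrence support across documents meets a threshold:

    # Toy sketch: pick representative index terms by word co-occurrence support.
    from collections import Counter
    from itertools import combinations

    docs = [
        "stock market price index",
        "market price of crude oil",
        "movie review sentiment analysis",
    ]

    tokenized = [set(d.split()) for d in docs]
    pair_support = Counter(p for words in tokenized for p in combinations(sorted(words), 2))

    min_support = 2
    frequent = {w for pair, c in pair_support.items() if c >= min_support for w in pair}
    print(frequent)     # words that co-occur often enough become candidate index terms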
