• Title/Abstract/Keyword: Extract Keywords

Search results: 127 items

SNS를 이용한 잠재적 광고 키워드 추출 시스템 설계 및 구현 (Design and Implementation of Potential Advertisement Keyword Extraction System Using SNS)

  • 서현곤;박희완
    • 한국융합학회논문지 / Vol. 9, No. 7 / pp.17-24 / 2018
  • One of the important issues in big data processing is extracting the key terms circulating on the Internet and using them to produce the information a user needs. Most keyword extraction methods proposed so far rely on the search functions of large portal sites and operate on already posted articles, finished documents, or otherwise fixed content. In this paper, we develop a system (KAES: Keyword Advertisement Extraction System based on SNS) that extracts issue keywords and related keywords from the dynamic messages posted on SNS, such as current issues, conversations, interests, and opinions, to support potential shopping-related keyword advertising and marketing. The KAES system builds a list of specific accounts and extracts the most frequent core keywords and their related keywords from SNS.
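A minimal sketch of the frequency and co-occurrence counting that a system like KAES performs, with hypothetical SNS posts and a deliberately naive tokenizer (the paper does not publish its account lists or crawler, and a real Korean pipeline would use a morphological analyzer):

```python
from collections import Counter
from itertools import combinations

# Hypothetical SNS messages gathered from a watched list of accounts.
posts = [
    "새로 산 운동화 너무 편하다 러닝 추천",
    "러닝 시작했는데 운동화 뭐 사지",
    "운동화 세일 정보 어디서 보나요",
]

def tokenize(text):
    # Naive whitespace tokenizer; real Korean text needs morphological analysis.
    return [t for t in text.split() if len(t) > 1]

term_freq = Counter()
cooccur = Counter()
for post in posts:
    tokens = set(tokenize(post))
    term_freq.update(tokens)                         # message-level frequency
    cooccur.update(combinations(sorted(tokens), 2))  # same-message pairs

# Core keywords: the most frequent terms across the collected messages.
core = [w for w, _ in term_freq.most_common(2)]
# Related keywords: terms that co-occur with a core keyword in one post.
related = {k: sorted({w for pair in cooccur for w in pair if k in pair and w != k})
           for k in core}
print(core, related)
```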

Design and Implementation of Web Crawler with Real-Time Keyword Extraction based on the RAKE Algorithm

  • Zhang, Fei;Jang, Sunggyun;Joe, Inwhee
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2017년도 추계학술발표대회 / pp.395-398 / 2017
  • We propose a web crawler system with a keyword extraction function. Existing research on keyword extraction in text mining mostly works on databases of documents or corpora that have already been collected; the purpose of this paper, in contrast, is to build a real-time keyword extraction system that extracts the keywords of a page while crawling its text and stores them in the database together with the text. We design and implement a crawler that incorporates the RAKE keyword extraction algorithm, so it can extract keywords from the content of a web page as the page is fetched. The performance of the RAKE algorithm is further improved by increasing the weight of important features (such as nouns appearing in the title). The experimental results show that this method is superior to the existing method and extracts keywords satisfactorily.
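RAKE itself is a published algorithm (Rose et al., 2010), so its core scoring can be sketched directly; the title-noun weighting the paper adds on top is not reproduced here, and the tiny stopword list is only illustrative:

```python
import re
from collections import defaultdict

STOPWORDS = {"the", "of", "a", "an", "and", "is", "in", "to", "for", "while"}

def rake(text):
    """Minimal RAKE: split the text into candidate phrases at stopwords and
    punctuation, then score each phrase by the sum of deg(w)/freq(w)."""
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    freq, codeg = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            codeg[w] += len(phrase) - 1   # co-occurrences inside the phrase
    # deg(w) = freq(w) + co-occurrence count, as in the original RAKE paper.
    scored = {" ".join(p): sum((codeg[w] + freq[w]) / freq[w] for w in p)
              for p in phrases}
    return sorted(scored.items(), key=lambda kv: -kv[1])

print(rake("Real-time keyword extraction is performed while crawling the text of a web page"))
```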

인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법 (A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns)

  • 김민규;김남규;정인환
    • 지능정보연구 / Vol. 20, No. 2 / pp.123-136 / 2014
  • As the use of the Internet and various smart devices has spread, online shopping has grown rapidly. Internet shopping malls are therefore willing to pay for keywords that expose their links to potential customers even one more time, and this trend has driven up spending in the search advertising market. The value of a keyword is usually estimated from the frequency of the search term. However, not every term frequently entered into a portal site is related to shopping: many high-frequency keywords generate little revenue from a shopping mall's point of view. Paying heavily for a keyword merely because many users see it, in the expectation that purchases will follow, is therefore very inefficient. What is needed is a separate step that extracts, from the frequent search terms of a portal site, the keywords that matter to a shopping mall, and demand is growing for an automated methodology that performs this step quickly and effectively. To meet this demand, this study proposes a method that automatically extracts, from the keywords entered into a portal site, only those likely to carry shopping intent. Concretely, among all search terms we keep only those for which the user moved from the search-result page to a shopping-related page, rank them, and compare this ranking with the ranking of all search keywords. In an experiment on about 3.9 million searches performed on 'N', the largest search portal in Korea, the shopping-intent keywords recommended by the proposed methodology outperformed simple frequency-based keywords in precision, recall, and F-score.
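A toy sketch of the selection-and-evaluation idea, with made-up session records and a hypothetical rule for what counts as a shopping-related landing page (the paper's actual click data and labeling are proprietary):

```python
from collections import Counter

# Made-up session records: (search keyword, URL clicked from the results page).
sessions = [
    ("sneakers", "https://shop.example.com/item/1"),
    ("weather", "https://news.example.com/today"),
    ("sneakers", "https://shop.example.com/item/2"),
    ("lyrics", "https://music.example.com/song"),
]

SHOPPING_HOSTS = ("shop.",)  # hypothetical rule for "shopping-related page"

def is_shopping(url):
    return url.split("//")[1].startswith(SHOPPING_HOSTS)

# Rank only the keywords whose result page led to a shopping page,
# then compare that ranking against a labeled ground-truth set.
shop_rank = Counter(kw for kw, url in sessions if is_shopping(url))
recommended = {kw for kw, _ in shop_rank.most_common(1)}
truth = {"sneakers"}  # hand-labeled shopping-intent keywords

precision = len(recommended & truth) / len(recommended)
recall = len(recommended & truth) / len(truth)
f_score = 2 * precision * recall / (precision + recall)
print(precision, recall, f_score)
```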

2018년부터 2021년까지 대한안전경영과학회지의 주제어에 관한 분석 (An Analysis on Keywords in the Journal of Korean Safety Management Science from 2018 to 2021)

  • 양병학
    • 대한안전경영과학회지 / Vol. 25, No. 1 / pp.1-6 / 2023
  • This study analyzed the keywords of papers published in the Journal of Korean Safety Management Science using social network analysis. To extract the keywords, information on journal articles published from 2018 to 2021 was retrieved from ScienceON. Among the keywords extracted from a total of 129 papers, keywords with similar meanings were standardized. Keywords used in the same paper were connected in a network and visualized. Four centrality indicators from social network analysis were used to measure the influence of each keyword. Safety, Safety management, Apartment, Fire hose, SMEs, Virtual reality, Machine learning, Waterproof time, R&D capability, and Job crafting were the keywords with high influence across the four centrality indicators.
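The abstract does not name the four centrality indicators, so the sketch below assumes a common choice (degree, closeness, betweenness, eigenvector) and builds the keyword co-occurrence network with networkx from hypothetical per-paper keyword lists:

```python
import networkx as nx

# Hypothetical author-keyword lists, one per paper.
papers = [
    ["safety", "safety management", "SMEs"],
    ["safety", "virtual reality", "machine learning"],
    ["fire hose", "waterproof time", "safety management"],
]

G = nx.Graph()
for keywords in papers:
    # Connect every pair of keywords appearing in the same paper.
    for i, u in enumerate(keywords):
        for v in keywords[i + 1:]:
            G.add_edge(u, v)

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
for name, scores in centralities.items():
    top = max(scores, key=scores.get)
    print(f"{name}: {top} ({scores[top]:.2f})")
```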

토픽 식별성 향상을 위한 키워드 재구성 기법 (Keyword Reorganization Techniques for Improving the Identifiability of Topics)

  • 윤여일;김남규
    • 한국IT서비스학회지 / Vol. 18, No. 4 / pp.135-149 / 2019
  • Recently, much research has focused on extracting meaningful information from large amounts of text data. Among the various approaches, topic modeling, which expresses latent topics as groups of keywords, is the most widely used. Topic modeling presents several keywords per topic according to term/topic weights, and the quality of those keywords is usually evaluated through coherence, which reflects how similar the keywords are to each other. However, evaluating topic quality solely by keyword similarity has limitations, because a set of similar words is often not enough to describe the content of a topic accurately. In this research, we therefore propose a method of reorganizing topic keywords to improve the identifiability of topics. To reorganize topic keywords, each document is first labeled with one representative topic, obtained from traditional topic modeling. Classification rules for assigning each document to its label are then generated, and new topic keywords are extracted from these rules. To evaluate the performance of our method, we conducted an experiment on 1,000 news articles and confirmed that the keywords extracted by the proposed method are more identifiable than traditional topic keywords.
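A compact sketch of the three-step pipeline under stated assumptions: LDA stands in for "traditional topic modeling", and a decision tree stands in for the unspecified classification-rule generator, with the features it actually uses read off as the reorganized keywords:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeClassifier

docs = [
    "stock market interest rate economy",
    "economy inflation rate policy",
    "soccer league goal match",
    "match tournament goal player",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Step 1: topic modeling labels each document with its single most
# probable (representative) topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
labels = lda.fit_transform(X).argmax(axis=1)

# Step 2: learn classification rules that map documents to their labels.
tree = DecisionTreeClassifier(random_state=0).fit(X.toarray(), labels)

# Step 3: terms used in the rules become the reorganized topic keywords.
terms = vec.get_feature_names_out()
used = [terms[i] for i, imp in enumerate(tree.feature_importances_) if imp > 0]
print(used)
```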

퍼지 추론을 이용한 소수 문서의 대표 키워드 추출 (Representative Keyword Extraction from Few Documents through Fuzzy Inference)

  • 노순억;김병만;허남철
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2001년도 추계학술대회 학술발표 논문집 / pp.117-120 / 2001
  • In this work, we propose a new method of extracting and weighting representative keywords (RKs) from a few documents that might interest a user. To extract RKs, we first extract candidate terms and then choose a number of them, called initial representative keywords (IRKs), through fuzzy inference. Then, by expanding and reweighting the IRKs using term co-occurrence similarity, the final RKs are obtained. The performance of our approach depends heavily on how effectively the IRKs are selected, so we use fuzzy inference, which handles well the uncertainty inherent in selecting representative keywords of documents. The problem addressed in this paper can be viewed as computing the center of a set of document vectors, so to show the usefulness of our approach, we compare it with two well-known methods, Rocchio and Widrow-Hoff, on a number of document collections. The results show that our approach outperforms the other approaches.
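The Rocchio baseline mentioned above reduces, in the absence of negative feedback, to a weighted centroid of the document vectors; a minimal sketch with toy term-frequency vectors and the conventional beta weight:

```python
import numpy as np

# Toy term-frequency vectors for a few documents the user liked.
vocab = ["fuzzy", "inference", "keyword", "document"]
docs = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [3, 0, 2, 1],
], dtype=float)

beta = 0.75                          # conventional positive-feedback weight
centroid = beta * docs.mean(axis=0)  # Rocchio prototype of the collection

# Terms with the largest centroid weights act as representative keywords.
order = centroid.argsort()[::-1]
print([(vocab[i], round(float(centroid[i]), 2)) for i in order])
```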


자연어 처리 기법을 활용한 산업재해 위험요인 구조화 (Structuring Risk Factors of Industrial Incidents Using Natural Language Process)

  • 강성식;장성록;이종빈;서용윤
    • 한국안전학회지 / Vol. 36, No. 1 / pp.56-63 / 2021
  • The narrative texts of industrial accident reports help identify accident risk factors: they relate the accident triggers to the sequence of events and the outcomes of an accident. In particular, a set of related keywords in the context of a narrative can represent how the accident proceeded. Previous text-analytics studies on structuring accident reports have been limited to extracting individual keywords without context. To remedy this shortcoming, we propose a context-based analysis using a Natural Language Processing (NLP) algorithm. This study applies Word2Vec, a word-embedding technique based on a neural network trained with supervised learning, to extract adjacent keywords. During processing, Word2Vec takes adjacent keywords in the narrative texts as inputs for its training; the resulting keyword weights are vectors representing the degree of neighboring among keywords. Similar keyword weights mean that the keywords are closely arranged within sentences in the narrative text, so a set of keywords with similar weights represents similar accidents. We extracted ten accident processes containing related keywords and used them to understand the risk factors that determine how an accident proceeds. This information helps in structuring a checklist for accident reports.
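A small sketch of the embedding-then-grouping idea using gensim's Word2Vec (the paper does not name its implementation; the sentences, window size, and two clusters here are toy choices, whereas the paper reports ten accident processes):

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Tokenized narrative sentences from hypothetical accident reports.
sentences = [
    ["worker", "fell", "ladder", "fracture"],
    ["ladder", "slipped", "wet", "floor"],
    ["forklift", "struck", "worker", "warehouse"],
    ["warehouse", "forklift", "reversing", "no", "spotter"],
]

# Train word embeddings; the window controls what counts as "adjacent".
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, seed=0)

# Keywords whose vectors lie close together tend to come from similar
# accident processes; clustering groups them.
words = list(model.wv.index_to_key)
vectors = model.wv[words]
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for c in set(clusters):
    print(c, [w for w, k in zip(words, clusters) if k == c])
```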

Deep Learning Document Analysis System Based on Keyword Frequency and Section Centrality Analysis

  • Lee, Jongwon;Wu, Guanchen;Jung, Hoekyung
    • Journal of information and communication convergence engineering / Vol. 19, No. 1 / pp.48-53 / 2021
  • Herein, we propose a document analysis system that analyzes papers or reports converted into XML (Extensible Markup Language) format. It reads the document specified by the user, extracts keywords from it, and compares keyword frequencies to select the top three keywords. It preserves the order of the paragraphs containing these keywords and removes duplicated paragraphs. The frequencies of the top three keywords in the extracted paragraphs are then re-verified, and the paragraphs are partitioned into 10 sections. Subsequently, the importance of each section is calculated and compared. By pointing the user to the sections with the highest keyword frequency and to the sections whose importance is above average, the system lets the user read only the main content instead of the entire document. In addition, the number of paragraphs extracted through a deep learning model and the number of paragraphs in sections of high importance are predicted.
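A toy sketch of the frequency-and-sectioning logic, skipping the XML parsing and the deep learning prediction; the paragraph contents are contrived so the arithmetic is easy to follow:

```python
import re
from collections import Counter

# Paragraphs as they might come out of the parsed XML document (contrived).
paragraphs = (["intro"] * 4
              + ["keyword frequency analysis", "keyword frequency section"] * 6
              + ["conclusion"] * 4)

# Top-three keywords by raw frequency over the whole document.
words = re.findall(r"[a-z]+", " ".join(paragraphs).lower())
top3 = [w for w, _ in Counter(words).most_common(3)]

# Partition the paragraphs into 10 sections and score each section by
# how often the top-three keywords occur in it.
n_sections = 10
size = max(1, len(paragraphs) // n_sections)
scores = [sum(" ".join(paragraphs[s * size:(s + 1) * size]).count(k) for k in top3)
          for s in range(n_sections)]

average = sum(scores) / len(scores)
important = [i for i, sc in enumerate(scores) if sc > average]
print(top3, important)  # sections worth reading first
```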

연관규칙 분석을 통한 ESG 우려사안 키워드 도출에 관한 연구 (A Study on the Keyword Extraction for ESG Controversies Through Association Rule Mining)

  • 안태욱;이희승;이준서
    • 한국정보시스템학회지:정보시스템연구 / Vol. 30, No. 1 / pp.123-149 / 2021
  • Purpose: The purpose of this study is to characterize, reflecting the recent attention to ESG, the anti-ESG activities of companies as recognized by the media. This study extracts keywords for ESG controversies through association rule mining. Design/methodology/approach: A research framework is designed to extract keywords for ESG controversies as follows: 1) From the DeepSearch DB, we collect 23,837 articles on anti-ESG activities, published by 130 media outlets from 2013 to 2018, about 294 listed companies with ESG ratings. 2) We set keywords related to environment, social, and governance, and delete or merge them with other keywords based on the support, confidence, and lift derived from association rule mining. 3) We illustrate the importance of keywords and the relevance between keywords through density, degree centrality, and closeness centrality in network analysis. Findings: We identify a total of 26 keywords for ESG controversies. 'Gapjil' records the highest frequency, followed by 'corruption', 'bribery', and 'collusion'. Of the 26 keywords, 16 are related to governance, 8 to social, and 2 to environment. The highest-ranked keywords mostly concern the responsibility of shareholders within corporate governance, while the ESG controversies associated with social issues often concern unfair trade. The confidence analysis shows that the social and governance keywords form clusters, with a high probability of mutual occurrence within each group; in particular, 'owner's arrest' follows 'bribery' and 'misappropriation' at an 80% confidence level. The network analysis shows that 'corruption' is located at the center, is the most likely to occur alone, and is highly related to 'breach of duty', 'embezzlement', and 'bribery'.
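Support, confidence, and lift can be computed directly from keyword co-occurrence counts; the sketch below does so over made-up article keyword sets rather than the DeepSearch corpus:

```python
from itertools import combinations
from collections import Counter

# Each article reduced to its set of controversy keywords (made up).
articles = [
    {"bribery", "owner's arrest", "misappropriation"},
    {"bribery", "owner's arrest"},
    {"gapjil", "unfair trade"},
    {"corruption", "embezzlement", "bribery"},
]

n = len(articles)
item_count = Counter(k for a in articles for k in a)
pair_count = Counter(frozenset(p) for a in articles
                     for p in combinations(sorted(a), 2))

def metrics(x, y):
    pair = pair_count[frozenset((x, y))]
    support = pair / n                        # P(x and y)
    confidence = pair / item_count[x]         # P(y | x)
    lift = confidence / (item_count[y] / n)   # confidence / P(y)
    return support, confidence, lift

print(metrics("bribery", "owner's arrest"))
```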

특허 문서로부터 키워드 추출을 위한 텍스트 마이닝 기반 그래프 모델 (Text-mining Based Graph Model for Keyword Extraction from Patent Documents)

  • 이순근;임영문;엄완섭
    • 대한안전경영과학회지 / Vol. 17, No. 4 / pp.335-342 / 2015
  • Increasing interest in patents has led many individuals and companies to file patents in various areas. Filed patents are stored as electronic documents, and the search and categorization of these documents are major issues in data mining. In particular, keyword extraction, by which we retrieve representative keywords, is important. Most techniques for keyword extraction are based on the vector space model, which relies simply on the frequency of terms in documents: it weights terms by their frequency and selects keywords in order of weight. However, this model has the limitation that it cannot reflect the relations between keywords. This paper proposes an improved way to extract more representative keywords by overcoming this limit. The proposed model first prepares a candidate set using the vector model, then builds a graph representing the relations between pairs of candidate keywords in the set, and finally selects the keywords based on this relationship graph.
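A sketch of the two-stage idea under stated assumptions: frequency-based candidate selection approximates the vector-space step, a sliding co-occurrence window supplies the relationship graph, and PageRank (as in TextRank-style models) stands in for the paper's unspecified graph ranking:

```python
import networkx as nx
from collections import Counter

# Toy patent text, already tokenized; the candidate set comes from the
# vector-space step (here: the most frequent terms).
tokens = ("semiconductor wafer etching process plasma etching chamber "
          "wafer transfer process control plasma density").split()
candidates = {w for w, _ in Counter(tokens).most_common(8)}

# Build the relationship graph: connect candidates that appear within
# a small sliding window of each other.
G = nx.Graph()
window = 2
for i, u in enumerate(tokens):
    for v in tokens[i + 1:i + 1 + window]:
        if u in candidates and v in candidates and u != v:
            G.add_edge(u, v)

# Rank candidates on the graph instead of by raw frequency alone.
ranks = nx.pagerank(G)
print(sorted(ranks, key=ranks.get, reverse=True))
```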