• Title/Summary/Keyword: Keyword Weight


Method of Related Document Recommendation with Similarity and Weight of Keyword (키워드의 유사도와 가중치를 적용한 연관 문서 추천 방법)

  • Lim, Myung Jin; Kim, Jae Hyun; Shin, Ju Hyun
    • Journal of Korea Multimedia Society, v.22 no.11, pp.1313-1323, 2019
  • With the development of the Internet and the spread of smartphones, services that consider user convenience are increasing, so users can check the news in real time anytime and anywhere. However, online news is categorized by media outlet and category and provides only a few related search terms, making it difficult to find news related to a given keyword. To solve this problem, we propose a method that recommends related documents more accurately by applying Doc2Vec similarity to the specific keywords of news articles and weighting the titles and contents of the articles. We collect news articles from the Naver politics category by web crawling in a Java environment, preprocess them, extract topics using LDA modeling, and compute similarities using Doc2Vec. To supplement Doc2Vec, we apply TF-IDF to obtain TC (Title-Contents) weights for the title and contents of each article. We then combine the Doc2Vec similarity and the TC weight into a TC weight-similarity, and evaluate the similarity between words using the PMI technique to confirm keyword association.
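The combination step this abstract describes can be sketched as follows. This is a minimal Python illustration (the paper's pipeline uses Java, crawling, LDA, and Doc2Vec): a toy TF-IDF and cosine similarity stand in for the trained models, and `alpha` and `beta` are hypothetical mixing parameters, not values from the paper.

```python
import math
from collections import Counter

def tfidf_vectors(token_docs):
    """Toy TF-IDF: returns one sparse {term: weight} vector per tokenized document."""
    n = len(token_docs)
    df = Counter()
    for doc in token_docs:
        df.update(set(doc))
    vecs = []
    for doc in token_docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def tc_weighted_similarity(doc2vec_sim, title_sim, contents_sim,
                           alpha=0.5, beta=0.7):
    """Blend a document-vector similarity with a TC (title/contents) weight.
    alpha and beta are illustrative mixing parameters."""
    tc_weight = beta * title_sim + (1 - beta) * contents_sim
    return alpha * doc2vec_sim + (1 - alpha) * tc_weight
```

In this sketch the title and contents get separate similarity scores, so a keyword match in the title can be weighted more heavily than one in the body, which is the intuition behind the TC weight.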

Keyword Weight based Paragraph Extraction Algorithm (키워드 가중치 기반 문단 추출 알고리즘)

  • Lee, Jongwon; Joo, Sangwoong; Lee, Hyunju; Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2017.10a, pp.504-505, 2017
  • Existing morpheme analyzers classify the words used in a document, and systems that extract sentences and paragraphs based on a morpheme analyzer are being developed. However, very few systems compress documents and extract the important paragraphs. The algorithm proposed in this paper calculates the weights of the keywords in a document and extracts the paragraphs containing those keywords. Users can reduce the time needed to understand a document by reading only the paragraphs containing the keywords rather than the entire document. In addition, since the number of extracted paragraphs differs according to the number of keywords used in the search, the user can explore more varied patterns than with existing systems.

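A minimal sketch of the kind of keyword weighting and paragraph extraction this abstract describes, assuming a simple frequency-based weight; the paper's actual weighting scheme may differ.

```python
def keyword_weights(paragraphs, keywords):
    """Frequency-based weight for each keyword across the whole document
    (a simple stand-in for the paper's weighting scheme)."""
    text = " ".join(paragraphs).lower()
    counts = {k: text.count(k.lower()) for k in keywords}
    total = sum(counts.values()) or 1
    return {k: c / total for k, c in counts.items()}

def extract_paragraphs(paragraphs, keywords):
    """Return the paragraphs that contain at least one search keyword,
    preserving their original order."""
    kws = [k.lower() for k in keywords]
    return [p for p in paragraphs if any(k in p.lower() for k in kws)]
```

As the abstract notes, searching with more keywords extracts more paragraphs, so the amount of text the user must read scales with the query rather than the document.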

Keyword Analysis Based Document Compression System

  • Cao, Kerang; Lee, Jongwon; Jung, Hoekyung
    • Journal of information and communication convergence engineering, v.16 no.1, pp.48-51, 2018
  • Traditional document analysis has been word-centered, implemented using a morpheme analyzer. Such systems can classify the words used in a document, but they do little to help the user understand or analyze it. To solve this problem, a system needs to extract the most valuable paragraphs, those that help the user understand the document. In this paper, we propose a system that extracts paragraphs from a normalized XML document. The user gives the system the filename of the XML document to analyze; the system then searches the document for keywords and shows the search results. When the user selects and enters a desired keyword, the system extracts the paragraphs containing it, maintains the paragraph order, and checks for duplicates; duplicate paragraphs are deleted. Finally, the system reports the frequency and weight of each keyword to the user, along with the sorted paragraphs.

XML Document Keyword Weight Analysis based Paragraph Extraction Model (XML 문서 키워드 가중치 분석 기반 문단 추출 모델)

  • Lee, Jongwon; Kang, Inshik; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.11, pp.2133-2138, 2017
  • The analysis of XML documents and other documents has so far centered on words. It can be implemented using a morpheme analyzer, but such analysis only classifies the many words in a document and cannot grasp its core contents. For a user to understand a document efficiently, the paragraphs containing the main words must be extracted and presented. The proposed system retrieves keywords in a normalized XML document, then extracts the paragraphs containing the keywords the user entered for the search and displays them. In addition, the frequency and weight of the keywords used in the search are reported to the user, and the order of the extracted paragraphs is preserved while duplicates are eliminated so that the user can understand the document. The proposed system can minimize the time and effort required to understand a document by allowing the user to grasp it without reading the whole text.
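The extraction pipeline described here for normalized XML documents (search for a keyword, extract matching paragraphs, keep their order, drop duplicates, report frequency) might look like this minimal Python sketch; the `<p>` tag name is an assumption about the normalized format, not something the abstract specifies.

```python
import xml.etree.ElementTree as ET

def extract_xml_paragraphs(xml_text, keyword):
    """Extract <p> paragraphs containing the keyword from a normalized XML
    document, preserving order and removing duplicates."""
    root = ET.fromstring(xml_text)
    seen, result = set(), []
    for p in root.iter("p"):
        text = "".join(p.itertext()).strip()
        if keyword.lower() in text.lower() and text not in seen:
            seen.add(text)
            result.append(text)
    return result

def keyword_report(paragraphs, keyword):
    """Report keyword frequency over the extracted paragraphs."""
    freq = sum(p.lower().count(keyword.lower()) for p in paragraphs)
    return {"keyword": keyword, "frequency": freq, "paragraphs": len(paragraphs)}
```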

Design and Implementation of Web Crawler with Real-Time Keyword Extraction based on the RAKE Algorithm

  • Zhang, Fei; Jang, Sunggyun; Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference, 2017.11a, pp.395-398, 2017
  • We propose a web crawler system with a keyword extraction function. Research on keyword extraction in text mining has mostly been based on databases of already-collected documents or corpora; the purpose of this paper is to build a real-time keyword extraction system that extracts the keywords of a web page's text and stores them in the database while the page is being crawled. We design and implement a crawler that incorporates the RAKE keyword extraction algorithm, so it can extract keywords from a page's content as the page is fetched. The performance of the RAKE algorithm is improved by increasing the weight of important features (such as nouns appearing in the title). Experimental results show that this method is superior to the existing method and extracts keywords satisfactorily.
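A minimal pure-Python version of the RAKE scoring step the paper builds on: candidate phrases are split at stopwords and punctuation, and each word is scored by degree over frequency. The title-noun boost the paper adds and the crawling side are omitted, and the stopword list here is only illustrative.

```python
import re

STOPWORDS = {"a", "an", "and", "the", "of", "in", "on", "for",
             "to", "is", "are", "with", "over", "as", "by"}

def rake_keywords(text, stopwords=STOPWORDS):
    """Minimal RAKE: split text into candidate phrases at stopwords and
    punctuation, then score each phrase by summing degree/frequency per word."""
    tokens = re.findall(r"[a-zA-Z']+|[.,;:!?]", text.lower())
    phrases, current = [], []
    for t in tokens:
        if t in stopwords or not t[0].isalpha():
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(t)
    if current:
        phrases.append(current)
    freq, degree = {}, {}
    for phrase in phrases:
        for w in phrase:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(phrase) - 1
    scores = {" ".join(p): sum((degree[w] + freq[w]) / freq[w] for w in p)
              for p in phrases}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Multi-word phrases accumulate the scores of their member words, which is why RAKE tends to surface longer technical phrases over single common words.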

Customized Web Search Rank Provision (개인화된 웹 검색 순위 생성)

  • Kang, Youngki; Bae, Joonsoo
    • Journal of Korean Institute of Industrial Engineers, v.39 no.2, pp.119-128, 2013
  • Most internet users today rely on portal search engines such as Naver, Daum, and Google. But since the results of these engines are based on universal criteria (e.g., search frequency by region or country), they do not consider personal interests. That is, because they serve universal users, current search engines do not provide exact results for homonyms or polysemes. To solve this problem, this research determines keyword importance and weight values for each individual's search characteristics by collecting and analyzing customized keywords in an external database. The customized keyword weight values are integrated with the search engine's results (e.g., PageRank), and the search ranks are rearranged. In an experiment using 50 web pages of Google search results and 6 web pages for customized keyword collection, the new customized search results achieved a 90% match. Our personalization approach does not require users to enter preferences directly; instead, the system automatically collects and analyzes personal information and reflects it in customized search results.
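The re-ranking idea (blend the engine's universal score with a personalized keyword weight, then re-sort) can be sketched as below; `alpha` and the title-based personal score are illustrative assumptions, not the paper's exact formulation.

```python
def rerank(results, personal_weights, alpha=0.5):
    """Re-rank search results by blending the engine's score (e.g. a
    PageRank-derived value) with a personalized keyword score.
    alpha is an illustrative mixing parameter."""
    def personal_score(r):
        # Sum the user's learned weight for each word in the result title.
        return sum(personal_weights.get(w, 0.0)
                   for w in r["title"].lower().split())
    scored = [(alpha * r["score"] + (1 - alpha) * personal_score(r), r)
              for r in results]
    return [r for s, r in sorted(scored, key=lambda t: -t[0])]
```

With this scheme the same ambiguous query ("java" as a language vs. an island) ranks differently for users with different keyword profiles, which is the homonym/polysemy problem the abstract targets.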

A Study of Keyword Spotting System Based on the Weight of Non-Keyword Model (비핵심어 모델의 가중치 기반 핵심어 검출 성능 향상에 관한 연구)

  • Kim, Hack-Jin; Kim, Soon-Hyub
    • The KIPS Transactions: Part B, v.10B no.4, pp.381-388, 2003
  • This paper presents a method of weighting garbage-class clustering and the filler model to improve the performance of a keyword spotting system, together with a time-saving method for the dialogue speech processing system that calculates keyword transition probabilities from an analysis of task-domain users' speech. The key idea is to group phonemes by phonetic similarity, which is more effective for detecting similar phoneme groups than detecting individual phonemes; the paper proposes five phoneme groups obtained from an analysis of spoken sentences in Korean morphology and in a stock-trading speech processing system. In addition, task-specific filler model weights are added to the phoneme groups, and the keyword transition probability in consecutive spoken sentences is calculated and applied to the system to reduce processing time. To evaluate the proposed system, a corpus of 4,970 sentences for the task domains was built, and a test was conducted with five subjects in their twenties and thirties. As a result, the FOM with the proposed five weighted phoneme groups reaches 85%, compared with 88.5% for the seven phoneme groups of Yapanel [1] and 89.8% for LVCSR. In calculation time, the five groups reach 0.70 seconds versus 0.72 seconds for the seven groups. Lastly, a time-saving test confirmed that 0.04 to 0.07 seconds are saved when the keyword transition probability is applied.

A Network Analysis of Authors and Keywords from North Korean Traditional Medicine Journal, Koryo Medicine (북한 고려의학 학술 저널에 대한 저자 및 키워드 네트워크 분석)

  • Oh, Junho; Yi, Eunhee; Lee, Juyeon; Kim, Dongsu
    • Journal of Society of Preventive Korean Medicine, v.25 no.2, pp.33-43, 2021
  • Objectives: This study seeks to grasp the current status of Koryo medicine research in North Korea, focusing on researchers and research topics. Methods: A network analysis was conducted on the co-authors and keywords extracted from Koryo Medicine, a North Korean traditional medicine journal. Results: The author network analysis produced a sparse network due to the low correlation between authors. The domain-wide co-author network density was 0.001, with a diameter of 14, an average distance between nodes of 4.029, and an average binding coefficient of 0.029. The keyword network analysis showed that the keyword "traditional medicine" had the strongest correlation weight at 228; other keywords with high correlation weights were common acupuncture (84) and intradermal acupuncture (80). Conclusions: Although the co-authors of Koryo Medicine did not correlate highly with each other, key researchers considered important in each major sub-network could be identified. In addition, the journal's keywords were very highly linked to herbal medicines.
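The network statistics reported above (density, diameter, average distance between nodes) can be computed for a small undirected co-author network as in this sketch; unreachable node pairs are simply ignored here, which is one common convention for disconnected networks.

```python
from collections import deque

def network_metrics(edges):
    """Density, diameter, and average shortest-path distance of an
    undirected network given as (a, b) edge pairs. Distances are found
    by BFS from every node; unreachable pairs are ignored."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    nodes = list(adj)
    n = len(nodes)
    unique_edges = {frozenset(e) for e in edges}
    density = 2 * len(unique_edges) / (n * (n - 1)) if n > 1 else 0.0
    dists, diameter = [], 0
    for src in nodes:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        for t, d in seen.items():
            if t != src:
                dists.append(d)
                diameter = max(diameter, d)
    avg = sum(dists) / len(dists) if dists else 0.0
    return {"density": density, "diameter": diameter, "avg_distance": avg}
```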

Social network analysis of keyword community network in IoT patent data (키워드 커뮤니티 네트워크의 소셜 네트워크 분석을 이용한 사물 인터넷 특허 분석)

  • Kim, Do Hyun; Kim, Hyon Hee; Kim, Donggeon; Jo, Jinnam
    • The Korean Journal of Applied Statistics, v.29 no.4, pp.719-728, 2016
  • In this paper, we analyze IoT patent data using social network analysis of the keyword community network in patents related to Internet of Things technology. To identify differences in IoT patent trends between Korea and the USA, 100 Korean patents and 100 US patents were collected. First, we extracted important keywords from the IoT patent abstracts using TF-IDF weights and their correlations, and constructed a keyword network from the selected keywords. Second, we constructed a keyword community network based on the keyword communities and performed social network analysis. Our experimental results show that while Korean patents focus on core IoT technologies (such as security, semiconductors, and image processing), US patents focus on IoT applications (such as the smart home, interactive media, and telecommunications).
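A minimal sketch of building a keyword network from document co-occurrence, as a stand-in for the paper's TF-IDF-and-correlation construction: edge weights count the documents in which two keywords appear together, which is one simple way to obtain the correlations underlying such a network.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(docs, keywords):
    """Build a weighted keyword network: the edge weight of a keyword pair
    is the number of documents in which both keywords occur."""
    weights = Counter()
    for doc in docs:
        present = sorted(k for k in keywords if k in doc.lower())
        for a, b in combinations(present, 2):
            weights[(a, b)] += 1
    return dict(weights)
```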

Deep Learning Document Analysis System Based on Keyword Frequency and Section Centrality Analysis

  • Lee, Jongwon; Wu, Guanchen; Jung, Hoekyung
    • Journal of information and communication convergence engineering, v.19 no.1, pp.48-53, 2021
  • Herein, we propose a document analysis system that analyzes papers or reports converted into XML (Extensible Markup Language) format. It reads the document specified by the user, extracts keywords from it, and compares keyword frequencies to select the top three keywords. It preserves the order of the paragraphs containing those keywords and removes duplicated paragraphs. The frequencies of the top three keywords in the extracted paragraphs are then re-verified, and the paragraphs are partitioned into 10 sections. Subsequently, the importance of each section is calculated and compared. By notifying the user of the sections with the highest frequency and those with above-average importance, the system lets the user read only the main content instead of the entire document. In addition, a deep learning model predicts the number of extracted paragraphs and the number of paragraphs in high-importance sections.
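The frequency-and-section step described above (pick top keywords, partition paragraphs into sections, flag sections of above-average keyword frequency) might be sketched as follows; the deep learning prediction step is omitted, and simple whitespace tokenization stands in for the system's keyword extraction.

```python
from collections import Counter

def top_keywords(paragraphs, k=3):
    """Top-k most frequent words, as a stand-in for the keyword step."""
    words = " ".join(paragraphs).lower().split()
    return [w for w, _ in Counter(words).most_common(k)]

def section_importance(paragraphs, keywords, sections=10):
    """Split paragraphs into roughly equal sections and score each by
    keyword frequency; returns per-section scores and the indices of
    sections whose score is above the average."""
    size = max(1, -(-len(paragraphs) // sections))  # ceiling division
    scores = []
    for i in range(0, len(paragraphs), size):
        chunk = " ".join(paragraphs[i:i + size]).lower()
        scores.append(sum(chunk.count(k) for k in keywords))
    avg = sum(scores) / len(scores)
    return scores, [i for i, s in enumerate(scores) if s > avg]
```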