• Title/Summary/Keyword: Text Extraction

Search Results: 459

Design and Implementation of Web Crawler with Real-Time Keyword Extraction based on the RAKE Algorithm

  • Zhang, Fei;Jang, Sunggyun;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.11a
    • /
    • pp.395-398
    • /
    • 2017
  • We propose a web crawler system with a keyword extraction function. Existing keyword-extraction research in text mining mostly works on documents or corpora that have already been collected into a database; the purpose of this paper is to build a real-time keyword extraction system that extracts the keywords of the corresponding text and stores them in the database while the text of a web page is being crawled. We design and implement a crawler that incorporates the RAKE keyword extraction algorithm, so keywords are extracted from the content as each web page is fetched. In addition, the performance of the RAKE algorithm is improved by increasing the weight of important features (such as nouns appearing in the title). The experimental results show that this method is superior to the existing method and extracts keywords satisfactorily.
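The entry above describes scoring candidate phrases with RAKE and boosting words that appear in the page title. A minimal sketch of that scoring step (not the authors' implementation) is shown below; the stop-word list, the regex tokenizer, and the `title_boost` factor are illustrative assumptions.

```python
import re
from collections import defaultdict

# Minimal stop-word list; a real system would use a full list (assumption).
STOP_WORDS = {"a", "an", "the", "and", "or", "of", "in", "on", "to", "is",
              "are", "for", "with", "this", "that", "we", "by"}

def rake_keywords(text, title="", title_boost=1.5, top_n=10):
    """Score candidate phrases with the basic RAKE degree/frequency ratio.

    Words that also appear in the title get their score multiplied by
    `title_boost`, loosely mimicking the paper's idea of weighting title
    features (the exact weighting scheme here is an assumption).
    """
    # Split text into candidate phrases at stop words.
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:
                phrases.append(current)
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    # Word frequency and co-occurrence degree within phrases.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase) - 1

    title_words = set(re.findall(r"[a-zA-Z]+", title.lower()))
    word_score = {}
    for w in freq:
        score = (degree[w] + freq[w]) / freq[w]
        if w in title_words:
            score *= title_boost          # boost title words (assumed factor)
        word_score[w] = score

    phrase_scores = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(phrase_scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(rake_keywords("Real-time keyword extraction is combined with a web crawler.",
                    title="Web Crawler with Real-Time Keyword Extraction"))
```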

Skew Compensation and Text Extraction of The Traffic Sign in Natural Scenes (자연영상에서 교통 표지판의 기울기 보정 및 텍스트 추출)

  • Choi Gyu-Dam;Kim Sung-Dong;Choi Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.3 no.2 s.5
    • /
    • pp.19-28
    • /
    • 2004
  • This paper shows how to compensate for the skew of a traffic sign contained in a natural image and how to extract its text. The processing is performed on the gray-scale image and comprises four steps. In the first part we perform preprocessing and Canny edge extraction on the natural image. In the second part we perform preprocessing and postprocessing for the Hough Transform in order to estimate the skew angle. In the third part we remove noise and complex lines, and then extract the candidate region using features of the text. In the last part, after performing local binarization on the extracted candidate region, we extract the text by using differences between the features of text and non-text to discard the unnecessary non-text. In an experiment with 100 natural images containing traffic signs, the method achieves an 82.54 percent text extraction rate with 79.69 percent extraction accuracy, which is more accurate than existing methods such as RLS (Run Length Smoothing) or the Fourier Transform. It also extracts the skew angle correctly in 94.5 percent of cases, a 26 percent improvement over using the Hough Transform alone. The research can be applied to providing location information in a walking-aid system for the blind or in the operation of a driverless vehicle. (A minimal sketch of the Hough-based skew estimation appears after this entry.)

  • PDF
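As a rough illustration of the skew-estimation step described in the entry above (not the paper's implementation), Canny edges followed by a standard Hough transform can vote for a dominant line angle. The OpenCV thresholds and the near-horizontal filter below are assumptions.

```python
import cv2
import numpy as np

def estimate_skew_angle(image_path):
    """Estimate the dominant skew angle (degrees) of near-horizontal lines
    using Canny edges followed by the standard Hough transform."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)                     # threshold values are placeholders
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)   # rho=1px, theta=1 deg, 100 votes
    if lines is None:
        return 0.0
    angles = []
    for rho, theta in lines[:, 0]:
        angle = np.degrees(theta) - 90.0                 # 0 deg == horizontal line
        if abs(angle) < 45:                              # keep near-horizontal candidates
            angles.append(angle)
    return float(np.median(angles)) if angles else 0.0

def deskew(image_path):
    """Rotate the image to compensate the detected skew
    (uses OpenCV's counter-clockwise-positive angle convention)."""
    img = cv2.imread(image_path)
    angle = estimate_skew_angle(image_path)
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, rot, (w, h))
```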

A Stroke-Based Text Extraction Algorithm for Digital Videos (디지털 비디오를 위한 획기반 자막 추출 알고리즘)

  • Jeong, Jong-Myeon;Cha, Ji-Hun;Kim, Kyu-Heon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.3
    • /
    • pp.297-303
    • /
    • 2007
  • In this paper, a stroke-based text extraction algorithm for digital video is proposed. The proposed algorithm consists of four stages: text detection, text localization, text segmentation, and geometric verification. The text detection stage ascertains whether a given frame in a video sequence contains text. This is accomplished by morphological operations on the pixels with a high likelihood of belonging to text strokes, which are called seed points. For the text localization stage, morphological operations on the edges that include seed points are adopted, followed by horizontal and vertical projections. The text segmentation stage classifies projected areas into text and background regions according to their intensity distribution. Finally, in the geometric verification stage, the segmented areas are verified by using prior knowledge of video text characteristics.
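The four-stage pipeline above can be hinted at with a toy localization step: dilate strong edge ("seed") pixels so strokes merge into blobs, then use a horizontal projection to find candidate text bands. The structuring-element size and the profile threshold are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def localize_text_bands(frame_bgr):
    """Toy edge-based text localization: close strong edges into blobs,
    then project the blob mask row-wise to find candidate text bands."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # seed stroke pixels (assumed thresholds)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    blobs = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # Horizontal projection: rows containing many stroke pixels form text bands.
    row_profile = blobs.sum(axis=1)
    threshold = 0.2 * row_profile.max() if row_profile.max() > 0 else 0
    bands, start = [], None
    for y, v in enumerate(row_profile):
        if v > threshold and start is None:
            start = y
        elif v <= threshold and start is not None:
            bands.append((start, y))
            start = None
    if start is not None:
        bands.append((start, len(row_profile)))
    return bands  # list of (top_row, bottom_row) candidate text bands
```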

Extraction of Text Regions from Spam-Mail Images Using Color Layers (색상레이어를 이용한 스팸메일 영상에서의 텍스트 영역 추출)

  • Kim Ji-Soo;Kim Soo-Hyung;Han Seung-Wan;Nam Taek-Yong;Son Hwa-Jeong;Oh Sung-Ryul
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.409-416
    • /
    • 2006
  • In this paper, we propose an algorithm for extracting text regions from spam-mail images using color layers. The CLTE (color-layer-based text extraction) algorithm divides the input image into eight planes as color layers, extracts connected components on the eight images, and then classifies them into text regions and non-text regions based on the component sizes. We also propose an algorithm for recovering damaged text strokes from the extracted text image. In the binary image, there are two types of damaged strokes: (1) middle strokes such as 'ㅣ' or 'ㅡ' are deleted, and (2) the first and/or last strokes such as 'ㅇ' or 'ㅁ' are filled with black pixels. An experiment with 200 spam-mail images shows that the proposed approach is more accurate than conventional methods by over 10%.
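A minimal sketch of the color-layer idea (not the CLTE implementation): quantizing each RGB channel to one bit yields eight color layers, and connected components whose area falls in a plausible character range are kept. The size bounds are assumptions, and the stroke-recovery step is omitted.

```python
import cv2
import numpy as np

def color_layer_text_components(img_bgr, min_area=30, max_area=5000):
    """Split the image into 8 binary color layers (1 bit per RGB channel),
    then collect connected components whose area is in a text-like range."""
    bits = (img_bgr >= 128).astype(np.uint8)                              # 1-bit quantization per channel
    layer_index = bits[:, :, 2] * 4 + bits[:, :, 1] * 2 + bits[:, :, 0]   # layer id 0..7 from R,G,B bits

    boxes = []
    for layer in range(8):
        mask = (layer_index == layer).astype(np.uint8) * 255
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        for i in range(1, n):                                             # label 0 is the background
            x, y, w, h, area = stats[i]
            if min_area <= area <= max_area:                              # assumed character-size range
                boxes.append((x, y, w, h, layer))
    return boxes
```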

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
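A rough sketch of the corner-density idea (not the authors' code): build a Harris corner map with OpenCV, mark blocks with a high corner density as text candidates, and group them by connected-component labeling. The block size and thresholds are illustrative.

```python
import cv2
import numpy as np

def text_candidates_from_corners(frame_bgr, block=16, density_thresh=0.02):
    """Corner map -> dense-corner blocks -> labeled candidate text regions.
    Block size and thresholds are assumptions, not the paper's values."""
    gray = np.float32(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = (harris > 0.01 * harris.max()).astype(np.uint8)   # binary corner map

    h, w = corners.shape
    mask = np.zeros((h, w), np.uint8)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cell = corners[y:y + block, x:x + block]
            if cell.mean() > density_thresh:                    # dense corners -> likely text
                mask[y:y + block, x:x + block] = 255

    # Group candidate blocks into regions via connected-component labeling.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(stats[i][:4]) for i in range(1, n)]           # (x, y, w, h) boxes
```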

Hybrid Approach of Texture and Connected Component Methods for Text Extraction in Complex Images (복잡한 영상 내의 문자영역 추출을 위한 텍스춰와 연결성분 방법의 결합)

  • 정기철
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.175-186
    • /
    • 2004
  • We present a hybrid approach combining a texture-based method and a connected-component (CC)-based method for text extraction in complex images. The two primary methods used in this area are merged sequentially so that each compensates for the other's weak points. An automatically constructed MLP-based texture classifier increases recall on complex images with a small amount of user intervention and without explicit feature extraction. CC-based filtering based on shape information using NMF enhances the precision rate without affecting overall performance. As a result, the combination of texture- and CC-based methods leads to text extraction that is not only robust but also efficient. We also improve the processing speed by adopting an appropriate region-marking method for each input image category.
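The hybrid pipeline above might be sketched as follows: an MLP patch classifier produces a text mask (stage 1), and connected components of that mask are filtered by simple shape rules (stage 2). The crude aspect-ratio filter below stands in for the paper's NMF-based shape filtering, and the training data is assumed to be given.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_patch_classifier(patches, labels):
    """Train an MLP on raw gray-level patches (labels: 1 = text, 0 = non-text).
    The paper builds the classifier with little user intervention; here the
    labeled patches are simply assumed to exist."""
    X = np.array([p.flatten() / 255.0 for p in patches])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(X, labels)
    return clf

def detect_text_regions(gray, clf, patch=16):
    """Stage 1: per-patch texture classification -> binary text mask.
    Stage 2: connected-component filtering of the mask by simple shape rules
    (a stand-in for the paper's NMF-based shape filter)."""
    h, w = gray.shape
    mask = np.zeros((h, w), np.uint8)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            cell = gray[y:y + patch, x:x + patch].flatten() / 255.0
            if clf.predict([cell])[0] == 1:
                mask[y:y + patch, x:x + patch] = 255

    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):
        x, y, bw, bh, area = stats[i]
        if bh > 0 and 0.1 < bw / bh < 20 and area > 4 * patch * patch:  # crude shape filter
            boxes.append((x, y, bw, bh))
    return boxes
```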

Slab Region Localization for Text Extraction using SIFT Features (문자열 검출을 위한 슬라브 영역 추정)

  • Choi, Jong-Hyun;Choi, Sung-Hoo;Yun, Jong-Pil;Koo, Keun-Hwi;Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.5
    • /
    • pp.1025-1034
    • /
    • 2009
  • In a steel-making production line, each steel slab is given a unique identification number. This identification number, the slab management number (SMN), gives information about the use of the slab. Identification of the SMN has been done by humans for several years, but this is expensive, inaccurate, and a heavy burden on the workers. Consequently, to improve efficiency, an automatic recognition system is desirable. Generally, a recognition system consists of text localization, text extraction, character segmentation, and character recognition, and for exact SMN identification every stage must succeed. In particular, text localization is an important and difficult stage: because of the many text-like patterns in a complex background and the fuzzy boundary between the slab and the background, directly extracting the text region is difficult. If the slab region including the SMN can be detected precisely, the text localization algorithm can be made simpler and the processing time of the overall recognition system will be reduced. This paper describes slab region localization using SIFT (Scale Invariant Feature Transform) features in the image. First, the SIFT algorithm is applied to the captured background and slab images, and the features of the two images are matched by a Nearest Neighbor (NN) algorithm. However, the correct matching rate can be low, so the geometric locations of the matched feature points are used to remove incorrect matches. Finally, a search-rectangle method is performed on the correctly matched features, and the top and side boundaries of the slab region are determined. Through this process, we can reduce the search region for extraction of the SMN from the slab image. In most cases, the search region for text extraction is heuristically fixed [1][2]; the proposed algorithm is more analytic than other algorithms because the search region is not fixed and the slab region is searched in the whole image. Experimental results show that the proposed algorithm has good performance.
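A rough sketch of the SIFT matching step: detect and match SIFT features between a reference background image and a slab image, keep reliable matches, and bound them with a rectangle. The Lowe ratio test below is a stand-in for the paper's geometric-location filtering, and the bounding rectangle is a simplification of its search-rectangle method.

```python
import cv2
import numpy as np

def match_slab_region(background_path, slab_path, ratio=0.75):
    """Match SIFT features between a background image and a slab image,
    then bound the reliably matched points in the slab image with a rectangle."""
    img1 = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(slab_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # ratio test (assumed filter)
    if not good:
        return None

    pts = np.array([kp2[m.trainIdx].pt for m in good])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)   # (x, y, w, h) search rectangle
```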

An Efficient Damage Information Extraction from Government Disaster Reports

  • Shin, Sungho;Hong, Seungkyun;Song, Sa-Kwang
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.55-63
    • /
    • 2017
  • One purpose of Information Technology (IT) is to support human responses to natural and social problems such as natural disasters and the spread of disease, and to improve the quality of human life. Recent climate change has happened worldwide, natural disasters threaten the quality of life, and human safety is no longer guaranteed. IT must be able to support tasks related to disaster response and, more importantly, should be used to predict and minimize future damage. In South Korea, damage data are compiled by each local government and then aggregated by the central government. These data are included in the disaster reports that the government discloses for each disaster case, but it is difficult to obtain the raw damage data even for research purposes. One way to obtain the data is to apply information extraction to the disaster reports. In the field of information extraction, most extraction targets are web documents, commercial reports, SNS text, and so on; there is little research on information extraction from government disaster reports. They are mostly text, but the structure of their sentences is very different from that of news articles and commercial reports, so the features of government disaster reports must be carefully considered. In this paper, an information extraction method for South Korean government reports in word-processor format is presented. The method is based on patterns and dictionaries and adds some ideas for tokenizing the damage expressions in the text. The experimental result is an F1 score of 80.2 on the test set, which is close to state-of-the-art information extraction performance prior to the recent deep-learning algorithms.
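A toy illustration of pattern- and dictionary-based extraction (far simpler than the paper's method, and in English rather than Korean): a small dictionary maps report keywords to damage categories, and a regex pairs figures with those keywords. The dictionary entries and pattern below are invented for illustration only.

```python
import re

# Tiny illustrative dictionary mapping report keywords to damage categories;
# the paper's dictionaries cover Korean disaster-report vocabulary (assumption).
DAMAGE_DICT = {"flooded": "housing", "injured": "human", "collapsed": "infrastructure"}

# A figure followed by an object noun and a damage keyword, e.g. "1,250 houses flooded".
PATTERN = re.compile(r"(?P<value>[\d,]+)\s+(?P<noun>\w+)\s+(?P<kw>\w+)")

def extract_damage(sentence):
    """Toy pattern/dictionary extraction of damage figures from one sentence."""
    out = []
    for m in PATTERN.finditer(sentence.lower()):
        kw = m.group("kw")
        if kw in DAMAGE_DICT:
            out.append({"category": DAMAGE_DICT[kw],
                        "value": int(m.group("value").replace(",", "")),
                        "object": m.group("noun")})
    return out

print(extract_damage("Heavy rain left 1,250 houses flooded and 37 people injured."))
```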

Text Extraction In WWW Images (웹 영상에 포함된 문자 영역의 추출)

  • 김상현;심재창;김중수
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.15-18
    • /
    • 2000
  • In this paper, we propose a method for text extraction in Web images. Our approach is based on contrast detection and pixel-component-ratio analysis at the mouse position. Combined with OCR, the extracted text can be used for real-time dictionary lookup or language-translation applications in a Web browser. (A small sketch of the contrast/pixel-ratio test follows this entry.)

  • PDF
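A very small sketch of the idea in the entry above: examine contrast and the dark-pixel ("ink") ratio in a window around the mouse position to decide whether it likely contains text. The window size and thresholds are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def looks_like_text(img_bgr, mouse_xy, win=40, contrast_thresh=60, ink_lo=0.05, ink_hi=0.6):
    """Return True if the window around the mouse position likely contains text,
    based on local contrast and the ratio of dark pixels (assumed thresholds)."""
    x, y = mouse_xy
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    x0, y0 = max(0, x - win), max(0, y - win)
    x1, y1 = min(w, x + win), min(h, y + win)
    patch = gray[y0:y1, x0:x1]

    contrast = int(patch.max()) - int(patch.min())              # simple contrast measure
    _, binary = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ink_ratio = np.count_nonzero(binary == 0) / binary.size     # dark-pixel component ratio

    return contrast > contrast_thresh and ink_lo < ink_ratio < ink_hi
```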

Biomedical Ontologies and Text Mining for Biomedicine and Healthcare: A Survey

  • Yoo, Ill-Hoi;Song, Min
    • Journal of Computing Science and Engineering
    • /
    • v.2 no.2
    • /
    • pp.109-136
    • /
    • 2008
  • In this survey paper, we discuss biomedical ontologies and the major text mining techniques applied to biomedicine and healthcare. Biomedical ontologies such as UMLS are currently being adopted in text mining approaches because they provide domain knowledge and resolve many linguistic problems that arise when text mining approaches handle biomedical literature. As the first example of text mining, document clustering is surveyed. Because a document set normally covers multiple topics, text mining approaches use document clustering as a preprocessing step to group similar documents. Additionally, document clustering can inform the biomedical literature searches required for the practice of evidence-based medicine. We introduce Swanson's Undiscovered Public Knowledge (UDPK) model, which generates biomedical hypotheses from biomedical literature such as MEDLINE by discovering novel connections among logically related biomedical concepts. Another important area of text mining is document classification, a valuable tool for biomedical tasks that involve large amounts of text; we survey well-known classification techniques in biomedicine. As the last example of text mining in biomedicine and healthcare, we survey information extraction, the process of scanning text for information relevant to some interest, including extracting entities, relations, and events. We also address techniques and issues in evaluating text mining applications in biomedicine and healthcare.
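As a concrete example of the document clustering the survey describes as a preprocessing step, a TF-IDF plus k-means grouping of abstracts might look like the following sketch (an illustration, not a method from the surveyed papers; the sample documents are invented).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_abstracts(abstracts, n_clusters=2):
    """Group abstracts by topic with TF-IDF features and k-means,
    the kind of clustering used to pre-group similar documents."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(abstracts)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(X)

docs = ["gene expression in tumor cells",
        "patient heart rate monitoring",
        "protein interaction networks in cancer",
        "cardiac arrhythmia detection from ECG"]
print(cluster_abstracts(docs, n_clusters=2))   # cluster labels per abstract
```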