• Title/Summary/Keyword: 텍스트 영역 (Text Region)

Techniques for Location Mapping and Querying of Geo-Texts in Web Documents (웹 문서상의 공간 텍스트 위치 맵핑과 질의 기법)

  • Ha, Tae Seok;Nam, Kwang Woo
    • Journal of Korea Society of Industrial Information Systems / v.27 no.3 / pp.1-10 / 2022
  • With the development of web technology, large volumes of web documents are being produced. These documents contain various spatial texts, and converting such texts into spatial information provides the basis for retrieving text documents with spatial queries. Spatial texts take a wide range of forms, including postal codes and local phone numbers as well as administrative place names and POI names. This paper presents algorithms that map the spatial text found in web documents to locations. With these algorithms, web documents describing a region can be searched on a map rather than through a general web search. We demonstrate that the presented algorithms are useful by implementing a web geo-text query system.
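
Below is a minimal, hypothetical sketch of the kind of geo-text mapping the abstract describes: regular expressions pick out postal codes and phone numbers, and a small gazetteer dictionary resolves place names to coordinates. The patterns and the GAZETTEER contents are illustrative assumptions, not the paper's actual rules or data.

```python
# Hypothetical sketch: map spatial text in a web document to coordinates.
# Regex patterns and gazetteer entries are illustrative assumptions.
import re

GAZETTEER = {  # place name / POI -> (lat, lon); toy entries
    "Seoul": (37.5665, 126.9780),
    "Busan": (35.1796, 129.0756),
}
POSTAL_CODE = re.compile(r"\b\d{5}\b")               # Korean 5-digit postal codes
PHONE = re.compile(r"\b0\d{1,2}-\d{3,4}-\d{4}\b")    # local phone numbers

def extract_geo_texts(document: str):
    """Collect candidate spatial texts and resolve known place names."""
    hits = []
    for name, coord in GAZETTEER.items():
        if name in document:
            hits.append(("place", name, coord))
    hits += [("postal", m, None) for m in POSTAL_CODE.findall(document)]
    hits += [("phone", m, None) for m in PHONE.findall(document)]
    return hits

print(extract_geo_texts("Visit our Seoul office (postal code 04524, tel 02-123-4567)."))
```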

Text Region Verification in Natural Scene Images using Multi-resolution Wavelet Transform and Support Vector Machine (다해상도 웨이블릿 변환과 써포트 벡터 머신을 이용한 자연영상에서의 문자 영역 검증)

  • Bae Kyungsook;Choi Youngwoo
    • The KIPS Transactions:PartB / v.11B no.6 / pp.667-674 / 2004
  • Extraction of text from images is a fundamental and important problem in understanding images. This paper proposes a text region verification method based on statistical stroke features of characters. The method extracts 36-dimensional features from $16\times16$ sized text and non-text images using a wavelet transform - these features express the strokes and directions of characters - and selects the 12 sub-features that yield adequate separation between the classes. An SVM is then trained on the selected features. For verification, each $16\times16$ image block is scanned and classified as text or non-text, and the region is finally decided to be a text or non-text region. The proposed method is able to verify text regions that can otherwise hardly be distinguished.
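
A minimal sketch of the described pipeline, assuming synthetic 16x16 patches: sub-band statistics from a 2-level wavelet decomposition serve as features, a crude variance-based selection stands in for the paper's 12-feature choice, and an SVM is trained on the result. The exact 36-dimensional feature layout is not reproduced here.

```python
# Sketch: wavelet sub-band statistics + SVM on synthetic patches.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(patch: np.ndarray) -> np.ndarray:
    """Mean/std/energy of each sub-band from a 2-level wavelet decomposition."""
    coeffs = pywt.wavedec2(patch, "haar", level=2)
    bands = [coeffs[0]] + [b for level in coeffs[1:] for b in level]
    feats = []
    for b in bands:
        feats += [b.mean(), b.std(), np.mean(b ** 2)]
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([wavelet_features(rng.random((16, 16))) for _ in range(200)])
y = rng.integers(0, 2, size=200)          # 1 = text patch, 0 = non-text (toy labels)
keep = np.argsort(X.var(axis=0))[-12:]    # crude stand-in for the feature selection
clf = SVC(kernel="rbf").fit(X[:, keep], y)
print(clf.predict(X[:5, keep]))           # classify the first few patches
```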

Word Extraction from Table Regions in Document Images (문서 영상 내 테이블 영역에서의 단어 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions:PartB / v.12B no.4 s.100 / pp.369-378 / 2005
  • A document image is segmented and classified into text, picture, and table regions by document layout analysis, and the words in table regions are significant for keyword spotting because they are more meaningful than words in other regions. This paper proposes a method to extract words from table regions in document images. Since word extraction from a table region in practice amounts to extracting words from the cell regions composing the table, the cells must be extracted correctly. In the cell extraction module, the table frame is extracted first by analyzing connected components, and the intersection points are then extracted from the table frame. False intersections are corrected using the correlation between neighboring intersections, and the cells are extracted from the intersection information. Text regions in the individual cells are located using the connected component information obtained during cell extraction, and they are segmented into text lines using projection profiles. Finally, the segmented lines are divided into words using gap clustering and special symbol detection. An experiment performed on table images extracted from Korean documents shows $99.16\%$ word extraction accuracy.
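
A minimal sketch of the last two steps (projection-profile line segmentation and gap-based word splitting) on a binary cell image; the gap thresholds are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: segment a binary cell into lines, then split lines into words by gaps.
import numpy as np

def split_runs(profile: np.ndarray, min_gap: int):
    """Return (start, end) index pairs of runs separated by >= min_gap zeros."""
    runs, start, gap = [], None, 0
    for i, v in enumerate(profile):
        if v > 0:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                runs.append((start, i - gap + 1))
                start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

def words_in_cell(binary: np.ndarray):
    """binary: 2-D array, 1 = ink. Yields (line_range, word_ranges)."""
    for top, bottom in split_runs(binary.sum(axis=1), min_gap=2):     # text lines
        line = binary[top:bottom]
        yield (top, bottom), split_runs(line.sum(axis=0), min_gap=4)  # words

cell = np.zeros((20, 40), dtype=int)
cell[3:10, 2:12] = 1   # first "word"
cell[3:10, 20:34] = 1  # second "word"
print(list(words_in_cell(cell)))
```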

A New Method for Nonparametric Document Layout Analysis (매개변수에 무관한 새로운 문서 구조 분석 방법)

  • 류대석;강선미;이성환
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.482-484 / 1999
  • This paper proposes a new method that segments an input document image into maximal homogeneous regions without any parameters and then automatically classifies each homogeneous region as text, picture, table, or line. To apply multi-level analysis with a top-down approach, the document image is organized into a pyramid structure, and whether a region should be split further is decided from the periodicity of that region. By using this periodicity information, text regions can be analyzed accurately without any parameters, regardless of font size and line spacing, and the pyramid structure is built faster than in texture-analysis approaches. Experiments on the University of Washington document image database confirm that the proposed method segments and classifies document images more accurately than existing methods.
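
A minimal sketch of the two ingredients the abstract names, assuming a binary page image: a 2x-downsampling pyramid and a periodicity test on the horizontal projection profile used to decide whether a region looks like regularly spaced text lines. The autocorrelation test is an illustrative stand-in for the paper's actual criterion.

```python
# Sketch: image pyramid plus a periodicity check on the row projection profile.
import numpy as np

def pyramid(img: np.ndarray, levels: int = 3):
    """Successively halve resolution by 2x2 max-pooling."""
    out = [img]
    for _ in range(levels):
        h, w = out[-1].shape
        out.append(out[-1][: h // 2 * 2, : w // 2 * 2]
                   .reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3)))
    return out

def is_periodic(region: np.ndarray, threshold: float = 0.5) -> bool:
    """Strong autocorrelation peak in the row profile suggests regular text lines."""
    profile = region.sum(axis=1).astype(float)
    profile -= profile.mean()
    if not profile.any():
        return False
    ac = np.correlate(profile, profile, mode="full")[len(profile):]
    return ac.max() / (profile @ profile) > threshold

page = np.zeros((64, 64), dtype=int)
page[::8] = 1                      # toy "text lines" every 8 rows
print([lvl.shape for lvl in pyramid(page)], is_periodic(page))
```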

Rotation-robust text localization technique using deep learning (딥러닝 기반의 회전에 강인한 텍스트 검출 기법)

  • Choi, In-Kyu;Kim, Jewoo;Song, Hyok;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.80-81 / 2019
  • This paper proposes a technique for detecting arbitrarily oriented text in natural scene images. The basic framework for text detection is based on Faster R-CNN [1]. First, bounding boxes containing text of different orientations are generated by an RPN (Region Proposal Network). Next, for each bounding box produced by the RPN, feature maps pooled at three different sizes are extracted and merged. From the merged feature map, the text/non-text score, the axis-aligned bounding box coordinates, and the inclined bounding box coordinates are all predicted. Finally, the detection results are obtained with NMS (Non-Maximum Suppression). Training and testing were performed on the COCO Text 2017 dataset [2], and subjective evaluation confirmed that regions rotated appropriately for inclined text can be obtained.
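
A minimal sketch of the final NMS step on axis-aligned boxes; handling the inclined boxes from the paper would require rotated-box IoU, which is omitted here.

```python
# Sketch: greedy non-maximum suppression over axis-aligned boxes.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the top-scoring box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2]
```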

A Transition of Informetrics and Its Application : With Relation to Information Service (계량정보학의 변천과 응용에 관한 고찰 -정보서비스를 중심으로-)

  • 장우권
    • Proceedings of the Korean Society for Information Management Conference / 1996.08a / pp.101-104 / 1996
  • A discipline develops by adapting to the circumstances of its time on the basis of diverse theoretical backgrounds; fields share their domains with one another, create new theories, and apply them in practice. Informetrics, variously referred to as bibliometrics, scientometrics, or librametrics, is the study that applies quantitative methods to the analysis of documents, and the areas in which it is actively studied and applied are electronic information services such as text retrieval systems, OPACs, videotex systems, hypertext systems, CD-ROM, online information services, electronic publishing, electronic mail, and cable TV. This study reviews the historical transition and research domains of informetrics together with its applications in practice.

Multi-modal Image Processing for Improving Recognition Accuracy of Text Data in Images (이미지 내의 텍스트 데이터 인식 정확도 향상을 위한 멀티 모달 이미지 처리 프로세스)

  • Park, Jungeun;Joo, Gyeongdon;Kim, Chulyun
    • Database Research / v.34 no.3 / pp.148-158 / 2018
  • Optical character recognition (OCR) is a technique to extract and recognize text from images. It is an important preprocessing step in data analysis since much actual text information is embedded in images. Many OCR engines have high recognition accuracy for images where text is clearly separable from the background, such as black lettering on a white background. However, they have low recognition accuracy for images where text is not easily separable from a complex background. To improve this low accuracy on complex images, the input image must be transformed to make the text more noticeable. In this paper, we propose a method that segments an input image into text lines so OCR engines can recognize each line more efficiently, and that determines the final output by comparing the recognition rates of a CLAHE module and a Two-step module, which distinguish text from background regions using image processing techniques. Through thorough experiments comparing against the well-known OCR engines Tesseract and Abbyy, we show that our proposed method has the best recognition accuracy on images with complex backgrounds.
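
A minimal sketch of the CLAHE preprocessing idea mentioned in the abstract, using OpenCV; the parameters and the synthetic test image are illustrative assumptions, not the paper's configuration.

```python
# Sketch: CLAHE contrast enhancement followed by Otsu binarization before OCR.
import cv2
import numpy as np

def enhance_for_ocr(gray: np.ndarray) -> np.ndarray:
    """Boost local contrast with CLAHE, then binarize with Otsu's threshold."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Synthetic low-contrast "text on busy background" image for demonstration.
gray = np.random.default_rng(0).integers(100, 140, (64, 256)).astype(np.uint8)
cv2.putText(gray, "SAMPLE", (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, 170, 2)
print(enhance_for_ocr(gray).shape)
# The binarized output would then be passed to an OCR engine such as Tesseract.
```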

Seal Detection in Scanned Documents (스캔된 문서에서의 도장 검출)

  • Yu, Kyeonah;Kim, Kyung-Hye
    • Journal of the Korea Society of Computer and Information / v.18 no.12 / pp.65-73 / 2013
  • With the advent of the digital age, documents are often scanned to be archived or transmitted over the network. Text makes up the largest proportion of a document's content, followed by seal imprints indicating the author. While much research has been conducted on recognizing text in scanned documents and commercial text recognition products have been developed, reflecting the importance of scanned documents, information about seal images is discarded. In this paper, we study how to extract the seal image area from color or black-and-white documents containing a seal image and how to save the seal image. We propose a preprocessing step that removes all components except the candidate outlines of the seal imprint from scanned documents, and a method to select the final region of interest from these candidates using features of seal images. Also, when a seal imprint overlaps with text, the most similar image among those stored in the database is selected through template matching. We verify the implemented system on various types of documents produced in schools and analyze the results.
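
A minimal sketch of the template-matching step used to pick the most similar stored seal image; the synthetic arrays stand in for the scanned document region and the seal database.

```python
# Sketch: find the stored seal template that best matches an extracted region.
import cv2
import numpy as np

def best_matching_seal(region: np.ndarray, seal_db: dict) -> str:
    """Return the database key whose template correlates best with the region."""
    best_key, best_score = None, -1.0
    for key, template in seal_db.items():
        result = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_key, best_score = key, score
    return best_key

rng = np.random.default_rng(1)
seal_a = rng.integers(0, 255, (32, 32), dtype=np.uint8)
seal_b = rng.integers(0, 255, (32, 32), dtype=np.uint8)
document_region = cv2.copyMakeBorder(seal_a, 8, 8, 8, 8, cv2.BORDER_CONSTANT, value=255)
print(best_matching_seal(document_region, {"seal_a": seal_a, "seal_b": seal_b}))
```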

A Study on Extracting the Document Text for Unallocated Areas of Data Fragments (비할당 영역 데이터 파편의 문서 텍스트 추출 방안에 관한 연구)

  • Yoo, Byeong-Yeong;Park, Jung-Heum;Bang, Je-Wan;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.6 / pp.43-51 / 2010
  • It is meaningful to investigate data in unallocated space because deleted data can be examined there. Complete recovery of contiguously stored files in unallocated areas is possible using file carving, but noncontiguous or incomplete data cannot be recovered this way. Typically, analysis of the data fragments is needed because they may contain large amounts of information. The text of Microsoft Word, Excel, PowerPoint, and PDF document files is stored using compression or a specific document format. If part of such a document file is stored in an unallocated data fragment, text extraction is possible by exploiting the document format. In this paper, we suggest a method for extracting document text from unallocated data fragments.
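
A minimal sketch of one way such extraction can work, assuming deflate-compressed content (as in DOCX parts and PDF FlateDecode streams): try zlib decompression at candidate offsets inside a raw fragment. This is an illustrative approach, not the paper's exact procedure.

```python
# Sketch: carve zlib-decompressible text out of a raw byte fragment.
import zlib

def carve_deflate_text(fragment: bytes, min_len: int = 8):
    """Yield (offset, text) for decompressible regions inside a byte fragment."""
    for offset in range(len(fragment)):
        if fragment[offset] != 0x78:      # common first byte of a zlib header
            continue
        try:
            data = zlib.decompressobj().decompress(fragment[offset:])
        except zlib.error:
            continue
        text = data.decode("utf-8", errors="ignore").strip()
        if len(text) >= min_len:
            yield offset, text

# Toy fragment: random bytes surrounding a zlib-compressed text payload.
payload = zlib.compress("recovered document text from a lost fragment".encode())
fragment = b"\x00\x13\x37" + payload + b"\xff\xfe"
print(list(carve_deflate_text(fragment)))
```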

A Still Image Compression System with a High Quality Text Compression Capability (고 품질 텍스트 압축 기능을 지원하는 정지영상 압축 시스템)

  • Lee, Je-Myung;Lee, Ho-Suk
    • Journal of KIISE:Software and Applications / v.34 no.3 / pp.275-302 / 2007
  • We propose a novel still image compression system that supports a high quality text compression function. The system segments the text from the image and compresses the text with high quality. The system achieves a high compression ratio of 48:1 using context-based adaptive binary arithmetic coding, which performs the compression on code blocks within the bit planes. The system operates in two input modes: a segmentation mode and an ROI (Region Of Interest) mode. In segmentation mode, the input image is segmented into a foreground consisting of text and a background consisting of the remaining region. In ROI mode, the input image is represented by the region-of-interest window. The high quality text compression function with a high compression ratio shows that the proposed system is comparable with JPEG2000 products. The system also uses Gray coding to improve the compression ratio.
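
A minimal sketch of the Gray-coding step mentioned at the end of the abstract: Gray coding makes adjacent intensity values differ in a single bit, which tends to yield cleaner bit planes for the subsequent bit-plane arithmetic coder.

```python
# Sketch: Gray-code 8-bit pixels, then split them into bit planes.
import numpy as np

def to_gray_code(pixels: np.ndarray) -> np.ndarray:
    """Convert 8-bit pixel values to their Gray-code representation (g = b ^ (b >> 1))."""
    return pixels ^ (pixels >> 1)

def bit_planes(pixels: np.ndarray):
    """Return the 8 bit planes (MSB first) of an 8-bit image."""
    return [(pixels >> b) & 1 for b in range(7, -1, -1)]

image = np.arange(256, dtype=np.uint8).reshape(16, 16)   # toy gradient image
gray = to_gray_code(image)
planes = bit_planes(gray)
print(gray[0, :4], [p.sum() for p in planes])             # per-plane counts of 1 bits
```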