• Title/Abstract/Keyword: Text Detector

Search results: 29 (processing time: 0.019 sec)

Improving Elasticsearch for Chinese, Japanese, and Korean Text Search through Language Detector

  • Kim, Ki-Ju;Cho, Young-Bok
    • Journal of information and communication convergence engineering
    • /
    • Vol. 18, No. 1
    • /
    • pp.33-38
    • /
    • 2020
  • Elasticsearch is an open source search and analytics engine that can search petabytes of data in near real time. It is designed as a horizontally scalable, highly available distributed system. It provides RESTful APIs, making it programming-language agnostic. Full-text search of multilingual text requires language-specific analyzers and field mappings appropriate for indexing and searching such text. Additionally, a language detector can be used in conjunction with the analyzers to improve multilingual text search. Elasticsearch provides more than 40 language-analysis plugins that process text and extract language-specific tokens, as well as language-detector plugins that determine the language of a given text. This study investigates three different approaches to indexing and searching Chinese, Japanese, and Korean (CJK) text (single analyzer, multi-fields, and language detector-based), and identifies the advantages of the language detector-based approach compared to the other two.
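The language detector-based approach described above can be sketched as a simple routing step: the detected language selects a language-specific field whose mapping applies the matching analyzer. This is a minimal illustration, not the paper's implementation; the analyzer names correspond to real Elasticsearch CJK analysis plugins (nori, kuromoji, smartcn), but the field-naming scheme is an assumption.

```python
# Minimal sketch of language detector-based routing for CJK text.
# Assumes a prior detection step (e.g., a langdetect ingest plugin)
# has produced a two-letter language code for the document.

ANALYZER_BY_LANG = {
    "ko": "nori",      # Korean analysis plugin
    "ja": "kuromoji",  # Japanese analysis plugin
    "zh": "smartcn",   # Chinese analysis plugin
}

def route_document(text, detected_lang):
    """Build an index request body that stores the text in a
    language-specific field so the matching analyzer is applied."""
    if detected_lang not in ANALYZER_BY_LANG:
        # Fall back to a generic field (standard analyzer).
        return {"content": text}
    return {f"content_{detected_lang}": text, "detected_lang": detected_lang}
```

With this routing, a Korean document is indexed into `content_ko` (analyzed by nori) while an unsupported language falls back to the generic `content` field.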

Text Region Extraction from Videos using the Harris Corner Detector

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications
    • /
    • Vol. 34, No. 7
    • /
    • pp.646-654
    • /
    • 2007
  • Recently, an increasing number of TV programs insert captions for viewers' visual convenience and comprehension. In this paper, a caption is defined as an artificially added text region located in the lower part of a video frame. Caption region extraction is widely used as the first step of text extraction in applications such as video information retrieval and video indexing. Previous approaches to caption region extraction relied on caption color, contrast between the caption and the background, edges, or character filters. However, the low resolution of captions in video and complex backgrounds make caption extraction difficult. This paper therefore proposes an efficient method for extracting video caption regions using a corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of candidate caption regions using corner density, final caption region determination using labeling, and noise removal and region filling. Because the proposed algorithm does not use color information, it can be applied to caption regions rendered in various colors, and because it uses the corners of characters rather than character shapes, it can be used regardless of language. In addition, the efficiency of caption region extraction is improved by updating caption regions between frames. Experiments on various videos demonstrate that the proposed algorithm is an efficient method for extracting video caption regions.

Single Shot Detector for Detecting Clickable Object in Mobile Device Screen

  • Jo, Min-Seok;Jeon, Hye-Won;Han, Seong-Soo;Jeong, Chang-Sung
    • KIPS Transactions on Software and Data Engineering
    • /
    • Vol. 11, No. 1
    • /
    • pp.29-34
    • /
    • 2022
  • We construct a dataset for recognizing clickable objects on mobile device screens and propose a new network architecture. Data were collected from multiple applications on devices with various resolutions, centered on the clickable objects on screen. A total of 24,937 annotations were categorized into seven classes: text, edit text, image, button, region, status bar, and navigation bar. The model architecture for training on this dataset uses the Deconvolution Single Shot Detector as a baseline; the backbone network is a Squeeze-and-Excitation network, formed by adding Squeeze-and-Excitation blocks to ResNet, and the Single Shot Detector layers and Deconvolution modules are stacked in the form of Feature Pyramid Networks and connected to the header. In addition, to minimize the loss of features arising from the conventional 1:1 input resolution, the aspect ratio was changed to 1:2, similar to a mobile device screen. Experiments on the constructed dataset showed that mean average precision improved by up to 101% over the baseline.
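The Squeeze-and-Excitation block added to the ResNet backbone can be sketched in a few lines of numpy: global average pooling ("squeeze"), a two-layer bottleneck with a sigmoid ("excitation"), and channel-wise rescaling. This is a generic sketch of the standard SE block under assumed weight shapes, not the paper's trained network.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation recalibration of a feature map x with
    shape (C, H, W). w1: (C, C//r), w2: (C//r, C) are assumed to be
    learned bottleneck weights with reduction ratio r."""
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid, yielding per-channel gates
    s = np.maximum(z @ w1 + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]
```

Because the gates lie in (0, 1), the block attenuates less informative channels while leaving the feature map's shape unchanged, which lets it drop into any residual stage.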

An Optimized e-Lecture Video Search and Indexing framework

  • Medida, Lakshmi Haritha;Ramani, Kasarapu
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 8
    • /
    • pp.87-96
    • /
    • 2021
  • The demand for e-learning through video lectures is rapidly increasing due to its diverse advantages over traditional learning methods. This has led to massive volumes of web-based lecture videos. Indexing and retrieval of a lecture video, or of a topic within one, has thus proved to be an exceptionally challenging problem. Many techniques in the literature are either visual- or audio-based, but not both. Since the visual and audio components are equally important for content-based indexing and retrieval, the current work focuses on both. A framework for automatic topic-based indexing and search based on the innate content of lecture videos is presented. The text on the slides is extracted using the proposed Merged Bounding Box (MBB) text detector. Text extraction from the audio component is done using Google Speech Recognition (GSR) technology. This hybrid approach generates the indexing keywords from the merged transcripts of both the video and audio extractors. Search within the indexed documents is optimized using Naïve Bayes (NB) classification and K-Means clustering models. This optimized search retrieves results by searching only the relevant document cluster among the predefined categories rather than the whole lecture video corpus. The work is carried out on a dataset generated by assigning categories to lecture video transcripts gathered from e-learning portals. Search performance is assessed in terms of accuracy and time taken. Further, the improved accuracy of the proposed indexing technique is compared with the accepted chain indexing technique.
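The cluster-restricted search idea above (rank only documents in the cluster nearest to the query, not the whole corpus) can be sketched as follows. The vectorization, K-Means centroids, and cluster assignments are assumed to come from an offline stage; this is an illustration of the retrieval step, not the paper's system.

```python
import numpy as np

def cluster_search(query, centroids, doc_vecs, doc_cluster, top_k=3):
    """Search only the cluster nearest to the query.
    query: (d,) vector; centroids: (k, d); doc_vecs: (n, d);
    doc_cluster: (n,) cluster id per document."""
    # Pick the nearest cluster centroid (Euclidean distance).
    cid = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
    # Rank only that cluster's documents by cosine similarity.
    idx = np.flatnonzero(doc_cluster == cid)
    sims = doc_vecs[idx] @ query / (
        np.linalg.norm(doc_vecs[idx], axis=1) * np.linalg.norm(query) + 1e-12)
    return idx[np.argsort(-sims)][:top_k]
```

The speedup comes from scoring only |cluster| documents instead of all n, at the cost of missing relevant documents assigned to other clusters.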

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 7
    • /
    • pp.220-228
    • /
    • 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene text images, for example, signboard images placed along highways. Scene text detection on highway signboards plays a vital role in road safety measures. At the initial stage, preprocessing techniques are applied to the image to sharpen and enhance its features. Likewise, morphological operators are applied to the images to close small gaps between objects. A two-phase algorithm is proposed for extracting and recognizing text from scene images. In Phase I, text is extracted from the scene image by applying various image preprocessing techniques such as blurring, erosion, and top-hat filtering, followed by thresholding and the morphological gradient with fixed kernel sizes; the Canny edge detector is then applied to detect the text contained in the scene image. In Phase II, the text is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work detects text in scene images from the popular dataset repositories SVT, ICDAR 2003, and MSRA-TD 500; these images were captured under various illumination conditions and angles. The proposed algorithm produces higher accuracy in minimal execution time compared with state-of-the-art methodologies.
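The morphological operations used in Phase I can be sketched for binary images with plain numpy. This is a minimal sketch assuming a square structuring element; the paper's kernel sizes and operator sequence are not reproduced here.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad)  # pad with 0s (background)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=1)  # pad with 1s (foreground)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def morph_gradient(img, k=3):
    """Morphological gradient (dilation - erosion): highlights object
    boundaries, such as character strokes, ahead of edge detection."""
    return dilate(img, k) - erode(img, k)
```

On a binary region the gradient is 1 exactly along the boundary band and 0 both deep inside the object and in the background, which is why it is a useful precursor to the Canny step.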

Stroke Width Based Skeletonization for Text Images

  • Nguyen, Minh Hieu;Kim, Soo-Hyung;Yang, Hyung Jeong;Lee, Guee Sang
    • Journal of Computing Science and Engineering
    • /
    • Vol. 8, No. 3
    • /
    • pp.149-156
    • /
    • 2014
  • Skeletonization is a morphological operation that transforms an original object into a subset, which is called a 'skeleton'. Skeletonization has been intensively studied for decades and is a challenging issue especially for special target objects. This paper proposes a novel approach to the skeletonization of text images based on stroke width detection. First, the preliminary skeleton is detected by using a Canny edge detector with a Tensor Voting framework. Second, the preliminary skeleton is smoothed, and junction points are connected by interpolation compensation. Experimental results show the validity of the proposed approach.
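To illustrate what a stroke-based skeleton is, a toy version can mark the midpoint of every horizontal run of stroke pixels. This is only a didactic sketch of the stroke-width idea; the paper's actual method uses a Canny edge detector with a Tensor Voting framework and interpolation-based junction compensation, which this does not reproduce.

```python
import numpy as np

def row_skeleton(binary):
    """Toy skeleton: mark the midpoint of each horizontal run of
    foreground pixels in a binary image (0/1 numpy array)."""
    out = np.zeros_like(binary)
    for y in range(binary.shape[0]):
        x = 0
        row = binary[y]
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                out[y, (start + x - 1) // 2] = 1  # midpoint of the run
            else:
                x += 1
    return out
```

For a vertical stroke of constant width this yields the one-pixel centerline; real text needs direction-aware stroke-width estimation, which is what motivates the edge-plus-Tensor-Voting approach in the paper.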

Interactive Typography System using Combined Corner and Contour Detection

  • Lim, Sooyeon;Kim, Sangwook
    • International Journal of Contents
    • /
    • Vol. 13, No. 1
    • /
    • pp.68-75
    • /
    • 2017
  • Interactive typography is a process in which a user communicates by interacting with text and its moving elements. This research covers interactive typography that responds to a user's gestures in real time. To build a language-independent system, the entered text data are preprocessed into image data. This preprocessing is followed by recognizing the image data and setting interaction points, using computer vision techniques such as the Harris corner detector and contour detection. User interaction is achieved using skeleton information tracked by a depth camera. By synchronizing the user's skeleton information acquired by Kinect (a depth camera) with the typography components (interaction points), all user gestures are linked to the typography in real time. In an experiment conducted in both English and Korean, users reported an 81% satisfaction level with an interactive typography system in which text components moved discretely in accordance with the users' gestures. The experiment also showed that perceived sensibility varied with the size and speed of the text and the interactive alteration. The results show that interactive typography can potentially be an accurate communication tool, rather than merely a uniform text transmission system.
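The synchronization step above (linking a tracked skeleton joint to a typography interaction point) can be sketched as a nearest-neighbor lookup with a distance gate. The coordinate space and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def attach_gesture(joint, interaction_points, max_dist=50.0):
    """Map a tracked skeleton joint (e.g., a hand position from the
    depth camera) to the nearest typography interaction point.
    Returns the point's index, or -1 if none is within max_dist."""
    d = np.linalg.norm(interaction_points - joint, axis=1)
    i = int(np.argmin(d))
    return i if d[i] <= max_dist else -1
```

Run per frame, this keeps each gesture bound to the closest text component while ignoring gestures far from any interaction point, so the typography only moves when the user actually reaches for it.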