• Title/Summary/Keyword: Character Detection

Text Region Extraction and OCR on Camera Based Images (카메라 영상 위에서의 문자 영역 추출 및 OCR)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD / v.17D no.1 / pp.59-66 / 2010
  • Traditional OCR engines are designed for scanned documents in a calibrated environment. Three-dimensional perspective distortion and smooth distortion in images are critical problems caused by un-calibrated devices, e.g. images from smartphones. To meet the growing demand for recognizing text embedded in photos acquired from non-calibrated hand-held devices, we address the problem in three categorical aspects: a rotation-invariant method of text region extraction, a scale-invariant method of text line segmentation, and three-dimensional perspective mapping. By integrating these methods, we developed an OCR for camera-captured images.
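
The paper's own extraction and segmentation methods are not reproduced here; as a rough illustration of the perspective-mapping step for camera-captured text, the following is a minimal sketch using OpenCV and Tesseract. The file name and corner coordinates are hypothetical, and this is not the authors' implementation.

```python
# Minimal sketch: rectify a perspectively distorted text region, then run OCR.
# Assumes OpenCV (cv2) and pytesseract are installed; inputs are hypothetical.
import cv2
import numpy as np
import pytesseract

def rectify_and_ocr(image_path, corners, out_size=(400, 120)):
    """corners: four (x, y) points of the text region, ordered TL, TR, BR, BL."""
    img = cv2.imread(image_path)
    src = np.float32(corners)
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # 3x3 homography mapping the skewed quadrilateral onto an upright rectangle
    H = cv2.getPerspectiveTransform(src, dst)
    flat = cv2.warpPerspective(img, H, (w, h))
    gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

# Example (hypothetical file and corner coordinates):
# text = rectify_and_ocr("photo.jpg", [(120, 80), (520, 60), (540, 170), (130, 200)])
```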

Detection of Intersection Points of Handwritten Hangul Strokes using Run-length (런 길이를 이용한 필기체 한글 자획의 교점 검출)

  • Jung, Min-Chul
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.5 / pp.887-894 / 2006
  • This paper proposes a new method that detects the intersection points of handwritten Hangul strokes using run lengths. The method first finds the stroke width of handwritten Hangul characters using both horizontal and vertical run lengths, then extracts the horizontal and vertical strokes of a character using the stroke width, and finally detects the intersection points of the strokes from the extracted horizontal and vertical strokes. The analysis of the horizontal and vertical strokes uses not the strokes' angles but the stroke width and the changes in the run lengths. The intersection points of the strokes become candidate parts for phoneme segmentation, which is one of the main techniques for off-line handwritten Hangul recognition. The segmented strokes represent features for handwritten Hangul recognition.
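
As a rough illustration of the run-length idea (not the authors' code), the sketch below computes horizontal and vertical run lengths in a binary character image and estimates the stroke width as the most frequent run length; runs much longer than this width would then be grouped into horizontal or vertical strokes, whose overlaps give intersection candidates.

```python
# Minimal sketch of run-length analysis on a binary (0/1) character image.
# The 'stroke width = modal run length' heuristic is an assumption for illustration.
import numpy as np
from collections import Counter

def run_lengths(line):
    """Lengths of consecutive foreground (1) runs in a 1-D array."""
    runs, count = [], 0
    for v in line:
        if v:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def estimate_stroke_width(binary):
    """Estimate stroke width as the most frequent run length over rows and columns."""
    runs = []
    for row in binary:        # horizontal run lengths
        runs += run_lengths(row)
    for col in binary.T:      # vertical run lengths
        runs += run_lengths(col)
    return Counter(runs).most_common(1)[0][0] if runs else 0

# Hypothetical usage:
# binary = (character_image > 0).astype(int)
# print(estimate_stroke_width(binary))
```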

Study on Characteristic difference of Semiconductor Radiation Detectors fabricated with a wet coating process

  • Choi, Chi-Won;Cho, Sung-Ho;Yun, Min-Suk;Kang, Sang-Sik;Park, Ji-Koon;Nam, Sang-Hee
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2006.06a / pp.192-193 / 2006
  • The wet coating process can easily produce large-area films from a printing paste of semiconductor and binder material at room temperature. A semiconductor film fabricated to a thickness of about 25mm was evaluated by field emission scanning electron microscopy (FE-SEM). X-ray performance data such as dark current, sensitivity and signal-to-noise ratio (SNR) were evaluated. The $HgI_2$ semiconductor showed a much lower dark current than the others, as well as the best sensitivity. In this paper, the reactivity and combination characteristics of the semiconductor and binder material that affect the electrical and X-ray detection properties are demonstrated through experimental results.

Appearance Information Extraction and Shading for Realistic Caricature Generation (실사형 캐리커처 생성을 위한 형태 정보 추출 및 음영 합성)

  • Park, Yeon-Chool;Oh, Hae-Seok
    • The KIPS Transactions:PartB / v.11B no.3 / pp.257-266 / 2004
  • This paper proposes a caricature generation system that uses a shading mechanism to extract the textural features of the face. Using this method, we can obtain a more realistic caricature. Since the system is vector-based, the generated character's face has no size limit or constraint, so the shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, owing to the small file size of the vector format, it can also be used in mobile environments. This paper presents methods that generate a vector-based face, create shading, and synthesize the shading with the vector face.

String analysis for detection of injection flaw in Web applications (웹 응용프로그램의 삽입취약점 탐지를 위한 문자열분석)

  • Choi, Tae-Hyoung;Kim, Jung-Joon;Doh, Kyung-Goo
    • Journal of the Korea Institute of Information Security & Cryptology / v.17 no.6 / pp.149-153 / 2007
  • One common type of web-application vulnerability is the injection flaw, where an attacker exploits faulty application code by supplying crafted strings instead of normal input. To be free from injection flaws, an application program should be written in such a way that every potentially dangerous input character is filtered out. This paper proposes a precise analysis that statically checks whether or not an input string variable may contain any of a given set of characters at a hotspot. The precision is achieved by taking the semantics of conditionals into account in the analysis.
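
A toy dynamic approximation of the check the paper performs statically (the function names and the forbidden character set are hypothetical; the paper's analysis additionally tracks the semantics of conditionals, which this sketch does not):

```python
# Toy character-set check for an injection hotspot (illustrative only; the paper's
# analysis is static, whereas this runs at request time).
FORBIDDEN = set("'\";-")          # hypothetical set of dangerous characters

def may_contain_forbidden(value: str) -> bool:
    """Return True if the string contains any character from the forbidden set."""
    return any(ch in FORBIDDEN for ch in value)

def sanitized(value: str) -> str:
    """Filter out every potentially dangerous character, as the paper recommends."""
    return "".join(ch for ch in value if ch not in FORBIDDEN)

user_input = "name'; DROP TABLE users; --"
if may_contain_forbidden(user_input):
    user_input = sanitized(user_input)   # or reject the request outright
# user_input can now reach the query-building hotspot
```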

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.220-228 / 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene text images, for example sign-board images placed along highways. Scene text detection on highway sign boards plays a vital role in road safety measures. At the initial stage, preprocessing techniques are applied to the image to sharpen and improve its features. Likewise, morphological operators are applied to the images to close the small gaps between objects. We propose a two-phase algorithm for extracting and recognizing text from scene images. In phase I, text is extracted from the scene image by applying various preprocessing techniques such as blurring, erosion and top-hat filtering, followed by thresholding and a morphological gradient with fixed kernel sizes; a Canny edge detector is then applied to detect the text contained in the scene image. In phase II, the text in the scene image is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work aims to detect the text contained in scene images from the popular dataset repositories SVT, ICDAR 2003 and MSRA-TD500; these images were captured under various illuminations and angles. The proposed algorithm produces higher accuracy in minimal execution time compared with state-of-the-art methodologies.
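
As a loose sketch of the two phases described above, assuming OpenCV and Tesseract are available; the kernel sizes, thresholds and file name are hypothetical, and the paper's exact ordering and parameters may differ.

```python
# Sketch of the two-phase idea: morphological preprocessing + Canny edges (phase I),
# then MSER regions and OCR (phase II). All parameters and the file name are assumptions.
import cv2
import pytesseract

img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Phase I: enhance text-like structures and find edges
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
tophat = cv2.morphologyEx(blurred, cv2.MORPH_TOPHAT, kernel)   # bright text on dark signs
grad = cv2.morphologyEx(tophat, cv2.MORPH_GRADIENT, kernel)
_, binary = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(binary, 50, 150)

# Phase II: MSER to localize stable (text-like) regions, then OCR each region
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts)
    if w > 10 and h > 10:                      # crude size filter
        text = pytesseract.image_to_string(gray[y:y + h, x:x + w])
        if text.strip():
            print(text.strip())
```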

An Edge AI Device based Intelligent Transportation System

  • Jeong, Youngwoo;Oh, Hyun Woo;Kim, Soohee;Lee, Seung Eun
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.166-173 / 2022
  • Recently, studies have been conducted on intelligent transportation systems (ITS) that provide safety and convenience to humans. Systems that compose an ITS typically adopt cloud-computing architectures built on high-performance general-purpose processors or graphics processing units. However, an architecture that relies only on cloud computing requires high network bandwidth and consumes much power. Therefore, applying edge computing to ITS is essential for solving these problems. In this paper, we propose an edge artificial intelligence (AI) device based ITS. Edge AI, which is applicable to various systems in ITS, has been applied to license plate recognition. We implemented the edge AI on a field-programmable gate array (FPGA). The accuracy of the edge AI for license plate recognition was 0.94. Finally, we synthesized the edge AI logic with Magnachip/Hynix 180nm CMOS technology, and the power consumption measured using Synopsys's Design Compiler tool was 482.583mW.

A Real-time Bus Arrival Notification System for Visually Impaired Using Deep Learning (딥 러닝을 이용한 시각장애인을 위한 실시간 버스 도착 알림 시스템)

  • Seyoung Jang;In-Jae Yoo;Seok-Yoon Kim;Youngmo Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.24-29 / 2023
  • In this paper, we propose a real-time bus arrival notification system using deep learning to guarantee movement rights for the visually impaired. In modern society, by using the location information of public transportation, users can quickly obtain transit information and use public transportation easily. However, since the existing public transportation information system is visual, the visually impaired cannot use it. In Korea, various laws have been amended since the 'Act on the Promotion of Transportation for the Vulnerable' was enacted in June 2012 as an act on the movement rights of the blind, but the visually impaired still experience inconvenience when using public transportation. In particular, with the current system it is impossible for the visually impaired to determine whether a bus is coming soon, is arriving now, or has already arrived. In this paper, we use deep learning technology to learn bus numbers and identify approaching bus numbers. Finally, we propose a method that notifies the visually impaired by voice, using TTS technology, that the bus is arriving.
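
The recognition model is not described here in enough detail to reproduce, so the sketch below illustrates only the final voice-notification step with an off-the-shelf TTS engine (pyttsx3); the recognizer is a hypothetical stub standing in for the paper's deep learning model.

```python
# Sketch of the voice-notification step only. `recognize_bus_number` is a hypothetical
# stub standing in for the paper's deep-learning bus-number recognizer.
from typing import Optional
import pyttsx3

def recognize_bus_number(frame) -> Optional[str]:
    """Placeholder: return the recognized bus route number, or None."""
    return "7016"   # hypothetical result for illustration

def announce_bus(frame, target_route: str) -> None:
    route = recognize_bus_number(frame)
    if route == target_route:
        engine = pyttsx3.init()
        engine.say(f"Bus {route} is arriving now.")
        engine.runAndWait()

# announce_bus(camera_frame, target_route="7016")
```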

A Method for Reconstructing Original Images for Captions Areas in Videos Using Block Matching Algorithm (블록 정합을 이용한 비디오 자막 영역의 원 영상 복원 방법)

  • 전병태;이재연;배영래
    • Journal of Broadcast Engineering / v.5 no.1 / pp.113-122 / 2000
  • It is sometimes necessary to remove the captions and recover the original images from video images that have already been broadcast. When the number of images requiring such recovery is small, manual processing is possible, but as the number grows it becomes very difficult to do manually. Therefore, a method for recovering the original image in the caption areas is needed. Traditional research on image restoration has focused on restoring blurred images to sharp images using frequency filtering, or on video coding for transferring video images. This paper proposes a method for automatically recovering the original image using a BMA (Block Matching Algorithm). We extract information on caption regions and scene changes, which is used as prior knowledge for recovering the original image. From the result of caption information detection, we know the start and end frames of the captions in the video and the character areas within the caption regions. The direction of recovery is decided using the information on scene changes and the caption region (the start and end frames of the captions). According to that direction, we recover the original image by performing block matching for the character components in the extracted caption region. Experimental results show that the case of stationary images with little camera or object motion is recovered well, and that the case of images with motion in a complex background is also recovered.
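
A minimal sketch of the block-matching step itself (a sum-of-absolute-differences search in a reference frame), not the paper's full caption-detection and recovery pipeline; the block size and search range are assumptions.

```python
# Minimal block-matching sketch: find, in a caption-free reference frame, the block
# that best matches a given location using sum of absolute differences (SAD).
import numpy as np

def best_match(ref_frame, block, top_left, search_range=8):
    """Search ref_frame around top_left for the block with minimum SAD."""
    bh, bw = block.shape
    y0, x0 = top_left
    H, W = ref_frame.shape
    best_sad, best_pos = np.inf, top_left
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= H - bh and 0 <= x <= W - bw:
                cand = ref_frame[y:y + bh, x:x + bw]
                sad = np.abs(cand.astype(int) - block.astype(int)).sum()
                if sad < best_sad:
                    best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad

# The pixels under a caption block are then copied from the matched block of a
# caption-free reference frame (before or after the caption, per the decided direction).
```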

Evaluation of Incident Detection Algorithms focused on APID, DES, DELOS and McMaster (돌발상황 검지알고리즘의 실증적 평가 (APID, DES, DELOS, McMaster를 중심으로))

  • Nam, Doo-Hee;Baek, Seung-Kirl;Kim, Sang-Gu
    • Journal of Korean Society of Transportation / v.22 no.7 s.78 / pp.119-129 / 2004
  • This paper reports the results of the development and validation procedures for the Freeway Incident Management System (FIMS) prototype developed as part of the Intelligent Transportation Systems Research and Development program. The central core of the FIMS is an integration of the component parts into a modular, yet integrated, system for freeway management. The whole approach has been component-oriented, with a secondary emphasis placed on the traffic characteristics at the sites. The first action taken during the development process was the selection of the required data for each component within the existing infrastructure of the Korean freeway system. After thorough review and analysis of vehicle detection data, the pilot site led to the utilization of different technologies in relation to the specific needs and character of the implementation. This meant that the existing system was tested in different configurations at different sections of the freeway, thereby increasing the validity and scope of the overall findings. The incident detection module was evaluated according to predefined system validation specifications, which identified two data collection and analysis patterns: the on-line and off-line testing procedural frameworks. The off-line testing was achieved using asynchronous analysis, commonly in conjunction with simulation of device input data, to take full advantage of the opportunity to test and calibrate the incident detection algorithms, focused on APID, DES, DELOS and McMaster. The simulation was done with the use of synchronous analysis, thereby providing a means for testing the incident detection module.
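
The APID, DES, DELOS and McMaster algorithms are not specified in this abstract. As a generic illustration of comparative occupancy-based incident detection, the sketch below implements a simplified California-style rule, not any of the four algorithms evaluated; all thresholds are hypothetical.

```python
# Generic, simplified occupancy-based incident check (California-algorithm style);
# NOT an implementation of APID, DES, DELOS or McMaster. Thresholds are hypothetical.
def incident_suspected(occ_upstream: float, occ_downstream: float,
                       t1: float = 0.08, t2: float = 0.25, t3: float = 0.30) -> bool:
    """Flag a possible incident from upstream/downstream detector occupancies (0..1)."""
    if occ_upstream == 0:
        return False
    occdf = occ_upstream - occ_downstream      # spatial occupancy difference
    occrdf = occdf / occ_upstream              # difference relative to upstream
    if occ_downstream == 0:
        return occdf >= t1
    docc = occdf / occ_downstream              # difference relative to downstream
    return occdf >= t1 and occrdf >= t2 and docc >= t3

# Example: upstream occupancy 0.45, downstream 0.10 -> a likely incident between stations
# print(incident_suspected(0.45, 0.10))
```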