• Title/Summary/Keyword: 문자 인식 기술 (Character Recognition Technology)

Robust Motorbike License Plate Detection and Recognition using Image Warping based on YOLOv2 (YOLOv2 기반의 영상 워핑을 이용한 강인한 오토바이 번호판 검출 및 인식)

  • Dang, Xuan Truong;Kim, Eung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.17-20 / 2019
  • Automatic License Plate Recognition (ALPR) is a technology needed in many application areas such as intelligent transportation systems and video surveillance. Most research has addressed license plate detection and recognition for cars, and very little work targets motorbikes. For a car, the plate is mounted at the front or rear center of the vehicle and its background is usually a single color and relatively uncluttered. A motorbike, however, is parked on a kickstand and therefore leans at various angles, which makes recognizing the letters and digits on its plate considerably harder. In this paper, to raise the character recognition accuracy on a dataset of motorbikes parked at various angles, a two-stage YOLOv2 algorithm first detects the motorbike region and then detects the license plate region within it. To improve the recognition rate, the size and number of anchor boxes are tuned to the characteristics of motorbikes. After the tilted plate is detected, an image warping algorithm is applied (a sketch of this step follows this entry). In simulations, the proposed method achieves a plate recognition rate of 80.23%, compared with 47.74% for the conventional method. Overall, motorbike-specific anchor boxes and image warping improve character recognition for motorbike license plates at various tilt angles.

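The deskewing step described in the abstract above can be approximated with a perspective warp once the four corners of the tilted plate are available from the detection stage. The sketch below is a minimal illustration, not the paper's implementation; the corner coordinates, output size, and function name are assumptions.

```python
import cv2
import numpy as np

def warp_license_plate(image, corners, out_w=200, out_h=60):
    """Rectify a tilted license plate region with a perspective warp.

    corners: four (x, y) points of the detected plate, ordered
             top-left, top-right, bottom-right, bottom-left.
    Returns a fronto-parallel out_w x out_h crop suitable for OCR.
    """
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]],
                   dtype=np.float32)
    # Homography mapping the tilted plate onto an upright rectangle.
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

# Example usage with hypothetical corners from the second-stage plate detector:
# plate = warp_license_plate(frame, [(120, 340), (260, 310),
#                                    (270, 380), (130, 410)])
```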

Design and Implementation of Personal Information Identification and Masking System Based on Image Recognition (이미지 인식 기반 향상된 개인정보 식별 및 마스킹 시스템 설계 및 구현)

  • Park, Seok-Cheon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.1-8 / 2017
  • Recently, with the development of ICT technologies such as cloud and mobile computing, the use of images on social networks has been increasing rapidly. These images often contain personal information, so leakage incidents can occur, and studies are underway to recognize and mask personal information in images. However, the optical character recognition used to recognize personal information in images is highly sensitive to brightness, contrast, and distortion, and its Korean recognition performance is insufficient. In this paper, we therefore design and implement an image-recognition-based personal information identification and masking system that applies deep learning with a CNN on top of an optical character recognition approach. We compare the proposed system with plain optical character recognition on the same images in terms of the personal information recognition rate, and also measure the face recognition rate of the proposed system. Test results show that the personal information recognition rate of the proposed system is 32.7% higher than that of optical character recognition, and its face recognition rate is 86.6% (a masking sketch follows this entry).
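
The masking step such a system performs can be illustrated with a few lines of OpenCV. This is a minimal sketch, assuming the recognition stage has already returned bounding boxes flagged as personal information; the box format and function name are illustrative.

```python
import cv2

def mask_regions(image, boxes, method="blur"):
    """Mask detected personal-information regions in place.

    boxes:  iterable of (x, y, w, h) rectangles flagged as personal
            information by the recognition stage.
    method: "blur" for Gaussian blur, anything else for a solid fill.
    """
    for (x, y, w, h) in boxes:
        roi = image[y:y + h, x:x + w]
        if method == "blur":
            image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
        else:
            image[y:y + h, x:x + w] = 0  # black out the region
    return image
```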

A Study of Authentication Design for Youth (청소년을 위한 인증시스템의 설계에 관한 연구)

  • Hong, Ki-Cheon;Kim, Eun-Mi
    • Journal of the Korea Academia-Industrial cooperation Society / v.8 no.4 / pp.952-960 / 2007
  • Most websites perform a login process for authentication, but simple credentials such as an ID and password cannot be trusted on their own because they are easily appropriated, so youths can access illegal media sites using someone else's ID and password. This paper therefore examines which features are suitable for an authentication system and proposes a design that uses multiple features. The proposed authentication system has two categories: a low-level method and a high-level method. The low-level method consists of issuing an authentication number to the user's mobile phone from the server and using a certificate from a certification authority. The high-level method combines an ID/password with features from fingerprint, character, voice, and face recognition systems. To this end, the paper surveys six recognition systems: fingerprint, face, iris, character, vein, and voice recognition. Among these, fingerprint, character, voice, and face recognition can be implemented on a personal computer with low-cost accessories. Using multiple features can improve the reliability of authentication (a score-fusion sketch follows this entry).

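One simple way to combine multiple recognition features, as the high-level method above suggests, is weighted score fusion. The sketch below is only an illustration under assumed weights and an assumed threshold; the paper does not specify a fusion rule.

```python
def fuse_authentication_scores(scores, weights=None, threshold=0.7):
    """Combine per-feature match scores into one accept/reject decision.

    scores:  dict mapping a feature name (e.g. "fingerprint", "voice",
             "face", "character") to a match score in [0, 1].
    weights: dict of per-feature weights; defaults to equal weights.
    Returns (accepted, fused_score).
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total
    return fused >= threshold, fused

# Example: password already verified, biometric scores fused on top.
# accepted, s = fuse_authentication_scores(
#     {"fingerprint": 0.92, "voice": 0.65, "face": 0.81})
```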

The Improving Method of Characters Recognition Using New Recurrent Neural Network (새로운 순환신경망을 사용한 문자인식성능의 향상 방안)

  • 정낙우;김병기
    • Journal of the Korea Society of Computer and Information / v.1 no.1 / pp.129-138 / 1996
  • As a result of industrial development and the growing scale and sophistication of technology, an ever larger amount of information is handled every year. To achieve informatization, information that has long been kept on paper must be stored in computers so that it can be used at the right time and place. The recurrent neural network is a model that reuses output values when training a neural network for character recognition, but most existing methods do not apply it effectively. This study proposes a new type of recurrent neural network that effectively classifies static patterns such as off-line handwritten characters. Using the new J-E (Jordan-Elman) neural network model, which extends and combines the Jordan and Elman models, the study shows that the proposed network recognizes patterns such as digits and handwritten characters better than previous approaches (a sketch of the combined recurrence follows this entry).

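The combination of Jordan-style (output-to-hidden) and Elman-style (hidden-to-hidden) feedback described above can be written compactly. The sketch below is a minimal NumPy illustration of one possible combined cell; the layer sizes, initialization, and names are assumptions, not the paper's exact formulation.

```python
import numpy as np

class JordanElmanCell:
    """One step of a combined Jordan-Elman recurrent layer.

    Elman feedback: the previous hidden state h feeds back into the hidden layer.
    Jordan feedback: the previous output y also feeds back into the hidden layer.
    """

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input  -> hidden
        self.W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # Elman context
        self.W_yh = rng.normal(0, 0.1, (n_hidden, n_out))     # Jordan context
        self.W_hy = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.h = np.zeros(n_hidden)
        self.y = np.zeros(n_out)

    def step(self, x):
        # The hidden state sees the input plus both feedback paths.
        self.h = np.tanh(self.W_xh @ x + self.W_hh @ self.h + self.W_yh @ self.y)
        logits = self.W_hy @ self.h
        self.y = np.exp(logits) / np.sum(np.exp(logits))  # softmax over classes
        return self.y

# Example: feed the rows of a 16x16 handwritten-character bitmap as a sequence.
# cell = JordanElmanCell(n_in=16, n_hidden=32, n_out=10)
# for row in np.random.rand(16, 16):
#     probs = cell.step(row)
```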

Structuring of Pulmonary Function Test Paper Using Deep Learning

  • Jo, Sang-Hyun;Kim, Dae-Hoon;Kim, Yoon;Kwon, Sung-Ok;Kim, Woo-Jin;Lee, Sang-Ah
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.61-67 / 2021
  • In this paper, we propose a method for extracting and recognizing research-relevant information from images of unstructured pulmonary function test papers using character detection and recognition techniques, together with a post-processing method that reduces the character recognition error rate. The proposed structuring method applies a character detection model to the test paper images to detect all characters, passes each detected character image through a character recognition model to obtain a string, and checks the validity of the obtained string with string matching to complete the structuring. We confirm that the proposed structuring system is more efficient and stable than manual structuring by professionals, since its error rate is within about 1% and the processing time per test paper is within 2 seconds (a post-processing sketch follows this entry).
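
The string-matching validation described above can be approximated by fuzzy matching against a dictionary of expected field names. This is a minimal sketch using Python's standard difflib; the field list and cutoff value are illustrative assumptions, not taken from the paper.

```python
import difflib

# Hypothetical field names expected on a pulmonary function test paper.
EXPECTED_FIELDS = ["FVC", "FEV1", "FEV1/FVC", "PEF", "DLCO"]

def correct_field_name(recognized, cutoff=0.6):
    """Snap an OCR output to the closest expected field name.

    Returns the best dictionary match, or None if nothing is close enough,
    so that clearly wrong recognitions can be flagged for review.
    """
    matches = difflib.get_close_matches(recognized.upper(),
                                        EXPECTED_FIELDS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Example: an OCR confusion such as "FEVl" is corrected to "FEV1".
# print(correct_field_name("FEVl"))
```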

Image Processing for Mobile Information Retrieval Service (모바일정보검색 서비스를 위한 문자 인식)

  • Lim, Myung-Jae;Hyun, Sung-Kyung;Park, Ji-Eun;Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.1 / pp.103-108 / 2011
  • Modern society is rapidly becoming information-oriented, with widespread recognition of the importance of informatics and continued development of information and communication technology. In particular, the rapid development of mobile technology has raised the expectation that anyone can get the information they want anytime, anywhere, and image search has become a common way to retrieve information conveniently. General image search, however, struggles because characters are extracted from images inaccurately and detailed information is hard to obtain from the extracted characters. This paper therefore performs character recognition on images of sightseeing spots and store signboards photographed with a smartphone camera, with the goal of conveniently providing information to users. A user can obtain detailed information through a character extraction method based on the top-hat algorithm and a connection to a server (a top-hat sketch follows this entry).
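
The top-hat step referenced above is a standard morphological operation that emphasizes small bright structures, such as characters, against a varying background. A minimal OpenCV sketch follows, with an assumed kernel size and a simple Otsu threshold that are not taken from the paper.

```python
import cv2

def extract_character_mask(image_bgr, kernel_size=(15, 15)):
    """Highlight character-like strokes with a morphological top-hat.

    For dark text on a bright signboard, swap cv2.MORPH_TOPHAT for
    cv2.MORPH_BLACKHAT (the black-hat variant).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    # Binarize so the character regions can be passed on to a recognizer.
    _, mask = cv2.threshold(tophat, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```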

A Method of Detecting Character Data through a Adaboost Learning Method (에이다부스트 학습을 이용한 문자 데이터 검출 방법)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.655-661 / 2017
  • It is a very important task to extract character regions contained in various input color images, because characters can provide significant information representing the content of an image. In this paper, we propose a new method for extracting character regions from various input images using MCT features and an AdaBoost algorithm. Using geometric features, the method extracts actual character regions by filtering out non-character regions from among candidate regions. Experimental results show that the suggested algorithm accurately extracts character regions from input images. We expect the suggested algorithm will be useful in multimedia and image processing-related applications, such as store signboard detection and car license plate recognition.
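
The MCT (Modified Census Transform) features and AdaBoost combination described in the abstract above can be sketched as follows. This is only an illustration, assuming candidate patches have already been cropped; the histogram feature and classifier settings are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def mct_codes(gray):
    """Modified Census Transform: each interior pixel becomes a 9-bit code
    marking which pixels of its 3x3 neighborhood exceed the local mean."""
    h, w = gray.shape
    # Nine shifted views cover the 3x3 neighborhood of every interior pixel.
    shifts = [gray[dy:dy + h - 2, dx:dx + w - 2].astype(np.float32)
              for dy in range(3) for dx in range(3)]
    mean = sum(shifts) / 9.0
    codes = np.zeros((h - 2, w - 2), dtype=np.int64)
    for bit, view in enumerate(shifts):
        codes |= (view > mean).astype(np.int64) << bit
    return codes

def mct_histogram(patch):
    """Summarize a grayscale patch as a 512-bin histogram of its MCT codes."""
    return np.bincount(mct_codes(patch).ravel(), minlength=512)

# Example: train AdaBoost on labeled character / non-character patches.
# X = np.stack([mct_histogram(p) for p in patches])  # patches: 2D uint8 arrays
# clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
```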

Human Interface Software for Wireless and Mobile Devices (무선 이동 통신 기기용 휴먼인터페이스 소프트웨어)

  • Kim, Se-Ho;Lee, Chan-Gun
    • Journal of KIISE:Information Networking / v.37 no.1 / pp.57-65 / 2010
  • Recently, character recognition techniques have become strongly needed so that camera-equipped mobile communication devices can gather input information from users. In general, it is not easy to reuse a CBOCR (Camera Based Optical Character Recognizer) module because of its dependency on a specific platform. In this paper, we propose a software architecture for a CBOCR module that is easily adaptable to various mobile communication platforms. The proposed architecture is composed of the platform dependency support layer, the interface layer, the engine support layer, and the engine layer. The engine layer adopts a plug-in structure to support various hardware endian policies. We show the effectiveness of the proposed method by applying the architecture to a practical product (a layered sketch follows this entry).
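
The layered, plug-in-based organization described above can be sketched at a high level. The code below is a small Python illustration of the idea, not the paper's implementation (which targets mobile platforms); the class names and the endianness handling are assumptions.

```python
from abc import ABC, abstractmethod

class OcrEnginePlugin(ABC):
    """Engine-layer plug-in: each engine declares the byte order it expects."""
    byte_order = "little"  # "little" or "big"

    @abstractmethod
    def recognize(self, image_bytes: bytes) -> str: ...

class EngineSupportLayer:
    """Converts platform data to the byte order the engine plug-in expects."""
    def __init__(self, engine: OcrEnginePlugin):
        self.engine = engine

    def run(self, pixels):
        # Pack 16-bit pixel values with the engine's preferred endianness.
        data = b"".join(p.to_bytes(2, self.engine.byte_order) for p in pixels)
        return self.engine.recognize(data)

class InterfaceLayer:
    """Stable API exposed to applications, independent of platform and engine."""
    def __init__(self, support: EngineSupportLayer):
        self.support = support

    def recognize_text(self, pixels):
        return self.support.run(pixels)

class DummyEngine(OcrEnginePlugin):
    """Placeholder engine standing in for a real recognizer plug-in."""
    byte_order = "big"

    def recognize(self, image_bytes):
        return f"<{len(image_bytes)} bytes received>"

# The platform dependency support layer would build this stack per device.
api = InterfaceLayer(EngineSupportLayer(DummyEngine()))
print(api.recognize_text([0x0102, 0x0304]))
```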