• Title/Summary/Keyword: License plate detection


Integrated Video Analytics for Drone Captured Video (드론 영상 종합정보처리 및 분석용 시스템 개발)

  • Lim, SongWon;Cho, SungMan;Park, GooMan
    • Journal of Broadcast Engineering / v.24 no.2 / pp.243-250 / 2019
  • In this paper, we propose a system for processing and analyzing drone image information that can be applied to a variety of disaster and security situations. The proposed system stores the images acquired from drones on a server and performs image processing and analysis according to various scenarios. For each mission, a deep learning method is used to build an image analysis pipeline over the drone-acquired imagery. Experiments confirm that the system can be applied to traffic volume measurement, suspect and vehicle tracking, survivor identification, and maritime missions.
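
As a purely illustrative aside (the paper does not publish code), a mission-specific analysis over stored drone footage might be wired up roughly as below; `detect_vehicles` is a hypothetical stand-in for whatever deep learning detector a given mission uses, and the sampling interval is an arbitrary placeholder.

```python
import cv2

def detect_vehicles(frame):
    """Hypothetical mission-specific detector returning a list of bounding boxes.
    In the paper this role is played by a deep learning model chosen per mission;
    this stub simply returns no detections."""
    return []

def traffic_volume(video_path, sample_every=30):
    """Count detected vehicles on sampled frames of stored drone footage."""
    cap = cv2.VideoCapture(video_path)
    counts = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            counts.append(len(detect_vehicles(frame)))
        frame_idx += 1
    cap.release()
    return counts
```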

Implementation of Deep Learning-Based Vehicle Model and License Plate Recognition System (딥러닝 기반 자동차 모델 및 번호판 인식 시스템 구현)

  • Ham, Kyoung-Youn;Kang, Gil-Nam;Lee, Jang-Hyeon;Lee, Jung-Woo;Park, Dong-Hoon;Ryoo, Myung-Chun
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.465-466 / 2022
  • In this paper, we propose a vehicle model and license plate recognition system based on YOLOv4, a deep learning object detection model. The proposed system uses YOLOv4, a real-time image processing technique, to recognize the vehicle model and detect the license plate region, and then uses a CNN (Convolutional Neural Network) to recognize the letters and digits on the plate. With this approach, both vehicle model recognition and license plate recognition are possible with a single camera. Real data were used for vehicle model recognition and license plate region detection, while both real and synthetic data were used for license plate character recognition. The system achieved 92.3% accuracy for vehicle model recognition, 98.9% for license plate detection, and 94.2% for license plate character recognition.
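
The entry describes a two-stage hand-off: YOLOv4 detects the plate region, and a separate CNN classifies the characters inside it. A minimal sketch of that hand-off using OpenCV's DNN module is shown below; the ONNX file names, the 32x32 input size, and the single-crop simplification are assumptions for illustration, not details from the paper.

```python
import cv2

def load_plate_detector():
    """Hypothetical: a YOLOv4 network exported to ONNX and loaded via OpenCV DNN."""
    return cv2.dnn.readNetFromONNX("yolov4_plate.onnx")  # assumed file name

def load_char_classifier():
    """Hypothetical: a small CNN mapping a character crop to a class index."""
    return cv2.dnn.readNetFromONNX("char_cnn.onnx")  # assumed file name

def recognize(image, plate_boxes, char_net, charset):
    """Second stage: classify character crops inside each detected plate box."""
    text = []
    for (x, y, w, h) in plate_boxes:
        plate = image[y:y + h, x:x + w]
        # In practice the plate would first be segmented into per-character crops;
        # a single crop stands in for that step here.
        blob = cv2.dnn.blobFromImage(plate, scalefactor=1 / 255.0, size=(32, 32))
        char_net.setInput(blob)
        scores = char_net.forward()
        text.append(charset[int(scores.argmax())])
    return "".join(text)
```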


ONNX-based Runtime Performance Analysis: YOLO and ResNet (ONNX 기반 런타임 성능 분석: YOLO와 ResNet)

  • Jeong-Hyeon Kim;Da-Eun Lee;Su-Been Choi;Kyung-Koo Jun
    • The Journal of Bigdata / v.9 no.1 / pp.89-100 / 2024
  • In the field of computer vision, models such as You Only Look Once (YOLO) and ResNet are widely used for their real-time performance and high accuracy. However, to deploy these models in real-world environments, factors such as runtime compatibility, memory usage, computing resources, and real-time constraints must be considered. This study compares the characteristics of three deep learning runtimes, ONNX Runtime, TensorRT, and OpenCV DNN, and analyzes their performance on the two models. The aim of this paper is to provide criteria for runtime selection in practical applications. The experiments compare the runtimes on inference time, memory usage, and accuracy for vehicle license plate recognition and classification tasks. The results show that ONNX Runtime excels at complex object detection, OpenCV DNN suits environments with limited memory, and TensorRT offers superior execution speed for complex models.
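
For the ONNX Runtime case, a rough latency measurement of the kind the study reports could look like the sketch below; the model file name, the input-shape example, and the warm-up convention are assumptions, and the paper's actual benchmark harness may differ.

```python
import time
import numpy as np
import onnxruntime as ort

def benchmark(model_path, input_shape, runs=100):
    """Rough average latency for an ONNX model executed with ONNX Runtime."""
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    x = np.random.rand(*input_shape).astype(np.float32)
    # Warm-up run so one-time initialisation cost is not counted.
    sess.run(None, {input_name: x})
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {input_name: x})
    return (time.perf_counter() - start) / runs

# Example call (assumed file name and YOLO-style input shape):
# print(benchmark("plate_detector.onnx", (1, 3, 640, 640)))
```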

Development of a parking control system that improves the accuracy and reliability of vehicle entry and exit based on LIDAR sensing detection

  • Park, Jeong-In
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.9-21 / 2022
  • In this paper, we developed a system that detects 100% of entering and leaving vehicles, improving on the detection rate of existing cameras by using a LiDAR sensor, one of the core technologies of the 4th industrial revolution. Because currently operating parking lots depend only on license plate recognition, with an accuracy of about 98%, various problems arise, such as inconsistent entry/exit counts, the inability to reserve spaces in advance because the information provided is inaccurate, and inconsistent real-time parking information. Parking status information should be managed with 100% accuracy, and to this end we built a parking lot entry/exit detection system using LiDAR. When a parking system is developed with the LiDAR sensor, which is mainly used to detect vehicles and objects in autonomous driving, the sensed information improves the accuracy of entry/exit information and the reliability of the entry/exit count. LiDAR detection was guaranteed to be 100%, and the system was implemented so that the sum of entering (+) and exiting (-) vehicles in the parking lot is zero. In a test with 3,000 actual parking lot entries and exits, the accuracy of detecting entering and exiting vehicles was 100%.
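
The bookkeeping behind this claim is simple: with every entry counted as +1 and every exit as -1, the running sum equals the current occupancy and returns to zero when the lot is empty. A minimal counter sketch, independent of any particular LiDAR SDK, might look like the following; the capacity value is an arbitrary example.

```python
class OccupancyCounter:
    """Tracks parking lot occupancy from entry/exit events reported by a sensor.

    The LiDAR-specific event source is out of scope here; this only illustrates
    the +1 / -1 bookkeeping described in the abstract.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.occupied = 0

    def on_entry(self):
        self.occupied += 1

    def on_exit(self):
        if self.occupied == 0:
            raise ValueError("exit event with zero recorded occupancy")
        self.occupied -= 1

    def free_spaces(self):
        return self.capacity - self.occupied


counter = OccupancyCounter(capacity=50)
counter.on_entry()
counter.on_exit()
assert counter.occupied == 0  # entries (+) and exits (-) sum to zero
```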

Recognition of Flat Type Signboard using Deep Learning (딥러닝을 이용한 판류형 간판의 인식)

  • Kwon, Sang Il;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.219-231 / 2019
  • Specifications are set for each type of signboard, but the shape and size of signboards actually installed are not uniform. In addition, because signboard colors are not regulated, a wide variety of colors are used. Recognizing signboards might seem similar to recognizing road signs and license plates, but due to the nature of signboards, such methods have limitations. In this study, we proposed a methodology for recognizing flat type signboards, which are the main targets of enforcement against illegal and aged signboards, and for automatically extracting signboard areas using the deep learning-based Faster R-CNN algorithm. Recognizing flat type signboards in images captured with smartphone cameras proceeds in two stages. First, the type of signboard was recognized using deep learning in order to identify flat type signboards among various kinds of signboard images, with an accuracy of about 71%. Next, a boundary recognition algorithm was applied to the recognized signboards, and the boundary of the flat type signboard was recognized with an accuracy of 85%.
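
For the detection stage, the sketch below shows what running a Faster R-CNN detector looks like with torchvision's off-the-shelf implementation; the paper fine-tunes its own model on signboard images, so the weights, class labels, and score threshold here are placeholders rather than the authors' setup.

```python
import torch
import torchvision

# Off-the-shelf Faster R-CNN; the paper would fine-tune such a model on
# signboard images rather than use generic pretrained weights directly.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_signboards(image_tensor, score_threshold=0.5):
    """image_tensor: float tensor of shape (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["labels"][keep]
```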

A Method of Detecting Character Data through an AdaBoost Learning Method (에이다부스트 학습을 이용한 문자 데이터 검출 방법)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.655-661 / 2017
  • It is a very important task to extract character regions contained in various input color images, because characters can provide significant information representing the content of an image. In this paper, we propose a new method for extracting character regions from various input images using MCT features and an AdaBoost algorithm. Using geometric features, the method extracts actual character regions by filtering out non-character regions from among candidate regions. Experimental results show that the suggested algorithm accurately extracts character regions from input images. We expect the suggested algorithm will be useful in multimedia and image processing-related applications, such as store signboard detection and car license plate recognition.
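
As a loose illustration of the pipeline described above (candidate classification followed by geometric filtering), the sketch below pairs scikit-learn's AdaBoost with a simple shape filter; the MCT feature extraction itself is omitted, and all thresholds are placeholders rather than values from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_character_classifier(features, labels):
    """features: (n_samples, n_features) MCT-style descriptors; labels: 1 = character."""
    clf = AdaBoostClassifier(n_estimators=200)
    clf.fit(features, labels)
    return clf

def geometric_filter(boxes, min_aspect=0.2, max_aspect=1.5, min_area=64):
    """Drop candidate regions whose shape is implausible for a character.
    Thresholds are illustrative placeholders, not values from the paper."""
    kept = []
    for (x, y, w, h) in boxes:
        aspect = w / float(h)
        if min_aspect <= aspect <= max_aspect and w * h >= min_area:
            kept.append((x, y, w, h))
    return kept
```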

Image Super-Resolution for Improving Object Recognition Accuracy (객체 인식 정확도 개선을 위한 이미지 초해상도 기술)

  • Lee, Sung-Jin;Kim, Tae-Jun;Lee, Chung-Heon;Yoo, Seok Bong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.6 / pp.774-784 / 2021
  • Object detection and recognition are very important tasks in the field of computer vision, and related research is being actively conducted. However, in practice, recognition accuracy is often degraded by the resolution mismatch between training images and test images. To solve this problem, in this paper we propose an image super-resolution technique for improving object recognition accuracy and design an integrated object recognition and super-resolution framework around it. In detail, we built 11,231 license plate training images through web crawling and artificial data generation, and trained the image super-resolution neural network with an objective function defined to be robust to image flips. To verify the proposed algorithm, we evaluated the trained super-resolution and recognition models on 1,999 test images and confirmed that the proposed super-resolution technique improves character recognition accuracy.
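
One plausible reading of an "objective function robust to image flips", sketched below purely as an assumption and not as the paper's published loss, is to penalize reconstruction error on both the original pair and a horizontally flipped pair.

```python
import torch
import torch.nn.functional as F

def flip_robust_loss(model, lr_image, hr_image):
    """lr_image, hr_image: tensors of shape (N, C, H, W).
    Penalizes L1 error on the original pair and on a horizontally flipped pair;
    this formulation is an illustrative guess, not the authors' definition."""
    loss_plain = F.l1_loss(model(lr_image), hr_image)
    lr_flipped = torch.flip(lr_image, dims=[3])
    hr_flipped = torch.flip(hr_image, dims=[3])
    loss_flip = F.l1_loss(model(lr_flipped), hr_flipped)
    return loss_plain + loss_flip
```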

Deep Learning Description Language for Referring to Analysis Model Based on Trusted Deep Learning (신뢰성있는 딥러닝 기반 분석 모델을 참조하기 위한 딥러닝 기술 언어)

  • Mun, Jong Hyeok;Kim, Do Hyung;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.133-142 / 2021
  • With the recent advancements of deep learning, domains such as smart homes, healthcare, and intelligent transportation systems are using it to provide high-quality services for vehicle detection, emergency situation detection, and energy consumption control. To provide reliable services in such sensitive systems, deep learning models must have high accuracy. To develop a deep learning model for the analyses mentioned above, developers should refer to state-of-the-art deep learning models that have already been verified to reach high accuracy, and they can verify the accuracy of a referenced model by validating it on their dataset. For this validation, the developer needs structural information for documenting and applying deep learning models, including metadata such as the training dataset, network architecture, and development environment. In this paper, we propose a description language that represents the network architecture of a deep learning model along with the metadata necessary to develop it. Through the proposed description language, developers can easily verify the accuracy of a referenced deep learning model. Our experiments demonstrate an application scenario of a deep learning description document focused on license plate recognition for detecting illegally parked vehicles.
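
To make the idea concrete, the fragment below guesses at what such a description document might carry for a referenced license plate recognition model, rendered as a plain Python dictionary serialized to JSON; the field names and values are illustrative only and are not the syntax defined in the paper.

```python
import json

# Illustrative metadata for a referenced license plate recognition model.
# Field names and values are hypothetical examples, not the paper's schema.
model_description = {
    "task": "license plate recognition for illegal parking detection",
    "architecture": {"type": "CNN", "input_size": [224, 224, 3], "num_layers": 18},
    "training_dataset": {"name": "example-plate-dataset", "samples": 10000},
    "environment": {"framework": "TensorFlow", "version": "2.x", "gpu": "single"},
}

print(json.dumps(model_description, indent=2))
```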