• Title/Summary/Keyword: YOLOv5s Network


Development of YOLOv5s and DeepSORT Mixed Neural Network to Improve Fire Detection Performance

  • Jong-Hyun Lee;Sang-Hyun Lee
    • International Journal of Advanced Culture Technology / v.11 no.1 / pp.320-324 / 2023
  • As urbanization accelerates and energy-consuming facilities multiply, fire causes increasing loss of life and property damage. A fire monitoring system that can detect fires quickly is therefore needed to reduce the economic and human damage caused by fire. This study develops an improved artificial intelligence model that improves the accuracy of fire alarms by combining DeepSORT, which is strong at object tracking, with the YOLOv5s model. To build a fire detection model that is faster and more accurate than existing models, DeepSORT, one of the most widely used object-tracking frameworks and an extension of SORT, was combined with YOLOv5s, and the mixed model was compared against YOLOv5s alone. In the final results, the YOLOv5s model achieved 96.3% accuracy at 30 frames per second, while the YOLOv5s_DeepSORT mixed model achieved 97.2% accuracy at 30 frames per second, 0.9 percentage points higher than YOLOv5s.
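
The paper does not publish code, but the detect-then-track idea in the abstract can be illustrated with a minimal sketch. It assumes the public Ultralytics YOLOv5 hub model and the third-party deep_sort_realtime package; the video name, tracker settings, and the use of stock COCO weights (rather than a fire-trained model) are placeholders, not the authors' implementation.

```python
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

# Illustrative pipeline: per-frame YOLOv5s detections handed to a DeepSORT tracker.
# A model fine-tuned on fire/smoke classes is assumed; the stock COCO model stands in here.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
tracker = DeepSort(max_age=30)                       # drop tracks unseen for 30 frames

cap = cv2.VideoCapture("fire_scene.mp4")             # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # detector expects RGB
    detections = []
    for x1, y1, x2, y2, conf, cls in model(rgb).xyxy[0].tolist():
        # DeepSORT expects ([left, top, width, height], confidence, class) tuples.
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))
    for track in tracker.update_tracks(detections, frame=frame):
        if track.is_confirmed():
            print(f"track {track.track_id}: {track.to_ltrb()}")  # stable IDs across frames
cap.release()
```

Tracking on top of per-frame detection is what allows the mixed model to suppress one-frame flickers that would otherwise trigger false alarms.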

Comparison of Deep Learning Networks in Voice-Guided System for The Blind (시각장애인을 위한 음성안내 네비게이션 시스템의 심층신경망 성능 비교)

  • An, Ryun-Hui;Um, Sung-Ho;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.175-177 / 2022
  • This paper introduces a system that helps blind users travel to their destination and compares the performance of three deep neural networks (DNNs) used in the system. The system consists of a smartphone application that finds a route from the current location to the destination using GPS and a navigation API, and a module installed at bus stops that recognizes and announces the bus (type and number) about to arrive, using the DNNs and a bus information API. To let the module recognize the bus number to board, Faster R-CNN, YOLOv4, and YOLOv5s were evaluated, and YOLOv5s showed the best performance in both accuracy and speed.
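
As a rough illustration of the bus-stop module's detection step only, the sketch below loads a YOLOv5s model through torch.hub and keeps bus detections; the image name is a placeholder and the stock COCO weights stand in for the networks compared in the paper (reading the route number itself would require an additional, custom-trained model).

```python
import torch

# Illustrative only: stock YOLOv5s as a stand-in for the paper's trained detector.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

results = model("bus_stop.jpg")            # hypothetical frame from the bus-stop camera
df = results.pandas().xyxy[0]              # detections: xmin..ymax, confidence, class, name
buses = df[df["name"] == "bus"]            # keep only bus detections
for _, row in buses.iterrows():
    print(f"bus at ({row.xmin:.0f}, {row.ymin:.0f}) conf={row.confidence:.2f}")
```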


Influence of Self-driving Data Set Partition on Detection Performance Using YOLOv4 Network (YOLOv4 네트워크를 이용한 자동운전 데이터 분할이 검출성능에 미치는 영향)

  • Wang, Xufei;Chen, Le;Li, Qiutan;Son, Jinku;Ding, Xilong;Song, Jeongyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.157-165 / 2020
  • With the development of neural networks and self-driving data sets, partitioning the data set is one way to improve the performance of a network model for detecting moving objects. In the Darknet framework, the YOLOv4 (You Only Look Once v4) network model was trained and tested on the Udacity data set. The Udacity data set was divided into training, validation and test subsets according to seven different proportions. For each of the seven groups, the K-means++ algorithm was used to cluster the dimensions of the labelled object boxes, the hyperparameters of the YOLOv4 network were tuned during training, and the optimal model parameters were obtained; each model was then evaluated on its corresponding test set. The experimental results show that YOLOv4 can effectively detect the large, medium and small moving objects in the Udacity data set, represented by Truck, Car and Pedestrian. When the ratio of training, validation and test sets is 7:1.5:1.5, the YOLOv4 model shows the highest detection performance, with mAP50 of 80.89%, mAP75 of 47.08%, and a detection speed of 10.56 FPS.
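
Two of the steps the abstract mentions, anchor-box clustering with k-means++ and the 7:1.5:1.5 split, can be sketched as follows. The box sizes are random stand-ins for the Udacity labels, and scikit-learn's Euclidean k-means is used for brevity, whereas Darknet-style anchor clustering usually uses an IoU-based distance.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
boxes_wh = rng.uniform(10, 300, size=(5000, 2))     # (width, height) of labelled boxes, pixels

# YOLOv4 uses 9 anchors (3 per detection scale); k-means++ seeds the clustering.
km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(boxes_wh)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors))                             # anchor sizes ordered small -> large

# 7:1.5:1.5 split of the image list (the best-performing proportion in the abstract).
images = [f"img_{i:05d}.jpg" for i in range(10000)]
rng.shuffle(images)
n = len(images)
train = images[: int(0.70 * n)]
val = images[int(0.70 * n): int(0.85 * n)]
test = images[int(0.85 * n):]
print(len(train), len(val), len(test))               # 7000 1500 1500
```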

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.5 / pp.173-187 / 2018
  • This paper proposes a computer vision and deep learning-based vehicle counting technique for surveillance cameras as part of a parking lot management system. Starting from the You Only Look Once version 2 (YOLOv2) detector, we designed a deep convolutional neural network (CNN) with a modified architecture and two model variants. The effectiveness of the proposed architecture is demonstrated on the publicly available Udacity self-driving-car dataset. After training and testing, the proposed architecture achieves 64.30% mean average precision on the detection of cars, trucks, and pedestrians, compared with only 47.89% for the original YOLOv2 architecture.
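
Purely to illustrate the counting step, not the authors' modified YOLOv2 architecture, a minimal sketch using a stock YOLOv5s detector as a stand-in might look like this; the image name and confidence threshold are assumptions.

```python
import torch

# Count vehicles in one camera frame from a generic detector's output.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
df = model("parking_lot.jpg").pandas().xyxy[0]       # hypothetical surveillance frame

vehicle_classes = {"car", "truck", "bus"}
vehicles = df[df["name"].isin(vehicle_classes) & (df["confidence"] > 0.4)]
print(f"vehicles counted: {len(vehicles)}")
```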

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems / v.19 no.5 / pp.663-672 / 2023
  • Most vehicle detection methods extract vehicle features poorly at night, which reduces their robustness; this study therefore proposes a night vehicle detection method based on style transfer image enhancement. First, a style transfer model is constructed using cycle generative adversarial networks (CycleGANs). The daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then divided using its labels. Finally, a YOLOv5s network detects vehicles in the nighttime images for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred night vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
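
The two-stage idea in the abstract, translate a daytime image to a night style and then detect vehicles, can be sketched roughly as below. The generator file is a hypothetical TorchScript export of a trained CycleGAN generator, and the stock YOLOv5s weights stand in for the paper's trained detector.

```python
import torch
from PIL import Image
from torchvision import transforms

# Stage 1: day -> night style transfer with a (hypothetical) exported CycleGAN generator.
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),      # CycleGAN generators expect [-1, 1] input
])
gen = torch.jit.load("day2night_generator.pt").eval()  # assumed artifact, not from the paper
day = to_tensor(Image.open("bdd100k_day.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    night = gen(day)                                  # styled image in [-1, 1]
night_img = ((night.squeeze(0) * 0.5 + 0.5).clamp(0, 1) * 255).byte().permute(1, 2, 0).numpy()

# Stage 2: detect vehicles in the night-styled image.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")
print(detector(night_img).pandas().xyxy[0][["name", "confidence"]])
```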

A Lightweight Pedestrian Intrusion Detection and Warning Method for Intelligent Traffic Security

  • Yan, Xinyun;He, Zhengran;Huang, Youxiang;Xu, Xiaohu;Wang, Jie;Zhou, Xiaofeng;Wang, Chishe;Lu, Zhiyi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3904-3922 / 2022
  • As a research hotspot, pedestrian detection has found a wide range of computer vision applications in recent years. However, current pedestrian detection methods suffer from insufficient accuracy and from model sizes too large for large-scale deployment. To address these problems, this paper proposes a lightweight pedestrian intrusion detection and early warning method based on the You Only Look Once (YOLOv5) family, exploiting the YOLOv5s model to achieve accurate and fast pedestrian recognition. The paper also adds a sparsity term to the loss for the batch normalization (BN) layers; after sparsity training, pruning, and fine-tuning, the model becomes small enough to deploy on edge devices with limited computing power. Experiments on a road pedestrian dataset that we collected and processed independently show that the YOLOv5s model outperforms the traditional single shot multibox detector (SSD) and fast region-based convolutional neural network (Fast R-CNN) models in precision and other indicators. After pruning and lightweighting, the model size is greatly reduced without a significant drop in accuracy: the final precision reaches 87%, while the model size shrinks to 7,723 KB.
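
The BN-layer sparsity step mentioned in the abstract is usually implemented in the style of network slimming: an L1 penalty on the BN scale factors (gamma) pushes unimportant channels toward zero so they can be pruned afterwards. The sketch below shows one common way to apply that penalty; the penalty strength is an assumed value, not taken from the paper.

```python
import torch
import torch.nn as nn

L1_LAMBDA = 1e-4  # assumed sparsity penalty strength

def add_bn_l1_subgradient(model: nn.Module, lam: float = L1_LAMBDA) -> None:
    """Add lam * sign(gamma) to the gradient of every BatchNorm2d scale factor."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.weight.grad is not None:
            m.weight.grad.add_(lam * torch.sign(m.weight.detach()))

# Inside an ordinary training loop (forward pass, loss, and optimizer assumed elsewhere):
#   loss.backward()
#   add_bn_l1_subgradient(model)   # sparsify BN scale factors
#   optimizer.step()
```

After such training, channels whose gamma values are near zero can be pruned and the network fine-tuned, which is how the model size drops sharply with little loss of accuracy.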

Corroded and loosened bolt detection of steel bolted joints based on improved you only look once network and line segment detector

  • Youhao Ni;Jianxiao Mao;Hao Wang;Yuguang Fu;Zhuo Xi
    • Smart Structures and Systems / v.32 no.1 / pp.23-35 / 2023
  • Steel bolted joints are an important part of steel structures, and their damage directly affects the bearing capacity and durability of the structure. Existing research mainly focuses on identifying corroded bolts or loosened bolts separately, and there are few studies covering multiple states. A detection framework for corroded and loosened bolts is proposed in this study, and its innovations can be summarized as follows: (i) a Vision Transformer (ViT) replaces the third and fourth C3 modules of the you-only-look-once version 5s (YOLOv5s) algorithm, which increases the attention weights of the feature channels and the feature extraction capability; (ii) three bolt states are considered, namely corroded, missing, and clean; (iii) a line segment detector (LSD) is introduced to calculate the bolt rotation angle, enabling bolt looseness detection. The improved YOLOv5s model was validated on the dataset, and the mean average precision (mAP) increased from 0.902 to 0.952. On a lab-scale joint, the performance of the LSD algorithm and the Hough transform was compared from different perspective angles; the error of the bolt loosening angle from the LSD algorithm stays within 1.09%, lower than the 8.91% of the Hough transform. Furthermore, the proposed framework was applied to full-scale joints of a steel bridge in China. Synthetic images of loosened bolts were successfully identified and the multiple states were well detected. The proposed framework can therefore serve as an alternative for monitoring steel bolted joints for management departments.
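
A rough sketch of the bolt-angle idea follows: detect line segments on a cropped bolt image with OpenCV's LSD, take the dominant segment orientation as the bolt rotation angle, and treat the change in angle between two inspections as loosening. It requires an OpenCV build in which createLineSegmentDetector is implemented (it was absent from some 4.x releases); the file names, the longest-segment heuristic, and the crop are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def bolt_angle_deg(gray_crop: np.ndarray) -> float:
    """Dominant line-segment orientation of a bolt crop, in degrees [0, 180)."""
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray_crop)[0]             # N x 1 x 4 array of (x1, y1, x2, y2), or None
    if lines is None or len(lines) == 0:
        return float("nan")
    segs = lines.reshape(-1, 4)
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    x1, y1, x2, y2 = segs[np.argmax(lengths)]    # longest segment = bolt-head edge (assumption)
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180)

before = bolt_angle_deg(cv2.imread("bolt_t0.png", cv2.IMREAD_GRAYSCALE))
after = bolt_angle_deg(cv2.imread("bolt_t1.png", cv2.IMREAD_GRAYSCALE))
print(f"loosening rotation ~ {abs(after - before):.1f} degrees")
```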

Detection and Recognition of Vehicle License Plates using Deep Learning in Video Surveillance

  • Farooq, Muhammad Umer;Ahmed, Saad;Latif, Mustafa;Jawaid, Danish;Khan, Muhammad Zofeen;Khan, Yahya
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.121-126 / 2022
  • The number of vehicles has increased exponentially over the past 20 years due to technological advancements, and it is becoming almost impossible to manually control and manage the traffic in a city like Karachi. Traffic management is impossible without license plate recognition, so a framework for license plate detection and recognition is proposed to overcome these issues. License plate detection and recognition is performed in two steps: the first step is to accurately detect the license plate in the given image, and the second is to read and recognize each character of that plate. Some of the most common algorithms used in the past are based on colour, texture, edge detection and template matching; nowadays, many researchers propose methods based on deep learning. This research proposes a framework for license plate detection and recognition using a custom YOLOv5 object detector, image segmentation techniques, and Tesseract optical character recognition (OCR). The accuracy of this framework is 0.89.
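
The detect-then-read pipeline described in the abstract can be sketched as follows, assuming a custom-trained YOLOv5 weight file (the name is hypothetical) and the pytesseract wrapper around Tesseract; the preprocessing shown is a simple stand-in for the paper's segmentation step.

```python
import cv2
import pytesseract
import torch

# Load a custom plate detector (hypothetical weights) through the YOLOv5 hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best_plate.pt")

img = cv2.imread("vehicle.jpg")
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
det = model(rgb).pandas().xyxy[0]                     # plate detections
for _, d in det.iterrows():
    crop = img[int(d.ymin):int(d.ymax), int(d.xmin):int(d.xmax)]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, binarised = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binarised, config="--psm 7")  # single text line
    print("plate:", text.strip())
```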

Application of Deep Learning-based Object Detection and Distance Estimation Algorithms for Driving to Urban Area (도심로 주행을 위한 딥러닝 기반 객체 검출 및 거리 추정 알고리즘 적용)

  • Seo, Juyeong;Park, Manbok
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.3 / pp.83-95 / 2022
  • This paper proposes an object detection and distance estimation system for application to autonomous vehicles. Object detection is performed by a network that adjusts the split grid to the input image ratio, using the characteristics of the widely used deep learning model YOLOv4, and is trained on a custom dataset. The distance to each detected object is estimated using the bounding box and a homography. Experimental results show that the proposed method improves overall detection performance and runs at close to real-time speed. Compared with the existing YOLOv4, the total mAP of the proposed method increased by 4.03%. Recognition accuracy improved for objects that frequently appear in urban driving, such as pedestrians, vehicles, construction sites, and PE drums. The processing speed is approximately 55 FPS, and the average distance estimation error is 5.25 m in the X coordinate and 0.97 m in the Y coordinate.
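
The homography-based distance step can be illustrated with a small sketch: calibrate a pixel-to-ground-plane mapping from a few reference points, then project the bottom centre of each detected bounding box. The reference coordinates below are made-up values, not the paper's calibration.

```python
import cv2
import numpy as np

# Four image points with known road-plane positions (illustrative values only).
img_pts = np.float32([[420, 710], [860, 705], [600, 430], [680, 428]])        # pixels
world_pts = np.float32([[-1.8, 5.0], [1.8, 5.0], [-1.8, 30.0], [1.8, 30.0]])  # metres (lateral, forward)
H, _ = cv2.findHomography(img_pts, world_pts)

def ground_position(bbox_xyxy):
    """Project the bottom-centre of a bounding box onto the road plane."""
    x1, y1, x2, y2 = bbox_xyxy
    foot = np.float32([[[(x1 + x2) / 2.0, y2]]])        # assumed contact point with the road
    wx, wy = cv2.perspectiveTransform(foot, H)[0, 0]
    return wx, wy

print(ground_position((500, 400, 580, 520)))            # (lateral metres, forward metres)
```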

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning;Park, Jin-Ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.24 no.5 / pp.659-666 / 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person environment, complex scenes pose a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, the human detectors in top-down frameworks mostly use two-stage object detection models, which run slowly; the single-stage YOLOv3 detector is used instead, effectively improving running speed and model generalization, and depthwise separable convolution further improves detection speed and the model's ability to extract candidate regions. Second, in the pose estimation model, a feature pyramid network combined with contextual semantic information is used, and the OHEM algorithm is applied to difficult keypoint detection cases, improving the accuracy of multi-person pose estimation. Finally, the Euclidean distance between keypoints is used to measure the spatial similarity of poses within a frame and to eliminate redundant poses.
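
The final step, eliminating redundant poses by Euclidean keypoint distance, can be sketched as a simple greedy suppression; the keypoint layout, scores, and threshold below are assumed values for illustration, not the paper's configuration.

```python
import numpy as np

def pose_distance(kpts_a, kpts_b):
    """Mean Euclidean distance between corresponding keypoints of two poses (K x 2 arrays)."""
    return float(np.linalg.norm(kpts_a - kpts_b, axis=1).mean())

def suppress_redundant(poses, scores, thresh=15.0):
    """Greedy suppression: keep higher-scoring poses, drop ones too close to a kept pose."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(pose_distance(poses[i], poses[j]) > thresh for j in kept):
            kept.append(i)
    return [poses[i] for i in kept]

rng = np.random.default_rng(1)
p1 = rng.uniform(0, 200, size=(17, 2))        # 17 COCO-style keypoints
p2 = p1 + rng.normal(0, 2, size=(17, 2))      # near-duplicate of p1
p3 = rng.uniform(0, 200, size=(17, 2))        # a distinct person
print(len(suppress_redundant([p1, p2, p3], [0.9, 0.8, 0.7])))   # -> 2 poses kept
```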