• Title/Summary/Keyword: YOLOv5 Model

Search Results: 95

Development of an abnormal road object recognition model based on deep learning (딥러닝 기반 불량노면 객체 인식 모델 개발)

  • Choi, Mi-Hyeong;Woo, Je-Seung;Hong, Sun-Gi;Park, Jun-Mo
    • Journal of the Institute of Convergence Signal Processing / v.22 no.4 / pp.149-155 / 2021
  • In this study, we develop a deep-learning model that automatically detects defective road surfaces restricting the movement of mobility-impaired people who use electric mobility devices. For this purpose, road surface information was collected from pedestrian and driving routes where electric mobility aids are expected to travel, in five areas within the city of Busan. Images were collected by dividing the road surface and its surroundings into their constituent objects. From the collected data, recognition items such as the degree of breakage of sidewalk blocks were defined and classified according to how severely they impede the movement of the transportation handicapped, and a deep-learning model for road surface object recognition was implemented. In the final stage of the study, the image data collected and separated in units of objects during actual driving were processed, refined, and annotated, and the model was trained and validated to verify its performance in automatically detecting defective road surface objects.

Estimation of fruit number of apple tree based on YOLOv5 and regression model (YOLOv5 및 다항 회귀 모델을 활용한 사과나무의 착과량 예측 방법)

  • Hee-Jin Gwak;Yunju Jeong;Ik-Jo Chun;Cheol-Hee Lee
    • Journal of IKEEE / v.28 no.2 / pp.150-157 / 2024
  • In this paper, we propose a novel algorithm for predicting the number of apples on an apple tree using a deep learning-based object detection model and a polynomial regression model. Measuring the number of apples on an apple tree can be used to predict apple yield and to assess losses for determining agricultural disaster insurance payouts. To measure apple fruit load, we photographed the front and back sides of apple trees. We manually labeled the apples in the captured images to construct a dataset, which was then used to train a one-stage object detection CNN model. However, when apples on an apple tree are obscured by leaves, branches, or other parts of the tree, they may not be captured in images. Consequently, it becomes difficult for image recognition-based deep learning models to detect or infer the presence of these apples. To address this issue, we propose a two-stage inference process. In the first stage, we utilize an image-based deep learning model to count the number of apples in photos taken from both sides of the apple tree. In the second stage, we conduct a polynomial regression analysis, using the total apple count from the deep learning model as the independent variable, and the actual number of apples manually counted during an on-site visit to the orchard as the dependent variable. The performance evaluation of the two-stage inference system proposed in this paper showed an average accuracy of 90.98% in counting the number of apples on each apple tree. Therefore, the proposed method can significantly reduce the time and cost associated with manually counting apples. Furthermore, this approach has the potential to be widely adopted as a new foundational technology for fruit load estimation in related fields using deep learning.
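The second-stage inference described in this abstract amounts to fitting a regression from detector counts to ground-truth counts. A minimal sketch, with invented counts and an assumed quadratic degree (the abstract only specifies a polynomial model):

```python
import numpy as np

# Hypothetical data: apples counted by the detector in front+back photos of
# each tree, and ground-truth counts from a manual on-site visit.
detected = np.array([42, 55, 61, 70, 83, 95], dtype=float)
actual = np.array([58, 74, 80, 93, 110, 128], dtype=float)

# Stage 2: fit a polynomial regression mapping detected -> actual counts,
# compensating for apples hidden by leaves, branches, or the trunk.
coeffs = np.polyfit(detected, actual, deg=2)
predict = np.poly1d(coeffs)

# Estimate the true fruit load for a new tree with 65 detected apples.
estimate = float(predict(65.0))
print(round(estimate, 1))
```

Since the ground truth always exceeds the detector count here, the fitted curve corrects the systematic undercount caused by occlusion.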

Deep Learning based violent protest detection system

  • Lee, Yeon-su;Kim, Hyun-chul
    • Journal of the Korea Society of Computer and Information / v.24 no.3 / pp.87-93 / 2019
  • In this paper, we propose a real-time drone-based violent protest detection system. The proposed system uses drones to detect scenes of violent protest in real time. An important problem with existing evidence collection is that victims and violent actions must be searched for manually in recorded videos. First, we address the limitations of existing evidence-collecting devices by using a drone to collect evidence live and upload it to AWS (Amazon Web Services)[1]. Second, we built a deep-learning-based violence detection model from the videos using the YOLOv3 feature pyramid network for human activity recognition, in order to detect three types of violent action. The model classifies people possessing a gun, swinging a pipe, and engaging in violent activity with accuracies of 92%, 91%, and 80.5%, respectively. This system is expected to significantly reduce the time and human resources required for evidence collection.

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance on autonomous vehicle platforms by fusing objects detected individually by a camera and a LiDAR through a late-fusion approach. For the camera, the YOLOv3 model was employed as a one-stage detector, and the distance to each detected object was estimated using a perspective projection matrix. For the LiDAR, object detection was based on K-means clustering. Camera-LiDAR calibration was carried out with PnP-RANSAC in order to calculate the rotation and translation matrices between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane and the distance and angle differences in world coordinates were estimated, and these three attributes (IoU, distance, and angle) were fused using logistic regression. Performance evaluation in the sensor fusion scenario showed a 5% improvement in object detection performance compared to using a single sensor.
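The late-fusion step can be illustrated as follows. The boxes, distance and angle differences, and logistic weights below are invented for the sketch; in the paper the weights are learned by logistic regression rather than hand-set:

```python
import math

def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2) on the image plane."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_score(iou_val, dist_diff, angle_diff, w=(4.0, -0.5, -0.1), b=-1.0):
    """Logistic-regression-style fusion of the three association cues.
    The weights w and bias b are illustrative, not the paper's values."""
    z = w[0] * iou_val + w[1] * dist_diff + w[2] * angle_diff + b
    return 1.0 / (1.0 + math.exp(-z))

cam_box = (100, 120, 180, 200)    # YOLOv3 detection on the image plane
lidar_box = (110, 125, 185, 205)  # LiDAR cluster projected onto the image
score = fuse_score(iou(cam_box, lidar_box), dist_diff=0.8, angle_diff=2.0)
print(score > 0.5)  # treat the pair as the same object if the score exceeds 0.5
```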

A Study on Image Labeling Technique for Deep-Learning-Based Multinational Tanks Detection Model

  • Kim, Taehoon;Lim, Dongkyun
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.58-63 / 2022
  • Recently, improvements in computational processing power driven by the rapid development of computing technology have greatly advanced the field of artificial intelligence, and research to apply it in various domains is active. In the national defense field in particular, attention is paid to intelligent recognition among machine learning techniques, and efforts are being made to develop object identification and monitoring systems using artificial intelligence. To this end, various image processing technologies and object identification algorithms are being applied to create models that can identify friendly and enemy weapon systems and personnel in real time. In this paper, we conducted image processing and object identification focused on tanks among various weapon systems. We first processed tank images using a convolutional neural network, a deep learning technique, examined the resulting feature maps, and derived the characteristics of tanks crucial for learning. Then, using the YOLOv5 network, a CNN-based object detection network, we created one model trained by labeling the entire tank and another trained by labeling only the turret, and compared the results. The model and labeling technique proposed in this paper can identify tank types more accurately and contribute to the intelligent recognition systems to be developed in the future.

Research on the Lesion Classification by Radiomics in Laryngoscopy Image (후두내시경 영상에서의 라디오믹스에 의한 병변 분류 연구)

  • Park, Jun Ha;Kim, Young Jae;Woo, Joo Hyun;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.43 no.5 / pp.353-360 / 2022
  • Laryngeal disease harms quality of life, and laryngoscopy is critical in identifying causative lesions. This study extracts quantitative radiomics features from lesions in laryngoscopy images, then fits and validates classifiers to find meaningful features. Regions of interest for lesions were localized, without classification, by a YOLOv5 model, and radiomics features were extracted from them. Features were selected through combinations of three feature selectors and three estimator models. Using the selected features, two classification models, Random Forest and Gradient Boosting, were trained and verified, and meaningful features were identified. The combination of SFS, LASSO, and RF showed the highest performance, with an accuracy of 0.90 and an AUROC of 0.96. Models using features selected by SFM or RIDGE performed worse than the others. Classification of larynx lesions through radiomics appears effective, but various feature selection methods should be used and data loss, such as the loss of color information, should be minimized.
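Of the selectors mentioned, sequential forward selection (SFS) is a simple greedy loop. A sketch with a toy scorer; the feature names and "usefulness" values are illustrative, not the study's actual radiomics features, and a real scorer would be cross-validated classifier performance on the candidate subset:

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: repeatedly add the feature that most improves score_fn."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy scorer: each candidate feature has a fixed standalone "usefulness".
usefulness = {"glcm_contrast": 0.9, "firstorder_mean": 0.4,
              "shape_sphericity": 0.7, "glrlm_run_length": 0.2}
score = lambda subset: sum(usefulness[f] for f in subset)
print(sequential_forward_selection(usefulness, score, k=2))
# → ['glcm_contrast', 'shape_sphericity']
```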

Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.1-20 / 2022
  • As unmanned aerial vehicle (UAV) technology has grown in popularity over the years, it has been introduced for air quality monitoring. UAVs can be used to estimate sidewalk emission concentrations by calculating road traffic emission factors for different vehicle types. These calculations require simulating the spread of pollutants from one or more sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emissions Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the sidewalk concentration of emissions from different types of vehicles. The line-source model considers vehicle parameters, wind speed and direction, and pollutant concentration, measured using a UAV equipped with a monocular camera and sampled over an hourly interval. A YOLOv5 deep learning model detects vehicles; Deep SORT (Simple Online and Realtime Tracking) tracks them; a homography transformation matrix localizes each vehicle to calculate its speed and acceleration; and finally the Gaussian plume dispersion model estimates CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values give a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.
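The standard ground-reflected Gaussian plume equation underlying such a model can be sketched as below. The linear growth of the dispersion coefficients and all numeric inputs are assumptions for illustration; operational models derive sigma_y and sigma_z from atmospheric stability classes:

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (g/m^3) at receptor
    (x, y, z) downwind of a source. q: emission rate (g/s), u: wind speed
    (m/s), h: effective source height (m). Here sigma_y = a*x and
    sigma_z = b*x grow linearly with downwind distance x."""
    sig_y, sig_z = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sig_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sig_z**2))
                + math.exp(-(z + h)**2 / (2 * sig_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sig_y * sig_z) * lateral * vertical

# CO concentration 20 m downwind of a traffic lane, at a 1.5 m-high
# sidewalk receptor offset 2 m from the plume centerline.
c = gaussian_plume(q=0.5, u=3.0, x=20.0, y=2.0, z=1.5, h=0.5)
print(c > 0)
```

As expected for a plume, the estimated concentration falls off with downwind distance once dilution dominates.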

Preliminary Study for Vision A.I-based Automated Quality Supervision Technique of Exterior Insulation and Finishing System - Focusing on Form Bonding Method - (인공지능 영상인식 기반 외단열 공법 품질감리 자동화 기술 기초연구 - 단열재 습식 부착방법을 중심으로 -)

  • Yoon, Sebeen;Lee, Byoungmin;Lee, Changsu;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.04a / pp.133-134 / 2022
  • This study proposes a vision artificial intelligence-based automated supervision technology for exterior insulation and finishing systems, and presents the basic research conducted for it. The proposed technology consists of an object detection model (YOLOv5) and a component that derives the necessary information from the detection results and then determines whether the adhesion regulations for external insulation are complied with. In a test, the judgment accuracy of the proposed model was about 70%. The results of this study are expected to contribute to securing external insulation quality and, further, to realizing energy-saving, eco-friendly buildings. As further research, it is necessary to improve the accuracy of the object detection model by enlarging the training dataset, and to check additional related regulations such as the adhesive area ratio.
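The rule-checking stage after detection could, for instance, compare the detected adhesive area against the board area. A sketch under assumed inputs; the 40% minimum ratio and all boxes below are hypothetical, not the actual regulation values:

```python
def adhesion_compliant(board_box, dab_boxes, min_ratio=0.4):
    """Check whether the adhesive dabs detected on an insulation board
    cover at least min_ratio of the board area; boxes are (x1, y1, x2, y2)."""
    area = lambda b: max(0, b[2] - b[0]) * max(0, b[3] - b[1])
    ratio = sum(area(d) for d in dab_boxes) / area(board_box)
    return ratio >= min_ratio

board = (0, 0, 100, 60)                      # detected insulation board
dabs = [(5, 5, 45, 30), (55, 5, 95, 30),     # detected adhesive dabs
        (5, 35, 45, 55), (55, 35, 95, 55)]
print(adhesion_compliant(board, dabs))  # → True
```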


A Study on the Artificial Intelligence-Based Soybean Growth Analysis Method (인공지능 기반 콩 생장분석 방법 연구)

  • Moon-Seok Jeon;Yeongtae Kim;Yuseok Jeong;Hyojun Bae;Chaewon Lee;Song Lim Kim;Inchan Choi
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.1-14 / 2023
  • Soybeans are one of the world's top five staple crops and a major source of plant-based protein. Because of their susceptibility to climate change, which can significantly impact grain production, the National Agricultural Science Institute is conducting research on crop phenotypes through growth analysis of various soybean varieties. While the process of capturing growth-progression photos of soybeans is automated, the verification, recording, and analysis of growth stages are currently done manually. In this paper, we designed and trained a YOLOv5s model to detect soybean leaf objects from image data of soybean plants, and a convolutional neural network (CNN) model to judge the unfolding status of the detected leaves. We combined these two models and implemented an algorithm that distinguishes layers based on the coordinates of the detected leaves. As a result, we developed a program that takes time-series data of soybeans as input and performs growth analysis. The program can accurately determine soybean growth stages up to the second or third compound leaf.
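Distinguishing layers from detected-leaf coordinates could take the following shape: sort box centres by height and start a new layer whenever the vertical gap is large. The gap threshold and boxes are invented for the sketch, not taken from the paper:

```python
def assign_layers(leaf_boxes, gap=40):
    """Group detected leaf boxes (x1, y1, x2, y2) into vertical layers:
    sort by box-centre height and open a new layer whenever the gap to the
    previous centre exceeds `gap` pixels."""
    centres = sorted((y1 + y2) / 2 for x1, y1, x2, y2 in leaf_boxes)
    layers, current = [], [centres[0]]
    for c in centres[1:]:
        if c - current[-1] > gap:
            layers.append(current)
            current = []
        current.append(c)
    layers.append(current)
    return layers

# Three leaves near the top of the plant and two much lower down.
boxes = [(10, 20, 50, 60), (70, 25, 110, 65), (130, 30, 170, 70),
         (40, 180, 80, 220), (100, 190, 140, 230)]
print(len(assign_layers(boxes)))  # → 2 distinct layers
```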

A Study on Image Preprocessing Methods for Automatic Detection of Ship Corrosion Based on Deep Learning (딥러닝 기반 선박 부식 자동 검출을 위한 이미지 전처리 방안 연구)

  • Yun, Gwang-ho;Oh, Sang-jin;Shin, Sung-chul
    • Journal of the Korean Society of Industry Convergence / v.25 no.4_2 / pp.573-586 / 2022
  • Corrosion can cause dangerous and expensive damage to, and failures of, ship hulls and equipment. Vessels must therefore be maintained through periodic corrosion inspections. During visual inspection, many corrosion locations are inaccessible for various reasons, especially safety. Another issue with visual inspection is that it involves inspectors' subjective judgments, and many studies have attempted to automate it. In this study, we propose image preprocessing methods based on image patch segmentation and thresholding. YOLOv5 was used as the object detection model after the image preprocessing. The evaluation showed that corrosion detection performance with the proposed method improved in terms of mean average precision.
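One plausible form of the patch segmentation and thresholding preprocessing is sketched below; the patch size, intensity threshold, and keep ratio are illustrative values, not the paper's:

```python
import numpy as np

def preprocess_patches(image, patch=64, threshold=128):
    """Split a grayscale image into non-overlapping patches and binarize
    each one, keeping only patches whose dark-pixel ratio suggests a
    corrosion candidate worth passing to the detector."""
    h, w = image.shape
    kept = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            mask = p < threshold       # dark regions as corrosion candidates
            if mask.mean() > 0.05:     # keep patches with enough candidates
                kept.append(((i, j), mask))
    return kept

# A bright 128x128 plate with one dark corroded region in the top-left.
img = np.full((128, 128), 200, dtype=np.uint8)
img[:40, :40] = 60
print(len(preprocess_patches(img)))  # → 1 patch kept out of 4
```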