• Title/Summary/Keyword: YOLOv2

Search Results: 75

Detection of Active Fire Objects from Drone Images Using YOLOv7x Model (드론영상과 YOLOv7x 모델을 이용한 활성산불 객체탐지)

  • Park, Ganghyun;Kang, Jonggu;Choi, Soyeon;Youn, Youjeong;Kim, Geunah;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1737-1741 / 2022
  • Active fire monitoring using high-resolution drone images and deep learning technologies is still in its initial stage and requires various approaches for research and development. This letter examined the detection of active fire objects using You Only Look Once version 7 (YOLOv7), a state-of-the-art (SOTA) model that has rarely been used for fire detection with drone images. Our experiments showed better performance than previous works in terms of multiple quantitative measures. The proposed method can be applied to continuous monitoring of wide areas as newly developed technologies are integrated.

High-Resolution Mapping Techniques for Coastal Debris Using YOLOv8 and Unmanned Aerial Vehicle (YOLOv8과 무인항공기를 활용한 고해상도 해안쓰레기 매핑)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.151-166 / 2024
  • Coastal debris presents a significant environmental threat globally. This research sought to improve monitoring methods for coastal debris by employing deep learning and remote sensing technologies. To achieve this, an object detection approach utilizing the You Only Look Once (YOLO)v8 model was implemented to develop a comprehensive image dataset for 11 primary types of coastal debris in South Korea, proposing a protocol for the real-time detection and analysis of debris. Drone imagery was collected over Sinja Island, situated at the estuary of the Nakdong River, and analyzed with a custom YOLOv8-based analysis program to identify type-specific hotspots of coastal debris. These mapping and analysis methodologies are anticipated to be used effectively in managing coastal debris.
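
A minimal sketch of the kind of custom-dataset YOLOv8 training and inference described above, using the Ultralytics API; the dataset file `debris.yaml`, image folder, and hyperparameters are illustrative assumptions, not the authors' configuration:

```python
# Sketch: fine-tune a YOLOv8 detector on a custom coastal-debris dataset and
# run it on drone imagery. "debris.yaml" (11 debris classes) is hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # start from COCO-pretrained weights
model.train(data="debris.yaml", epochs=100, imgsz=640)

results = model.predict(source="drone_images/", conf=0.25, save=True)
for r in results:
    # Each result exposes class ids, confidences, and xyxy boxes per detection,
    # which can be aggregated into per-class hotspot maps.
    print(r.boxes.cls, r.boxes.conf, r.boxes.xyxy)
```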

Computer Vision-Based Car Accident Detection using YOLOv8 (YOLO v8을 활용한 컴퓨터 비전 기반 교통사고 탐지)

  • Marwa Chacha Andrea;Choong Kwon Lee;Yang Sok Kim;Mi Jin Noh;Sang Il Moon;Jae Ho Shin
    • Journal of Korea Society of Industrial Information Systems / v.29 no.1 / pp.91-105 / 2024
  • Car accidents occur as a result of collisions between vehicles, leading to vehicle damage as well as personal and material losses. This study developed a vehicle accident detection model based on 2,550 image frames extracted from CCTV-captured car accident videos uploaded to YouTube. To preprocess the data, bounding boxes were annotated using roboflow.com, and the dataset was augmented by flipping images at various angles. The You Only Look Once version 8 (YOLOv8) model was employed for training, achieving an average accuracy of 0.954 in accident detection. The proposed model holds practical significance by facilitating prompt alarm transmission in emergency situations. Furthermore, it contributes to research on developing an effective and efficient mechanism for vehicle accident detection that can be utilized on devices such as smartphones. Future research aims to refine the detection capabilities by integrating additional data, including sound.
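
The augmentation step mentioned above (flipping annotated frames) can be sketched as follows with OpenCV; the file paths and YOLO-format label handling are assumptions for illustration, not the authors' pipeline:

```python
# Sketch: horizontal-flip augmentation for an image with YOLO-format labels
# ("class x_center y_center width height", all normalized). Paths are hypothetical.
import cv2

def hflip_example(image_path: str, label_path: str, out_prefix: str) -> None:
    img = cv2.imread(image_path)
    cv2.imwrite(f"{out_prefix}.jpg", cv2.flip(img, 1))  # 1 = horizontal flip

    new_lines = []
    with open(label_path) as f:
        for line in f:
            cls, x, y, w, h = line.split()
            # A horizontal flip only mirrors the normalized x-center coordinate.
            new_lines.append(f"{cls} {1.0 - float(x):.6f} {y} {w} {h}")
    with open(f"{out_prefix}.txt", "w") as f:
        f.write("\n".join(new_lines))

hflip_example("frames/accident_001.jpg", "frames/accident_001.txt",
              "augmented/accident_001_hflip")
```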

YOLOv5 based Anomaly Detection for Subway Safety Management Using Dilated Convolution

  • Nusrat Jahan Tahira;Ju-Ryong Park;Seung-Jin Lim;Jang-Sik Park
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_1 / pp.217-223 / 2023
  • With the rapid advancement of technologies, the need for research fields where these technologies can be applied is also increasing. One of the most researched topics in computer vision is object detection, which has been widely implemented in various fields including healthcare, video surveillance, and education. The main goal of object detection is to identify and categorize all the objects in a target environment. Specifically, object detection methods build on a variety of significant techniques, such as image processing and pattern recognition. Anomaly detection is a part of object detection; anomalies can be found in various scenarios, for example in crowded places such as subway stations. An abnormal event can be regarded as a deviation from the conventional scene. Since abnormal events do not occur frequently, the distribution of normal and abnormal events is heavily imbalanced. In terms of public safety, abnormal events should be avoided, and therefore immediate action needs to be taken. When abnormal events occur in certain places, real-time detection is required to protect the safety of the people. To address these problems, we propose a modified YOLOv5 object detection algorithm with dilated convolutional layers, which achieved 97% mAP@0.5, outperforming five other YOLOv5 variants. In addition, we created a simple mobile application that makes the abnormal-event detection available on mobile phones.
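
The core modification described above is the use of dilated convolutions to enlarge the receptive field. A minimal PyTorch sketch of such a block follows, with illustrative channel sizes and no claim about where the paper inserts it in the YOLOv5 architecture:

```python
# Sketch: a dilated convolution block of the kind that can replace a standard
# 3x3 convolution in a YOLOv5 backbone. Placement and sizes are illustrative.
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    def __init__(self, c_in: int, c_out: int, dilation: int = 2):
        super().__init__()
        # With kernel 3 and padding == dilation, the spatial size is preserved
        # while the receptive field grows with the dilation rate.
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # YOLOv5 uses SiLU activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 64, 80, 80)
print(DilatedConvBlock(64, 128)(x).shape)  # -> torch.Size([1, 128, 80, 80])
```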

A Study on Biomass Estimation Technique of Invertebrate Grazers Using Multi-object Tracking Model Based on Deep Learning (딥러닝 기반 다중 객체 추적 모델을 활용한 조식성 무척추동물 현존량 추정 기법 연구)

  • Bak, Suho;Kim, Heung-Min;Lee, Heeone;Han, Jeong-Ik;Kim, Tak-Young;Lim, Jae-Young;Jang, Seon Woong
    • Korean Journal of Remote Sensing / v.38 no.3 / pp.237-250 / 2022
  • In this study, we propose a method to estimate the biomass of invertebrate grazers from videos taken by underwater drones, using a multi-object tracking model based on deep learning. To detect invertebrate grazers by class, we used YOLOv5 (You Only Look Once version 5), and for biomass estimation we used DeepSORT (Deep Simple Online and Realtime Tracking). The performance of each model was evaluated on a workstation with a GPU accelerator. YOLOv5 achieved a mean Average Precision (mAP) of 0.9 or higher, and we confirmed a throughput of about 59 fps at 4K resolution when using the YOLOv5s model with the DeepSORT algorithm. When the proposed method was applied in the field, biomass tended to be overestimated by about 28%, but the level of error was low compared to biomass estimation using an object detection model only. A follow-up study is needed to improve accuracy for cases where frame images go continuously out of focus or the underwater drone turns rapidly. Should these issues be resolved, the method can be utilized to produce decision-support data for invertebrate grazer control and monitoring in the future.
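
A minimal sketch of the detector-plus-tracker pipeline described above, counting unique track IDs as a proxy for individuals; it uses the public torch.hub YOLOv5 weights and the `deep-sort-realtime` package as stand-ins for the authors' trained models, and the video path is hypothetical:

```python
# Sketch: YOLOv5 detections fed into a DeepSORT tracker; unique confirmed
# track IDs approximate the number of individuals seen in the video.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stand-in for custom weights
tracker = DeepSort(max_age=30)
seen_ids = set()

cap = cv2.VideoCapture("underwater_drone.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    det = model(frame).xyxy[0]  # columns: x1, y1, x2, y2, conf, class
    detections = [([float(x1), float(y1), float(x2 - x1), float(y2 - y1)],
                   float(conf), int(cls))
                  for x1, y1, x2, y2, conf, cls in det.tolist()]
    for track in tracker.update_tracks(detections, frame=frame):
        if track.is_confirmed():
            seen_ids.add(track.track_id)
cap.release()

print(f"unique individuals tracked: {len(seen_ids)}")
```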

A Research on Cylindrical Pill Bottle Recognition with YOLOv8 and ORB

  • Dae-Hyun Kim;Hyo Hyun Choi
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.13-20 / 2024
  • This paper introduces a method for generating model images that can identify specific cylindrical medicine containers in videos and investigates data collection techniques. Previous research separated object detection from specific object recognition, making it challenging to apply automated image stitching; a significant issue was that the coordinate-based object detection method included extraneous information from outside the object area during the image stitching process. To overcome these challenges, this study applies the newly released YOLOv8 (You Only Look Once) segmentation technique to videos of vertically rotating pill bottles and employs the ORB (Oriented FAST and Rotated BRIEF) feature matching algorithm to automate model image generation. The research findings demonstrate that applying segmentation techniques improves recognition accuracy when identifying specific pill bottles. The model images created with the feature matching algorithm could accurately identify the specific pill bottles.
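
The ORB matching step can be sketched with OpenCV as below; the frame paths are hypothetical, and in the paper the inputs would be the bottle regions already isolated by YOLOv8 segmentation masks:

```python
# Sketch: ORB keypoint matching between two frames of a rotating pill bottle.
import cv2

img1 = cv2.imread("bottle_frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottle_frame_010.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-checking suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance {matches[0].distance:.1f}")
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("matches.png", vis)
```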

Ship Detection from SAR Images Using YOLO: Model Constructions and Accuracy Characteristics According to Polarization (YOLO를 이용한 SAR 영상의 선박 객체 탐지: 편파별 모델 구성과 정확도 특성 분석)

  • Yungyo Im;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Youngmin Seo;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.997-1008 / 2023
  • Ship detection at sea can be performed in various ways. In particular, satellites can provide wide-area surveillance, and Synthetic Aperture Radar (SAR) imagery can be utilized day and night and in all weather conditions. To propose an efficient ship detection method from SAR images, this study applied the You Only Look Once version 5 (YOLOv5) model to Sentinel-1 images and analyzed the difference between individual and integrated models, as well as the accuracy characteristics by polarization. YOLOv5s, which is lighter and has fewer parameters, and YOLOv5x, which has more parameters but higher accuracy, were used for the performance tests (1) with each polarization separated into HH, HV, VH, and VV, and (2) with images from all polarizations combined. All four experiments showed very similar and high accuracy of 0.977 ≤ AP@0.5 ≤ 0.998. This result suggests that a polarization-integrated model using lightweight YOLO models can be the most effective in terms of real-time system deployment. A total of 19,582 images were used in this experiment. However, if other SAR images, such as Capella and ICEYE, are included in addition to Sentinel-1 images, a more flexible and accurate model for ship detection can be built.
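
One way to organize the per-polarization and integrated experiments is a loop over dataset configurations and model sizes, sketched below with the standard YOLOv5 training script; the YAML names and hyperparameters are assumptions, not the authors' settings:

```python
# Sketch: four single-polarization runs (HH, HV, VH, VV) plus one integrated
# run, each trained with YOLOv5s and YOLOv5x. Dataset YAMLs are hypothetical.
import subprocess

datasets = ["hh.yaml", "hv.yaml", "vh.yaml", "vv.yaml", "all_polarizations.yaml"]
for data_yaml in datasets:
    for weights in ("yolov5s.pt", "yolov5x.pt"):      # lightweight vs. large model
        run_name = f"{data_yaml.split('.')[0]}_{weights.split('.')[0]}"
        subprocess.run(
            ["python", "train.py", "--img", "640", "--epochs", "100",
             "--data", data_yaml, "--weights", weights, "--name", run_name],
            check=True,
        )
```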

Pine Wilt Disease Detection Based on Deep Learning Using an Unmanned Aerial Vehicle (무인항공기를 이용한 딥러닝 기반의 소나무재선충병 감염목 탐지)

  • Lim, Eon Taek;Do, Myung Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.3 / pp.317-325 / 2021
  • Pine wilt disease first appeared in Busan in 1998; it is a serious disease that causes enormous damage to pine trees. The Korean government enacted a special law on the control of pine wilt disease in 2005, which controls and prohibits the movement of pine trees from affected areas. However, existing forecasting and control methods face physical and economic challenges in reducing pine wilt disease, which occurs simultaneously and spreads rapidly across mountainous terrain. In this study, the authors present a deep learning-based object recognition and prediction method that uses imagery acquired by an unmanned aerial vehicle (UAV) to effectively detect trees suspected of being infected with pine wilt disease. To observe pine wilt disease, an orthomosaic was produced from image data acquired through aerial shots. As a result, 198 damaged trees were identified, whereas 84 damaged trees were identified in field surveys that excluded inaccessible areas with steep slopes and cliffs. Analyses using image segmentation (SegNet) and image detection (YOLOv2) obtained performance values of 0.57 and 0.77, respectively.

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry / v.52 no.2 / pp.219-224 / 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. This network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even using a small amount of data.
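
The reported per-system sensitivity, specificity, and accuracy can be derived from a multi-class confusion matrix as sketched below; the counts are illustrative placeholders, not the study's results:

```python
# Sketch: per-class sensitivity, specificity, and accuracy from a 3-class
# confusion matrix (rows = true system, columns = predicted system).
import numpy as np

cm = np.array([[22, 1, 1],      # placeholder counts, not the paper's data
               [2, 23, 0],
               [0, 0, 22]])
total = cm.sum()

for i, name in enumerate(["Superline", "TS III", "Bone Level Implant"]):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / total
    print(f"{name}: sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f}")
```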

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won;Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on products during the production process contains important information related to the product, it is important to check that the label information is correct. The proposed system uses a lightweight deep learning model that can be deployed on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The proposed two-stage object detection approach consists of two object detection networks: a Label Area Detection Network and a Character Detection Network. The Label Area Detection Network finds the label area in the product image, and the Character Detection Network detects the words within the label area. Using this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. This model showed up to 2 times faster processing and a considerable improvement in accuracy compared to other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny. Also, since the amount of computation is low, it can easily be applied in edge computing environments.
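
A minimal sketch of the two-stage idea (a first network localizes the label area, then a second network detects the words inside the crop); the weight files are hypothetical stand-ins expressed with the Ultralytics API, whereas the paper's actual networks are custom lightweight SF-YOLO models:

```python
# Sketch: two-stage label inspection. "label_area.pt" and "char_det.pt" are
# hypothetical weights standing in for the paper's SF-YOLO networks.
import cv2
from ultralytics import YOLO

label_area_net = YOLO("label_area.pt")   # stage 1: locate the label on the product
char_net = YOLO("char_det.pt")           # stage 2: detect words inside the label

img = cv2.imread("product.jpg")
for box in label_area_net.predict(img, conf=0.5)[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    label_crop = img[y1:y2, x1:x2]
    # Running the character detector on the crop keeps small text large enough
    # for a lightweight model to detect precisely.
    chars = char_net.predict(label_crop, conf=0.5)[0]
    print(f"label region ({x1},{y1})-({x2},{y2}): {len(chars.boxes)} words detected")
```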