• Title/Summary/Keyword: You only look once


Wildfire Detection Method based on an Artificial Intelligence using Image and Text Information (이미지와 텍스트 정보를 활용한 인공지능 기반 산불 탐지 방법)

  • Jae-Hyun Jun;Chang-Seob Yun;Yun-Ha Park
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.5
    • /
    • pp.19-24
    • /
    • 2024
  • Global climate change is causing an increase in natural disasters around the world due to long-term temperature increases and changes in rainfall. Among them, forest fires are becoming increasingly large. South Korea experienced an average of 537 forest fires per year over the 10-year period 2013-2022, burning 3,560 hectares of forest annually; that is roughly 1,180 soccer fields (approximately 3 hectares each) of forest burning every year. This paper proposes an artificial intelligence-based wildfire detection method that uses both image and text information. The performance of the proposed method was compared with the YOLOv9-C, RT-DETR-Res50, RT-DETR-L, and YOLO-World-S methods in terms of mAP50, mAP75, and FPS, and the proposed method was confirmed to outperform the others. The proposed method was demonstrated as the forest fire detection model of the early forest fire detection system in Gangwon State, and it is planned to be extended toward fire detection that covers not only forest areas but also urban areas.
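The mAP50 and mAP75 figures used in this comparison count a detection as correct when its intersection-over-union (IoU) with a ground-truth box meets a threshold of 0.5 or 0.75. A minimal sketch of that IoU test (box format and function names are illustrative, not from the paper):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, thresh):
    """A detection counts toward mAP50 (thresh=0.5) or mAP75 (thresh=0.75)."""
    return iou(pred, gt) >= thresh
```

FPS, the third metric, is simply frames processed per second of wall-clock inference time.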

Vehicle Detection in Dense Area Using UAV Aerial Images (무인 항공기를 이용한 밀집영역 자동차 탐지)

  • Seo, Chang-Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.693-698
    • /
    • 2018
  • This paper proposes a vehicle detection method for parking areas using unmanned aerial vehicles (UAVs) and YOLOv2, a recent real-time object detection algorithm known for its speed. The YOLOv2 convolutional network can compute the probability of each class across an entire image in a single pass, and can also predict the locations of bounding boxes. Because the whole detection process runs in a single network, it is very fast, simple, and well optimized for detection. Sliding-window methods and the region-based convolutional neural network family of detectors generate many region proposals and require too much computation per class, so they are at a disadvantage in real-time applications. This research uses the YOLOv2 algorithm to overcome the real-time processing problems of those previous algorithms. Darknet, OpenCV, and the Compute Unified Device Architecture were used as open-source components for object detection, and a deep learning server was used for training and for detecting each car. In the experiments, the algorithm could detect cars in a dense area from UAV images, reduced the overhead of object detection, and could be applied in real time.
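The one-pass grid prediction described above can be illustrated with YOLOv2-style box decoding, where each grid cell's raw outputs are mapped to a box via sigmoid center offsets and anchor-scaled sizes. A sketch under those assumptions (the names and the single-anchor simplification are illustrative):

```python
import math

def decode_cell(tx, ty, tw, th, col, row, grid, anchor_w, anchor_h):
    """Map one grid cell's raw outputs (tx, ty, tw, th) to a normalized box.
    Sigmoid keeps the center inside the cell; exp scales the anchor size.
    Illustrative sketch only; the real network emits these per anchor per cell."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    cx = (col + sig(tx)) / grid          # box center x, normalized to [0, 1]
    cy = (row + sig(ty)) / grid          # box center y, normalized to [0, 1]
    w = anchor_w * math.exp(tw) / grid   # anchor-relative width
    h = anchor_h * math.exp(th) / grid   # anchor-relative height
    return cx, cy, w, h
```

With zero raw outputs, the box sits at the center of its cell with exactly the anchor's size, which is why a one-pass evaluation over all cells covers the whole image.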

A Study on Fire Detection in Ship Engine Rooms Using Convolutional Neural Network (합성곱 신경망을 이용한 선박 기관실에서의 화재 검출에 관한 연구)

  • Park, Kyung-Min;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.25 no.4
    • /
    • pp.476-481
    • /
    • 2019
  • Early detection of fire is an important measure for minimizing the loss of life and property damage. However, fire and smoke need to be detected simultaneously. In this context, numerous studies have been conducted on image-based fire detection. Conventional fire detection methods are compute-intensive and comprise several algorithms for extracting flame and smoke characteristics. Hence, deep learning algorithms and convolutional neural networks can be employed as an alternative for fire detection. In this study, recorded image data of fire in a ship engine room were analyzed. The flame and smoke characteristics were extracted from the outer bounding box, and the YOLO (You Only Look Once) convolutional neural network algorithm was subsequently employed for training and testing. Experimental results were evaluated with respect to three attributes, namely detection rate, error rate, and accuracy. The respective values of detection rate, error rate, and accuracy were found to be 0.994, 0.011, and 0.998 for flame and 0.978, 0.021, and 0.978 for smoke, with a calculation time of 0.009 s.
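Detection rate, error rate, and accuracy can all be derived from confusion counts. The paper does not state its exact formulas, so the following sketch uses common definitions as an assumption:

```python
def detection_metrics(tp, fp, fn, tn):
    """Common definitions of the three reported attributes, assumed here
    since the abstract does not give exact formulas:
    detection rate = recall, error rate = fraction of detections that are wrong."""
    detection_rate = tp / (tp + fn)              # real fires that were found
    error_rate = fp / (tp + fp)                  # detections that were false alarms
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall fraction correct
    return detection_rate, error_rate, accuracy
```

For example, 9 true positives, 1 false positive, 1 miss, and 9 true negatives give a 0.9 detection rate, 0.1 error rate, and 0.9 accuracy.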

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry
    • /
    • v.52 no.2
    • /
    • pp.219-224
    • /
    • 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. The network was trained on the training dataset for 100, 200, and 300 epochs. Using the test dataset, its performance was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3 even with a small amount of data.
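The 8-to-2 split of the 355-image dataset can be sketched as follows (the seed and helper name are illustrative assumptions; the study's actual split procedure is not described):

```python
import random

def split_dataset(items, train_ratio=0.8, seed=42):
    """Shuffle and split at a ratio of 8 to 2, as in the study.
    The fixed seed is an assumption, used here for reproducibility."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 355 fixtures split 8:2 -> 284 for training, 71 for testing
train, test = split_dataset(list(range(355)))
```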

A Study on the Detection of Solar Power Plant for High-Resolution Aerial Imagery Using YOLO v2 (YOLO v2를 이용한 고해상도 항공영상에서의 태양광발전소 탐지 방법 연구)

  • Kim, Hayoung;Na, Ra;Joo, Donghyuk;Choi, Gyuhoon;Oh, Yun-Gyeong
    • Journal of Korean Society of Rural Planning
    • /
    • v.28 no.2
    • /
    • pp.87-96
    • /
    • 2022
  • As part of strengthening energy security and responding to climate change, the government has promoted various renewable energy measures to increase the development of renewable energy facilities. As a result, small-scale solar installations in rural areas have increased rapidly, and complaints from local residents are increasing as well. In this study, deep learning is therefore applied to high-resolution aerial images from the internet to detect solar power plants installed in rural areas and determine whether a solar power plant is present. Specifically, we examined a solar facility detector produced by training the YOLO (You Only Look Once) v2 object detector and evaluated its usability. With about 800 training images, the detector achieved a high object detection rate of 93%. By constructing such an object detection model, it is expected to be useful for land-use monitoring in rural areas, and it can serve as a plan for building spatial data for rural areas using technology for detecting small-scale agricultural facilities.

Abnormal behaviour in rock bream (Oplegnathus fasciatus) detected using deep learning-based image analysis

  • Jang, Jun-Chul;Kim, Yeo-Reum;Bak, SuHo;Jang, Seon-Woong;Kim, Jong-Myoung
    • Fisheries and Aquatic Sciences
    • /
    • v.25 no.3
    • /
    • pp.151-157
    • /
    • 2022
  • Various approaches have been applied to transform aquaculture from a manual, labour-intensive industry to one dependent on automation technologies in the era of the fourth industrial revolution. Technologies for monitoring physical conditions have been applied successfully in most aquafarm facilities; however, real-time biological monitoring systems that can observe fish condition and behaviour are still required. In this study, we used a video recorder placed on top of a fish tank to observe the swimming patterns of rock bream (Oplegnathus fasciatus), first of one fish alone and then of a group of five fish. Rock bream in the video samples were successfully identified using the you-only-look-once v3 algorithm, which is based on the Darknet-53 convolutional neural network. In addition to recordings of swimming behaviour under normal conditions, the swimming patterns of fish under abnormal conditions were recorded after adding an anaesthetic or lowering the salinity. The abnormal conditions led to changes in movement velocity (3.8 ± 0.6 cm/s), involving an initial rapid increase in speed (up to 16.5 ± 3.0 cm/s upon 2-phenoxyethanol treatment) before the fish stopped moving, as well as a change from swimming upright to lying on their sides as they died. Machine learning was applied to datasets of normal and abnormal behaviour patterns to evaluate fish behaviour. The proposed algorithm showed high accuracy (98.1%) in discriminating normal from abnormal rock bream behaviour. We conclude that artificial intelligence-based detection of abnormal behaviour can be applied to develop an automatic bio-management system for the aquaculture industry.
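Swimming speeds like those reported above can, in principle, be computed from the per-frame centroids of the detected fish. A stdlib sketch under that assumption (the frame rate and pixel-to-centimeter calibration are assumed inputs, not values from the paper):

```python
import math

def speeds_cm_per_s(track, fps, cm_per_px):
    """Per-frame swimming speed from a list of (x, y) pixel centroids.
    track: successive detection centers for one fish; fps and cm_per_px
    are calibration values the recording setup would supply."""
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist_px = math.hypot(x1 - x0, y1 - y0)   # pixels moved between frames
        out.append(dist_px * cm_per_px * fps)    # convert to cm per second
    return out
```

A sustained rise in these values followed by near-zero readings would match the reported pattern of a speed spike before the fish stops moving.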

A Lightweight Pedestrian Intrusion Detection and Warning Method for Intelligent Traffic Security

  • Yan, Xinyun;He, Zhengran;Huang, Youxiang;Xu, Xiaohu;Wang, Jie;Zhou, Xiaofeng;Wang, Chishe;Lu, Zhiyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3904-3922
    • /
    • 2022
  • As a research hotspot in recent years, pedestrian detection has a wide range of applications in the field of computer vision. However, current pedestrian detection methods suffer from insufficient detection accuracy and from models too large for large-scale deployment. In view of these problems, this paper proposes a lightweight pedestrian detection and early-warning method based on you only look once (Yolov5), exploiting the advantages of the Yolov5s model to achieve accurate and fast pedestrian recognition. In addition, this paper optimizes the loss function of the batch normalization (BN) layer; after sparsification, pruning, and fine-tuning, the model is small enough to be deployed on edge devices with limited computing power. Finally, the experimental data presented in this paper show that, trained on the road pedestrian dataset we collected and processed independently, the Yolov5s model has advantages in precision and other indicators over the traditional single shot multibox detector (SSD) model and the fast region-based convolutional neural network (Fast R-CNN) model. After pruning and lightweighting, the size of the trained model is greatly reduced without a significant reduction in accuracy; the final precision reaches 87%, while the model size is reduced to 7,723 KB.
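Sparsification on the BN layer followed by pruning commonly means penalizing the BN scaling factors during training and then dropping the channels whose factors shrink toward zero. A sketch of that channel-selection step under this assumption (the ratio and names are illustrative, not the paper's settings):

```python
def prune_mask(gammas, prune_ratio=0.5):
    """Network-slimming-style channel selection: after sparsity training,
    the channels with the smallest |gamma| (BN scaling factor) are pruned.
    Returns a keep-mask aligned with the channel order."""
    k = int(len(gammas) * prune_ratio)   # number of channels to drop
    order = sorted(range(len(gammas)), key=lambda i: abs(gammas[i]))
    dropped = set(order[:k])             # indices of the weakest channels
    return [i not in dropped for i in range(len(gammas))]  # True = keep
```

Fine-tuning the surviving channels afterward recovers most of the lost accuracy, which matches the small accuracy drop reported above.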

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh;Hee-Mun Park;Jin-Hyun Park
    • Journal of Animal Science and Technology
    • /
    • v.65 no.3
    • /
    • pp.638-651
    • /
    • 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by a UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a 100 × 50 m corn field. The images were corrected to a bird's-eye view, divided into 32 segments, and sequentially input to the YOLOv4 detector to detect the corn according to its condition. The 43 raw training images, selected randomly out of the 320 segmented images, were flipped to create 86 images, and these were further augmented by rotating them in 5-degree increments, for a total of 6,192 images. These 6,192 images were then augmented again by applying three random color transformations to each image, resulting in a dataset of 24,768 images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Observation began on day 2, and it was evident that almost all the corn had disappeared by the ninth day. When grazing 20 sows in a 50 × 100 m cornfield (250 m² per sow), it appears that the animals should be rotated to other grazing areas after at least five days to protect the cover crop. In agricultural technology, most research using machine and deep learning concerns the detection of fruits and pests, and research on other application fields is needed. In addition, large-scale image data collected by experts in the field are required as training data for deep learning; when the available data are insufficient, extensive data augmentation is required.
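The augmentation counts above follow simple multiplication: flipping doubles 43 to 86, rotating in 5-degree increments yields 72 orientations, and keeping each image alongside its three color-transformed copies multiplies by four. The bookkeeping, assuming originals are retained at each stage:

```python
raw = 43
flipped = raw * 2                 # each original plus its flipped copy
rotations = 360 // 5              # 5-degree increments: 72 orientations
rotated = flipped * rotations     # 86 * 72 = 6,192 images
color_variants = 1 + 3            # each image plus 3 random color transforms
total = rotated * color_variants  # 6,192 * 4 = 24,768 images
```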

Study of an underpass inundation forecast using an object detection model (객체탐지 모델을 활용한 지하차도 침수 예측 연구)

  • Oh, Byunghwa;Hwang, Seok Hwan
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.302-302
    • /
    • 2021
  • Underpasses are usually inundated when local or flash floods occur. Nevertheless, on July 23, 2020, overnight rainfall exceeding 80 mm per hour in the Busan area flooded an underpass up to its ceiling in an instant; because preemptive vehicle control could not be carried out first, three drivers who failed to evacuate lost their lives. To manage disasters, including water-related ones, quickly, it is necessary to move beyond the existing one-way, government- and authority-led disaster response and perform integrated collection and analysis of big data, encompassing both structured and unstructured data. In this study, we performed object detection research to provide information for minimizing casualties during disasters, using CCTV data (sensors) from an underground tunnel adjacent to an underpass in the Busan area. CCTV footage from the Busan area where tunnel inundation occurred was used, and encoding that removes the audio track from the CCTV data reduced the size of the loaded video files. YOLO (You Only Look Once) was used to detect objects entering the underpass; YOLO is one of the fastest object detection algorithms, and we applied YOLOv3, which can run at 170 frames per second on a modern GPU, with the Darknet-53 backbone for its higher classification accuracy. YOLOv3 detects objects faster and more accurately than previous object detection models, and has the advantage that speed and accuracy can easily be traded off simply by changing the model size, without retraining. After dividing the CCTV footage into morning (normal) and afternoon (inundation) periods, we applied the YOLO algorithm to classify cars, buses, trucks, and people, performing object detection near the underground tunnel, and confirmed that the object detection accuracy on the CCTV data was high.


Realtime Detection of Benthic Marine Invertebrates from Underwater Images: A Comparison between YOLO and Transformer Models (수중영상을 이용한 저서성 해양무척추동물의 실시간 객체 탐지: YOLO 모델과 Transformer 모델의 비교평가)

  • Ganghyun Park;Suho Bak;Seonwoong Jang;Shinwoo Gong;Jiwoo Kwak;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.909-919
    • /
    • 2023
  • Benthic marine invertebrates, the invertebrates living on the ocean bottom, are an essential component of the marine ecosystem, but excessive reproduction of invertebrate grazers or other harmful organisms can damage the coastal fishery ecosystem. In this study, we compared and evaluated You Only Look Once version 7 (YOLOv7), the most widely used deep learning model for real-time object detection, and the detection transformer (DETR), a transformer-based model, using underwater images of benthic marine invertebrates from the coasts of South Korea. YOLOv7 achieved a mean average precision at 0.5 (mAP@0.5) of 0.899, and DETR an mAP@0.5 of 0.862, which implies that YOLOv7 is more appropriate for detecting objects of various sizes. This is because YOLOv7 generates bounding boxes at multiple scales, which helps detect small objects. Both models ran at more than 30 frames per second (FPS), so real-time object detection from images provided by divers and underwater drones is expected to be possible. The proposed method can be used to prevent and restore damage to coastal fishery ecosystems, for example by controlling invertebrate grazers and creating sea forests to prevent ocean desertification.
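mAP@0.5 averages per-class average precision (AP) at an IoU threshold of 0.5. A minimal sketch of AP from score-ranked detections, using every-point interpolation (the papers' exact evaluation protocols may differ):

```python
def average_precision(scored_hits, num_gt):
    """AP at a fixed IoU threshold. scored_hits is a list of
    (confidence, is_true_positive) pairs; num_gt is the number of
    ground-truth objects. Every-point interpolation; illustrative names."""
    scored_hits = sorted(scored_hits, key=lambda p: -p[0])  # rank by confidence
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, hit in scored_hits:
        if hit:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # area under the PR curve
        prev_recall = recall
    return ap
```

Averaging this value over all classes yields the mAP@0.5 figures compared above.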