• Title/Summary/Keyword: YOLO (You Only Look Once)


Currency Recognition System for Blind People (시각장애인을 위한 화폐 인식 시스템)

  • Dong-Jun Yoo;Sung-Jun Kim;Jun-Yeong Lee;Hyeon-Su Kang;Jun-Ho Son;Se-Jin Oh
    • Proceedings of the Korean Society of Computer Information Conference / 2024.01a / pp.257-258 / 2024
  • Currently, when visually impaired people use cash, they have no way to verify the value of a banknote, so they frequently face inconvenience or the risk of financial fraud. To prevent such incidents, the Bank of Korea issues banknotes with braille markings, but 91% of visually impaired people cannot identify them, causing considerable inconvenience. In this paper, we developed a system that recognizes currency using deep learning and announces the value of the banknote aloud using TTS technology. For banknote recognition, we collected data directly and used a weights file trained with the YOLOv5 algorithm. With this system, visually impaired people can use cash more safely and prevent financial problems.

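The pipeline described in this abstract ends with mapping a detected banknote class to a spoken announcement. A minimal sketch of that last step is below; the class labels and the `announce` helper are illustrative assumptions, not the paper's actual code, and a real system would feed the returned phrase to a TTS engine.

```python
# Hypothetical post-detection step: map YOLOv5 banknote class labels to
# a phrase for TTS. Labels and threshold are assumptions for this sketch.
LABEL_TO_SPEECH = {
    "won_1000": "1,000 won",
    "won_5000": "5,000 won",
    "won_10000": "10,000 won",
    "won_50000": "50,000 won",
}

def announce(detections, conf_threshold=0.5):
    """Return the phrase for the highest-confidence banknote detection,
    or None if no detection clears the confidence threshold."""
    best = None
    for label, conf in detections:  # (class label, confidence) pairs
        if conf >= conf_threshold and (best is None or conf > best[1]):
            best = (label, conf)
    if best is None:
        return None
    return LABEL_TO_SPEECH.get(best[0])

# Example: two overlapping detections; the more confident one is spoken.
print(announce([("won_1000", 0.62), ("won_10000", 0.91)]))  # 10,000 won
```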

Research on Digital Construction Site Management Using Drone and Vision Processing Technology (드론 및 비전 프로세싱 기술을 활용한 디지털 건설현장 관리에 대한 연구)

  • Seo, Min Jo;Park, Kyung Kyu;Lee, Seung Been;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyeung
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.11a / pp.239-240 / 2023
  • Construction site management involves overseeing tasks from the construction phase to the maintenance stage, and digitalization of construction sites is necessary for digital construction site management. In this study, we aim to conduct research on object recognition at construction sites using drones. Images of construction sites captured by drones are reconstructed into BIM (Building Information Modeling) models, and objects are recognized after partially rendering the models using artificial intelligence. For the photorealistic rendering of the BIM models, both traditional filtering techniques and the generative adversarial network (GAN) model were used, while the YOLO (You Only Look Once) model was employed for object recognition. This study is expected to provide insights into the research direction of digital construction site management and help assess the potential and future value of introducing artificial intelligence in the construction industry.


A Study on the Detection of Solar Power Plant for High-Resolution Aerial Imagery Using YOLO v2 (YOLO v2를 이용한 고해상도 항공영상에서의 태양광발전소 탐지 방법 연구)

  • Kim, Hayoung;Na, Ra;Joo, Donghyuk;Choi, Gyuhoon;Oh, Yun-Gyeong
    • Journal of Korean Society of Rural Planning / v.28 no.2 / pp.87-96 / 2022
  • As part of strengthening energy security and responding to climate change, the government has promoted various renewable energy measures to expand renewable energy facilities. As a result, small-scale solar installations in rural areas have increased rapidly, and complaints from local residents are also increasing. In this study, deep learning is therefore applied to publicly available high-resolution aerial images to detect solar power plants installed in rural areas and determine whether a plant is present. Specifically, we examined a solar facility detector built by training the YOLO (You Only Look Once) v2 object detector and evaluated its usability. The detector, trained on about 800 images, achieved a high detection rate of 93%. Such an object detection model is expected to be useful for land-use monitoring in rural areas and to serve as a plan for constructing spatial data for rural areas using technology for detecting small-scale agricultural facilities.
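Detection rates like the 93% reported above are conventionally scored by overlap between predicted and ground-truth boxes. A minimal sketch of intersection-over-union (IoU), the standard overlap measure, is shown here; box format `(x_min, y_min, x_max, y_max)` and the 0.5 match threshold are common conventions, not details from this paper.

```python
# Intersection-over-union (IoU) of two axis-aligned boxes.
# A detection typically counts as correct when IoU >= 0.5.
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes offset by 5 in x: intersection 50, union 150.
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```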

Study of a underpass inundation forecast using object detection model (객체탐지 모델을 활용한 지하차도 침수 예측 연구)

  • Oh, Byunghwa;Hwang, Seok Hwan
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.302-302 / 2021
  • Underpasses are usually inundated when localized or flash flooding occurs. Nevertheless, on July 23, 2020, overnight rainfall exceeding 80 mm per hour in Busan filled an underpass to the ceiling in an instant; because preemptive traffic control was not carried out first, three drivers who could not evacuate in time lost their lives. To manage disasters such as flood damage quickly, it is necessary to move beyond the existing one-way, government-led disaster response and carry out integrated collection and analysis of big data, encompassing both structured and unstructured data. In this study, we conducted object detection research using CCTV footage (sensors) from an underground tunnel adjacent to an underpass in Busan, in order to provide information that minimizes casualties when a disaster occurs. CCTV footage from the Busan area where the tunnel inundation occurred was used, and encoding that removes the audio track from the CCTV footage reduced the size of the loaded video files. YOLO (You Only Look Once) was used to detect objects entering the underpass: YOLO is one of the fastest object detection algorithms, and we applied YOLOv3, which can run at 170 frames per second on a modern GPU, together with Darknet-53, which achieves higher classification accuracy. YOLOv3 enables faster and more accurate detection than previous object detection models and has the advantage that speed and accuracy can be adjusted simply by changing the model size, without retraining. After dividing the CCTV footage into morning (normal) and afternoon (inundation) periods, we applied the YOLO algorithm to classify cars, buses, trucks, and people, performed object detection near the underground tunnel, and confirmed that the detection accuracy on the CCTV footage was high.


Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies / v.27 no.2 / pp.205-218 / 2022
  • Recently, digital transformation in manufacturing has been accelerating, and as a result, technologies for collecting data from the shop floor are becoming important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. To expand the channels for field data collection, this study proposes a method to automatically collect manufacturing data based on vision-based artificial intelligence: real-time image information is analyzed with object detection and tracking technologies to obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. The motion information is then converted into two kinds of manufacturing data (production performance and time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce real-world shop-floor situations. The operating scenario assumes a flow shop consisting of six facilities. When manufacturing data were collected according to the operating scenarios, the accuracy was 96.3%.
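The post-processing step above, turning per-frame track positions from YOLO + DeepSORT into a production count, can be sketched as counting tracks that cross a virtual line at a facility's exit. The track format and line position here are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical post-processing: count tracked parts that cross a
# virtual exit line left-to-right, giving a production tally.
def count_crossings(tracks, line_x):
    """tracks: dict of track id -> list of x positions, one per frame."""
    count = 0
    for positions in tracks.values():
        crossed = any(
            prev < line_x <= cur
            for prev, cur in zip(positions, positions[1:])
        )
        if crossed:
            count += 1
    return count

# Track 1 crosses x=100 between frames; track 2 never reaches the line.
tracks = {1: [80, 95, 104, 120], 2: [60, 70, 90]}
print(count_crossings(tracks, 100))  # 1
```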

Detection and Grading of Compost Heap Using UAV and Deep Learning (UAV와 딥러닝을 활용한 야적퇴비 탐지 및 관리등급 산정)

  • Miso Park;Heung-Min Kim;Youngmin Kim;Suho Bak;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.33-43 / 2024
  • This research assessed the applicability of the You Only Look Once version 8 (YOLOv8) and DeepLabv3+ models for the effective detection of compost heaps, identified as a significant source of non-point source pollution. Utilizing high-resolution imagery acquired through Unmanned Aerial Vehicles (UAVs), the study conducted a comprehensive comparison and analysis of quantitative and qualitative performance. In the quantitative evaluation, the YOLOv8 model demonstrated superior performance across various metrics, particularly in its ability to accurately distinguish the presence or absence of covers on compost heaps. These outcomes imply that the YOLOv8 model is highly effective in the precise detection and classification of compost heaps, thereby providing a novel approach for assessing their management grades and contributing to non-point source pollution management. This study suggests that using UAVs and deep learning to detect and manage compost heaps can address the constraints of traditional field survey methods, facilitating the establishment of accurate and effective non-point source pollution management strategies and contributing to the safeguarding of aquatic environments.

Pig Image Learning for Improving Weight Measurement Accuracy

  • Jonghee Lee;Seonwoo Park;Gipou Nam;Jinwook Jang;Sungho Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.33-40 / 2024
  • The live weight of livestock is important information for managing their health and housing conditions, and it can be used to determine the optimal amount of feed and the timing of shipment. In general, weighing livestock with a scale takes considerable human resources and time, and it is not easy to measure weight at each stage of growth, which prevents effective breeding methods such as feed amount control from being applied. In this paper, we aim to improve the accuracy of weight measurement for piglets, weaned pigs, nursery pigs, and fattening pigs by collecting, analyzing, learning from, and predicting with video and image data from animal husbandry and pig farming. For this purpose, we trained models using PyTorch, the YOLO (You Only Look Once) v5 model, and the scikit-learn library, and found that the actual and predicted graphs showed a similar trend, with an RMSE (root mean square error) of 0.4% and a MAPE (mean absolute percentage error) of 0.2%. The method can be applied at the piglet, weaned pig, nursery pig, and fattening pig stages. Accuracy is expected to improve continuously as more image and video data and measured weight data are used for training. Efficient breeding management is expected to become possible by predicting pig production by part through video analysis in the future.
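The two error metrics reported above, RMSE and MAPE, can be computed as follows; the sample weights are illustrative values, not the paper's data.

```python
# RMSE and MAPE between measured and predicted weights.
import math

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(
        abs(a - p) / a for a, p in zip(actual, predicted)
    ) / len(actual)

actual = [25.0, 50.0, 100.0]     # measured weights (kg), illustrative
predicted = [25.5, 49.0, 101.0]  # model predictions (kg), illustrative
print(round(rmse(actual, predicted), 3))  # 0.866
print(round(mape(actual, predicted), 3))  # 1.667
```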

A Study on Fire Detection in Ship Engine Rooms Using Convolutional Neural Network (합성곱 신경망을 이용한 선박 기관실에서의 화재 검출에 관한 연구)

  • Park, Kyung-Min;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.4 / pp.476-481 / 2019
  • Early detection of fire is an important measure for minimizing the loss of life and property damage, and fire and smoke need to be detected simultaneously. In this context, numerous studies have been conducted on image-based fire detection. Conventional fire detection methods are compute-intensive and comprise several algorithms for extracting flame and smoke characteristics. Hence, deep learning algorithms and convolutional neural networks can be employed as an alternative for fire detection. In this study, recorded image data of fire in a ship engine room were analyzed. The flame and smoke characteristics were extracted from the outer box, and the YOLO (You Only Look Once) convolutional neural network algorithm was subsequently employed for training and testing. Experimental results were evaluated with respect to three attributes: detection rate, error rate, and accuracy. The respective values of detection rate, error rate, and accuracy are found to be 0.994, 0.011, and 0.998 for the flame and 0.978, 0.021, and 0.978 for the smoke, and the computation time is found to be 0.009 s.
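The abstract names three evaluation attributes without spelling out their formulas, so the sketch below uses common assumptions: detection rate as TP / (TP + FN), error rate as FP / (TP + FP), and accuracy as the fraction of correctly judged frames. The counts are illustrative, not the paper's data.

```python
# Assumed formulas for the three evaluation attributes above.
def detection_rate(tp, fn):
    """Fraction of true fire instances that were detected."""
    return tp / (tp + fn)

def error_rate(tp, fp):
    """Fraction of detections that were false alarms."""
    return fp / (tp + fp)

def accuracy(correct, total):
    """Fraction of frames judged correctly."""
    return correct / total

# Illustrative counts only: 1000 flame instances, 6 missed, 11 false alarms.
print(round(detection_rate(994, 6), 3))  # 0.994
print(round(error_rate(989, 11), 3))     # 0.011
```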

Realtime Detection of Benthic Marine Invertebrates from Underwater Images: A Comparison between YOLO and Transformer Models (수중영상을 이용한 저서성 해양무척추동물의 실시간 객체 탐지: YOLO 모델과 Transformer 모델의 비교평가)

  • Ganghyun Park;Suho Bak;Seonwoong Jang;Shinwoo Gong;Jiwoo Kwak;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.909-919 / 2023
  • Benthic marine invertebrates, the invertebrates living on the ocean floor, are an essential component of the marine ecosystem, but excessive reproduction of invertebrate grazers or harmful marine organisms can damage the coastal fishery ecosystem. In this study, we compared and evaluated You Only Look Once version 7 (YOLOv7), the most widely used deep learning model for real-time object detection, and Detection Transformer (DETR), a transformer-based model, using underwater images of benthic marine invertebrates from the coasts of South Korea. YOLOv7 showed a mean average precision at 0.5 (mAP@0.5) of 0.899 and DETR an mAP@0.5 of 0.862, which implies that YOLOv7 is more appropriate for detecting objects of various sizes; this is because YOLOv7 generates bounding boxes at multiple scales, which helps detect small objects. Both models achieved a processing speed of more than 30 frames per second (FPS), so real-time object detection from images provided by divers and underwater drones is expected to be possible. The proposed method can be used to prevent and restore damage to coastal fishery ecosystems, for example by controlling invertebrate grazers and creating sea forests to prevent ocean desertification.

A Study on detection of missing person using DRONE and AI (드론과 인공지능을 활용한 실종자 탐색에 관한 연구)

  • Kyoung-Mok Kim;Ho-beom Jeon;Geon-Seon Lim
    • Journal of the Health Care and Life Science / v.10 no.2 / pp.361-367 / 2022
  • This study presents several methods for minimizing dead zones and detecting missing persons using drones combined with AI, key technologies of the Fourth Industrial Revolution. The system is built around image acquisition for a person in need of support: the drone captures and transmits images, after which GPS information can be displayed. A representative AI algorithm today is YOLO (You Only Look Once), which can be trained on a dataset to find a mannequin or a real person in the images. The output proved reliable and efficient, and as the use of drones expands, they will take on various roles. This paper consists of four parts: the first covers the drone specification, the second the definition of AI and its procedures, the third the methods of image acquisition using the drone, and the last the future of drones with AI.