• Title/Abstract/Keyword: Object Detection Deep Learning Model

Search results: 275 items

Detection and Diagnosis of Power Distribution Supply Facilities Using Thermal Images (열화상 이미지를 이용한 배전 설비 검출 및 진단)

  • Kim, Joo-Sik; Choi, Kyu-Nam; Lee, Hyung-Geun; Kang, Sung-Woo
    • Journal of the Korea Safety Management & Science, Vol. 22, No. 1, pp. 1-8, 2020
  • Maintenance of power distribution facilities is a significant subject in power supply. Faults caused by deterioration of power distribution facilities may damage the entire distribution system. However, these facilities are currently diagnosed manually by human inspectors, and pole accidents continue to occur. To improve on the existing diagnostic methods, a thermal image analysis model is proposed in this work. Thermal imaging is an emerging diagnostic technique across engineering fields because it is a non-contact, safe, and highly reliable way to detect energy status. Deep learning object detection algorithms are trained with thermal images of power distribution facilities to automatically analyze irregular energy status, thereby efficiently preventing faults in the system. Each detected object is then diagnosed through a thermal intensity area analysis. The proposed model achieved 82% detection accuracy on an actual distribution system, analyzing more than 16,000 thermal images.
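
The abstract does not give implementation details, but the thermal intensity area analysis step can be illustrated with a minimal sketch: given a detected bounding box and a per-pixel temperature map, measure how much of the box exceeds a temperature threshold. The function name and threshold value below are assumptions, not the paper's.

```python
import numpy as np

def overheat_ratio(thermal_map, box, temp_threshold=70.0):
    """Fraction of pixels inside a detected box whose temperature exceeds
    a threshold; a large ratio flags a suspect facility component."""
    x1, y1, x2, y2 = box
    roi = thermal_map[y1:y2, x1:x2]           # region covered by the detection
    if roi.size == 0:
        return 0.0
    return float((roi > temp_threshold).mean())

# Example: a 480x640 thermal frame with one detected component box.
frame = np.random.uniform(20.0, 90.0, size=(480, 640))
print(f"over-temperature area ratio: {overheat_ratio(frame, (100, 120, 180, 200)):.2f}")
```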

Deep Learning-Based Defects Detection Method of Expiration Date Printed In Product Package (딥러닝 기반의 제품 포장에 인쇄된 유통기한 결함 검출 방법)

  • Lee, Jong-woon; Jeong, Seung Su; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference (2021 Spring Conference), pp. 463-465, 2021
  • Currently, expiration dates printed on food packages and boxes are inspected by sampling only a few products and checking them by eye. Such sampling inspection has the limitation that only a small number of products can be examined, so accurate camera-based inspection is required. This paper proposes a deep learning object detection model as a method for detecting defects in the expiration date printed on product packaging. Using the Faster R-CNN (region-based convolutional neural network) model, color images, converted grayscale images, and converted binary images of the printed expiration date are used for training and testing, and the detection rates are compared. The proposed method showed the same detection performance for expiration dates printed on packages as a conventional vision-based inspection system.
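
As a rough illustration of how the three input variants compared in this paper (color, grayscale, binary) can be prepared before being fed to a detector, the sketch below uses OpenCV; the threshold value and the synthetic stand-in image are assumptions.

```python
import cv2
import numpy as np

def make_input_variants(bgr_image, binary_threshold=128):
    """Build the three input variants compared in the paper: the original
    color image, a grayscale conversion, and a binarized image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, binary_threshold, 255, cv2.THRESH_BINARY)
    # Detection backbones usually expect 3-channel input, so replicate channels.
    return (bgr_image,
            cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR),
            cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR))

# Stand-in for a captured package image (replace with a real photo in practice).
package = np.full((200, 600, 3), 255, dtype=np.uint8)
color_in, gray_in, binary_in = make_input_variants(package)
```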


Real-time Fault Detection System of a Pneumatic Cylinder Via Deep-learning Model Considering Time-variant Characteristic of Sensor Data (센서 데이터의 시계열 특성을 고려한 딥러닝 모델 기반의 공압 실린더 고장 감지 시스템 구현)

  • Byeong Su Kim; Geun Myeong Song; Min Jeong Lee; Sujeong Baek
    • Journal of Korean Society of Industrial and Systems Engineering, Vol. 47, No. 2, pp. 10-20, 2024
  • In recent automated manufacturing systems, compressed-air-based pneumatic cylinders have been widely used for basic operations such as picking up and moving a target object. They are relatively small machines, but many linear or rotary cylinders play an important role in discrete manufacturing systems. A sudden stop or interruption of operation due to a fault in a pneumatic cylinder therefore increases repair costs, reduces production, and can even threaten the safety of workers. In this regard, this study proposes a fault detection technique based on a deep learning model that accounts for the time-variant characteristics of multivariate sensor data and estimates the current health state at four levels. It also aims to establish a real-time fault detection system that allows workers to immediately identify and manage the cylinder's status, either on the actual shop floor or in a remote management setting. To validate and verify the performance of the proposed system, multivariate sensor signals were collected from a rotary cylinder, and the system successfully detected the health state of the pneumatic cylinder at four severity levels. Furthermore, the optimal sensor location and signal type were analyzed through statistical inference.
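
The abstract describes a deep learning model that maps multivariate time-series sensor windows to one of four health levels. A minimal recurrent-classifier sketch in PyTorch is shown below; the layer sizes, window length, and channel count are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CylinderHealthLSTM(nn.Module):
    """Toy classifier: a multivariate sensor window (time x channels)
    is mapped to one of four health severity levels."""
    def __init__(self, n_channels=4, hidden=64, n_levels=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)         # last hidden state summarizes the window
        return self.head(h_n[-1])          # logits over the four levels

model = CylinderHealthLSTM()
window = torch.randn(8, 200, 4)            # 8 windows, 200 time steps, 4 sensors
print(model(window).shape)                 # torch.Size([8, 4])
```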

Vehicle Type Classification Model based on Deep Learning for Smart Traffic Control Systems (스마트 교통 단속 시스템을 위한 딥러닝 기반 차종 분류 모델)

  • Kim, Doyeong; Jang, Sungjin; Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference (2022 Spring Conference), pp. 469-472, 2022
  • With the recent development of intelligent transportation systems, various technologies applying deep learning are being used. To crack down on illegal and criminal vehicles driving on the road, a vehicle type classification system that can accurately determine the type of vehicle is required. This study proposes a vehicle type classification system optimized for mobile traffic control systems using YOLO (You Only Look Once). The system uses the one-stage object detection algorithm YOLOv5 to detect vehicles and classify them into six classes: passenger cars; subcompact, compact, and midsize vans; full-size vans; trucks; motorcycles; and special vehicles and construction machinery. About 5,000 domestic vehicle images built by the Korea Institute of Science and Technology for the development of artificial intelligence technology were used as training data. The study also proposes a lane-designation enforcement system that applies a vehicle type classification algorithm capable of recognizing both front and side views with a single camera.
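
For readers unfamiliar with YOLOv5, inference with a fine-tuned checkpoint typically looks like the sketch below; the checkpoint name and image path are hypothetical, and torch.hub needs network access to fetch the repository.

```python
import torch

# "vehicle_types.pt" is a hypothetical checkpoint fine-tuned on the six
# vehicle classes described above; torch.hub pulls the YOLOv5 code itself.
model = torch.hub.load("ultralytics/yolov5", "custom", path="vehicle_types.pt")

results = model("enforcement_camera_frame.jpg")   # placeholder image path
detections = results.pandas().xyxy[0]             # one row per detected vehicle
print(detections[["name", "confidence"]])
```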


Prerequisite Research for the Development of an End-to-End System for Automatic Tooth Segmentation: A Deep Learning-Based Reference Point Setting Algorithm (자동 치아 분할용 종단 간 시스템 개발을 위한 선결 연구: 딥러닝 기반 기준점 설정 알고리즘)

  • Kyungdeok Seo; Sena Lee; Yongkyu Jin; Sejung Yang
    • Journal of Biomedical Engineering Research, Vol. 44, No. 5, pp. 346-353, 2023
  • In this paper, we propose an approach that leverages deep learning to find optimal reference points for precise tooth segmentation in three-dimensional tooth point cloud data. A dataset of 350 aligned maxillary and mandibular point clouds was used as input, with the two end coordinates of each individual tooth as ground truth. A two-dimensional image was created by projecting the rendered point cloud data along the Z-axis, and images of individual teeth were then extracted using an object detection algorithm. The proposed algorithm is built by adding modules to a U-Net model that allow effective learning of a narrow region, and it detects the two end points of each tooth from the generated tooth images. In an evaluation using DSC, Euclidean distance, and MAE as metrics, it achieved superior performance compared with other U-Net-based models. In future research, we will develop an algorithm that back-projects the reference points detected in the image into three dimensions to find the reference points of the point cloud and, based on this, an algorithm that segments individual teeth in the point cloud through image processing techniques.
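
The Z-axis projection step can be illustrated with a small sketch that turns an (N, 3) point cloud into a top-down depth image for the 2D detector; the resolution and normalization choices below are assumptions.

```python
import numpy as np

def project_along_z(points, image_size=256):
    """Project an (N, 3) dental point cloud onto the XY plane,
    keeping the highest Z value per pixel (the surface seen from above)."""
    xy, z = points[:, :2], points[:, 2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    pix = ((xy - mins) / (maxs - mins + 1e-9) * (image_size - 1)).astype(int)
    image = np.zeros((image_size, image_size))
    np.maximum.at(image, (pix[:, 1], pix[:, 0]), z)
    return image

cloud = np.random.rand(10000, 3)            # stand-in for an aligned arch scan
depth_image = project_along_z(cloud)
```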

Deep Learning Acoustic Non-line-of-Sight Object Detection (음향신호를 활용한 딥러닝 기반 비가시 영역 객체 탐지)

  • Ui-Hyeon Shin; Kwangsu Kim
    • Journal of Intelligence and Information Systems, Vol. 29, No. 1, pp. 233-247, 2023
  • Recently, research on detecting objects in hidden spaces beyond the observer's direct line of sight has received attention. Most studies use optical equipment that exploits the directionality of light, but sound, which exhibits both diffraction and directionality, is also suitable for non-line-of-sight (NLOS) research. In this paper, we propose a novel method for detecting objects in NLOS areas using acoustic signals in the audible frequency range. We developed a deep learning model that takes only acoustic signals as input, extracts information about the NLOS area, and predicts the properties and location of hidden objects. For training and evaluation of the model, we collected data for a total of 11 objects while varying the signal transmission and reception locations. We show that the deep learning model demonstrates outstanding performance in detecting objects in the NLOS area using acoustic signals. We also observed that performance decreases as the distance between the signal collection location and the reflecting wall increases, and that it improves when signals collected from multiple locations are combined. Finally, we propose the optimal conditions for detecting objects in the NLOS area using acoustic signals.
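
The paper's network architecture is not described in the abstract; as a generic sketch of the idea, the code below converts a received audible-band waveform into a log-spectrogram and feeds it to a small two-head network that classifies the hidden object and regresses its position. All sizes and the sample rate are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

fs = 48_000
waveform = np.random.randn(fs)                       # 1 s stand-in recording
f, t, sxx = spectrogram(waveform, fs=fs, nperseg=1024)
features = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]

class NLOSNet(nn.Module):
    """Toy two-head network: one head classifies the hidden object,
    the other regresses its (x, y) position in the NLOS area."""
    def __init__(self, n_objects=11):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(16, n_objects)
        self.loc_head = nn.Linear(16, 2)

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.loc_head(h)

logits, position = NLOSNet()(features)
```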

Estimation of Bridge Vehicle Loading using CCTV images and Deep Learning (CCTV 영상과 딥러닝을 이용한 교량통행 차량하중 추정)

  • Suk-Kyoung Bae; Wooyoung Jeong; Soohyun Choi; Byunghyun Kim; Soojin Cho
    • Journal of the Korea Institute for Structural Maintenance and Inspection, Vol. 28, No. 3, pp. 10-18, 2024
  • Vehicle loading is one of the main causes of bridge deterioration. Although WIM (weigh-in-motion) systems can measure vehicle loading on a bridge, they have the disadvantage of high installation and maintenance costs because they are contact-based. In this study, a non-contact method is proposed to estimate the vehicle loading history of a bridge using deep learning and CCTV images. The proposed method recognizes the vehicle type with an object detection deep learning model and estimates the vehicle loading from a load-based vehicle type classification table developed using the curb weights of major domestic vehicle models. Faster R-CNN, an object detection deep learning model, was trained using vehicle images classified according to this table. The performance of the model was verified using images from CCTVs on actual bridges. Finally, the vehicle loading history of an actual bridge over a specific period was obtained by continuously estimating the vehicle loadings on the bridge with the proposed method.
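
The core idea, accumulating a load history by looking up a representative empty weight for each detected vehicle class, can be sketched in a few lines; the class names and weights below are hypothetical stand-ins for the paper's load-based classification table.

```python
# Hypothetical load-based classification table: representative empty (curb)
# weights per detected vehicle class, in kilograms.
CURB_WEIGHT_KG = {"passenger_car": 1500, "bus": 11000, "truck": 9000}

def accumulate_load_history(detections_per_frame):
    """Sum the estimated load on the span for each CCTV frame, given the
    detector's class labels for the vehicles present in that frame."""
    return [sum(CURB_WEIGHT_KG.get(label, 0) for label in frame)
            for frame in detections_per_frame]

frames = [["passenger_car", "truck"], ["bus"], []]
print(accumulate_load_history(frames))      # [10500, 11000, 0]
```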

Object Size Prediction based on Statistics Adaptive Linear Regression for Object Detection (객체 검출을 위한 통계치 적응적인 선형 회귀 기반 객체 크기 예측)

  • Kwon, Yonghye; Lee, Jongseok; Sim, Donggyu
    • Journal of Broadcast Engineering, Vol. 26, No. 2, pp. 184-196, 2021
  • This paper proposes a statistics-adaptive linear regression-based object size prediction method for object detection. YOLOv2 and YOLOv3, typical deep learning-based object detection algorithms, design the last layer of the network with a statistics-adaptive exponential regression model to predict object sizes. However, because of the nature of the exponential function, an exponential regression model can propagate a large derivative of the loss function into all parameters of the network. We propose a statistics-adaptive linear regression layer to ease this exploding-gradient problem of the exponential regression model. The proposed statistics-adaptive linear regression model is used in the last layer of the network to predict object sizes with statistics estimated from the training dataset. We redesigned the network based on YOLOv3-tiny, and it shows higher performance than YOLOv3-tiny on the UFPR-ALPR dataset.
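
The contrast between the exponential decoding used by YOLOv2/v3 and a statistics-based linear alternative can be illustrated numerically; the linear form below (training-set mean plus scaled standard deviation) is only an illustrative stand-in for the paper's parameterization.

```python
import numpy as np

def yolo_exponential_size(anchor_w, t_w):
    """YOLOv2/v3-style size decoding: b_w = p_w * exp(t_w).
    The exponential can produce very large gradients for large t_w."""
    return anchor_w * np.exp(t_w)

def stats_linear_size(mean_w, std_w, t_w):
    """Illustrative statistics-adaptive linear decoding: the network output
    scales size statistics estimated from the training set."""
    return mean_w + std_w * t_w

t = np.linspace(-3, 3, 7)
print(yolo_exponential_size(32.0, t))        # grows exponentially with t
print(stats_linear_size(40.0, 12.0, t))      # grows linearly with t
```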

Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee; Tae-Yoon Lee; Seung-Ho Lee
    • Journal of IKEEE, Vol. 27, No. 3, pp. 345-349, 2023
  • In this paper, we propose a deep learning architecture for detecting defective pixels on a next-generation smart LED display board using an imaging device. The technique combines imaging devices and deep learning to automatically detect defects on outdoor LED billboards, aiming at effective management of the billboards and resolution of various errors and issues. The research process consists of three stages. First, planarized image data of the billboard are produced through calibration, which completely removes the background, and are preprocessed to generate a training dataset. Second, the generated dataset is used to train an object recognition network composed of a backbone and a head: the backbone uses CSP-Darknet to extract feature maps, and the head performs object detection based on the extracted feature maps. Throughout this process, the network is tuned against the confidence score and Intersection over Union (IoU) error while training continues. In the third stage, the trained model is used to automatically detect defective pixels on actual outdoor LED billboards. In accredited measurement experiments, the proposed method detected 100% of the defective pixels on real LED billboards, confirming improved efficiency in managing and maintaining them. These findings are expected to significantly advance the management of LED billboards.
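
The first stage (calibration that planarizes the billboard and removes the background) can be sketched as a standard perspective rectification; the corner coordinates and frame below are stand-ins, not the paper's calibration procedure.

```python
import cv2
import numpy as np

# Map the four billboard corners (stand-in coordinates; in practice they come
# from the calibration step) onto a fronto-parallel rectangle, which crops away
# the background around the display.
corners_src = np.float32([[105, 80], [1180, 95], [1195, 620], [90, 600]])
corners_dst = np.float32([[0, 0], [1024, 0], [1024, 512], [0, 512]])

frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in for a captured frame
H = cv2.getPerspectiveTransform(corners_src, corners_dst)
planar = cv2.warpPerspective(frame, H, (1024, 512))
```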

Comparison and Verification of Deep Learning Models for Automatic Recognition of Pills (알약 자동 인식을 위한 딥러닝 모델간 비교 및 검증)

  • Yi, GyeongYun; Kim, YoungJae; Kim, SeongTae; Kim, HyoEun; Kim, KwangGi
    • Journal of Korea Multimedia Society, Vol. 22, No. 3, pp. 349-356, 2019
  • When a prescription is changed in the hospital depending on a patient's condition, pharmacists manually classify the returned pills that the patient did not take. There are hundreds of kinds of pills to classify, and because the work is manual, mistakes can occur and lead to medical accidents. In this study, we compared YOLO, Faster R-CNN, and RetinaNet for classifying and detecting pills. The data consisted of 10 classes with 100 images per class, and cross-validation was used to evaluate the performance of each model. As a result, the YOLO model had a sensitivity of 91.05% and 0.0507 FPs/image; Faster R-CNN had a sensitivity of 99.6% and 0.0089 FPs/image; and RetinaNet had a sensitivity of 98.31% and 0.0119 FPs/image. Faster R-CNN showed the best performance of the three models tested; thus, the most appropriate model for classifying pills is Faster R-CNN, with the most accurate detection and classification results and a low FPs/image.
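
The two figures reported per model, sensitivity and false positives per image, are straightforward to compute from detection counts; the counts below are hypothetical and roughly mirror the Faster R-CNN numbers above.

```python
def detection_metrics(tp, fn, fp, n_images):
    """Sensitivity (recall) and false positives per image, the two
    figures reported for each model in the comparison above."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    fps_per_image = fp / n_images if n_images else 0.0
    return sensitivity, fps_per_image

sens, fpi = detection_metrics(tp=996, fn=4, fp=9, n_images=1000)
print(f"sensitivity={sens:.2%}, FPs/image={fpi:.4f}")
```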