• Title/Summary/Keyword: Object Detection Model (객체검출 모델)


Design of YOLO-based Removable System for Pet Monitoring (반려동물 모니터링을 위한 YOLO 기반의 이동식 시스템 설계)

  • Lee, Min-Hye;Kang, Jun-Young;Lim, Soon-Ja
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.22-27 / 2020
  • Recently, as the number of households raising pets grows with the increase in single-person households, there is a need for a system that monitors the status and behavior of pets. Monitoring pets with fixed home CCTV cameras has spatial limitations: it either requires a large number of cameras or restricts the pets' movement. In this paper, we propose a mobile system that detects and tracks cats using deep learning to overcome these spatial limitations. We use YOLO (You Only Look Once), an object detection neural network model, to learn the characteristics of pets and deploy it on a Raspberry Pi to track objects detected in the video. We designed a mobile monitoring system that connects the Raspberry Pi and a laptop over a wireless LAN so that the movement and condition of a cat can be checked in real time.
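Not part of the paper, but a minimal sketch of the detect-and-draw loop the abstract describes, assuming a pretrained COCO model loaded through the ultralytics package; the authors' own trained weights, tracking logic, and wireless streaming are omitted:

```python
# Illustrative sketch: detect a cat in camera frames with a pretrained YOLO model.
# The model file and display loop are assumptions, not the authors' implementation.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pretrained COCO model; "cat" is one of its classes
cap = cv2.VideoCapture(0)             # camera attached to the Raspberry Pi

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        if model.names[int(box.cls)] == "cat":
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("pet-monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```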

A Study on Falling Detection of Workers in the Underground Utility Tunnel using Dual Deep Learning Techniques (이중 딥러닝 기법을 활용한 지하공동구 작업자의 쓰러짐 검출 연구)

  • Jeongsoo Kim;Sangmi Park;Changhee Hong
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.498-509 / 2023
  • Purpose: This paper proposes a method for detecting the fall of a maintenance worker in an underground utility tunnel by applying deep learning techniques to CCTV video, and evaluates the applicability of the proposed method to worker monitoring in the utility tunnel. Method: Rules were designed to detect a worker's fall from the inference results of pre-trained YOLOv5 and OpenPose models, respectively, and the rules were then applied jointly to detect worker falls within the utility tunnel. Result: Although worker presence and falls were detected by the proposed model, the inference results depended on both the distance between the worker and the CCTV camera and the direction of the fall. The YOLOv5-based fall detection showed superior performance to the OpenPose-based detection because it is less sensitive to distance and fall direction; consequently, the results of the integrated dual deep learning model depended mainly on the YOLOv5 detection performance. Conclusion: The proposed hybrid model can detect an abnormal worker in the utility tunnel, but its improvement over the single YOLOv5-based model was negligible owing to the large difference in detection performance between the two deep learning models.
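The abstract does not give the actual rules, so the sketch below only illustrates the general idea of fusing a detector's bounding boxes and a pose estimator's keypoints with hand-written rules; the thresholds, field names, and keypoint labels are assumptions:

```python
# Illustrative rule fusion for fall detection; the paper's rules, thresholds, and
# output formats are not given in the abstract and are assumed here.
from dataclasses import dataclass

@dataclass
class PersonBox:           # one YOLOv5 person detection (pixel coordinates)
    x1: float
    y1: float
    x2: float
    y2: float

def yolo_rule(box: PersonBox, ratio_thresh: float = 1.3) -> bool:
    """Flag a fall when the person box is much wider than it is tall."""
    w, h = box.x2 - box.x1, box.y2 - box.y1
    return h > 0 and (w / h) > ratio_thresh

def pose_rule(keypoints: dict, margin: float = 30.0) -> bool:
    """Flag a fall when the head keypoint is nearly as low as the ankle."""
    if "head" not in keypoints or "ankle" not in keypoints:
        return False
    return keypoints["ankle"][1] - keypoints["head"][1] < margin  # image y grows downward

def fused_fall(box: PersonBox, keypoints: dict) -> bool:
    # Integrated decision: either rule firing raises a fall alert.
    return yolo_rule(box) or pose_rule(keypoints)
```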

A Study on Recognition of Dangerous Behaviors using Privacy Protection Video in Single-person Household Environments

  • Lim, ChaeHyun;Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.47-54 / 2022
  • Recently, with the development of deep learning technology, research on recognizing human behavior has been actively conducted. In this paper, we study the recognition of risky behaviors that may occur in a single-person household environment using deep learning. Because of the nature of single-person households, personal privacy must be protected, so we recognize dangerous human behavior in privacy-protected video to which a Gaussian blur filter has been applied. The proposed method uses the YOLOv5 model to detect and preprocess the human object in the video and then feeds the result into a behavior recognition model. The experiments used the ResNet3D, I3D, and SlowFast models, and the results show that the SlowFast model achieved the highest accuracy of 95.7% on the privacy-protected video. This demonstrates that dangerous human behavior can be recognized in a single-person household environment while protecting individual privacy.
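A small sketch of the privacy-protection step, assuming person boxes are already available from a detector such as YOLOv5; the blur kernel size is an assumption, not the paper's setting:

```python
# Sketch of the privacy-protection preprocessing: blur each detected person region.
# Kernel size and the source of the boxes are assumptions, not the paper's settings.
import cv2

def blur_person_regions(frame, boxes, ksize=(51, 51)):
    """boxes: list of (x1, y1, x2, y2) person detections in pixel coordinates."""
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]
        if roi.size:                      # skip empty or out-of-frame boxes
            out[y1:y2, x1:x2] = cv2.GaussianBlur(roi, ksize, 0)
    return out
```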

Human Behavior Analysis and Remote Emergency Detection System Using the Neural Network (신경망을 이용한 동작분석과 원격 응급상황 검출 시스템)

  • Lee Dong-Gyu;Lee Ki-Jung;Lim Hyuk-Kyu;WhangBo Taeg-Keun
    • The Journal of the Korea Contents Association / v.6 no.9 / pp.50-59 / 2006
  • This paper proposes an automatic video monitoring system and its application to emergency detection by analyzing human behavior with a neural network. The object area is identified by subtracting a statistically constructed background image from the input image, and the identified area is then transformed into a feature vector. A neural network is adopted to analyze human behavior from the feature vector and is designed to classify the behavior with fairly simple numerical computation. The proposed system classifies three human behaviors: standing, fainting, and squatting. Experimental results show that the proposed algorithm is efficient and useful for detecting emergency situations.
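A rough sketch of the pipeline outlined above, using OpenCV's MOG2 background subtractor as a stand-in for the paper's statistical background model; the feature definition and the classifier hinted at in the final comment are assumptions:

```python
# Sketch: background subtraction, a simple feature vector from the foreground region,
# and a small classifier. Feature choices and network shape are assumptions.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()   # statistical background model

def region_features(frame):
    """Return a tiny feature vector (aspect ratio, fill ratio) of the largest foreground blob."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    return np.array([w / max(h, 1), cv2.contourArea(c) / max(w * h, 1)], dtype=np.float32)

# Such features could then be mapped to {stand, faint, squat} by a small neural
# network, e.g. an MLP trained on labeled clips.
```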


Traffic Collision Detection at Intersections based on Motion Vector and Staying Period of Vehicles (차량의 움직임 벡터와 체류시간 기반의 교차로 추돌 검출)

  • Shin, Youn-Chul;Park, Joo-Heon;Lee, Myeong-Jin
    • Journal of Advanced Navigation Technology / v.17 no.1 / pp.90-97 / 2013
  • Recently, intelligent transportation systems based on image processing have been developed. In this paper, we propose a collision detection algorithm based on the analysis of motion vectors and the staying periods of vehicles at intersections. Objects in the region of interest are extracted from the difference between the input images and background images built with a Gaussian mixture model. Collisions and traffic jams are detected by analyzing the measured motion vectors of vehicles and their staying periods in the intersection. Experiments were performed on video sequences actually recorded at intersections; the correct detection rate and false alarm rate were 85.7% and 7.7%, respectively.
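An illustrative sketch of the two cues described in the abstract, computed with dense optical flow; the vehicle tracking, thresholds, and identifiers are assumptions rather than the authors' algorithm:

```python
# Illustrative sketch: per-vehicle motion vectors from dense optical flow and a
# staying-period counter. All thresholds and the tracking scheme are assumptions.
import cv2
import numpy as np

STAY_LIMIT = 150          # frames a vehicle may remain stopped before an alert
MOTION_EPS = 0.5          # mean flow magnitude below which a vehicle counts as stopped
stay_counter = {}         # vehicle id -> consecutive stopped frames

def update_vehicle(vid, prev_gray, gray, box):
    """Accumulate the staying period of one tracked vehicle given two gray frames."""
    x1, y1, x2, y2 = box
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y1:y2, x1:x2], gray[y1:y2, x1:x2],
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2).mean()
    stay_counter[vid] = (stay_counter.get(vid, 0) + 1) if speed < MOTION_EPS else 0
    return stay_counter[vid] > STAY_LIMIT      # True -> possible collision or jam
```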

A Study on The Fault Detection System in Gas Lighter Manufacturing Process (라이터 제조공정의 불량 검출 시스템)

  • Choi, Sung-June;Park, Sang-Hyun;Lee, Kang-Hee;Shin, Youn-Soon
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.132-135 / 2021
  • About half of the disposable gas lighters distributed in Korea are produced at a single domestic factory, the only one of its kind in the country. To protect the domestic business from inexpensive imported lighters, improving quality and securing cost competitiveness have become increasingly important. This paper proposes an automatic defect detection system developed with the YOLOv4 machine learning object detection model and the OpenCV open-source real-time image processing library. A system that detects the representative defect, an incorrect liquefied-gas volume, was developed and its accuracy was verified through experiments. The proposed system classified the lighter state with 97% accuracy and thereby detected 100% of the defective products.
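A hedged sketch of running a YOLOv4 model through OpenCV's DNN module, since the abstract combines YOLOv4 with OpenCV; the file names, input size, confidence threshold, and defect class are hypothetical:

```python
# Sketch of YOLOv4 inference via OpenCV's DNN module. The cfg/weights file names,
# input resolution, and class ids are hypothetical placeholders.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-lighter.cfg", "yolov4-lighter.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thresh=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)        # one array per YOLO output layer
    hits = []
    for out in outputs:
        for det in out:                       # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            cls = int(scores.argmax())
            if float(scores[cls]) > conf_thresh:
                hits.append((cls, float(scores[cls])))
    return hits   # a hypothetical "low gas level" class among these would flag a defect
```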

Implementation of AI-based Object Recognition Model for Improving Driving Safety of Electric Mobility Aids (객체 인식 모델과 지면 투영기법을 활용한 영상 내 다중 객체의 위치 보정 알고리즘 구현)

  • Dong-Seok Park;Sun-Gi Hong;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.24 no.2 / pp.119-125 / 2023
  • In this study, we photograph driving obstacles such as crosswalks, side spheres, manholes, braille blocks, partial ramps, temporary safety barriers, stairs, and inclined curbs that hinder or inconvenience the movement of the mobility-impaired who use electric mobility aids. We develop an AI model that classifies and automatically recognizes the photographed objects, and implement an algorithm that can efficiently determine obstacles in front of an electric mobility aid. To enable the objects to be learned with high accuracy, the dataset was labeled in polygon form, and the model was built with the Mask R-CNN model in the Detectron2 framework, which can detect objects labeled as polygons. Images were acquired from two groups, the general public and the mobility-impaired, and image data were collected in two areas of the test bed. Regarding the Mask R-CNN training parameters, the model trained with IMAGES_PER_BATCH: 2, BASE_LEARNING_RATE: 0.001, and MAX_ITERATION: 10,000 showed the highest performance at 68.532, allowing users to recognize driving risks and obstacles quickly and accurately.
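A sketch of a Detectron2 training configuration using the solver settings reported in the abstract; the base config, dataset name, and class count are assumptions (in Detectron2 the corresponding keys are SOLVER.IMS_PER_BATCH, SOLVER.BASE_LR, and SOLVER.MAX_ITER):

```python
# Sketch of a Detectron2 Mask R-CNN training setup matching the reported settings.
# The dataset registration, base config, and class count are assumptions.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("mobility_obstacles_train",)   # hypothetical registered dataset
cfg.DATASETS.TEST = ()
cfg.SOLVER.IMS_PER_BATCH = 2          # IMAGES_PER_BATCH: 2
cfg.SOLVER.BASE_LR = 0.001            # BASE_LEARNING_RATE: 0.001
cfg.SOLVER.MAX_ITER = 10000           # MAX_ITERATION: 10,000
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 8   # the eight obstacle categories listed above

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```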

Development of a Visual Simulation Tool for Object Behavior Chart based on LOTOS Formalism (객체행위챠트를 위한 LOTOS 정형기법 기반 시각적 시뮬레이션 도구의 개발)

  • Lee, Gwang-Yong;O, Yeong-Bae
    • Journal of KIISE:Computing Practices and Letters / v.5 no.5 / pp.595-610 / 1999
  • This paper presents a visual simulation tool for the verification and validation (V&V) of design implications of an Object Behavior Chart developed in accordance with an existing design method for real-time object behavior. The tool can simulate dynamic interactions using an executable simulation machine, the EFSM (Extended Finite State Machine), and can detect various logical and temporal errors in visual object behavior charts before a concrete implementation is made. To this end, a LOTOS prototype specification is automatically generated from the visual Object Behavior Chart and then translated into an EFSM. The system is implemented in Visual C++ version 4.2 and currently runs on the PC Windows 95 environment. LOTOS was chosen for simulation because of its strength in specifying communication protocols. Our research contributes to tool support for seamlessly integrating methodology-based graphical models with formal simulation techniques, and to the automated V&V of visual models.
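For readers unfamiliar with the target formalism, a minimal illustration of an Extended Finite State Machine, the executable form into which the tool translates LOTOS specifications; the states, guards, and actions are invented examples, not the paper's charts:

```python
# Minimal EFSM illustration: states plus extended data variables, with guarded
# transitions. This is a generic example, unrelated to the paper's tool internals.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Transition:
    event: str
    target: str
    guard: Callable[[dict], bool] = lambda ctx: True     # predicate over EFSM variables
    action: Callable[[dict], None] = lambda ctx: None    # updates EFSM variables

@dataclass
class EFSM:
    state: str
    ctx: dict = field(default_factory=dict)              # extended (data) variables
    transitions: dict = field(default_factory=dict)      # (state, event) -> [Transition]

    def fire(self, event: str) -> bool:
        for t in self.transitions.get((self.state, event), []):
            if t.guard(self.ctx):
                t.action(self.ctx)
                self.state = t.target
                return True
        return False           # no enabled transition: a candidate specification error
```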

Performance Comparison of the Optimizers in a Faster R-CNN Model for Object Detection of Metaphase Chromosomes (중기 염색체 객체 검출을 위한 Faster R-CNN 모델의 최적화기 성능 비교)

  • Jung, Wonseok;Lee, Byeong-Soo;Seo, Jeongwook
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.11 / pp.1357-1363 / 2019
  • In this paper, we compare the performance of gradient descent optimizers in the Faster Region-based Convolutional Neural Network (Faster R-CNN) model for chromosome object detection in digital images composed of human metaphase chromosomes. In Faster R-CNN, a gradient descent optimizer is used to minimize the objective function of the region proposal network (RPN) module and of the classification score and bounding box regression blocks. Through performance comparisons among four gradient descent optimizers in our experiments, we found that the Adamax optimizer achieved a mean average precision (mAP) of about 52% for Faster R-CNN with a VGG16 base network, while the Adadelta optimizer achieved an mAP of about 58% for Faster R-CNN with a ResNet50 base network.
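A sketch of how such an optimizer comparison can be set up, using torchvision's Faster R-CNN as a stand-in for the paper's VGG16/ResNet50 models; the learning rates and two-class setting are assumptions:

```python
# Sketch of swapping gradient descent optimizers for a Faster R-CNN model.
# The torchvision detector and hyperparameters are assumptions, not the paper's setup.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
params = [p for p in model.parameters() if p.requires_grad]

optimizers = {
    "adamax":   torch.optim.Adamax(params, lr=1e-4),
    "adadelta": torch.optim.Adadelta(params, lr=1.0),
}

def train_step(name, images, targets):
    """One optimization step; the same training loop is rerun per optimizer."""
    opt = optimizers[name]
    loss_dict = model(images, targets)       # Faster R-CNN returns a dict of losses in train mode
    loss = sum(loss_dict.values())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```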

Object Detection Performance Analysis between On-GPU and On-Board Analysis for Military Domain Images

  • Du-Hwan Hur;Dae-Hyeon Park;Deok-Woong Kim;Jae-Yong Baek;Jun-Hyeong Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.157-164 / 2024
  • In this paper, we discuss the feasibility of deploying a deep learning-based detector on a resource-limited board. Although many studies evaluate detectors on machines with high-performance GPUs, evaluation on boards with limited computational resources is still insufficient. Therefore, in this work, we implement deep learning detectors and deploy them on a compact board by parsing and optimizing each detector. To understand the performance of deep learning-based detectors under limited resources, we monitor several detectors on different hardware. On the COCO detection dataset, we compare and analyze the evaluation results of the on-board and on-GPU detection models in terms of several metrics, including mAP, power consumption, and execution speed (FPS). To demonstrate the effect of applying our detectors to the military domain, we evaluate them on our own dataset of thermal images reflecting flight battle scenarios. As a result, we investigate the strengths of deep learning-based on-board detectors and show that deep learning-based vision models can contribute to flight battle scenarios.
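A sketch of the kind of throughput (FPS) measurement used when comparing on-GPU and on-board inference; the model, input size, and iteration counts are assumptions, and power consumption would come from a separate hardware monitor:

```python
# Sketch of an FPS measurement for a detection model on whatever device is available.
# Model choice, input resolution, and warm-up/iteration counts are assumptions.
import time
import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval().to(device)
dummy = [torch.rand(3, 512, 512, device=device)]

with torch.no_grad():
    for _ in range(5):                 # warm-up iterations
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    n = 50
    for _ in range(n):
        model(dummy)
    if device.type == "cuda":
        torch.cuda.synchronize()       # make sure GPU work has finished before timing stops
    fps = n / (time.perf_counter() - start)

print(f"{device.type} inference speed: {fps:.1f} FPS")
```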