• Title/Abstract/Keyword: Video Object Detection

Search results: 352 items

Background Subtraction in Dynamic Environment based on Modified Adaptive GMM with TTD for Moving Object Detection

  • Niranjil, Kumar A.;Sureshkumar, C.
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 10, No. 1
    • /
    • pp.372-378
    • /
    • 2015
  • Background subtraction is the first processing stage in video surveillance. It is a general term for processes that aim to separate foreground objects from the background; the goal is to construct and maintain a statistical representation of the scene that the camera sees. The output of background subtraction serves as input to a higher-level process. Background subtraction in dynamic environments in video sequences is a particularly complex task and an important research topic in image analysis and computer vision. This work deals with background modeling based on a modified adaptive Gaussian mixture model (GMM) combined with a three temporal differencing (TTD) method in dynamic environments. Background subtraction results on several sequences in various testing environments show that the proposed method is efficient and robust in dynamic environments and achieves good accuracy.
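
As an illustration of the idea, the sketch below combines OpenCV's adaptive GMM background subtractor with three-frame temporal differencing and fuses the two masks. It is not the paper's modified GMM/TTD implementation; the input file name, thresholds, and morphology settings are placeholders.

```python
import cv2

# A sketch combining OpenCV's adaptive GMM subtractor with three-frame
# differencing; "surveillance.mp4" and the thresholds are placeholders.
cap = cv2.VideoCapture("surveillance.mp4")
gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

_, prev2 = cap.read()
_, prev1 = cap.read()
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Adaptive per-pixel GMM foreground mask
    fg_gmm = gmm.apply(frame)

    # Three-frame differencing: a pixel counts as moving only if it
    # differs from both of the two previous frames.
    g0 = cv2.cvtColor(prev2, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(prev1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, b1 = cv2.threshold(cv2.absdiff(g2, g1), 25, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(cv2.absdiff(g1, g0), 25, 255, cv2.THRESH_BINARY)
    fg_diff = cv2.bitwise_and(b1, b2)

    # Fuse the two cues and clean up; `fg` is the moving-object mask.
    fg = cv2.morphologyEx(cv2.bitwise_and(fg_gmm, fg_diff),
                          cv2.MORPH_OPEN, kernel, iterations=2)

    prev2, prev1 = prev1, frame

cap.release()
```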

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal
    • /
    • Vol. 45, No. 5
    • /
    • pp.795-810
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for tasks such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on discovering moving objects in challenging scenarios. The method employs a background memory model that enables training from sparse annotations along the time axis, using temporal modeling of the background to detect moving objects effectively. The proposed method addresses the limitations of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on the DAVIS'16 benchmark. We also conducted extensive ablation studies highlighting the contributions of various input compositions and combinations of datasets used for training. In future developments, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance functionalities.
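
The J score reported above is the DAVIS region-similarity measure, i.e. the Jaccard index (IoU) between predicted and ground-truth masks. A minimal sketch of that metric (not the authors' evaluation code) is:

```python
import numpy as np

def region_similarity_j(pred_mask, gt_mask):
    """DAVIS J measure: Jaccard index (IoU) between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union

# Mean J over a sequence is the average of the per-frame scores:
# mean_j = float(np.mean([region_similarity_j(p, g)
#                         for p, g in zip(pred_masks, gt_masks)]))
```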

Real Time Moving Object Detection Based on Frame Difference and Doppler Effects in HSV color model

  • 누완;김원호
    • 한국위성정보통신학회논문지
    • /
    • Vol. 9, No. 4
    • /
    • pp.77-81
    • /
    • 2014
  • This paper proposes a method for detecting moving objects and their locations in video in real time. First, we propose extracting moving objects through the difference between two consecutive frames. If the interval between the capture of the two frames is long, this produces false moving objects, such as a tail trailing the actual moving object. Second, this paper proposes a method that resolves these problems using the Doppler effect and the HSV color model. Finally, object segmentation and localization are completed by combining the results obtained in the preceding steps. The proposed method achieves a detection rate of 99.2% and is comparatively faster than other similar methods proposed previously. Because algorithm complexity directly affects system speed, the low complexity of the proposed method makes it suitable for real-time motion detection.
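
A minimal sketch of the frame-differencing stage in HSV space is shown below; the paper's Doppler-effect cue for suppressing trailing false detections is not reproduced, and the camera index, threshold, and area filter are placeholders.

```python
import cv2

# Two-frame differencing on the HSV value channel; the camera index,
# threshold, and minimum contour area are placeholders.
cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_hsv = cv2.cvtColor(prev, cv2.COLOR_BGR2HSV)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Difference on V (brightness); H and S can additionally be compared
    # to reject purely illumination-related changes.
    diff_v = cv2.absdiff(hsv[:, :, 2], prev_hsv[:, :, 2])
    _, motion = cv2.threshold(diff_v, 30, 255, cv2.THRESH_BINARY)

    # Bounding boxes of the moving regions (OpenCV 4 findContours signature)
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

    prev_hsv = hsv

cap.release()
```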

A Framework for Object Detection by Haze Removal

  • 김상균;최경호;박순영
    • 전자공학회논문지
    • /
    • Vol. 51, No. 5
    • /
    • pp.168-176
    • /
    • 2014
  • Detecting moving objects from video sequences is a fundamental and important task in video surveillance, traffic monitoring and analysis, and human detection and tracking. Detecting moving objects in images whose quality has been degraded by environmental factors such as haze is very difficult. In particular, haze makes the colors of surrounding objects similar and reduces saturation, making it hard to distinguish objects from the background. For this reason, object detection performance on hazy images is very low and yields unreliable results. In this paper, to remove environmental factors such as haze and improve object detection performance, we determine the presence of haze based on a haze index, estimate the transmission of the hazy image using the Dark Channel Prior, restore a haze-free image, and detect objects using background subtraction based on a Gaussian mixture model. To compare the performance of the proposed method, Recall and Precision were measured on images before and after haze removal, and the performance improvement due to haze removal was quantified and compared. As a result, the visibility of the image after haze removal was greatly improved, and object detection performance improved significantly.
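
A minimal sketch of the Dark Channel Prior step is given below: it estimates atmospheric light and a coarse transmission map and recovers the scene radiance. It omits the paper's haze index and any transmission refinement (e.g., guided filtering), and the patch size and omega are the usual defaults rather than the paper's settings.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel channel minimum followed by a local minimum (erosion) filter."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(np.min(img, axis=2), kernel)

def dehaze(img_bgr, omega=0.95, t_min=0.1):
    """Coarse single-image dehazing with the Dark Channel Prior."""
    img = img_bgr.astype(np.float64) / 255.0
    dark = dark_channel(img)

    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)

    # Transmission t(x) = 1 - omega * dark_channel(I / A), clipped from below
    t = 1.0 - omega * dark_channel(img / A)
    t = np.clip(t, t_min, 1.0)[..., None]

    # Recover scene radiance J = (I - A) / t + A
    J = (img - A) / t + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)
```

The dehazed frame can then be passed to a GMM-based background subtractor such as OpenCV's createBackgroundSubtractorMOG2, in line with the pipeline described in the abstract.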

Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

  • Kim, Bae-Sung;Woo, Yun-Tae;Yu, Yung-Ho;Hwang, Hun-Gyu
    • 한국해양공학회지
    • /
    • Vol. 35, No. 1
    • /
    • pp.91-97
    • /
    • 2021
  • Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small- and medium-sized ships and are due to their poor conditions and insufficient equipment compared with larger vessels; measures are quickly needed to improve these conditions. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small- and medium-sized ships. The system predicts collisions between ships and detects falls by crew members using CCTV, displays the analyzed integrated information using automatic identification system (AIS) messages, and provides alerts for the identified risks. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. For this basic research, we implemented a deep learning algorithm to recognize ships and crew from images, and an interface module to manage messages from the AIS. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance is calculated as mAP by comparing the pre-defined objects with the objects recognized by the algorithm. As a result, the object recognition performance for ships and crew was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented an object recognition algorithm and interface module in the designed collision prediction and fall detection system and validated their usability through testing.
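
The mAP evaluation described above rests on IoU-based matching of detections to ground-truth boxes. A hedged sketch of that matching step (not the authors' evaluation code; the box format and threshold are assumptions) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def match_detections(detections, ground_truth, iou_thr=0.5):
    """Greedy matching: detections are dicts {"box": (x1, y1, x2, y2),
    "score": float}; each one, in descending confidence order, is a true
    positive if it overlaps an unmatched ground-truth box above iou_thr."""
    matched, tp, fp = set(), 0, 0
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        best, best_iou = None, iou_thr
        for i, gt in enumerate(ground_truth):
            overlap = iou(det["box"], gt)
            if i not in matched and overlap >= best_iou:
                best, best_iou = i, overlap
        if best is None:
            fp += 1
        else:
            matched.add(best)
            tp += 1
    return tp, fp, len(ground_truth) - len(matched)   # TP, FP, FN
```

AP per class is then obtained from the precision-recall curve swept over detection confidence, and mAP is the mean over classes.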

Comparison Speed of Pedestrian Detection with Parallel Processing Graphic Processor and General Purpose Processor

  • 박장식
    • 한국전자통신학회논문지
    • /
    • Vol. 10, No. 2
    • /
    • pp.239-246
    • /
    • 2015
  • Image-based object detection is a fundamental technology for implementing intelligent CCTV systems. Various features and algorithms have been developed for object detection, but their computational load grows in proportion to their performance. In this paper, we compare the performance of object detection algorithms on a GPU and a CPU. The Adaboost and SVM algorithms, which are widely used for pedestrian detection, were implemented for the CPU and the GPU, respectively, and their detection processing speeds were compared on the same video. Comparing the processing speeds of the Adaboost and SVM algorithms confirmed that the GPU processes approximately four times faster than the CPU.
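
As a rough illustration of such a CPU timing measurement, the sketch below times OpenCV's HOG + linear-SVM pedestrian detector; it is not the paper's Adaboost/SVM implementation, and the GPU counterpart (available in the cv2.cuda module of a CUDA-enabled OpenCV build) is only noted in the comments. The image file name is a placeholder.

```python
import time
import cv2

# CPU baseline: OpenCV's HOG descriptor with the bundled pedestrian SVM.
# A CUDA-enabled OpenCV build exposes a GPU HOG detector in the cv2.cuda
# module, which can be timed the same way; "street.jpg" is a placeholder.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")
start = time.perf_counter()
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
elapsed = time.perf_counter() - start
print(f"CPU HOG+SVM: {len(boxes)} detections in {elapsed * 1000:.1f} ms")
```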

Object Motion Analysis and Interpretation in Video

  • Song, Dan;Cho, Mi-Young;Kim, Pan-Koo
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 Fall 2004 Conference Proceedings, Vol. 31, No. 2 (2)
    • /
    • pp.694-696
    • /
    • 2004
  • As video capabilities have become more sophisticated, object motion analysis and interpretation has become a fundamental task for computer vision understanding. To that end, we first apply a sum of absolute differences algorithm to scene-based motion detection. We then focus on representing the moving objects in the scene using spatio-temporal relations. The video can thus be described comprehensively from both aspects: relations between moving objects and intervals between video events.
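
A minimal sketch of block-wise sum-of-absolute-differences (SAD) motion detection between consecutive grayscale frames, in the spirit of the motion-detection step named above (the block size and threshold are placeholders):

```python
import numpy as np

def sad_motion_blocks(prev_gray, curr_gray, block=16, thresh=1500):
    """Return (row, col) indices of blocks whose SAD against the previous
    frame exceeds a threshold, i.e. blocks that contain motion."""
    h, w = curr_gray.shape
    diff = np.abs(curr_gray.astype(np.int32) - prev_gray.astype(np.int32))
    moving = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sad = diff[y:y + block, x:x + block].sum()
            if sad > thresh:
                moving.append((y // block, x // block))
    return moving
```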

Object Detection and Tracking with Infrared Videos at Night-time

  • 최범준;박장식;송종관;윤병우
    • 한국전자통신학회논문지
    • /
    • Vol. 10, No. 2
    • /
    • pp.183-188
    • /
    • 2015
  • This paper proposes a method for detecting and tracking pedestrians in night-time CCTV video and analyzes its tracking performance. Haar-like features are trained with the Adaboost algorithm, and objects are detected with a cascade classifier. The detected pedestrians are tracked with a particle filter. Through experiments on night-time CCTV video, we present the number and distribution of particles that make the particle filter's object tracking efficient. Detection and tracking performance were verified on night-time CCTV video acquired in alleyways and similar settings.
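
The sketch below pairs OpenCV's pretrained full-body Haar cascade (an Adaboost cascade over Haar-like features) with a tiny random-walk particle filter skeleton; the particle count, noise level, and likelihood function are placeholders, not the tuned values from the paper.

```python
import cv2
import numpy as np

# Detection: OpenCV's pretrained Haar-feature Adaboost cascade for full bodies.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

def detect_pedestrians(gray):
    return cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=3)

# Tracking: minimal particle filter with a random-walk motion model.
class ParticleTracker:
    def __init__(self, box, n_particles=200, noise=8.0):
        x, y, w, h = box
        self.noise = noise
        self.particles = np.tile([x + w / 2.0, y + h / 2.0], (n_particles, 1))

    def step(self, likelihood):
        """likelihood(cx, cy) -> score of a candidate object center."""
        # Predict: diffuse particles with Gaussian noise
        self.particles += np.random.normal(0, self.noise, self.particles.shape)
        # Update: weight particles by the observation likelihood
        w = np.array([likelihood(cx, cy) for cx, cy in self.particles]) + 1e-12
        w /= w.sum()
        # Resample in proportion to the weights
        idx = np.random.choice(len(self.particles), len(self.particles), p=w)
        self.particles = self.particles[idx]
        # Estimate: mean particle position is the tracked center
        return self.particles.mean(axis=0)
```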

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • 국제학술발표논문집
    • /
    • The 9th International Conference on Construction Engineering and Project Management
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated components production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges for deploying this technology for large-scale field applications still remain. One of the issues is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate various object information with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with the existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size because collecting and training massive amounts of object image data with various scales is necessary. This study thus developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein the image or video data is processed near the originating source, not on a centralized server or cloud. By inferring information from the AI computing module equipped with CCTVs and communicating only the processed information with the server, it is possible to reduce excessive network traffic. Small-object detection is an innovative method to detect different-sized objects by cropping the raw image and setting the appropriate number of rows and columns for image splitting based on the target object size. This enables the detection of small objects from cropped and magnified images. The detected small objects can then be expressed in the original image. In the inference process, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with the existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were inaccurately detected by the existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation, based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring the risky areas based on workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
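
A hedged sketch of the tiling idea described above: split the frame into a grid of crops, run a detector on each crop, and shift the boxes back into original-image coordinates. YOLOv5 via torch.hub is assumed here for illustration; the grid size and confidence threshold are placeholders, and overlapping detections at tile borders would still need non-maximum suppression.

```python
import torch

# Assumption: YOLOv5 loaded via torch.hub; any detector returning
# (x1, y1, x2, y2, conf, cls) boxes per image could be substituted.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_tiled(image, rows=2, cols=3, conf=0.25):
    """Run the detector on a rows x cols grid of crops of an RGB array
    and shift the resulting boxes back to original-image coordinates."""
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    detections = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * th, c * tw
            crop = image[y0:y0 + th, x0:x0 + tw]
            result = model(crop)
            for x1, y1, x2, y2, score, cls in result.xyxy[0].tolist():
                if score >= conf:
                    detections.append((x1 + x0, y1 + y0,
                                       x2 + x0, y2 + y0, score, int(cls)))
    return detections
```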

Robust Object Tracking System Based on Face Detection

  • 곽민석
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol. 6, No. 1
    • /
    • pp.9-14
    • /
    • 2017
  • With recent advances in computer technology, embedded devices have also begun to offer a variety of functions. This study proposes an efficient face tracking method for resource-constrained devices, such as embedded devices with image sensors, an area of active recent work. To obtain accurate faces, face detection based on MB-LBP features is used, and when a face is detected, a region of interest around the face is set for tracking the face object in the next frame. In frames where no face can be detected, the object is tracked with the conventional CAM-Shift tracking method so that object information is maintained without loss. Through comparison with previous work, this study confirmed the accuracy and fast performance of the proposed object tracking system.
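
A minimal sketch of the detect-then-fall-back-to-CAM-Shift loop: OpenCV's pretrained frontal-face cascade stands in for the paper's MB-LBP detector (an LBP cascade XML can be substituted if available), and cv2.CamShift tracks the hue back-projection of the last detected face ROI when detection fails. The camera index and cascade parameters are placeholders.

```python
import cv2

# Detection: pretrained frontal-face cascade (the paper uses MB-LBP features;
# an LBP cascade XML can be substituted for the Haar file assumed here).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                     # hypothetical camera index
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window, roi_hist = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)

    if len(faces) > 0:
        # Detection succeeded: reset the ROI and its hue histogram
        x, y, w, h = (int(v) for v in faces[0])
        track_window = (x, y, w, h)
        hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    elif roi_hist is not None:
        # Detection failed: fall back to CAM-Shift on the hue back-projection
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back_proj, track_window, term)

cap.release()
```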