• Title/Summary/Keyword: video object


Method for reducing computational amount in video object detection (비디오 Object Detection에서의 연산량 감소를 위한 방법)

  • KIM, Do-Young;Kang, In-Yeong;Kim, Yeonsu;Choi, Jin-Won;Park, Goo-man
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.723-726 / 2021
  • Object detection on a single image currently performs very well. In video, however, processing is too slow, and real-time operation is difficult on embedded systems. Research papers report real-time performance when running YOLO alone on a high-end GPU with no other workload, but actual users typically run relatively low-end GPUs or CPUs, so smooth real-time operation is generally hard to achieve. To address this limitation, this paper proposes a way to reduce the use of the computationally expensive object detection model. Instead of running the YOLO model on every frame when performing object detection on video, we improve computational efficiency by invoking YOLO less often. The proposed algorithm also works when the camera moves or the background changes. Speed improved by a factor of at least 2 and up to more than 10.
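The scheduling idea described above, running the heavy detector only on every N-th frame and propagating the last detections with a cheap tracker in between, can be sketched as follows; `detect` and `track` are hypothetical stand-ins for a YOLO model and a lightweight tracker, not the paper's actual implementation:

```python
def process_video(frames, detect, track, interval=5):
    """Run the expensive detector only every `interval` frames;
    propagate the last detections with a cheap tracker otherwise."""
    results = []
    last_boxes = []
    for i, frame in enumerate(frames):
        if i % interval == 0:
            last_boxes = detect(frame)             # heavy model (e.g. YOLO)
        else:
            last_boxes = track(frame, last_boxes)  # cheap per-frame update
        results.append(last_boxes)
    return results
```

With `interval=5`, the detector runs on 1 in 5 frames, which is where the claimed 2x-10x speedup would come from when the tracker is much cheaper than the detector.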

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.938-958 / 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision because no annotation of the target object in the test video is available. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which fully exploits motion cues and effectively integrates features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams co-enhance each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical-flow images and produces a fine-grained, complete, and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), designed to integrate features at different levels effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (Davis2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable with some state-of-the-art methods.

Development of an Integrated Traffic Object Detection Framework for Traffic Data Collection (교통 데이터 수집을 위한 객체 인식 통합 프레임워크 개발)

  • Yang, Inchul;Jeon, Woo Hoon;Lee, Joyoung;Park, Jihyun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.6 / pp.191-201 / 2019
  • A fast and accurate integrated traffic object detection framework was proposed and developed, harnessing a computer-vision-based deep-learning approach for automatic object detection, multi-object tracking technology, and video pre-processing tools. The proposed method detects traffic objects such as cars, buses, trucks, and vans from video recordings taken under various external conditions (video stability, weather, and camera angle) and counts the objects by tracking them in real time. Experiments with plausible scenarios covering conditions that affect video quality show that the proposed method performs outstandingly except in rain and snow, achieving 98%~100% accuracy.

Abnormal Object Detection-based Video Synopsis Framework in Multiview Video (다시점 영상에 대한 이상 물체 탐지 기반 영상 시놉시스 프레임워크)

  • Ingle, Palash Yuvraj;Yu, Jin-Yong;Kim, Young-Gab
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.213-216 / 2022
  • There has been an increase in video surveillance for public safety and security, which increases the volume of video data and leads to analysis and storage issues. Furthermore, most surveillance videos contain hours of footage consisting largely of empty frames; thus, extracting useful information is crucial. The prominent framework used in surveillance for efficient storage and analysis is video synopsis. However, the existing video synopsis procedure is not applicable to creating an abnormal-object-based synopsis. Therefore, we propose a lightweight synopsis methodology that first detects and extracts abnormal foreground objects and their respective backgrounds, which are then stitched together to construct a synopsis.

Robust Real-time Detection of Abandoned Objects using a Dual Background Model

  • Park, Hyeseung;Park, Seungchul;Joo, Youngbok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.771-788 / 2020
  • Detection of abandoned objects for smart video surveillance should be robust and accurate in various situations with low computational costs. This paper presents a new algorithm for abandoned object detection based on a dual background model. Through the template registration of a candidate stationary object and the presence authentication methods presented in this paper, we can handle complex cases such as occlusions, illumination changes, long-term abandonment, and the owner's re-attendance, as well as general detection of abandoned objects. The proposed algorithm also analyzes video frames at specific intervals rather than every consecutive frame to reduce computational overhead. For performance evaluation, we experimented with the algorithm on the well-known PETS2006 and ABODA datasets, as well as our own video dataset in a live streaming environment, which shows that the proposed algorithm works well in various situations.
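A minimal sketch of the dual background idea, assuming a simple exponential running average for each model (the paper's template registration and presence authentication steps are omitted): a pixel becomes an abandoned-object candidate when it is foreground against the slowly adapting long-term background but has already been absorbed into the quickly adapting short-term background.

```python
def update_background(bg, frame, alpha):
    """Exponential running average over pixel intensities;
    a higher alpha makes the model adapt faster."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def abandoned_mask(frame, bg_long, bg_short, thresh=30):
    """Flag pixels that differ from the long-term background
    (foreground) but match the short-term background (stationary)."""
    return [abs(f - bl) > thresh and abs(f - bs) <= thresh
            for f, bl, bs in zip(frame, bg_long, bg_short)]
```

A moving object differs from both models and is ignored; only an object that stays put long enough for the fast model to absorb it (but not the slow one) is flagged.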

Video Segmentation Using New Combined Measure (새로운 결합척도를 이용한 동영상 분할)

  • 최재각;이시웅;남재열
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.1 / pp.51-62 / 2003
  • A new video segmentation algorithm for segmentation-based video coding is proposed. The method uses a new criterion based on similarities in both motion and brightness, incorporating brightness and motion information into a single segmentation procedure. The actual segmentation is accomplished using a region-growing technique based on the watershed algorithm. In addition, a tracking technique is used in subsequent frames to achieve a coherent segmentation through time. Simulation results show that the proposed method is effective in determining object boundaries not easily found using the statistical criterion alone.
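One way such a combined motion-and-brightness criterion could look, shown purely as an illustration (the 50/50 weighting and the exact form are assumptions, not the paper's formula):

```python
def combined_distance(px1, px2, w_brightness=0.5):
    """Combine brightness difference and motion-vector distance into a
    single dissimilarity; neighbouring regions merge during region
    growing when this value stays below a threshold.
    Each pixel is (brightness, (vx, vy))."""
    (b1, mv1), (b2, mv2) = px1, px2
    motion_d = ((mv1[0] - mv2[0]) ** 2 + (mv1[1] - mv2[1]) ** 2) ** 0.5
    return w_brightness * abs(b1 - b2) + (1 - w_brightness) * motion_d
```

Using both cues lets the measure separate two regions with the same brightness but different motion, which a brightness-only criterion cannot.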

Required Video Analytics and Event Processing Scenario at Large Scale Urban Transit Surveillance System (도시철도 종합감시시스템에서 요구되는 객체인식 기능 및 시나리오)

  • Park, Kwang-Young;Park, Goo-Man
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.3 / pp.63-69 / 2012
  • In this paper, we introduce the design of an intelligent surveillance camera system and typical event processing scenarios for urban transit. To analyze video, we studied events that frequently occur in surveillance camera systems. Event processing scenarios are designed for seven representative situations (designated-area intrusion, object abandonment, object removal in a designated area, object tracking, loitering, and congestion measurement) in urban transit. Our system is optimized for low hardware complexity, real-time processing, and scenario-dependent solutions.

Moving Objects Tracking Method using Spatial Projection in Intelligent Video Traffic Surveillance System (지능형 영상 교통 감시 시스템에서 공간 투영기법을 이용한 이동물체 추적 방법)

  • Hong, Kyung Taek;Shim, Jae Homg;Cho, Young Im
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.35-41 / 2015
  • When a video surveillance system tracks a specific object, it is very important to obtain information about the object quickly through fast image processing. A single-camera surveillance system tracking an object typically suffers from problems such as occlusion and image noise during the tracking process, which make image-based moving-object tracking difficult. To overcome these difficulties, a multi-camera surveillance system that installs several cameras within the area of interest and views the same object from multiple angles can be considered as a solution. If multiple cameras are used for tracking, decisions can be made with higher accuracy over a wider space. This paper proposes a method of recognizing and tracking a specific object, such as a car, using the homography between multiple cameras installed at a crossroad.
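Spatial projection with a homography reduces to mapping homogeneous image coordinates through a 3x3 matrix; a minimal sketch (the matrices in the usage example are illustrative, not calibrated from real cameras):

```python
def project(H, point):
    """Map an image-plane point (x, y) to the common ground plane
    using a 3x3 homography H given as row-major nested lists.
    The homogeneous coordinate w normalizes the projective result."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)
```

Once each camera's view is projected onto the same ground plane, detections of one vehicle from different angles fall near the same coordinates and can be associated for tracking.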

Robust Object Detection from Indoor Environmental Factors (다양한 실내 환경변수로부터 강인한 객체 검출)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.15 no.2 / pp.41-46 / 2010
  • In this paper, we propose a detection method of reduced computational complexity for separating moving objects from the background in a generic video sequence. In indoor environments, it is generally difficult to detect objects accurately because of environmental factors such as lighting changes, shadows, and reflections on the floor. First, a background image for object detection is created. When an object is present in the video, a similarity comparison between the current input image and the previously created background image, followed by several operations, generates a mixture image. The mixture image and the input image are then used together to detect objects. A labeling process removes noise components from the detected objects, after which morphology techniques refine the object area. As a result, objects are detected robustly against environmental factors such as lighting changes and shadows. Because the proposed system uses mixture images, it detects object regions more effectively than existing systems under lighting changes, shadows, and floor reflections.
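A rough sketch of the detect-then-clean-up pipeline described above, with a plain intensity threshold standing in for the paper's mixture-image comparison and a crude isolated-pixel filter standing in for the morphology step (both substitutions are assumptions for illustration):

```python
def foreground_mask(frame, background, thresh=25):
    """Mark pixels whose intensity differs from the background
    model by more than the threshold."""
    return [[abs(p - b) > thresh for p, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def remove_isolated(mask):
    """Drop foreground pixels with no 4-connected foreground
    neighbour, a crude stand-in for the morphological clean-up."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= ny < h and 0 <= nx < w and mask[ny][nx]
                   for ny, nx in neighbours):
                out[y][x] = True
    return out
```

Single-pixel noise (e.g. a flicker from a floor reflection) has no foreground neighbours and is removed, while connected object blobs survive.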

The Study of automatic region segmentation method for Non-rigid Object Tracking (Non-rigid Object의 추적을 위한 자동화 영역 추출에 관한 연구)

  • 김경수;정철곤;김중규
    • Proceedings of the IEEK Conference / 2001.06d / pp.183-186 / 2001
  • This paper presents a method that automatically extracts moving objects from a video sequence. To extract a moving object, velocity vectors are computed for each frame of the video. Using the estimated velocity vectors, the position of the object is determined. The coordinates of the object are initialized as a seed, and in the image plane the moving object is automatically segmented by a region-growing method and tracked using its intensity range and position information. Application to sequential images shows that the method can extract a moving object.
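The seed-based segmentation step can be sketched as standard 4-connected region growing from the seed pixel; the intensity tolerance and connectivity here are assumptions, and the paper's velocity-vector seeding is not reproduced:

```python
def region_grow(image, seed, tol=10):
    """Grow a region from `seed` over 4-connected pixels whose
    intensity is within `tol` of the seed intensity.
    `image` is a 2D nested list; `seed` is (row, col)."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        y, x = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - seed_val) <= tol):
                region.add((ny, nx))
                stack.append((ny, nx))
    return region
```

In the described pipeline, the seed would come from the position estimated via the velocity vectors, and the grown region would then be tracked across frames.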
