• Title/Summary/Keyword: CCTV Videos


A review of ground camera-based computer vision techniques for flood management

  • Sanghoon Jun;Hyewoon Jang;Seungjun Kim;Jong-Sub Lee;Donghwi Jung
    • Computers and Concrete, v.33 no.4, pp.425-443, 2024
  • Floods are among the most common natural hazards in urban areas. To mitigate the problems caused by flooding, unstructured data such as images and videos collected from closed circuit televisions (CCTVs) or unmanned aerial vehicles (UAVs) have been examined for flood management (FM). Many computer vision (CV) techniques have been widely adopted to analyze imagery data. Although some papers have reviewed recent CV approaches that utilize UAV images or remote sensing data, less effort has been devoted to studies that have focused on CCTV data. In addition, few studies have distinguished between the main research objectives of CV techniques (e.g., flood depth and flooded area) for a comprehensive understanding of the current status and trends of CV applications for each FM research topic. Thus, this paper provides a comprehensive review of the literature that proposes CV techniques for aspects of FM using ground camera (e.g., CCTV) data. Research topics are classified into four categories: flood depth, flood detection, flooded area, and surface water velocity. These application areas are subdivided into three types: urban, river and stream, and experimental. The adopted CV techniques are summarized for each research topic and application area. The primary goal of this review is to provide guidance for researchers who plan to design a CV model for specific purposes such as flood-depth estimation. Researchers should be able to draw on this review to construct an appropriate CV model for any FM purpose.

Violent Behavior Detection using Motion Analysis in Surveillance Video (감시 영상에서 움직임 정보 분석을 통한 폭력행위 검출)

  • Kang, Joohyung;Kwak, Sooyeong
    • Journal of Broadcast Engineering, v.20 no.3, pp.430-439, 2015
  • The demand for violence detection techniques that use video analysis to help prevent crime has been increasing recently. Many researchers have studied vision-based behavior recognition, but violent behavior analysis techniques usually focus on violent scenes in television and movie content. Previously published methods typically combine color cues (e.g., skin and blood) with motion information to detect violent scenes, because violence in movies usually involves blood. However, color cues such as blood are rarely useful for violence detection in surveillance videos, because such scenes are seldom captured in real-world situations. In this paper, we propose a method for detecting violent behavior in surveillance videos that relies on motion vectors, such as flow vector magnitudes and changes in direction, without using color information. To evaluate the proposed algorithm, we test it on both the USI dataset and various real-world surveillance videos from YouTube.
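
The abstract above hinges on flow-vector magnitudes and direction changes. Below is a minimal sketch of how such per-frame motion features could be computed with OpenCV's Farnebäck optical flow; the flow estimator, the magnitude threshold, and the exact feature definitions are assumptions for illustration, not the authors' formulation.

```python
import cv2
import numpy as np

def motion_features(video_path, mag_thresh=2.0):
    """Per-frame flow-magnitude and direction-change statistics (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    prev_ang = None
    features = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        moving = mag > mag_thresh                      # ignore near-static pixels
        dir_change = 0.0
        if prev_ang is not None and moving.any():
            diff = np.abs(ang - prev_ang)[moving]
            diff = np.minimum(diff, 2 * np.pi - diff)  # wrap-around angle difference
            dir_change = float(diff.mean())
        mean_mag = float(mag[moving].mean()) if moving.any() else 0.0
        features.append((mean_mag, dir_change))
        prev_gray, prev_ang = gray, ang
    cap.release()
    return features  # feed into a simple threshold or classifier for "violent" frames
```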

A Basic Study on the Instance Segmentation with Surveillance Cameras at Construction Sites using Deep Learning based Computer Vision (건설 현장 CCTV 영상에서 딥러닝을 이용한 사물 인식 기초 연구)

  • Kang, Kyung-Su;Cho, Young-Woon;Ryu, Han-Guk
    • Proceedings of the Korean Institute of Building Construction Conference, 2020.11a, pp.55-56, 2020
  • The construction industry has the highest occupational fatality and injury rates of any industry. Accordingly, safety managers install surveillance cameras at construction sites and monitor them closely to prevent accidents in real time. However, because of the limits of human cognitive ability, it is impossible to monitor many videos simultaneously, and the fatigue of the person monitoring the cameras is very high. Thus, to help safety managers monitor work and reduce the occupational accident rate, we conducted a study on object recognition at construction sites using surveillance cameras. In this study, we applied instance segmentation to identify the class and location of objects and to extract their size and shape at construction sites. This research considers how deep learning-based computer vision technology can be applied to safety management on construction sites.

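As a rough illustration of the detect-and-segment step described above, the sketch below runs a COCO-pretrained Mask R-CNN from torchvision on a single CCTV frame and reads off class, box, and mask area. The model choice, score threshold, and file name are assumptions; the paper's construction-specific classes would require fine-tuning on site imagery.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained model used as a stand-in; construction-specific classes
# (workers, equipment, materials) would need fine-tuning on site data.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("site_cctv_frame.jpg").convert("RGB")   # hypothetical file name
with torch.no_grad():
    pred = model([to_tensor(frame)])[0]

keep = pred["scores"] > 0.7                                 # assumed score threshold
for label, box, mask in zip(pred["labels"][keep], pred["boxes"][keep], pred["masks"][keep]):
    area = (mask[0] > 0.5).sum().item()   # object size in pixels from the predicted mask
    print(int(label), box.tolist(), area)
```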

Design and Implementation of IP Video Wall System for Large-scale Video Monitoring in Smart City Environments (스마트 시티 환경에서 대규모 영상 모니터링을 위한 IP 비디오 월 시스템의 설계 및 구현)

  • Yang, Sun-Jin;Park, Jae-Pyo;Yang, Seung-Min
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.9, pp.7-13, 2019
  • Unlike a typical video wall system, a video wall system used for integrated monitoring in smart city environments should be able to display various videos, images, and text simultaneously. In this paper, we propose an Internet Protocol (IP)-based video wall system that places no limit on the number of videos that can be monitored simultaneously and that can arrange the monitor screen layout without restriction. The proposed system is composed of multiple display servers, a wall controller, and video source providers, which communicate with each other over an IP network. Since each display server receives and decodes video streams directly from the video source devices and displays them on its attached monitor screens, more videos can be displayed simultaneously on the entire video wall. When one video is displayed across screens attached to multiple display servers, only one display server receives the video stream and relays it to the others using IP multicast, thereby reducing the network load and keeping the video frames synchronized. Experiments show that as the number of videos increases, a system consisting of more display servers shows better decoding and rendering performance, and there is no performance degradation even as more display servers are added.
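
One key design point above is that a single display server receives the source stream and relays it to the others over IP multicast. The sketch below shows only the bare multicast send/join pattern with Python sockets, assuming a hypothetical group address and port; the actual system would additionally carry an RTP-style payload and handle frame synchronization.

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 5004   # example multicast group/port, not from the paper

def relay_sender(packets):
    """Display server that receives the source stream re-sends its packets to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for pkt in packets:                      # pkt: encoded video payload (e.g., RTP)
        sock.sendto(pkt, (GROUP, PORT))

def multicast_receiver():
    """Other display servers join the group and decode the shared stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(65536)
        yield data                           # hand off to the decoder/renderer
```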

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association, v.56 no.6, pp.403-417, 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was built in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that are highly irregular and variable in real environments. A total of 1,728 scenarios were designed under five experimental conditions, from which 36 scenarios comprising 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by computing the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the average pixel value of each image were selected. The area with the maximum pixel variability was found by shifting a window in 10-pixel steps and was set as the representative 180×180 region of the original image. After resizing to 120×120 as input for a convolutional neural network model, image augmentation was performed under unified shooting conditions. Ninety-two percent of the data fell within an absolute PBIAS of 10%. The final results of this study show clear potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
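
A minimal sketch of the preprocessing pipeline described above follows: KNN-based background subtraction to isolate rain streaks, a 10-pixel sliding search for the most variable 180×180 region, and resizing to 120×120 for the CNN input. The background-subtractor settings and the variability score are assumptions, not the authors' exact procedure.

```python
import cv2
import numpy as np

# Assumed settings; the paper does not specify the subtractor configuration.
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=False)

def rain_streak_crops(frames, win=180, step=10, out_size=120):
    """Pick the most active win x win region over the clip and return resized crops."""
    masks = [subtractor.apply(f) for f in frames]        # foreground ~ rain streaks
    activity = np.std(np.stack(masks, axis=0).astype(np.float32), axis=0)
    h, w = activity.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(0, h - win + 1, step):                # slide in 10-pixel steps
        for x in range(0, w - win + 1, step):
            score = activity[y:y + win, x:x + win].mean()
            if score > best:
                best, best_xy = score, (y, x)
    y, x = best_xy
    return [cv2.resize(f[y:y + win, x:x + win], (out_size, out_size)) for f in frames]
```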

A Novel Vehicle Counting Method using Accumulated Movement Analysis (누적 이동량 분석을 통한 영상 기반 차량 통행량 측정 방법)

  • Lim, Seokjae;Jung, Hyeonseok;Kim, Wonjun;Lee, Ryong;Park, Minwoo;Lee, Sang-Hwan
    • Journal of Broadcast Engineering, v.25 no.1, pp.83-93, 2020
  • With the rapid increase in the number of vehicles, various traffic problems, e.g., car crashes and congestion, frequently occur in urban road environments. To overcome such problems, intelligent transportation systems have been developed around traffic flow analysis. The traffic flow, which can be estimated by vehicle counting, plays an important role in managing and controlling urban traffic. In this paper, we propose a novel vehicle counting method based on predicted lane centers. Specifically, the center of each lane is detected using the accumulated movement of vehicles and its filtered responses. The number of vehicles passing through the extracted centers is then counted by checking the closest trajectories of the corresponding vehicles. Various experimental results on road CCTV videos demonstrate that the proposed method is effective for vehicle counting.
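
The counting idea above rests on accumulating vehicle movement to locate lane centers. The partial sketch below accumulates frame differences and picks per-lane peaks along a counting row; the smoothing kernel and peak selection are stand-ins for the paper's filtered responses, and the final step of matching tracked vehicles to the nearest center is omitted.

```python
import cv2
import numpy as np

def accumulate_motion(frames):
    """Accumulated per-pixel movement; its peaks approximate where vehicles travel."""
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        acc += cv2.absdiff(gray, prev).astype(np.float32)
        prev = gray
    return cv2.GaussianBlur(acc, (31, 31), 0)   # smoothing stands in for the paper's filtering

def lane_centres(acc, count_row, n_lanes):
    """Columns with the strongest accumulated motion along the counting row."""
    profile = acc[count_row]
    return np.argsort(profile)[-n_lanes:]        # crude peak pick; illustrative only
```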

Crowd Behavior Detection using Convolutional Neural Network (컨볼루션 뉴럴 네트워크를 이용한 군중 행동 감지)

  • Ullah, Waseem;Ullah, Fath U Min;Baik, Sung Wook;Lee, Mi Young
    • The Journal of Korean Institute of Next Generation Computing, v.15 no.6, pp.7-14, 2019
  • The automatic monitoring and detection of crowd behavior in surveillance videos has received significant attention in the field of computer vision because of its many applications, such as security, safety, and the protection of assets. The field of crowd analysis is also growing rapidly in the research community, and detecting and analyzing crowd behavior is essential for this purpose. In this paper, we propose a deep learning-based method that detects abnormal activities in surveillance cameras installed in a smart city. A fine-tuned VGG-16 model is trained on a publicly available benchmark crowd dataset and tested on real-time streams. The CCTV camera captures the video stream and, when an abnormal activity is detected, an alert is generated and sent to the nearest police station so that immediate action can be taken before further loss. Experiments show that the proposed method outperforms existing state-of-the-art techniques.
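
As a hedged illustration of the fine-tuning step described above, the sketch below swaps the final layer of an ImageNet-pretrained VGG-16 for a two-class (normal/abnormal) head and runs one training step. The frozen layers, class count, and hyper-parameters are assumptions rather than the authors' reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                       # keep convolutional features fixed (assumed)
model.classifier[6] = nn.Linear(4096, 2)          # normal vs. abnormal crowd behaviour

optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of crowd frames with 0/1 labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```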

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon;Kim, DoHyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information, v.24 no.8, pp.19-28, 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has risen with recent rapid urbanization, the public's demand for safety has grown, and the installation of surveillance cameras such as closed-circuit television (CCTV) has increased in many cities. However, retrieving and analyzing the huge amount of video data from numerous CCTVs takes a great deal of time and labor. As a result, there is an increasing demand for intelligent video recognition systems that can automatically detect and summarize the various events occurring on CCTVs. Video summarization is a method of generating a synopsis video from a long original video so that users can watch it in a short time. The proposed method can be divided into two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step then creates the final synopsis video from the extracted objects. Whereas existing methods do not consider the interaction between objects in the original video when generating the synopsis, the proposed method introduces a new object clustering algorithm that effectively preserves these interactions in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the performance of the proposed method is superior to that of existing video synopsis algorithms.
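
The synopsis stage above rearranges detected object "tubes" in time. The sketch below shows a greedy scheduler that assigns each tube a start frame while minimizing spatial collisions with tubes already placed; it omits the paper's object clustering, which keeps interacting objects at their original relative timing.

```python
from dataclasses import dataclass

@dataclass
class Tube:
    """An object's trajectory: per-frame bounding boxes (x1, y1, x2, y2)."""
    boxes: list          # boxes[i] is the box at the i-th frame of the tube

def overlap(a, b):
    """Intersection area of two boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return max(0, min(ax2, bx2) - max(ax1, bx1)) * max(0, min(ay2, by2) - max(ay1, by1))

def greedy_schedule(tubes, synopsis_len):
    """Assign each tube a start frame in the synopsis, greedily minimising collisions."""
    placed, starts = [], []
    for tube in tubes:
        best_s, best_cost = 0, float("inf")
        for s in range(max(1, synopsis_len - len(tube.boxes) + 1)):
            cost = 0
            for other, os in placed:
                for t, box in enumerate(tube.boxes):
                    ot = s + t - os                      # other tube's frame index
                    if 0 <= ot < len(other.boxes):
                        cost += overlap(box, other.boxes[ot])
            if cost < best_cost:
                best_s, best_cost = s, cost
        placed.append((tube, best_s))
        starts.append(best_s)
    return starts
```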

Intelligent Abnormal Event Detection Algorithm for Single Households at Home via Daily Audio and Vision Patterns (지능형 오디오 및 비전 패턴 기반 1인 가구 이상 징후 탐지 알고리즘)

  • Jung, Juho;Ahn, Junho
    • Journal of Internet Computing and Services, v.20 no.1, pp.77-86, 2019
  • As the number of single-person households increases, it is not easy for a person living alone to ask for help if they are severely injured at home. This paper addresses the detection of such abnormal events, in which a member of a single-person household is seriously injured at home. It proposes a vision detection algorithm that analyzes and recognizes patterns in videos collected from home CCTV, and an audio detection algorithm that analyzes and recognizes patterns in household sounds captured with a smartphone. Each algorithm on its own has shortcomings and has difficulty detecting situations such as serious injuries over a wide area, so we propose a fusion method that effectively combines the two. The detection performance of each individual algorithm and the precise detection performance of the proposed fusion method were evaluated.
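
A minimal sketch of the kind of late fusion described above is shown below: the vision and audio detectors each emit a confidence score, and a weighted sum is thresholded to raise an alarm. The weights and threshold are illustrative assumptions; the abstract does not state the paper's fusion parameters.

```python
def fuse(vision_score: float, audio_score: float,
         w_vision: float = 0.6, w_audio: float = 0.4, threshold: float = 0.5):
    """Weighted late fusion of two detector confidences in [0, 1]; values are assumed."""
    combined = w_vision * vision_score + w_audio * audio_score
    return combined >= threshold, combined

# Example: a fall seen with moderate confidence while a loud crash is clearly heard.
alarm, score = fuse(vision_score=0.55, audio_score=0.9)
```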

2-Stage Detection and Classification Network for Kiosk User Analysis (디스플레이형 자판기 사용자 분석을 위한 이중 단계 검출 및 분류 망)

  • Seo, Ji-Won;Kim, Mi-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.5, pp.668-674, 2022
  • Machine learning techniques that use visual data have high utility in industrial and service fields such as scene recognition, fault detection, security, and user analysis. Among these, user analysis through CCTV videos is one of the practical applications of vision data. In addition, many studies on lightweight artificial neural networks have been published to improve usability in mobile and embedded environments. In this study, we propose a network that combines object detection and classification for a mobile graphics processing unit. The network detects pedestrians and faces, and classifies age and gender from each detected face. The proposed network is built on MobileNet, YOLOv2, and skip connections. The detection and classification models are trained individually and combined in a two-stage structure, and an attention mechanism is used to improve detection and classification performance. An Nvidia Jetson Nano is used to run and evaluate the proposed system.
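
To make the two-stage structure above concrete, the sketch below assumes stage 1 supplies face boxes and stage 2 classifies gender and an age group from each crop with lightweight MobileNetV2 heads. The class counts, crop size, and backbone choice only mirror the general detect-then-classify layout, not the paper's YOLOv2/MobileNet design with skip connections and attention.

```python
import torch
import torch.nn as nn
from torchvision import models
from torchvision.transforms.functional import resize

# Stage-2 heads: gender and (assumed) four age groups on face crops.
gender_net = models.mobilenet_v2(weights="IMAGENET1K_V1")
gender_net.classifier[1] = nn.Linear(gender_net.last_channel, 2)
age_net = models.mobilenet_v2(weights="IMAGENET1K_V1")
age_net.classifier[1] = nn.Linear(age_net.last_channel, 4)
gender_net.eval()
age_net.eval()

@torch.no_grad()
def analyse(frame_tensor, face_boxes):
    """frame_tensor: CxHxW float image; face_boxes: integer (x1, y1, x2, y2) from stage 1."""
    results = []
    for x1, y1, x2, y2 in face_boxes:
        crop = resize(frame_tensor[:, y1:y2, x1:x2], [224, 224]).unsqueeze(0)
        gender = gender_net(crop).argmax(1).item()
        age = age_net(crop).argmax(1).item()
        results.append((gender, age))
    return results
```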