• Title/Summary/Keyword: Moving CCTV


Empirical Study of the PLSP (Priority Lane and Signal Preemption) for Emergency Vehicles (긴급차량의 우선차로 및 우선신호 도입효과 -청주시를 대상으로-)

  • Lee, Jun; Ham, Seung Hee; Lee, Sang Jo
    • Journal of the Society of Disaster Information / v.16 no.4 / pp.650-657 / 2020
  • Purpose: This study analyzes the effectiveness of the pilot project of the PLSP (Priority Lane and Signal Preemption) system operated in Cheongju City. Method: The priority signal was operated manually: a police officer watching an approaching fire truck on CCTV switched the signal to green, and an emergency-vehicle priority lane was marked on the road to allow preferential passage. VISSIM simulation analysis was performed for a 1.2 km segment of the 3.8 km pilot project section, and vehicle data were analyzed for part of the test operation sections. Result: The simulation analysis shows that introducing PLSP raises the travel speed of the emergency vehicle to 42 km/h, roughly double the speed without the system. Travel time was reduced by about 3 minutes, a considerable improvement of 69% compared with sections where the system is not in operation. The pilot operation in Cheongju City showed a time-saving effect of about two minutes on average, with an average time of 4 minutes 14 seconds in the first period and 5 minutes 40 seconds in the second period. Conclusion: The system has been shown to be effective in minimizing the time to on-site arrival of emergency vehicles.

Vision-based Low-cost Walking Spatial Recognition Algorithm for the Safety of Blind People (시각장애인 안전을 위한 영상 기반 저비용 보행 공간 인지 알고리즘)

  • Sunghyun Kang; Sehun Lee; Junho Ahn
    • Journal of Internet Computing and Services / v.24 no.6 / pp.81-89 / 2023
  • In modern society, blind people face difficulties in navigating common environments such as sidewalks, elevators, and crosswalks. Research has been conducted to alleviate these inconveniences for the visually impaired through the use of visual and audio aids. However, such research often encounters limitations when it comes to practical implementation due to the high cost of wearable devices, high-performance CCTV systems, and voice sensors. In this paper, we propose an artificial intelligence fusion algorithm that utilizes low-cost video sensors integrated into smartphones to help blind people safely navigate their surroundings during walking. The proposed algorithm combines motion capture and object detection algorithms to detect moving people and various obstacles encountered during walking. We employed the MediaPipe library for motion capture to model and detect surrounding pedestrians during motion. Additionally, we used object detection algorithms to model and detect various obstacles that can occur during walking on sidewalks. Through experimentation, we validated the performance of the artificial intelligence fusion algorithm, achieving accuracy of 0.92, precision of 0.91, recall of 0.99, and an F1 score of 0.95. This research can assist blind people in navigating through obstacles such as bollards, shared scooters, and vehicles encountered during walking, thereby enhancing their mobility and safety.
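
The following is a minimal sketch of the kind of vision fusion described above, assuming smartphone video read with OpenCV, the MediaPipe Pose solution standing in for the motion-capture branch, and an off-the-shelf YOLO model (via the ultralytics package) standing in for the unspecified obstacle detector; the class list, closeness threshold, and warning logic are illustrative choices, not the authors' implementation.

```python
# Illustrative fusion of pose-based pedestrian detection and generic object
# detection for walking-assistance warnings. The detector, class set, and
# thresholds are assumptions for this sketch, not the paper's exact setup.
import cv2
import mediapipe as mp
from ultralytics import YOLO

OBSTACLE_CLASSES = {"person", "bicycle", "car", "motorcycle", "bus", "truck"}  # assumed set

pose = mp.solutions.pose.Pose(static_image_mode=False, model_complexity=0)
detector = YOLO("yolov8n.pt")  # placeholder model; the paper does not name its detector

cap = cv2.VideoCapture("walk.mp4")  # hypothetical smartphone recording
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    warnings = []

    # Motion-capture branch: flag a pedestrian whose pose spans much of the
    # frame height (a rough proxy for being close to the user).
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = pose.process(rgb)
    if result.pose_landmarks is not None:
        ys = [lm.y for lm in result.pose_landmarks.landmark]
        if max(ys) - min(ys) > 0.6:  # assumed closeness threshold
            warnings.append("pedestrian ahead")

    # Object-detection branch: flag obstacle classes above a confidence threshold.
    det = detector(frame, verbose=False)[0]
    for box in det.boxes:
        name = det.names[int(box.cls)]
        if name in OBSTACLE_CLASSES and float(box.conf) > 0.5:
            warnings.append(f"obstacle: {name}")

    if warnings:
        print(", ".join(sorted(set(warnings))))

cap.release()
```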

A Method of Pedestrian Flow Speed Estimation Adaptive to Viewpoint Changes (시점변화에 적응적인 보행자 유동 속도 측정)

  • Lee, Gwang-Gook; Yoon, Ja-Young; Kim, Jae-Jun; Kim, Whoi-Yul
    • Journal of Broadcast Engineering / v.14 no.4 / pp.409-418 / 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of real-world motion from the observed motion vectors. For this purpose, a pixel-to-meter conversion factor, calculated from the camera parameters, is introduced. Also, the height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared to previous work on flow speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments were performed on both simulated image sequences and real video. On the simulated videos, the proposed method estimated the flow speed with an average error of about 0.08 m/s, and it also showed promising results on the real video.
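
As a rough illustration of the pixel-to-meter conversion idea, the sketch below assumes a pinhole camera with known height, tilt, and focal length, and a fixed assumed height for the observed motion (the paper instead predicts the missing height statistically); the geometry is a basic approximation, not the paper's derivation.

```python
# Rough pixel-to-meter conversion for motion vectors in a surveillance view.
# Camera parameters and the assumed motion height are illustrative values.
import math

def meters_per_pixel(v_px, cam_height_m, tilt_rad, focal_px, motion_height_m=0.9):
    """Approximate real-world size (m) of one pixel at vertical offset v_px.

    v_px            : vertical pixel offset from the principal point (down = positive)
    cam_height_m    : camera mounting height above the ground
    tilt_rad        : camera tilt below the horizontal
    focal_px        : focal length expressed in pixels
    motion_height_m : assumed height of the observed motion above the ground
                      (the paper predicts this statistically; 0.9 m is a stand-in)
    """
    ray_angle = tilt_rad + math.atan2(v_px, focal_px)   # ray angle below horizontal
    if ray_angle <= 0:
        raise ValueError("ray does not intersect the motion plane")
    slant_range = (cam_height_m - motion_height_m) / math.sin(ray_angle)
    return slant_range / focal_px  # small-displacement approximation

def flow_speed_mps(pixel_speed, v_px, fps, **camera):
    """Convert an observed motion-vector magnitude (pixels/frame) to m/s."""
    return pixel_speed * meters_per_pixel(v_px, **camera) * fps

# Example: camera 6 m high, tilted 30 degrees down, 800 px focal length, 25 fps video.
cam = dict(cam_height_m=6.0, tilt_rad=math.radians(30), focal_px=800.0)
print(flow_speed_mps(pixel_speed=4.0, v_px=50.0, fps=25.0, **cam))
```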

Video-based Intelligent Unmanned Fire Surveillance System (영상기반 지능형 무인 화재감시 시스템)

  • Jeon, Hyoung-Seok; Yeom, Dong-Hae; Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.516-521 / 2010
  • In this paper, we propose a video-based intelligent unmanned fire surveillance system using fuzzy color models. In general, a fire surveillance system requires a separate device to detect heat or smoke; the proposed system, however, can be implemented with widely deployed CCTV, so it needs no separate devices or extra cost. Existing video-based fire surveillance systems mainly rely on extracting smoke or flame from the input image alone. Smoke is difficult to extract at night because of its gray-scale color, and the flame color depends on the temperature, the combustible material, the size of the flame, and other factors, which makes it hard to extract the flame region from the input image. This paper presents an intelligent fire surveillance system that is robust against variations in flame color, especially at night. The proposed system extracts the moving object from the input image, decides whether the object is a flame using the color obtained from the fuzzy color model and the shape obtained from a histogram, and issues a fire alarm when the flame spreads. Finally, we verify the efficiency of the proposed system through experiments on a controlled real fire.
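
A bare-bones sketch of the pipeline described above, assuming OpenCV background subtraction for the moving-object step, triangular fuzzy membership functions over HSV channels as the fuzzy color model, and a crude region-growth test in place of the paper's histogram-based shape check and spread decision; all parameters are illustrative.

```python
# Illustrative flame-candidate detection: moving-object extraction, a fuzzy
# color score over HSV, and a simple spread check before raising an alarm.
# Membership parameters and thresholds are assumptions, not the paper's values.
import cv2
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b on the support [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-6), (c - x) / (c - b + 1e-6)), 0.0, 1.0)

def flame_membership(hsv):
    """Per-pixel fuzzy degree of 'flame-like color' (illustrative parameters)."""
    h = hsv[..., 0].astype(float)
    s = hsv[..., 1].astype(float)
    v = hsv[..., 2].astype(float)
    mu_hue = np.maximum(tri(h, 0, 10, 35), tri(h, 160, 175, 180))  # reddish/orange hues
    mu_sat = tri(s, 60, 180, 255)                                  # moderately saturated
    mu_val = tri(v, 120, 220, 255)                                 # bright
    return np.minimum(np.minimum(mu_hue, mu_sat), mu_val)          # fuzzy AND (min)

bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)
cap = cv2.VideoCapture("cctv.mp4")  # hypothetical CCTV stream
prev_area = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    moving = bg.apply(frame) > 0                          # moving-object mask
    mu = flame_membership(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV))
    flame = moving & (mu > 0.5)                           # fuzzy color decision
    area = int(flame.sum())
    # Very rough stand-in for the shape/spread test: alarm if the flame-like
    # region is large enough and keeps growing over consecutive frames.
    if prev_area > 0 and area > 500 and area > 1.1 * prev_area:
        print("fire alarm: flame-like region is spreading")
    prev_area = area

cap.release()
```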

Regional Projection Histogram Matching and Linear Regression based Video Stabilization for a Moving Vehicle (영역별 수직 투영 히스토그램 매칭 및 선형 회귀모델 기반의 차량 운행 영상의 안정화 기술 개발)

  • Heo, Yu-Jung; Choi, Min-Kook; Lee, Hyun-Gyu; Lee, Sang-Chul
    • Journal of Broadcast Engineering / v.19 no.6 / pp.798-809 / 2014
  • Video stabilization is performed to remove unexpected shaky and irregular motion from a video. It is often used as preprocessing for robust feature tracking and matching. Typical video stabilization algorithms are developed to compensate for motion in surveillance video or outdoor recordings captured by a hand-held camera. However, since vehicle video contains rapid changes of motion and local features, typical video stabilization algorithms are difficult to apply directly. In this paper, we propose a novel approach to compensate for shaky and irregular motion in vehicle video using a linear regression model and vertical projection histogram matching. Toward this goal, we perform vertical projection histogram matching in each sub-region of an input frame, and then build a linear regression model from the estimated regional vertical movement vectors to extract vertical translation and rotation parameters. Multiple binarization with sub-region analysis for building the linear regression model is effective in typical recording environments where rapid changes of motion and local features occur. We demonstrate the effectiveness of our approach on black-box (dashcam) videos and show that the linear regression model achieves robust estimation of motion parameters and generates stabilized video in a fully automatic manner.
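
The sketch below illustrates the two steps named above, per-strip vertical projection histogram matching followed by a linear fit of vertical shift against strip position, assuming grayscale frames split into vertical strips; the strip count, search range, and small-angle rotation model are assumptions for illustration and omit the paper's multiple-binarization step.

```python
# Sketch: per-strip vertical projection histogram matching plus a linear fit
# of vertical shift against strip position. Strip count, search range, and the
# small-angle rotation model are illustrative choices, not the paper's setup.
import numpy as np

def vertical_projection(strip):
    """1-D profile obtained by summing the strip's intensities along each row."""
    return strip.sum(axis=1).astype(float)

def best_vertical_shift(prev_strip, curr_strip, max_shift=20):
    """Vertical shift (pixels) that best aligns the two projection profiles."""
    p, c = vertical_projection(prev_strip), vertical_projection(curr_strip)
    best, best_err = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        a = p[max(d, 0): len(p) + min(d, 0)]   # overlapping part of previous profile
        b = c[max(-d, 0): len(c) + min(-d, 0)] # overlapping part of current profile
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best

def estimate_motion(prev_gray, curr_gray, n_strips=8):
    """Fit shift ~= ty + theta * x over strips: the intercept is the vertical
    translation and the slope a small-angle rotation about the image center."""
    h, w = prev_gray.shape
    xs, shifts = [], []
    for i in range(n_strips):
        x0, x1 = i * w // n_strips, (i + 1) * w // n_strips
        shifts.append(best_vertical_shift(prev_gray[:, x0:x1], curr_gray[:, x0:x1]))
        xs.append((x0 + x1) / 2.0 - w / 2.0)   # strip center relative to image center
    theta, ty = np.polyfit(xs, shifts, 1)      # linear regression: slope, intercept
    return ty, theta

# The returned (ty, theta) can then be inverted, e.g. with an affine warp,
# to compensate the frame's vertical translation and rotation.
```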