• Title/Summary/Keyword: optical surveillance

Estimation of Crowd Density in Public Areas Based on Neural Network

  • Kim, Gyujin;An, Taeki;Kim, Moonhyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.9
    • /
    • pp.2170-2190
    • /
    • 2012
  • There are nowadays strong demands for intelligent surveillance systems that can infer or understand more complex behavior. Crowd density estimation can lead to a better understanding of crowd behavior, improved design of the built environment, and increased pedestrian safety. In this paper, we propose a new crowd density estimation method that estimates not only a moving crowd but also a stationary crowd, using images captured from surveillance cameras in various public locations. The density of the moving crowd is measured from the moving area over a specified time period, where the moving area is defined as the region in which the magnitude of the accumulated optical flow exceeds a predefined threshold. In contrast, the stationary crowd density is estimated from the coarseness of textures, under the assumption that each person can be regarded as a textural unit. A multilayer neural network is designed to classify crowd density into five levels. Finally, the proposed method is evaluated on the PETS 2009 dataset and on image sequences from the platform of Gangnam subway station.
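
The moving-area measurement described in this abstract can be sketched with standard tools. The snippet below is a minimal illustration, assuming OpenCV (Farneback dense optical flow) and a hypothetical video file; the accumulation window and the threshold value are illustrative, not the paper's parameters.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("station.mp4")        # hypothetical surveillance clip
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
accum = np.zeros(prev.shape, dtype=np.float32)

for _ in range(30):                          # accumulate flow over a short window
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    accum += mag                             # accumulated optical-flow magnitude
    prev = gray

moving_area = accum > 25.0                   # predefined threshold (illustrative)
print(f"moving-area ratio: {moving_area.mean():.3f}")
```

The ratio of "moving" pixels would then feed the density classifier; the texture-coarseness feature used for stationary crowds is not shown here.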

A Real-Time Surveillance System for Vaccine Cold Chain Based on Internet of Things Technology

  • Shao-jun Jiang;Zhi-lai Zhang;Wen-yan Song
    • Journal of Information Processing Systems
    • /
    • v.19 no.3
    • /
    • pp.394-406
    • /
    • 2023
  • In this study, a real-time surveillance system using Internet of Things technology is proposed for vaccine cold chains. This system fully visualizes vaccine transport and storage. It comprises a 4G gateway module, a low-power and low-cost wireless temperature and humidity collection module (WTHCM), a cloud service software platform, and a phone app. The WTHCM is installed in freezers or truck-mounted cold-chain cabinets to collect the temperature and humidity of the vaccine storage environment. It then transmits the collected data to a gateway module via the radio-frequency physical layer (RF_PHY). The RF_PHY is an interface for calling the underlying 2.4-GHz transceiver, which enables a more flexible communication mode. The gateway module can simultaneously receive data from multiple acquisition terminals, process the received data according to the protocol, and transmit the collated data to the cloud server platform via 4G or Wi-Fi. The cloud server platform primarily provides data storage, chart views, short-message warnings, and other functions. The phone app is designed to help users view and print temperature and humidity data concerning the transportation and storage of vaccines anytime and anywhere. Thus, this system provides a new vaccine management model for ensuring the safety and reliability of vaccines to a greater extent.
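
The abstract only outlines the data path from collection module to gateway to cloud, so the snippet below is a hedged sketch of the gateway-to-cloud step in Python. The endpoint URL, JSON field names, and the alarm rule mentioned in the comment are assumptions for illustration, not part of the described system.

```python
import json
import time
import requests

CLOUD_URL = "https://example.com/api/coldchain/readings"   # hypothetical endpoint

def forward_reading(device_id: str, temperature_c: float, humidity_pct: float) -> None:
    """Package one WTHCM sample and push it to the cloud platform."""
    payload = {
        "device": device_id,
        "timestamp": int(time.time()),
        "temperature_c": temperature_c,
        "humidity_pct": humidity_pct,
    }
    resp = requests.post(CLOUD_URL, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=5)
    resp.raise_for_status()
    # A short-message warning could be raised server-side when the temperature
    # leaves the usual 2-8 degC vaccine storage band.

forward_reading("wthcm-01", 5.2, 61.0)
```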

Real Time Object Tracking Method using Multiple Cameras (다중 카메라를 이용한 실시간 객체 추적 방법)

  • Jang, In-Tae;Kim, Dong-Woo;Song, Young-Jun;Kwon, Hyeok-Bong;Ahn, Jae-Hyeong
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.4
    • /
    • pp.51-59
    • /
    • 2012
  • Recently, studies on object tracking using image processing have been active in the field of security and surveillance. Existing security and surveillance systems with multiple cameras have operated independently, so tracking became difficult when a tracked object moved into another monitored area. In this paper, we propose a method that automatically hands over camera control according to the moving direction of an object across multiple cameras. The proposed method detects the object and tracks it using its color and direction information: the color information is obtained from the hue, and the direction information is obtained from the optical flow. The optical flow is computed only for the object region rather than the entire image, which reduces the computational complexity and makes real-time tracking possible. In addition, automatic object tracking removes the inconvenience of manually operating the existing cameras in a security surveillance system.
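
A minimal sketch of the object descriptor implied by this abstract is shown below, assuming OpenCV; the bounding box, histogram size, and handover rule are illustrative choices, not the authors' exact implementation.

```python
import cv2
import numpy as np

def object_descriptor(prev_gray, gray, frame_bgr, bbox):
    """Return (hue histogram, mean motion direction) for one tracked box."""
    x, y, w, h = bbox
    # Flow is computed only inside the object region to keep tracking real time.
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y + h, x:x + w],
                                        gray[y:y + h, x:x + w], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    direction = np.arctan2(flow[..., 1].mean(), flow[..., 0].mean())
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hue_hist = cv2.normalize(cv2.calcHist([hsv], [0], None, [32], [0, 180]), None)
    return hue_hist.flatten(), direction

def should_hand_over(direction, bbox, frame_width, margin=30):
    """Illustrative rule: pass control to the neighbouring camera when the
    object moves toward the right image border."""
    x, _, w, _ = bbox
    moving_right = abs(direction) < np.pi / 4
    return moving_right and (x + w > frame_width - margin)
```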

Tiny Drone Tracking with a Moving Camera (동적 카메라 환경에서의 소형 드론 추적 방법)

  • Son, Sohee;Jeon, Jinwoo;Lee, Injae;Cha, Jihun;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.802-812
    • /
    • 2019
  • With the rapid development of unmanned aerial vehicles (UAVs), the demand for drone surveillance systems is increasing. Since surveillance systems with fixed cameras have a limited range, surveillance systems with a moving camera, applicable to PTZ (pan-tilt-zoom) cameras, are required. Selecting features for the object plays a critical role in tracking, and the object has to be represented by its shape or appearance. Considering these conditions, this paper introduces an object tracking method based on optical flow to track a tiny drone with a moving camera. In addition, a tracking method combined with a Kalman filter is proposed to keep tracking even when the optical-flow tracker fails. Experiments are conducted on sequences whose target size ranges from 12 to 56,337 pixels, and the proposed method improves average precision by 175%. The experimental results also show that the proposed method can track a target as small as 12 pixels.
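
The coasting behavior described above (predict with the Kalman filter when the tracker loses the drone, correct when it reports a position) can be sketched with OpenCV's constant-velocity Kalman filter. The noise covariances below are placeholders, and the optical-flow tracker itself is abstracted away.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                      # state: x, y, vx, vy; meas.: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def step(measurement):
    """Predict every frame; correct only when the tracker returns a position."""
    predicted = kf.predict()
    if measurement is not None:
        kf.correct(np.array(measurement, dtype=np.float32).reshape(2, 1))
        return measurement
    return float(predicted[0, 0]), float(predicted[1, 0])   # coast on the prediction

step((120.0, 80.0))     # tracker succeeded
step(None)              # tracker failed; keep the Kalman estimate
```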

Detection using Optical Flow and EMD Algorithm and Tracking using Kalman Filter of Moving Objects (이동물체들의 Optical flow와 EMD 알고리즘을 이용한 식별과 Kalman 필터를 이용한 추적)

  • Lee, Jung Sik;Joo, Yung Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.7
    • /
    • pp.1047-1055
    • /
    • 2015
  • We propose a method for improving the identification and tracking of moving objects in intelligent video surveillance systems. The proposed method consists of three parts: object detection, object recognition, and object tracking. First, we use a GMM (Gaussian mixture model) to eliminate the background and extract the moving objects. Next, we propose a labeling technique for recognition of the moving objects, together with a method for identifying the recognized objects by using optical flow and the EMD algorithm. Lastly, we propose a method to track the locations of the identified moving-object regions by using their location information and a Kalman filter. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
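
The detection stage (GMM background removal followed by labeling of the remaining regions) can be illustrated with OpenCV as below; the input file, morphology kernel, and area filter are assumptions, and the EMD identification and Kalman tracking stages are omitted.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")              # hypothetical input clip
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                                # GMM foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,            # remove small speckles
                          cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):                                # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 200:                                   # illustrative size filter
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```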

High-sensitivity NIR Sensing with Stacked Photodiode Architecture

  • Hyunjoon Sung;Yunkyung Kim
    • Current Optics and Photonics
    • /
    • v.7 no.2
    • /
    • pp.200-206
    • /
    • 2023
  • Near-infrared (NIR) sensing technology using CMOS image sensors is used in many applications, including automobiles, biological inspection, surveillance, and mobile devices. An intuitive way to improve NIR sensitivity is to thicken the light-absorption layer (silicon). However, thickening the silicon still yields limited NIR sensitivity and brings other disadvantages, such as degraded optical performance (e.g., crosstalk) and difficulty in processing. In this paper, a pixel structure for NIR sensing using a stacked CMOS image sensor is introduced. The stacked CMOS image sensor has two photodetection layers: a conventional photodiode and a bottom photodiode. The bottom photodiode is used as the NIR absorption layer, so the suggested pixel structure does not change the thickness of the conventional photodiode. To verify the suggested pixel structure, the sensitivity was simulated with an optical simulator. As a result, the sensitivity was improved by a maximum of 130% and 160% at wavelengths of 850 nm and 940 nm, respectively, for a pixel size of 1.2 ㎛. Therefore, the proposed pixel structure is useful for NIR sensing without thickening the silicon.
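
As a rough, back-of-the-envelope illustration (not taken from the paper) of why NIR absorption depends so strongly on silicon depth, the snippet below applies the Beer-Lambert law with approximate absorption coefficients for silicon; the coefficients and thicknesses are assumed values for illustration only.

```python
import math

ALPHA_PER_CM = {850: 535.0, 940: 150.0}   # approximate silicon absorption coefficients

def absorbed_fraction(wavelength_nm: int, thickness_um: float) -> float:
    """Fraction of incident light absorbed in a silicon layer (Beer-Lambert)."""
    alpha_per_um = ALPHA_PER_CM[wavelength_nm] * 1e-4   # convert 1/cm -> 1/um
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

for wl in (850, 940):
    for t_um in (3.0, 6.0):               # nominal vs. doubled depth (illustrative)
        print(f"{wl} nm, {t_um} um: {absorbed_fraction(wl, t_um):.3f}")
```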

Design of L-Band-Phased Array Radar System for Space Situational Awareness (우주감시를 위한 L-Band 위상배열레이다 시스템 설계)

  • Lee, Jonghyun;Choi, Eun Jung;Moon, Hyun-Wook;Park, Joontae;Cho, Sungki;Park, Jang Hyun;Jo, Jung Hyun
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.29 no.3
    • /
    • pp.214-224
    • /
    • 2018
  • Continuous space development increases the probability of space hazards such as the collapse of a satellite or a collision between a satellite and space debris. In Korea, a space surveillance network based on optical systems has been developed; however, radar technology for independent space surveillance still needs to be secured. Herein, an L-band phased-array radar system for the detection and tracking of space objects is proposed to provide services including collision avoidance and the prediction of re-entry events. Based on a mission analysis of space surveillance and case studies of advanced foreign radar systems, the radar parameters are defined and designed. The proposed radar system is able to detect debris with a diameter of 10 cm at a maximum range of 1,576 km. In addition, analysis of the detection coverage confirms that the system can support space surveillance missions for domestic satellites.
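
For context, a detection-range figure of this kind follows from the standard monostatic radar range equation. The sketch below evaluates that equation in Python; every parameter value is a placeholder chosen for illustration, not a design value from the paper.

```python
import math

def max_range_m(pt_w, gain_db, wavelength_m, rcs_m2, snr_min_db,
                n_pulses=1, noise_figure_db=3.0, bandwidth_hz=1e5, losses_db=3.0):
    """Maximum detection range from the monostatic radar range equation."""
    k, t0 = 1.38e-23, 290.0                          # Boltzmann constant, reference temp.
    g = 10 ** (gain_db / 10)
    snr = 10 ** (snr_min_db / 10)
    noise = k * t0 * bandwidth_hz * 10 ** (noise_figure_db / 10)
    loss = 10 ** (losses_db / 10)
    num = pt_w * g * g * wavelength_m ** 2 * rcs_m2 * n_pulses
    den = (4 * math.pi) ** 3 * noise * snr * loss
    return (num / den) ** 0.25

# Illustrative L-band numbers only (lambda ~ 0.23 m); a 10 cm object lies in the
# resonance scattering region at L-band, so the RCS below is purely an assumption.
r = max_range_m(pt_w=1e6, gain_db=45, wavelength_m=0.23,
                rcs_m2=0.01, snr_min_db=13, n_pulses=100)
print(f"illustrative detection range: {r / 1e3:.0f} km")
```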

Rainfall Recognition from Road Surveillance Videos Using TSN (TSN을 이용한 도로 감시 카메라 영상의 강우량 인식 방법)

  • Li, Zhun;Hyeon, Jonghwan;Choi, Ho-Jin
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.5
    • /
    • pp.735-747
    • /
    • 2018
  • Rainfall depth is important meteorological information. In general, rainfall data with high spatial resolution, such as road-level rainfall data, are more useful. However, it is expensive to set up enough Automatic Weather Systems to obtain road-level rainfall data. In this paper, we propose to use deep learning to recognize rainfall depth from road surveillance videos. To achieve this goal, we collect a new video dataset and propose a procedure to compute refined rainfall depth from the original meteorological data. We also propose to utilize the differential frame as well as the optical-flow image for better recognition of rainfall depth. Under the Temporal Segment Networks (TSN) framework, the experimental results show that the combination of the video frame and the differential frame is the superior solution for rainfall-depth recognition. The final model achieves high performance in the single-location, low-sensitivity classification task and reasonable accuracy in the higher-sensitivity classification task for both the single-location and multi-location cases.
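
Preparing the two auxiliary inputs mentioned above (the differential frame and the optical-flow image) can be sketched with OpenCV; the file name below is hypothetical, and the HSV flow encoding is the common visualization convention, not necessarily the exact preprocessing used in the paper.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("road_cctv.mp4")       # hypothetical road surveillance clip
_, prev = cap.read()
_, curr = cap.read()

# Differential frame: absolute difference of consecutive frames, which mainly
# keeps the rain streaks and other moving content.
diff = cv2.absdiff(curr, prev)

# Optical-flow image: dense Farneback flow encoded as an HSV color image.
g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
g1 = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros_like(prev)
hsv[..., 0] = ang * 180 / np.pi / 2           # hue encodes flow direction
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
flow_img = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```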

Volume-sharing Multi-aperture Imaging (VMAI): A Potential Approach for Volume Reduction for Space-borne Imagers

  • Jun Ho Lee;Seok Gi Han;Do Hee Kim;Seokyoung Ju;Tae Kyung Lee;Chang Hoon Song;Myoungjoo Kang;Seonghui Kim;Seohyun Seong
    • Current Optics and Photonics
    • /
    • v.7 no.5
    • /
    • pp.545-556
    • /
    • 2023
  • This paper introduces volume-sharing multi-aperture imaging (VMAI), a potential approach for volume reduction in space-borne imagers that aims to achieve high-resolution ground imagery with deep learning methods at a smaller volume than conventional designs. As an intermediate step in the VMAI payload development, we present a phase-1 design targeting a 1-meter ground sampling distance (GSD) at an altitude of 500 km. Although its optical imaging capability does not surpass conventional approaches, it remains attractive for specific applications on small satellite platforms, particularly surveillance missions. The design integrates one wide-field and three narrow-field cameras that share volume without optical interference. By capturing independent images from the four cameras, the payload emulates a large circular aperture to address diffraction and synthesizes high-resolution images using deep learning. Computational simulations validated the VMAI approach while addressing challenges such as the lower signal-to-noise ratio (SNR) resulting from aperture segmentation. Future work will focus on further reducing the volume and refining SNR management.
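
As a quick sanity check of the 1 m GSD target, the usual pinhole relation GSD = altitude x pixel pitch / focal length can be evaluated as below; the pixel pitch is an assumed value, not the paper's design parameter.

```python
altitude_m = 500e3                      # 500 km orbit, as stated in the abstract
pixel_pitch_m = 5.5e-6                  # assumed detector pixel pitch
target_gsd_m = 1.0                      # 1 m ground sampling distance

focal_length_m = altitude_m * pixel_pitch_m / target_gsd_m
print(f"required focal length: {focal_length_m:.2f} m")   # 2.75 m with these assumptions
```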

Target image detection and servo motor control for automatic surveillance tracking (자동 감시 추적을 위한 표적영상 검출 및 서보모터 제어)

  • Shin, Heung Yeoul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.6 no.2
    • /
    • pp.119-127
    • /
    • 2010
  • In this paper, we propose a new automatic surveillance tracking system that extracts the target from complex background and foreground noise by using an image-based SAD (sum of absolute differences) algorithm and controls the camera servo motors by using the Kanatani algorithm. The experimental results show that the proposed stereo tracking system tracks the target adaptively under complex and changing background noise, and they also suggest that a real-time implementation of the proposed system using the optical system is possible.
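
A minimal sketch of SAD template matching, the target-extraction step named above, is given below; the exhaustive search and the grayscale template are illustrative, not the authors' exact pipeline, and the servo control is only indicated in a comment.

```python
import numpy as np

def sad_match(search: np.ndarray, template: np.ndarray):
    """Return the (row, col) in `search` where the SAD to `template` is smallest."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            sad = np.abs(search[r:r + th, c:c + tw].astype(np.int32)
                         - template.astype(np.int32)).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# The offset between best_pos and the image centre would then be converted into
# pan/tilt commands for the camera servo motors (e.g. via the Kanatani algorithm).
```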