• Title/Summary/Keyword: Surveillance Resolution

Search results: 101

A 3-5 GHz CMOS UWB Radar Chip for Surveillance and Biometric Applications

  • Lee, Seung-Jun; Ha, Jong-Ok; Jung, Seung-Hwan; Yoo, Hyun-Jin; Chun, Young-Hoon; Kim, Wan-Sik; Lee, Noh-Bok; Eo, Yun-Seong
    • JSTS: Journal of Semiconductor Technology and Science / v.11 no.4 / pp.238-246 / 2011
  • A 3-5 GHz UWB radar chip fabricated in a 0.13 μm CMOS process is presented in this paper. The UWB radar transceiver for surveillance and biometric applications adopts an equivalent-time sampling architecture with 4-channel time-interleaved samplers to relax the otherwise impractical sampling frequency and to improve the overall scanning time. The RF front end (RFFE) includes a wideband LNA and a 4-way RF power splitter, and the analog signal-processing part consists of a high-speed track-and-hold (T&H) / sample-and-hold (S&H) stage and an integrator. The interleaved timing clocks are generated by a delay-locked loop (DLL). The UWB transmitter employs a digitally synthesized topology. The measured NF of the RFFE is 9.5 dB over 3-5 GHz, and the DLL timing resolution is 50 ps. The measured spectrum of the UWB transmitter shows a center frequency within 3-5 GHz, satisfying the FCC spectrum mask. The receiver and transmitter consume 106.5 mW and 57 mW, respectively, from a 1.5 V supply. (A minimal sketch of the equivalent-time sampling principle follows below.)
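A minimal sketch of the equivalent-time sampling principle named in the abstract: one slow sample is taken per pulse repetition, and the sampling instant slides by one DLL step each repetition, so a modest-rate sampler reconstructs a multi-GHz waveform. All waveform parameters are illustrative assumptions; only the 50 ps step and the 3-5 GHz band come from the abstract.

```python
# Equivalent-time sampling sketch (assumed parameters throughout).
import numpy as np

def echo(t):
    """Gaussian-modulated pulse standing in for the reflected UWB echo."""
    f0 = 4e9        # carrier at mid-band of 3-5 GHz (assumed)
    tau = 0.5e-9    # pulse width (assumed)
    return np.exp(-(t / tau) ** 2) * np.cos(2 * np.pi * f0 * t)

prf = 10e6          # pulse repetition frequency of the radar (assumed)
dt = 50e-12         # per-repetition delay step = the chip's DLL resolution
n = 100             # slow samples composing one reconstructed waveform

# One real sample per repetition, each delayed by one more DLL step; four
# such samplers interleaved (as in the chip) would cut the scan time by 4x.
waveform = np.array([echo(-2.5e-9 + k * dt) for k in range(n)])
print(f"equivalent rate {1 / dt / 1e9:.0f} GS/s from {prf / 1e6:.0f} MS/s samplers")
```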

A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey; Jeon, Moongu
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.904-907 / 2014
  • In this work, we present a robust and fast approach to estimating 3D vehicle pose under specific traffic-surveillance conditions: a single fixed CCTV camera mounted relatively high above the ground, a pitch axis parallel to the reference plane, and a camera focal length assumed to be known. The benefit of our framework is that it requires no prior training or camera calibration and does not rely heavily on a 3D shape model, as most common techniques do. It also copes with poorly shaped objects, since we focus on low-resolution surveillance scenes. Pose estimation is cast as a PnP problem, which we solve with the well-known POSIT algorithm [1]. The algorithm requires at least four non-coplanar point correspondences; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences, and estimated vehicle poses are shown on real image scenes. (A minimal PnP sketch follows below.)
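The abstract casts pose estimation as a PnP problem solved with POSIT. POSIT shipped only in OpenCV's legacy C API, so the sketch below uses cv2.solvePnP's iterative solver as a modern stand-in; the six model/pixel correspondences and the focal length are made-up illustrative values, not data from the paper.

```python
# PnP pose sketch with OpenCV (illustrative values throughout).
import numpy as np
import cv2

# Non-coplanar model points (meters): corners of an assumed vehicle box.
object_pts = np.array([
    [0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [1.8, 0.0, 4.5],
    [0.0, 0.0, 4.5], [0.0, 1.5, 0.0], [1.8, 1.5, 4.5],
], dtype=np.float64)

# Their matched pixel locations in the CCTV frame (assumed).
image_pts = np.array([
    [320.0, 240.0], [400.0, 238.0], [455.0, 205.0],
    [372.0, 207.0], [318.0, 180.0], [452.0, 150.0],
], dtype=np.float64)

# Known focal length, principal point at the image center; the abstract
# assumes a known camera focus rather than a full calibration.
f = 800.0
K = np.array([[f, 0.0, 320.0],
              [0.0, f, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)          # rotation matrix = vehicle orientation
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```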

Violent crowd flow detection from surveillance cameras using deep transfer learning-gated recurrent unit

  • Elly Matul Imah; Riskyana Dewi Intan Puspitasari
    • ETRI Journal / v.46 no.4 / pp.671-682 / 2024
  • Violence can be committed anywhere, even in crowded places, so human activities must be monitored for public safety. Surveillance cameras can monitor surrounding activities but require human assistance to continuously monitor every incident. Automatic violence detection is needed for early warning and fast response; however, such automation is still challenging because of low video resolution and blind spots. This paper uses ResNet50V2 and the gated recurrent unit (GRU) algorithm to detect violence in the Movies, Hockey, and Crowd video datasets. Spatial features were extracted from each frame sequence of a video using a pretrained ResNet50V2 model and then classified by the best-performing model trained on the GRU architecture. The experimental results were compared with wavelet feature-extraction methods and classification models such as the convolutional neural network and long short-term memory. The results show that the proposed combination of ResNet50V2 and GRU is robust and delivers the best performance in terms of accuracy, recall, precision, and F1-score, and that using ResNet50V2 for feature extraction improves model performance. (A minimal sketch of the pipeline follows below.)
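A minimal sketch of the described pipeline, assuming a Keras setup: a frozen, pretrained ResNet50V2 extracts one feature vector per frame, and a GRU classifies the sequence. Clip length, layer sizes, and the optimizer are assumptions, not the paper's configuration.

```python
# ResNet50V2 + GRU violence-detection sketch (assumed hyperparameters).
import tensorflow as tf

NUM_FRAMES, H, W = 16, 224, 224     # frames per clip (assumed)

# Pretrained, frozen backbone: one 2048-d feature vector per frame.
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

clip = tf.keras.Input(shape=(NUM_FRAMES, H, W, 3))
feats = tf.keras.layers.TimeDistributed(backbone)(clip)   # (frames, 2048)
x = tf.keras.layers.GRU(128)(feats)                       # temporal modeling
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # violent or not

model = tf.keras.Model(clip, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```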

TSSN: A Deep Learning Architecture for Rainfall Depth Recognition from Surveillance Videos

  • Li, Zhun; Hyeon, Jonghwan; Choi, Ho-Jin
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.6 / pp.87-97 / 2018
  • Rainfall depth is important meteorological information. In general, rainfall data of high spatial resolution, such as road-level rainfall data, are more beneficial; however, installing enough Automatic Weather Stations to obtain road-level data is expensive. In this paper, we propose using deep learning to recognize rainfall depth from road surveillance videos. To this end, we collected two new video datasets and designed a new deep learning architecture named Temporal and Spatial Segment Networks (TSSN) for rainfall depth recognition. The experimental results show that, under TSSN, combining the video frame with the differential frame is a superior input for rainfall depth recognition, and that the proposed TSSN architecture outperforms the other architectures implemented in this paper. (A sketch of the frame and differential-frame inputs follows below.)
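A sketch of preparing the two inputs the abstract combines: raw video frames and differential frames. The OpenCV loading code and clip layout are assumptions; the differential frame here is simply the difference of consecutive frames.

```python
# Frame and differential-frame extraction sketch (assumed preprocessing).
import cv2
import numpy as np

def frames_and_diffs(video_path, size=(224, 224)):
    """Return (frames, diffs): resized RGB frames and consecutive differences."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, bgr = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(cv2.resize(bgr, size), cv2.COLOR_BGR2RGB)
        frames.append(rgb.astype(np.float32) / 255.0)
    cap.release()
    frames = np.stack(frames)
    # The difference of consecutive frames emphasizes rain streaks and
    # suppresses the static road background.
    diffs = frames[1:] - frames[:-1]
    return frames[1:], diffs    # frame t is paired with diff(t, t-1)
```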

Implementation of a Context-based Intelligent Image Surveillance System

  • Moon, Sung-Ryong; Shin, Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.11-22 / 2010
  • This paper presents the implementation of an intelligent image surveillance system that uses context information and addresses the temporal-spatial constraints that make real-time processing difficult. We propose a scene-analysis algorithm that runs in real time in various environments on low-resolution video (320×240 at 30 frames per second). The proposed algorithm discards the background and meaningless frames from the continuous frame stream, and it uses the wavelet transform and an edge histogram to detect shot boundaries. A representative key frame within each shot is then selected by a key-frame selection parameter, and the edge histogram and mathematical morphology are used to extract only the motion region. For the motion region of a detected object, we define four basic contexts according to the angles of feature points, using the vertical-to-horizontal ratio: standing, lying, sitting, and walking. Finally, we perform scene analysis with a simple context model composed of a general context and an emergency context, estimated from the connections between contexts, and configure a system to verify real-time operation. The proposed system achieves a recognition rate of 92.5% on low-resolution video at an average processing speed of 0.74 seconds per frame, confirming that real-time processing is feasible. (A sketch of edge-histogram shot-boundary detection follows below.)
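A minimal sketch of shot-boundary detection with an edge histogram, one of the cues named in the abstract. The Sobel-based edge map, bin count, and threshold are illustrative assumptions, not the paper's exact design.

```python
# Edge-histogram shot-boundary sketch (assumed parameters).
import cv2
import numpy as np

def edge_histogram(gray, bins=8):
    """Histogram of gradient orientations over strong edge pixels."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                 # orientation in -pi..pi
    strong = mag > np.percentile(mag, 90)    # keep the strongest 10% of edges
    hist, _ = np.histogram(ang[strong], bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)         # normalized histogram

def shot_boundaries(video_path, threshold=0.35):
    """Flag frames whose edge histogram jumps from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    prev, cuts, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = edge_histogram(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if prev is not None and np.abs(h - prev).sum() > threshold:
            cuts.append(idx)                 # large histogram change => cut
        prev, idx = h, idx + 1
    cap.release()
    return cuts
```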

Motion-based ROI Extraction with a Standard Angle-of-View from High Resolution Fisheye Image

  • Ryu, Ar-Chim; Han, Kyu-Phil
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.395-401 / 2020
  • In this paper, a motion-based ROI extraction algorithm for high-resolution fisheye images is proposed for multi-view monitoring systems. Fisheye cameras have recently become widely used because of their wide angle-of-view, and they typically provide lens-correction functionality as well as various viewing modes. However, because the severe distortion ratio limits conventional algorithms to a rather narrow distortion-free angle, many unintended dead areas remain, and finding the undistorted coordinates is computationally expensive. The proposed algorithm therefore adopts image decimation and motion detection to extract an undistorted ROI image with a standard angle-of-view for fast, intelligent surveillance systems. In addition, a mesh-type ROI is presented to reduce the lens-correction time, so that the independent ROI scheme can be parallelized to maximize processor utilization. (A minimal undistortion sketch follows below.)
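A minimal sketch of pulling a standard-angle-of-view, undistorted ROI out of a fisheye frame. The intrinsics K, distortion D, and target camera are placeholder values (a real system would obtain them from cv2.fisheye.calibrate); the compute-maps-once, remap-per-frame pattern reflects the paper's motivation of cutting per-frame correction cost.

```python
# Fisheye ROI undistortion sketch (assumed calibration values).
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 960.0],      # fisheye intrinsics (assumed)
              [0.0, 600.0, 960.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.0, 0.0])   # equidistant distortion coeffs (assumed)

# Target pinhole camera for the ROI: a standard angle of view aimed at the
# region where motion was detected (assumed geometry).
roi_size = (640, 480)
K_new = np.array([[554.0, 0.0, 320.0],
                  [0.0, 554.0, 240.0],
                  [0.0, 0.0, 1.0]])

# Precompute the undistortion maps once; per frame, remap is a cheap lookup.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K_new, roi_size, cv2.CV_16SC2)

def extract_roi(fisheye_frame):
    return cv2.remap(fisheye_frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```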

Constrained adversarial loss for generative adversarial network-based faithful image restoration

  • Kim, Dong-Wook; Chung, Jae-Ryun; Kim, Jongho; Lee, Dae Yeol; Jeong, Se Yoon; Jung, Seung-Won
    • ETRI Journal / v.41 no.4 / pp.415-425 / 2019
  • Generative adversarial networks (GANs) have been successfully used in many image restoration tasks, including image denoising, super-resolution, and compression-artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than visually appealing image reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN-training methods, which use a loss function in the form of a weighted sum of fidelity and adversarial losses, fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including the peak signal-to-noise ratio. Our approach is to alternate between the fidelity and adversarial losses such that minimizing the adversarial loss does not deteriorate the fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method performs faithful and photorealistic image restoration. (A sketch of the alternating scheme follows below.)
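A minimal sketch of alternating between fidelity and adversarial generator updates instead of minimizing their weighted sum. The even/odd alternation rule and the L1 fidelity loss are simplifying assumptions, not the paper's exact constrained loss.

```python
# Alternating fidelity/adversarial training sketch in PyTorch (assumed rule).
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, lr_img, hr_img, step):
    # --- discriminator update (standard GAN) ---
    fake = gen(lr_img).detach()
    real_logits, fake_logits = disc(hr_img), disc(fake)
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(
                  fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: alternate losses instead of a weighted sum ---
    fake = gen(lr_img)
    if step % 2 == 0:
        g_loss = F.l1_loss(fake, hr_img)        # fidelity step
    else:
        logits = disc(fake)                     # adversarial step
        g_loss = F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```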

Volume-sharing Multi-aperture Imaging (VMAI): A Potential Approach for Volume Reduction for Space-borne Imagers

  • Jun Ho Lee; Seok Gi Han; Do Hee Kim; Seokyoung Ju; Tae Kyung Lee; Chang Hoon Song; Myoungjoo Kang; Seonghui Kim; Seohyun Seong
    • Current Optics and Photonics / v.7 no.5 / pp.545-556 / 2023
  • This paper introduces volume-sharing multi-aperture imaging (VMAI), an approach proposed to reduce the volume of space-borne imagers while achieving high-resolution ground imagery with deep learning methods. As an intermediate step in the VMAI payload development, we present a phase-1 design targeting a 1-meter ground sampling distance (GSD) at an altitude of 500 km. Although its optical imaging capability does not surpass conventional approaches, it remains attractive for specific applications on small-satellite platforms, particularly surveillance missions. The design integrates one wide-field camera and three narrow-field cameras with volume sharing and no optical interference. Capturing independent images from the four cameras, the payload emulates a large circular aperture to address diffraction and synthesizes high-resolution images using deep learning. Computational simulations validated the VMAI approach while addressing challenges such as the lower signal-to-noise ratio (SNR) that results from aperture segmentation. Future work will focus on further reducing the volume and refining SNR management. (A quick GSD calculation follows below.)
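A quick check of the optics arithmetic behind the phase-1 target: with GSD = altitude × pixel pitch / focal length, a 1 m GSD from 500 km fixes the focal length once a pixel pitch is chosen. The 5 μm pitch below is an assumed illustrative value; only the altitude and GSD target come from the abstract.

```python
# GSD arithmetic sketch (pixel pitch assumed).
altitude_m = 500e3       # orbit altitude (from the abstract)
gsd_m = 1.0              # target ground sampling distance (from the abstract)
pixel_pitch_m = 5e-6     # detector pixel pitch, assumed 5 um

# GSD = altitude * pixel_pitch / focal_length  =>  solve for focal length
focal_length_m = altitude_m * pixel_pitch_m / gsd_m
print(f"required focal length: {focal_length_m:.2f} m")   # -> 2.50 m
```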

Operational Ship Monitoring Based on Multi-platforms (Satellite, UAV, HF Radar, AIS)

  • Kim, Sang-Wan; Kim, Donghan; Lee, Yoon-Kyung; Lee, Impyeong; Lee, Sangho; Kim, Junghoon; Kim, Keunyong; Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.379-399 / 2020
  • The detection of illegal ships is one of the key factors in building a marine surveillance system, and effective marine surveillance requires means of continuous monitoring over a wide area. In this study, the feasibility of ship-detection monitoring that integrates satellite SAR, HF radar, UAVs, and AIS was investigated. Considering the temporal and spatial resolution of each platform, the ship-monitoring scenario consists of a regular surveillance system using HF radar and AIS data and an event monitoring system using satellites and UAVs. The regular surveillance system is still limited in detecting small ships, and in accuracy, by the low spatial resolution of HF radar data. However, the event monitoring system effectively detects illegal ships by cross-checking satellite SAR detections against AIS data, and the ship speed and heading estimated from SAR images, or the ship-tracking information from HF radar, can serve as the main handover information for the transition to UAV monitoring. To validate the monitoring scenario, a comprehensive field experiment was conducted on June 25-26, 2019, on the west side of Hongwon Port in Seocheon; KOMPSAT-5 SAR images, UAV data, HF radar data, and AIS data were successfully collected and analyzed with the algorithms developed for each platform. The developed system will form the basis for regular and event-driven ship-monitoring scenarios as well as for visualizing the data and analysis results collected from the multiple platforms. (A sketch of the SAR-AIS cross-check follows below.)
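A sketch of the SAR-AIS cross-check at the core of the event-monitoring scenario: a SAR detection with no nearby AIS report becomes an illegal-ship ("dark ship") candidate. The 500 m gate and the coordinates are illustrative assumptions, not values from the study.

```python
# SAR-vs-AIS cross-check sketch (assumed threshold and positions).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def dark_ships(sar_detections, ais_reports, max_dist_m=500.0):
    """Return SAR detections with no AIS report within max_dist_m."""
    flagged = []
    for det in sar_detections:                  # det: (lat, lon)
        nearest = min((haversine_m(det[0], det[1], a[0], a[1])
                       for a in ais_reports), default=float("inf"))
        if nearest > max_dist_m:
            flagged.append(det)                 # candidate illegal ship
    return flagged

# Example: two SAR detections, one matched by AIS, one dark.
sar = [(36.10, 126.45), (36.12, 126.50)]
ais = [(36.1002, 126.4503)]
print(dark_ships(sar, ais))   # -> [(36.12, 126.5)]
```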

Development of a Portable RPV for Short-range Operations

  • 박주원
    • Journal of the Korea Institute of Military Science and Technology / v.4 no.2 / pp.227-232 / 2001
  • Presented is a small, handy remotely piloted vehicle (RPV) that can be used for military and non-military surveillance operations. The RPV is equipped with an on-board high-resolution color camera that transmits analog video images and on-board electronics that provide real-time flight information to the pilot, enabling remote piloting within a 5 km radius. This paper describes the RPV system, including its design, manufacturing, and flight-test results, which demonstrate the stability of the on-board mission and flight equipment as well as the remote-piloting capability. A plan for the improvements identified in the flight tests is also discussed.
