• Title/Summary/Keyword: Video surveillance


Real-Time Object Tracking Algorithm based on Pattern Classification in Surveillance Networks (서베일런스 네트워크에서 패턴인식 기반의 실시간 객체 추적 알고리즘)

  • Kang, Sung-Kwan;Chun, Sang-Hun
    • Journal of Digital Convergence / v.14 no.2 / pp.183-190 / 2016
  • This paper proposes an algorithm that reduces the computing time of a neural network and the amount of data transmitted for tracking moving objects in surveillance networks, in terms of both detection and communication load. Object detection can be defined as follows: given an image sequence, determine whether any object is present in each image and, if so, return its location, direction, size, and so on. Detecting an object in a given image is considerably difficult because location, size, lighting conditions, occlusion, and other factors change the overall appearance of objects, making it hard to detect them quickly and accurately. This paper therefore proposes a fast and accurate object detection method that overcomes these restrictions by using a neural network. The proposed system can detect objects rapidly regardless of occlusion, background, and pose, and the neural network computation time is decreased by reducing the size of its input vector with Principal Component Analysis, which reduces the dimensionality of the data. Experiments on real-time video from a CCTV camera show that, for color segmentation, the success rate varies with camera settings, and that the proposed method attains 30% higher recognition performance than the conventional method.
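The reduced-input pipeline described above can be sketched roughly as follows: project each candidate window onto a few principal components before a small neural-network classifier sees it. This is a minimal illustration only; the window size, component count, and network shape are assumptions, not the paper's configuration.

```python
# Sketch only: PCA shrinks each flattened window so the classifier's input
# vector (and its computation time) is much smaller. All sizes are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in training data: 500 flattened 24x24 grayscale windows,
# labeled 1 = object present, 0 = background (real data would be CCTV crops).
X_train = rng.random((500, 24 * 24))
y_train = rng.integers(0, 2, 500)

# Reduce the 576-dimensional input vector to 40 principal components.
pca = PCA(n_components=40).fit(X_train)
X_small = pca.transform(X_train)

# A small feed-forward network now sees 40 inputs instead of 576.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X_small, y_train)

# At run time each candidate window is projected, then classified.
window = rng.random((1, 24 * 24))
is_object = clf.predict(pca.transform(window))[0]
print("object detected" if is_object else "background")
```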

Real-Time Head Tracking using Adaptive Boosting in Surveillance (서베일런스에서 Adaptive Boosting을 이용한 실시간 헤드 트래킹)

  • Kang, Sung-Kwan;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.11 no.2 / pp.243-248 / 2013
  • This paper proposes an effective method that uses Adaptive Boosting to track a person's head against a complex background. A single feature extraction method is not sufficient to model a person's head, so the proposed method runs several feature extraction methods at the same time to improve detection accuracy. Features of the head image are extracted using sub-regions and the Haar wavelet transform: sub-regions represent the local characteristics of the head, while the Haar wavelet transform captures the frequency characteristics of the face, so using both allows effective modeling. To track a person's head in real time from the input video, the proposed method uses the result of training an AdaBoost classifier on three types of Haar wavelet features. The original AdaBoost algorithm requires a very long training time, and whenever the training data change, training must be performed again. To overcome this shortcoming, this research proposes an efficient cascade AdaBoost method, which reduces the training time for head images and responds effectively to changes in the training data. The proposed method generates a classifier with excellent performance using less training time and training data, and it accurately detects and tracks people's heads across a variety of head poses in real-time video.
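A minimal sketch of the cascade idea mentioned above: several AdaBoost stages of increasing size, where any stage may reject a candidate window early so most background windows never reach the expensive stages. The feature vectors, stage sizes, and rejection threshold are assumptions; a real cascade would also train each stage only on windows that survive the previous ones.

```python
# Sketch of cascaded AdaBoost rejection at inference time (assumed setup).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 64))      # stand-in Haar / sub-region feature vectors
y = rng.integers(0, 2, 1000)    # 1 = head, 0 = background

# Three stages of increasing size; the real cascade would train each stage
# only on samples that survived the previous stages.
stages = [AdaBoostClassifier(n_estimators=n, random_state=0).fit(X, y)
          for n in (5, 20, 50)]

def is_head(feature_vector, stages, reject_below=0.3):
    """Run one window through the cascade; any stage may reject it early."""
    v = feature_vector.reshape(1, -1)
    for stage in stages:
        if stage.predict_proba(v)[0, 1] < reject_below:
            return False        # cheap early rejection
    return True                 # survived every stage

print(is_head(X[0], stages))
```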

Comparisons of Color Spaces for Shadow Elimination (그림자 제거를 위한 색상 공간의 비교)

  • Lee, Gwang-Gook;Uzair, Muhammad;Yoon, Ja-Young;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.11 no.5 / pp.610-622 / 2008
  • Moving object segmentation is an essential technique for various video surveillance applications. The result of moving object segmentation often contains shadow regions caused by the color difference of shadow pixels, so segmentation is usually followed by a shadow elimination process to remove these false detections. The common assumption in previous work is that, under illumination variation, the chromaticity components are preserved while the intensity component changes. Hence, color transforms that separate the luminance component from the chromaticity components are usually used to remove shadow pixels. In this paper, various color spaces (YCbCr, HSI, normalized rgb, Yxy, Lab, c1c2c3) are examined to find the most appropriate color space for shadow elimination. There have been some research efforts to compare the influence of various color spaces on shadow elimination, but previous efforts are insufficient for comparing color distortion under illumination change across color spaces, since they used a specific shadow elimination scheme or different thresholds for different color spaces. To address these limitations, this paper (1) examines the gradients at shadow boundaries cast on uniformly colored regions, using only the chromaticity components, to compare color distortion under illumination change, and (2) analyzes background subtraction accuracy via ROC curves to compare color spaces without the problem of threshold selection. Experiments on real video sequences show that the YCbCr and normalized rgb color spaces give the best shadow elimination results among the color spaces tested.
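The chromaticity-preservation assumption discussed above can be illustrated with a simple shadow test: a foreground pixel is treated as shadow if its chromaticity matches the background model while its intensity drops. The normalized-rgb conversion and the thresholds below are assumptions for illustration, not the paper's evaluation protocol.

```python
# Sketch of a chromaticity-based shadow test in normalized rgb (assumed values).
import numpy as np

def normalized_rgb(pixel_bgr):
    b, g, r = (float(c) for c in pixel_bgr)
    s = b + g + r + 1e-6
    return np.array([r / s, g / s, b / s])

def is_shadow(pixel_bgr, background_bgr, chroma_tol=0.03, dim_range=(0.4, 0.9)):
    """Shadow = similar chromaticity to the background model, but darker."""
    chroma_diff = np.abs(normalized_rgb(pixel_bgr) - normalized_rgb(background_bgr)).max()
    intensity_ratio = (sum(pixel_bgr) + 1e-6) / (sum(background_bgr) + 1e-6)
    return chroma_diff < chroma_tol and dim_range[0] < intensity_ratio < dim_range[1]

# The background colour at roughly 60% brightness -> classified as shadow.
print(is_shadow((60, 90, 120), (100, 150, 200)))   # True
# A genuinely different (reddish) object pixel -> not shadow.
print(is_shadow((20, 20, 180), (100, 150, 200)))   # False
```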


A Fast Background Subtraction Method Robust to High Traffic and Rapid Illumination Changes (많은 통행량과 조명 변화에 강인한 빠른 배경 모델링 방법)

  • Lee, Gwang-Gook;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.417-429 / 2010
  • Although background subtraction has been widely studied for decades, it remains a poorly solved problem, especially in real environments. In this paper, we first address some common problems of background subtraction that occur in real environments and then resolve them by improving an existing GMM-based background modeling method. First, to reduce computation, fixed-point operations are used: because the background model does not usually require high-precision variables, adopting fixed-point rather than floating-point operations reduces computation time while maintaining accuracy. Second, to avoid erroneous backgrounds induced by heavy pedestrian traffic, the static level of each pixel is estimated using short-time statistics of its history; using a lower learning rate for non-static pixels preserves a valid background even in busy scenes dominated by foreground. Finally, to adapt to rapid illumination changes, the intensity change between two consecutive frames is estimated as a linear transform and the learned background models are compensated accordingly. Applying the fixed-point operations to the existing GMM-based method reduced the computation time to about 30% of the original processing time. Experiments on a real video with heavy pedestrian traffic also showed that the proposed method improves on previous background modeling methods by 20% in detection rate and 5~10% in false alarm rate.
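A rough sketch of the fixed-point and per-pixel learning-rate ideas above, reduced to a single running-average background instead of the paper's full GMM; the scale factor, learning rate, and static mask are assumptions.

```python
# Sketch: background mean kept as integer * 2^SHIFT, with a lower learning
# rate for non-static pixels. Single-mode stand-in for a GMM (assumed values).
import numpy as np

SHIFT = 8                                     # values stored as integer * 2^8

def update_background(bg_fp, frame, alpha_fp, static_mask):
    """bg_fp: int64 background * 2^SHIFT; alpha_fp: learning rate * 2^SHIFT."""
    frame_fp = frame.astype(np.int64) << SHIFT
    # Busy (non-static) pixels learn four times more slowly, so passing
    # pedestrians are not absorbed into the background.
    rate = np.where(static_mask, alpha_fp, alpha_fp >> 2)
    # bg += alpha * (frame - bg), entirely in integer arithmetic.
    return bg_fp + ((rate * (frame_fp - bg_fp)) >> SHIFT)

rng = np.random.default_rng(2)
first = rng.integers(0, 256, (120, 160), dtype=np.uint8)
bg_fp = first.astype(np.int64) << SHIFT       # initialise from the first frame
static = np.ones(first.shape, dtype=bool)     # pretend every pixel is static
alpha_fp = int(0.05 * (1 << SHIFT))           # learning rate 0.05 in fixed point

frame = rng.integers(0, 256, (120, 160), dtype=np.uint8)
bg_fp = update_background(bg_fp, frame, alpha_fp, static)

# Foreground mask: pixels that differ strongly from the learned background.
foreground = np.abs((bg_fp >> SHIFT) - frame.astype(np.int64)) > 25
print(foreground.mean())
```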

A Flexible Protection Technique of an Object Region Using Image Blurring (영상 블러링을 사용한 물체 영역의 유연한 보호 기법)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.84-90 / 2020
  • As uploading and downloading data over the Internet becomes more common, data containing personal information are easily exposed to unauthorized users. In this study, we detect the target area in images that contain personal information, excluding the background, and protect the detected area with a blocking method suited to the surrounding situation. Only the target area of the input color image containing personal information is segmented, based on skin color. The corresponding area is then blurred in multiple stages according to the surrounding situation, effectively blocking the detected area and protecting the personal information from exposure. Experimental results show that the proposed method blocks the object region containing personal information 2.3% more accurately than an existing method. The proposed method is expected to be useful in image processing fields such as video security, target surveillance, and object covering.
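A minimal sketch of the described pipeline, skin-color segmentation followed by situation-dependent blurring, using OpenCV; the YCrCb skin bounds, blur strengths, and the "surrounding situation" rule (blur harder when the detected region is large) are assumptions.

```python
# Sketch: segment skin-coloured pixels, then blur only that region.
import cv2
import numpy as np

def protect_region(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used YCrCb skin-colour bounds (assumed, not the paper's values).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)

    # Multi-stage blocking: a larger detected region gets a stronger blur.
    area_ratio = mask.mean() / 255.0
    ksize = 31 if area_ratio > 0.1 else 15
    blurred = cv2.GaussianBlur(frame_bgr, (ksize, ksize), 0)

    # Replace only the detected region with its blurred version.
    mask3 = cv2.merge([mask, mask, mask])
    return np.where(mask3 > 0, blurred, frame_bgr)

frame = cv2.imread("input.jpg")          # hypothetical input frame
if frame is not None:
    cv2.imwrite("protected.jpg", protect_region(frame))
```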

Context Driven Real-Time Laser Pointer Detection and Tracking (상황 기반의 실시간 레이저 포인터 검출과 추적)

  • Kang, Sung-Kwan;Chung, Kyung-Yong;Park, Yang-Jae;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.10 no.2 / pp.211-216 / 2012
  • Laser pointer detection can be divided into two processes: one detects the location of the pointer, and the other converts the coordinates of the detected laser pointer into monitor coordinates. The conventional Mean-Shift algorithm is not suitable for real-time video because of its heavy computation. In this paper, we propose context-driven real-time laser pointer detection and tracking. The proposed method produces stable results even when the background is complex or moves dynamically, and it gives consistent results when the object enters or leaves the expected boundary in a real environment. Finally, this paper presents an empirical application to verify the adequacy and validity of the proposed method. Accordingly, the accuracy and quality of image recognition in the surveillance system are improved.
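A minimal sketch of the first of the two processes named above, locating the laser dot in a frame; mapping the dot into monitor coordinates (the second process) would typically apply a homography to this point. The color thresholds are assumptions, and the paper's context handling is not included.

```python
# Sketch: find the brightest strongly-red pixel as the laser dot (assumed thresholds).
import numpy as np

def find_laser_point(frame_bgr):
    """Return (row, col) of the brightest strongly-red pixel, or None."""
    b = frame_bgr[..., 0].astype(int)
    g = frame_bgr[..., 1].astype(int)
    r = frame_bgr[..., 2].astype(int)
    redness = r - (g + b) // 2                        # high where red dominates
    score = np.where((r > 200) & (redness > 60), redness, -1)
    if score.max() < 0:
        return None                                   # no laser dot in this frame
    return np.unravel_index(score.argmax(), score.shape)

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[120, 200] = (40, 40, 255)                       # synthetic red dot
print(find_laser_point(frame))                        # -> (120, 200)
```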

Non-parametric Background Generation based on MRF Framework (MRF 프레임워크 기반 비모수적 배경 생성)

  • Cho, Sang-Hyun;Kang, Hang-Bong
    • The KIPS Transactions:PartB / v.17B no.6 / pp.405-412 / 2010
  • Previous background generation techniques performed poorly in complex environments because they used only temporal context. To overcome this problem, we propose a new background generation method that incorporates spatial as well as temporal context of the image, which allows us to obtain a 'clean' background image containing no moving objects. In the proposed method, each sampled frame of the video sequence is first divided into m*n blocks, and each block is classified as either static or non-static. Blocks classified as non-static are modeled in their temporal and spatial context using an MRF framework, which provides a convenient and consistent way of modeling context-dependent entities such as image pixels and correlated features. Experimental results show that the proposed method is more effective than the traditional approach.
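The first step described above, classifying blocks as static or non-static, might look roughly like the sketch below, using per-pixel temporal variance; the block size and threshold are assumptions, and the MRF modeling of non-static blocks is not reproduced.

```python
# Sketch: mark a block as static when its pixels barely change over time.
import numpy as np

def classify_blocks(frames, block=16, var_threshold=40.0):
    """frames: (T, H, W) grayscale stack -> boolean grid, True = static block."""
    t, h, w = frames.shape
    grid = np.zeros((h // block, w // block), dtype=bool)
    temporal_var = frames.astype(np.float32).var(axis=0)   # per-pixel variance
    for i in range(h // block):
        for j in range(w // block):
            patch = temporal_var[i*block:(i+1)*block, j*block:(j+1)*block]
            grid[i, j] = patch.mean() < var_threshold
    return grid

frames = np.random.default_rng(3).integers(0, 256, (30, 128, 128), dtype=np.uint8)
print(classify_blocks(frames).sum(), "static blocks")
```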

A Study on the Improvement of Military Information Communication Network Efficiency Using CCN (CCN을 활용한 군 정보통신망 효율성 향상 방안)

  • Kim, Hui-Jung;Kwon, Tae-Wook
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.5 / pp.799-806 / 2020
  • The rapid growth of connections from smartphones to the Internet of Things (IoT) and the explosive demand for data, centered on mobile video, are increasing day by day, and this growth in data usage creates many problems for the IP-based system. In a pull-based environment, where information requesters concentrate on particular information providers to receive content from specific servers, bottlenecks and heavy data-processing loads arise. To address this problem, CCN, a future networking technology, has emerged as an alternative: by caching content at intermediate nodes, it reduces the bottlenecks that occur when popular content is requested and increases network efficiency. Applying CCN to military information and communication networks can relieve the traffic concentration of pull-based networks, support the various surveillance equipment used in scientific monitoring systems, and provide content more efficiently.
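The caching behavior that this abstract relies on can be illustrated with a toy content store at an intermediate node: repeated requests for the same named content are answered locally instead of being forwarded upstream. The names, capacity, and eviction policy below are placeholders, not a real CCN implementation.

```python
# Sketch: an intermediate node's content store with LRU eviction (assumed setup).
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()          # name -> data, in LRU order

    def get(self, name, fetch_upstream):
        if name in self.store:              # cache hit: no upstream traffic
            self.store.move_to_end(name)
            return self.store[name], "hit"
        data = fetch_upstream(name)         # cache miss: pull from the source
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used entry
        return data, "miss"

node = ContentStore()
fetch = lambda name: f"<video chunk for {name}>"
print(node.get("/unit1/cctv7/seg001", fetch)[1])   # miss -> goes upstream
print(node.get("/unit1/cctv7/seg001", fetch)[1])   # hit  -> served locally
```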

Individual Pig Detection Using Kinect Depth Information (키넥트 깊이 정보를 이용한 개별 돼지의 탐지)

  • Choi, Jangmin;Lee, Jonguk;Chung, Yongwha;Park, Daihee
    • KIPS Transactions on Computer and Communication Systems / v.5 no.10 / pp.319-326 / 2016
  • Abnormal situations caused by the aggressive behavior of pigs adversely affect their growth and cause economic losses in intensive pigsties. An IT-based video surveillance system is therefore needed to monitor abnormal situations in the pigsty continuously and minimize the economic damage. In this paper, we propose a new Kinect camera-based monitoring system for detecting individual pigs. The proposed system is characterized as follows: 1) background subtraction and a depth threshold are used to detect only standing pigs in the Kinect depth image; 2) the moving pigs are labeled as regions of interest; 3) a contour method is proposed and applied to solve the touching-pigs problem in the Kinect depth image. Experimental results with depth videos obtained from a pig farm located in Sejong illustrate the efficiency of the proposed method.
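A minimal sketch of the detection steps listed above: subtract a background depth map, keep pixels high enough above the floor (standing pigs), and label connected regions as individuals. The depth values, height threshold, and minimum region size are assumptions; the paper's contour method for separating touching pigs is not reproduced.

```python
# Sketch: depth background subtraction + connected-component labelling.
import numpy as np
from scipy import ndimage

def detect_standing_pigs(depth, background_depth, height_mm=250, min_pixels=400):
    """depth, background_depth: uint16 Kinect depth maps in millimetres."""
    # A standing pig is closer to the overhead camera than the empty floor.
    mask = (background_depth.astype(int) - depth.astype(int)) > height_mm
    labels, n = ndimage.label(mask)
    # Drop tiny regions (noise); return one bounding slice per detected pig.
    return [ndimage.find_objects(labels)[i - 1]
            for i in range(1, n + 1)
            if (labels == i).sum() >= min_pixels]

floor = np.full((240, 320), 2500, dtype=np.uint16)      # empty-pen depth map
frame = floor.copy()
frame[100:140, 80:160] = 2100                            # one standing pig
print(len(detect_standing_pigs(frame, floor)))           # -> 1
```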

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human actions from data and can be used in applications such as detecting risky behavior through video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. When only two-dimensional data were used, the recognition rate for actions in three-dimensional space was low, while the other approaches suffered from complicated configurations and expensive additional equipment. In this paper, we propose a method for recognizing human behavior using only CCTV images, without additional equipment. First, a skeleton extraction algorithm is applied to extract the points of the joints and body parts. These points are transformed into vectors, including displacement vectors and relational vectors, and the resulting sequences of vectors are learned with an RNN model. Applying the learned model to various data sets and measuring the behavior recognition accuracy confirms that performance similar to that of existing algorithms using 3D information can be achieved with 2D information alone.
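A rough sketch of the recognition stage described above: per-frame skeleton joints are converted to displacement vectors relative to a root joint, and the sequence is fed to an RNN that outputs an action class. The joint count, hidden size, class count, and use of an LSTM cell are assumptions, not the paper's exact configuration.

```python
# Sketch: skeleton displacement vectors per frame -> LSTM -> action class.
import torch
import torch.nn as nn

NUM_JOINTS, NUM_CLASSES = 15, 5

def skeleton_to_vector(joints_xy):
    """joints_xy: (NUM_JOINTS, 2) pixel coords -> flat displacement vector."""
    root = joints_xy[0]                       # e.g. pelvis joint as the origin
    return (joints_xy - root).flatten()       # shape (NUM_JOINTS * 2,)

class ActionRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(input_size=NUM_JOINTS * 2, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, seq):                   # seq: (batch, frames, NUM_JOINTS*2)
        _, (h, _) = self.rnn(seq)
        return self.head(h[-1])               # class scores per sequence

# One 30-frame sequence of random skeletons stands in for a real CCTV track.
frames = torch.rand(30, NUM_JOINTS, 2)
seq = torch.stack([skeleton_to_vector(f) for f in frames]).unsqueeze(0)
print(ActionRNN()(seq).argmax(dim=1))         # predicted action class
```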