• Title/Summary/Keyword: Active cameras


Sector Based Multiple Camera Collaboration for Active Tracking Applications

  • Hong, Sangjin;Kim, Kyungrog;Moon, Nammee
    • Journal of Information Processing Systems / v.13 no.5 / pp.1299-1319 / 2017
  • This paper presents a scalable multiple-camera collaboration strategy for active tracking applications in large areas. The proposed approach is based on a distributed mechanism but emulates the master-slave mechanism. The master and slave cameras are not designated in advance but are determined adaptively from the object dynamics and density distribution. Moreover, the number of cameras emulating the master is not fixed. The collaboration among the cameras uses global and local sectors in which the visual correspondences among different cameras are determined. The proposed method combines the local information to construct the global information needed to emulate the master-slave operations. Based on the global information, the load of active tracking operations is balanced to maximize the active tracking coverage of highly dynamic objects. The dynamics of all objects visible in the local camera views are estimated for effective coverage scheduling of the cameras. The active tracking synchronization timing is chosen to maximize the overall monitoring time for general surveillance operations while minimizing active tracking misses. Real-time simulation results demonstrate the effectiveness of the proposed method.
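The load-balancing step the abstract describes can be sketched as a simple greedy assignment: each object is handed to the least-loaded camera whose sector covers it. This is a minimal illustration of the idea only; the sector layout, camera names, and tie-breaking are assumptions, not the paper's actual scheduling algorithm.

```python
def assign_objects(objects, cameras):
    """Greedily balance active-tracking load across cameras.

    objects: list of (object_id, sector_id) pairs
    cameras: dict camera_id -> set of sector_ids the camera covers
    Returns dict camera_id -> list of assigned object_ids.
    """
    load = {cam: [] for cam in cameras}
    for obj_id, sector in objects:
        # candidate cameras are those whose local sector covers the object
        candidates = [cam for cam, sectors in cameras.items() if sector in sectors]
        if not candidates:
            continue  # object outside every camera's coverage
        best = min(candidates, key=lambda cam: len(load[cam]))
        load[best].append(obj_id)
    return load

# illustrative layout: two cameras with one overlapping sector
cameras = {"cam1": {0, 1}, "cam2": {1, 2}}
objects = [("a", 1), ("b", 1), ("c", 2), ("d", 0)]
assignment = assign_objects(objects, cameras)
```

Objects in the shared sector 1 are split between the two cameras, which is the load-balancing behavior the paper's global information is meant to enable.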

An Adaptive Switching Mechanism for Three-Dimensional Hybrid Cameras (하이브리드 입체 카메라의 적응적인 스위칭 메커니즘)

  • Jang, Seok-Woo;Choi, Hyun-Jun;Lee, Suk-Yun;Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.3 / pp.1459-1466 / 2013
  • Recently, various types of three-dimensional cameras have been used to analyze surrounding environments. In this paper, we suggest a mechanism that adaptively switches between the active and passive cameras of a hybrid camera, which makes it possible to extract 3D image information more accurately. The suggested method first obtains brightness and texture features representing the environment from input images. It then adaptively selects the active or passive camera by applying rules generated from the extracted features. In experiments, we set up a hybrid 3D camera consisting of passive and active cameras and show that the proposed method effectively chooses the appropriate camera, making it possible to extract three-dimensional information more accurately.
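A toy version of such a switching rule, assuming (as the abstract implies but does not specify) that the active camera is preferred for dark or textureless scenes where passive stereo struggles. The thresholds and the variance-based texture proxy are invented for illustration; the paper generates its rules from learned features.

```python
import numpy as np

def choose_camera(image, brightness_thresh=60.0, texture_thresh=100.0):
    """Return 'active' for dark or textureless scenes, else 'passive'."""
    brightness = image.mean()
    texture = image.astype(float).var()  # crude texture proxy
    if brightness < brightness_thresh or texture < texture_thresh:
        return "active"   # structured-light depth copes with low texture / low light
    return "passive"      # passive stereo works on bright, well-textured scenes

# synthetic test frames
dark = np.full((8, 8), 20, dtype=np.uint8)                    # dim, flat scene
textured = np.tile(np.array([0, 255], dtype=np.uint8), (8, 4))  # strong texture
```

The real system would re-evaluate this decision per frame so the hybrid rig keeps using whichever depth modality the current environment favors.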

The General Analysis of an Active Stereo Vision with Hand-Eye Calibration (핸드-아이 보정과 능동 스테레오 비젼의 일반적 해석)

  • Kim, Jin Dae;Lee, Jae Won;Sin, Chan Bae
    • Journal of the Korean Society for Precision Engineering / v.21 no.5 / pp.83-83 / 2004
  • The analysis of the relative pose (position and rotation) between stereo cameras is very important for determining the solution that provides three-dimensional information for an arbitrary moving target with respect to the robot end. In the free camera-model space, the rotational parameters act as non-linear factors in acquiring a kinematic solution. In this paper, a general solution for active stereo that gives the three-dimensional pose of a moving object is presented. The focus is on deriving a linear equation between the robot's end and the active stereo cameras. The equation is derived consistently from vectors in quaternion space. The calibration of the cameras is also derived in this space. Computer simulation and error-sensitivity results demonstrate the successful operation of the solution. The suggested solution can also be applied to more complex real-time tracking; it is quite general and applicable in various stereo fields.
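The quaternion algebra underlying such a formulation can be sketched in a few lines: the Hamilton product, and rotation of a vector v by a unit quaternion q via v' = q v q*. This is generic quaternion math, not the paper's specific hand-eye equations.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q: v' = q v q*."""
    qv = np.array([0.0, *v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return qmul(qmul(q, qv), q_conj)[1:]

# 90-degree rotation about the z axis: q = (cos 45, 0, 0, sin 45)
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
v_rot = rotate(q, [1.0, 0.0, 0.0])  # the x axis should map to the y axis
```

Because quaternion composition stays linear in the unknown quaternion entries, relations like the paper's robot-end/camera equation can avoid the trigonometric non-linearities of rotation-matrix parameterizations.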


Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology / v.36 no.5 / pp.674-682 / 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system with two cameras focuses on the landmark and can detect depth and direction information. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
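The core operation in bottom-up saliency models is a center-surround comparison. The following is a deliberately minimal sketch, assuming a single intensity channel and box-blur smoothing; real models (e.g. Itti-style) combine intensity, color, and orientation channels across many pyramid scales.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with odd window size k, edge-padded."""
    pad = k // 2
    out = np.pad(img.astype(float), pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def saliency(img, center_k=3, surround_k=7):
    """Saliency as |fine-scale smoothing - coarse-scale smoothing|."""
    return np.abs(box_blur(img, center_k) - box_blur(img, surround_k))

img = np.zeros((15, 15))
img[7, 7] = 1.0            # a single bright "target" in a dark scene
sal = saliency(img)        # peaks around the target location
```

In the paper's system, the peak of each camera's saliency map nominates the target object; when left and right peaks correspond, the stereo pair can fixate and triangulate it.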

CONTINUOUS PERSON TRACKING ACROSS MULTIPLE ACTIVE CAMERAS USING SHAPE AND COLOR CUES

  • Bumrungkiat, N.;Aramvith, S.;Chalidabhongse, T.H.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.136-141 / 2009
  • This paper proposes a framework for handover in continuously tracking a person of interest across cooperative pan-tilt-zoom (PTZ) cameras. The algorithm is based on the mean shift algorithm, a robust non-parametric technique that climbs density gradients to find the peak of a probability distribution. Most tracking algorithms use only one cue (such as color). Color features are not always discriminative enough for target localization because illumination or viewpoint tends to change; moreover, the background may be of a color similar to that of the target. Our proposed system therefore tracks a person continuously across cooperative PTZ cameras by mean shift tracking that uses both color and shape histograms as feature distributions. The color and shape distributions of the person of interest are used to register the target person across cameras. In the first camera, the person of interest is selected for tracking using skin color, clothing color, and the boundary of the body. To hand over the tracking process between two cameras, the second camera receives the color and shape cues of the target person from the first camera and uses linear color calibration to assist the handover. Our experimental results demonstrate that combining color and shape features in the mean shift algorithm enables continuous and accurate tracking of the target person across cameras.
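The mean shift iteration at the heart of such trackers can be sketched as follows: given a per-pixel weight map (in practice, the back-projection of the target's color/shape histogram), the search window repeatedly moves to the weighted centroid until it settles on a mode. The synthetic weight map and window radius below are illustrative assumptions.

```python
import numpy as np

def mean_shift(weights, center, radius=6, iters=20):
    """Move a square window to the local mode of a weight map."""
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    for _ in range(iters):
        mask = (np.abs(ys - center[0]) <= radius) & (np.abs(xs - center[1]) <= radius)
        w = weights * mask
        total = w.sum()
        if total == 0:
            break  # no target evidence inside the window
        new_center = (int(round((ys * w).sum() / total)),
                      int(round((xs * w).sum() / total)))
        if new_center == center:
            break  # converged on the mode
        center = new_center
    return center

weights = np.zeros((30, 30))
weights[20:24, 18:22] = 1.0          # blob of "target-colored" pixels
mode = mean_shift(weights, center=(15, 15), radius=6)
```

At handover, the second camera would build the same kind of weight map from the transferred (and color-calibrated) histograms and run this iteration from its own initial window.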


Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.22 no.1 / pp.39-45 / 2019
  • We propose an active color-model based method for tracking the motions of multiple humans using a networked multiple-camera system in IoT space as a human-robot coexistence system. An IoT space is a space where many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed. Human beings can be a part of the IoT space as well. One of the main goals of IoT space is to assist humans and to provide various services for them. To be capable of doing so, the IoT space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space, and the IoT space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in IoT space. First, we discuss how object color information is obtained and how the color appearance model is constructed from these data. Then, we discuss the global color model based on the local color information. The learning process within the global model and the experimental results are also presented.
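The cross-camera identification step implied here can be sketched as histogram matching: each camera summarizes a person as a normalized color histogram, and a new observation is matched to the stored identity with the highest Bhattacharyya coefficient. The three-bin histograms and names below are made-up illustrations, not the paper's model.

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return np.sum(np.sqrt(p * q))

def identify(observed, known):
    """Match an observed histogram to the most similar known identity.

    known: dict identity -> normalized color histogram
    """
    return max(known, key=lambda name: bhattacharyya(observed, known[name]))

alice = np.array([0.7, 0.2, 0.1])   # mostly "red" clothing
bob   = np.array([0.1, 0.1, 0.8])   # mostly "blue" clothing
seen  = np.array([0.6, 0.3, 0.1])   # new observation from another camera
who = identify(seen, {"alice": alice, "bob": bob})
```

A global color model in the paper's sense would additionally account for per-camera color response, so that the same clothing yields comparable histograms across cameras.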

Optimum Region-of-Interest Acquisition for Intelligent Surveillance System using Multiple Active Cameras

  • Kim, Young-Ouk;Park, Chang-Woo;Sung, Ha-Gyeong;Park, Chang-Han;Namkung, Jae-Chan
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.628-631 / 2003
  • In this paper, we present a real-time, accurate face region detection and tracking technique for an intelligent surveillance system. It is very important to obtain high-resolution images, which enable accurate identification of an object of interest. Conventional surveillance or security systems, however, usually provide poor image quality because they use one or more fixed cameras and keep recording scenes indiscriminately. We implemented a real-time surveillance system that tracks a moving person using four pan-tilt-zoom (PTZ) cameras. While tracking, the region of interest (ROI) is obtained using a low-pass filter and background subtraction. Color information in the ROI is updated to extract features for optimal tracking and zooming. Experiments with real human faces showed highly acceptable results in terms of both accuracy and computational efficiency.
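The ROI-acquisition step can be sketched as background subtraction followed by a bounding box around the changed pixels; the synthetic frames and threshold below are illustrative, and the real system additionally low-pass filters the result and drives the PTZ pan/tilt/zoom from the box.

```python
import numpy as np

def roi_from_subtraction(frame, background, thresh=30):
    """Bounding box (top, left, bottom, right) of pixels that changed."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > thresh
    if not mask.any():
        return None  # nothing moved
    ys, xs = np.nonzero(mask)
    return (ys.min(), xs.min(), ys.max(), xs.max())

background = np.zeros((20, 20), dtype=np.uint8)   # empty scene
frame = background.copy()
frame[5:9, 10:14] = 200                           # a person enters the view
roi = roi_from_subtraction(frame, background)
```

Zooming the PTZ camera so this box fills the frame is what yields the high-resolution face region the abstract emphasizes.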


Object Tracking System Using Kalman Filter (칼만 필터를 이용한 물체 추적 시스템)

  • Xu, Yanan;Ban, Tae-Hak;Yuk, Jung-Soo;Park, Dong-Won;Jung, Hoe-kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.1015-1017 / 2013
  • Object tracking is, in general, a challenging problem. Difficulties in tracking objects can arise from abrupt object motion, changing appearance patterns of both the object and the scene, non-rigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location or the shape of the object in every frame. This paper describes an object tracking system based on active vision with two cameras. Into the single-camera tracking algorithm, an active visual tracking and object lock-on scheme based on the Extended Kalman Filter (EKF) is introduced, from whose data the next motion state of the object can be predicted; after tracking is performed at each of the cameras, the individual tracks are fused (combined) to obtain the final system object track.
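The predict/update cycle underlying such a tracker can be illustrated with a minimal constant-velocity Kalman filter for one coordinate. Note this linear sketch only demonstrates the cycle; the paper uses an extended Kalman filter, and the noise values here are arbitrary.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Filter noisy 1-D position measurements with a constant-velocity model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: x' = x + v*dt
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))                    # state: [position, velocity]
    P = np.eye(2) * 10.0                    # large initial uncertainty
    estimates = []
    for z in measurements:
        x = F @ x                           # predict the next state
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x         # innovation (measurement residual)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # correct with the measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# noisy observations of an object moving at roughly constant velocity
track = kalman_track([0.1, 1.0, 2.1, 2.9, 4.05, 5.0])
```

In the two-camera system, each camera would run its own filter and the fusion stage would combine the per-camera state estimates, weighted by their covariances.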


Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society / v.19 no.9 / pp.1647-1658 / 2016
  • RGB+D databases have been widely used in object recognition, object tracking, and robot control, to name a few applications. While the rapid advance of active depth-sensing technologies has allowed indoor RGB+D databases to spread widely, there are only a few outdoor RGB+D databases, largely due to an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire a pair of stereo images using a high-resolution stereo camera and then obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching methods, we develop an approach that estimates the confidence of depth maps based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence-measure based approaches on various benchmark datasets.
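A classical hand-crafted confidence measure that learned approaches like this one are typically compared against is the left-right consistency check, sketched below: a left-image disparity is trusted only if the right image's disparity map agrees at the matching column. The tiny disparity maps are synthetic; this is a baseline illustration, not the paper's method.

```python
import numpy as np

def lr_consistency(disp_left, disp_right, tol=1):
    """Boolean confidence mask for the left disparity map."""
    h, w = disp_left.shape
    conf = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_left[y, x])
            xr = x - d                       # matching column in the right image
            if 0 <= xr < w:
                # trust the pixel only if both maps report (nearly) the same disparity
                conf[y, x] = abs(int(disp_right[y, xr]) - d) <= tol
    return conf

disp_left = np.full((4, 6), 2)               # uniform disparity of 2 pixels
disp_right = np.full((4, 6), 2)
disp_right[:, 1] = 9                         # an inconsistent column (e.g. occlusion)
mask = lr_consistency(disp_left, disp_right)
```

The paper's contribution is to go beyond such per-pixel checks by learning a confidence feature that is refined to vary smoothly across the map.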