• Title/Summary/Keyword: Multiple-cameras

Search Results: 225

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • Jin, Youngseok;Jeon, Hyeongcheol;Shin, Young-Nam;Hyun, Eugin
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.4 / pp.169-178 / 2018
  • Currently, various sensors are used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision-radar fusion has become a popular approach. In the typical scheme, a vision sensor recognizes targets inside ROIs (Regions Of Interest) generated by radar sensors. Because the wide-angle lenses of AVM (Around View Monitor) cameras limit detection performance at near distances and around the edges of the field of view, accurate ROI extraction from the radar sensor is especially important for high-performance fusion of AVM cameras and radar. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multi-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract accurate ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
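
As a rough illustration of the ROI-extraction idea described above, the sketch below projects a single radar detection (range and azimuth) into a camera image and builds a candidate box around it. The intrinsics, radar-to-camera extrinsics, and box size are placeholder assumptions, not values or code from the paper.

```python
# Hedged sketch: turning one radar target into an image-plane ROI for vision-radar fusion.
import numpy as np

def radar_to_roi(range_m, azimuth_rad, K, R, t, box_w=80, box_h=120):
    """Convert a radar target (range, azimuth) into an image ROI (x, y, w, h)."""
    # Radar measures in its own horizontal plane; assume the target height is zero.
    p_radar = np.array([range_m * np.cos(azimuth_rad),
                        range_m * np.sin(azimuth_rad),
                        0.0])
    p_cam = R @ p_radar + t          # radar frame -> camera frame
    uvw = K @ p_cam                  # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return (int(u - box_w / 2), int(v - box_h / 2), box_w, box_h)

# Example with placeholder calibration: camera looks along the radar's forward axis.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.array([[0.0, -1.0, 0.0],      # camera x = -radar y
              [0.0, 0.0, -1.0],      # camera y = -radar z
              [1.0, 0.0, 0.0]])      # camera z =  radar x (forward)
t = np.array([0.0, 0.3, 0.0])        # camera mounted ~0.3 m above the radar (illustrative)
print(radar_to_roi(5.0, np.deg2rad(10.0), K, R, t))
```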

A 3D Modeling System Using Multiple Stereo Cameras (다중 스테레오 카메라를 이용한 3차원 모델링 시스템)

  • Kim, Han-Sung;Sohn, Kwang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.1-9 / 2007
  • In this paper, we propose a new 3D modeling and rendering system using multiple stereo cameras. When target objects are captured, each capturing PC segments the objects and estimates disparity fields, then transmits the segmented masks, disparity fields, and color textures of the objects to a 3D modeling server. The modeling server generates 3D models of the objects from the gathered masks and disparity fields. Finally, the server renders a video at the designated viewpoint using the 3D models and the texture information from the cameras.
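
The disparity-based modeling step in the abstract can be illustrated with the standard stereo relation Z = fB/d. The sketch below, with an assumed focal length, baseline, and principal point, converts an object's disparity map and mask into a point cloud of the kind a modeling server might consume.

```python
# Hedged sketch of disparity-to-depth conversion for one segmented object.
import numpy as np

def disparity_to_points(disparity, mask, f=800.0, baseline=0.12, cx=320.0, cy=240.0):
    """Turn a disparity map (pixels) and object mask into an N x 3 point cloud (metres)."""
    v, u = np.nonzero(mask & (disparity > 0))   # only pixels belonging to the object
    d = disparity[v, u]
    z = f * baseline / d                        # depth from stereo geometry: Z = f*B/d
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=1)

# Toy example: a flat 10x10 object patch at constant disparity.
disp = np.full((480, 640), 32.0)
mask = np.zeros((480, 640), dtype=bool)
mask[200:210, 300:310] = True
print(disparity_to_points(disp, mask).shape)    # (100, 3)
```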

Continuous Person Tracking Across Multiple Active Cameras Using Shape and Color Cues

  • Bumrungkiat, N.;Aramvith, S.;Chalidabhongse, T.H.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.136-141 / 2009
  • This paper proposes a framework for handover in continuously tracking a person of interest across cooperative pan-tilt-zoom (PTZ) cameras. The algorithm is based on the mean shift algorithm, a robust non-parametric technique that climbs density gradients to find the peak of a probability distribution. Most tracking algorithms use only one cue, such as color, but color features are not always discriminative enough for target localization because illumination and viewpoint tend to change, and the background may have a color similar to that of the target. In the proposed system, a person is tracked continuously across cooperative PTZ cameras by mean shift tracking that uses color and shape histograms as feature distributions. The color and shape distributions of the person of interest are used to register the target across cameras. In the first camera, the person of interest is selected for tracking using skin color, clothing color, and the body boundary. To hand over the tracking process between two cameras, the second camera receives the color and shape cues of the target from the first camera and applies linear color calibration to assist the handover. Experimental results demonstrate that the color and shape features in the mean shift algorithm can track the target person continuously and accurately across cameras.

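A minimal single-camera version of the color cue described above can be written with OpenCV's built-in mean shift; the sketch below tracks a hue histogram and omits the paper's shape histogram and cross-camera linear color calibration.

```python
# Hedged sketch: hue-histogram mean shift tracking with OpenCV.
import cv2
import numpy as np

def track_person(frames, init_box):
    """Track a target window through a list of BGR frames with hue-histogram mean shift."""
    x, y, w, h = init_box
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])   # hue histogram of the target
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    boxes, box = [], init_box
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, box = cv2.meanShift(back_proj, box, term)            # climb the density gradient
        boxes.append(box)
    return boxes
```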

Multiple Camera-based Person Correspondence using Color Distribution and Context Information of Human Body (색상 분포 및 인체의 상황정보를 활용한 다중카메라 기반의 사람 대응)

  • Chae, Hyun-Uk;Seo, Dong-Wook;Kang, Suk-Ju;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.15 no.9 / pp.939-945 / 2009
  • In this paper, we propose a method that establishes person correspondence in structured spaces observed by multiple cameras. Correspondence plays an important role in using a multiple-camera system. The proposed method consists of three main steps. First, moving objects are detected by background subtraction using a multiple background model, and a temporal difference is used at the same time to reduce noise caused by temporal change. When two or more people are detected, the detected regions are labeled to separate individual persons. Second, each detected region is segmented into features for correspondence using a criterion based on the color distribution and context information of the human body. The segmented region is represented as a set of blobs, each described by a Gaussian probability distribution, so a person model is built from the blobs as a Gaussian Mixture Model (GMM). Finally, the GMM of each person in one camera is matched by maximum likelihood with the models of people seen in other cameras, which identifies the same person across different views. Experiments were performed on three scenarios and the performance was verified both qualitatively and quantitatively.
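
As a hedged sketch of the GMM matching step, the code below fits a Gaussian mixture to the color samples of each segmented person and matches a query person to the model under which its samples are most likely. sklearn's GaussianMixture and the random stand-in data are conveniences for illustration, not the paper's implementation.

```python
# Hedged sketch: GMM person models matched across cameras by maximum likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def person_model(pixels, n_blobs=3):
    """Fit a GMM (one component per body blob) to an N x 3 array of color samples."""
    return GaussianMixture(n_components=n_blobs, covariance_type='full').fit(pixels)

def match_person(query_pixels, models):
    """Return the index of the model under which the query person's pixels are most likely."""
    scores = [m.score(query_pixels) for m in models]   # mean log-likelihood per sample
    return int(np.argmax(scores))

# Toy usage with random stand-ins for segmented person pixels from camera A and camera B.
rng = np.random.default_rng(0)
models = [person_model(rng.random((500, 3))) for _ in range(3)]
print(match_person(rng.random((400, 3)), models))
```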

Analyzing the Influence of Spatial Sampling Rate on Three-dimensional Temperature-field Reconstruction

  • Shenxiang Feng;Xiaojian Hao;Tong Wei;Xiaodong Huang;Pan Pei;Chenyang Xu
    • Current Optics and Photonics / v.8 no.3 / pp.246-258 / 2024
  • In aerospace and energy engineering, the reconstruction of three-dimensional (3D) temperature distributions is crucial. Traditional methods such as algebraic iterative reconstruction and filtered back-projection depend on voxel division for resolution. Our algorithm, blending deep learning with computer-graphics rendering, converts 2D projections into light rays for uniform sampling and uses a fully connected neural network to represent the 3D temperature field. Although effective in capturing internal details, it demands multiple cameras for projections at varied angles, increasing cost and computational needs. We assess the impact of the number of cameras on reconstruction accuracy and efficiency, conducting butane-flame simulations with different camera setups (6 to 18 cameras). The results show improved accuracy with more cameras, with 12 cameras achieving optimal computational efficiency (1.263) and low error rates. Verification experiments with 9, 12, and 15 cameras, using thermocouples, confirm that the 12-camera setup is the best, balancing efficiency and accuracy. This offers a feasible, cost-effective solution for real-world applications such as engine testing and environmental monitoring, improving accuracy and resource management in temperature measurement.
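
The fully connected representation of the temperature field mentioned above could, in broad strokes, look like the coordinate-based network below. The layer sizes, the mean-along-ray integral approximation, and the training step are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch: a coordinate-based MLP mapping (x, y, z) to temperature, fitted to
# 2D projection values accumulated along camera rays.
import torch
import torch.nn as nn

field = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),            # temperature at an (x, y, z) sample point
)
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

def train_step(ray_points, measured_projection):
    """One fit step: integrate predictions along each ray and match the 2D projection value."""
    # ray_points: (n_rays, n_samples, 3) points sampled uniformly along camera rays
    # measured_projection: (n_rays,) values taken from the 2D projections
    pred = field(ray_points).squeeze(-1).mean(dim=1)   # crude integral along each ray
    loss = torch.mean((pred - measured_projection) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```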

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we describe capturing, post-processing, and depth generation methods using multiple color cameras and depth cameras. Although a time-of-flight (TOF) depth camera measures scene depth in real time, its output depth images suffer from noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are weaknesses of stereo matching.
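
One of the alignment steps implied above can be sketched as re-projecting the TOF depth map into a color camera's view. The function below assumes calibrated intrinsics and extrinsics and omits the lens-distortion correction the paper also addresses.

```python
# Hedged sketch: warping a depth camera's depth map into a color camera's image plane.
import numpy as np

def warp_depth_to_color(depth, K_d, K_c, R, t):
    """Project each depth pixel into the color camera and return a sparse depth map there."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    # Back-project depth pixels to 3D in the depth camera frame.
    x = (u.ravel() - K_d[0, 2]) * z / K_d[0, 0]
    y = (v.ravel() - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=1)[valid]
    # Transform into the color camera frame and project with its intrinsics.
    pts_c = pts @ R.T + t
    uv = pts_c @ K_c.T
    uc = (uv[:, 0] / uv[:, 2]).astype(int)
    vc = (uv[:, 1] / uv[:, 2]).astype(int)
    out = np.zeros((h, w))
    inside = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    out[vc[inside], uc[inside]] = pts_c[inside, 2]
    return out
```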

Viewpoint Invariant Person Re-Identification for Global Multi-Object Tracking with Non-Overlapping Cameras

  • Gwak, Jeonghwan;Park, Geunpyo;Jeon, Moongu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2075-2092 / 2017
  • Person re-identification is the task of matching pedestrians observed from non-overlapping camera views. It has important applications in video surveillance, such as person retrieval, person tracking, and activity analysis, but it is very challenging because of illumination, pose, and viewpoint variations between non-overlapping camera views. In this work, we propose a viewpoint-invariant method for matching pedestrian images using the pedestrian's orientation. First, the proposed method divides a pedestrian image into patches and assigns an angle to each patch using the orientation of the pedestrian, under the assumption that the human body is cylindrical. The differences between angles are then used to compute the similarity between patches. We applied the proposed method to real-time global multi-object tracking across multiple disjoint cameras with non-overlapping fields of view, where the re-identification algorithm builds global trajectories by connecting local trajectories obtained by different local trackers. The effectiveness of the viewpoint-invariant method for person re-identification was validated on the VIPeR dataset. In addition, we demonstrated the effectiveness of the proposed approach for inter-camera multiple object tracking on the MCT dataset with ground-truth data for local tracking.
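
A loose sketch of the cylindrical-body idea is given below: each patch column is assigned an angle offset by the pedestrian's orientation, and patch similarity is down-weighted by the angular difference. The Gaussian weighting and histogram-intersection similarity are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: angle assignment on a cylindrical body model and angle-weighted similarity.
import numpy as np

def patch_angles(n_cols, orientation_deg):
    """Assign an angle (degrees) to each patch column, offset by the pedestrian orientation."""
    offsets = np.linspace(-90, 90, n_cols)    # columns span the visible half of the cylinder
    return (orientation_deg + offsets) % 360

def patch_similarity(feat_a, feat_b, ang_a, ang_b, sigma=30.0):
    """Histogram-intersection similarity, down-weighted by the angular difference."""
    diff = np.abs((ang_a - ang_b + 180) % 360 - 180)     # wrap-around angle difference
    weight = np.exp(-(diff ** 2) / (2 * sigma ** 2))
    return weight * np.minimum(feat_a, feat_b).sum()

# Toy usage: matching patches seen from viewpoints 40 degrees apart.
a = patch_angles(6, orientation_deg=0)[2]
b = patch_angles(6, orientation_deg=40)[2]
print(patch_similarity(np.ones(8) / 8, np.ones(8) / 8, a, b))
```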

Real-time Full-view 3D Human Reconstruction using Multiple RGB-D Cameras

  • Yoon, Bumsik;Choi, Kunwoo;Ra, Moonsu;Kim, Whoi-Yul
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.224-230 / 2015
  • This manuscript presents a real-time solution for 3D human body reconstruction with multiple RGB-D cameras. The proposed system uses four consumer RGB/Depth (RGB-D) cameras, each located at approximately 90° from the next camera around a freely moving human body. A single mesh is constructed from the captured point clouds by iteratively removing the estimated overlapping regions from the boundary. A cell-based mesh construction algorithm is developed that recovers the 3D shape under various conditions, considering the direction of the camera and the mesh boundary. The proposed algorithm also allows problematic holes and occluded regions to be recovered from another view. Finally, calibrated RGB data is merged with the constructed mesh so it can be viewed from an arbitrary direction. The proposed algorithm is implemented with general-purpose computation on a graphics processing unit (GPGPU) for real-time processing owing to its suitability for parallel processing.
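
Before any mesh construction, the four captured clouds must be brought into one coordinate frame. The sketch below shows only that first stage, with assumed 4x4 camera-to-world poses, and leaves out the paper's cell-based meshing and overlap removal.

```python
# Hedged sketch: merging per-camera point clouds into a shared world frame.
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Bring per-camera (N_i x 3) clouds into one world-frame cloud using 4x4 poses."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.hstack([pts, np.ones((len(pts), 1))])   # N x 4 homogeneous points
        merged.append((homog @ T.T)[:, :3])                # apply camera-to-world transform
    return np.vstack(merged)

# Toy usage: four cameras placed at 90-degree intervals around the subject.
def pose(angle_deg, radius=2.0):
    a = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]]
    T[:3, 3] = [radius * np.sin(a), 0, radius * np.cos(a)]
    return T

clouds = [np.random.rand(1000, 3) for _ in range(4)]
print(merge_point_clouds(clouds, [pose(a) for a in (0, 90, 180, 270)]).shape)  # (4000, 3)
```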

Panoramic Video Generation Method Based on Foreground Extraction (전경 추출에 기반한 파노라마 비디오 생성 기법)

  • Kim, Sang-Hwan;Kim, Chang-Su
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.441-445 / 2011
  • In this paper, we propose an algorithm for generating panoramic videos from fixed multiple cameras. We estimate a background image for each camera and then calculate the perspective relationships between images using extracted feature points. To eliminate stitching errors caused by differing image depths, we process background and foreground images separately in the overlap regions between adjacent cameras, projecting regions of the foreground images selectively. The proposed algorithm can be used to enhance the efficiency and convenience of wide-area surveillance systems.
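
The "perspective relationships between images" step is essentially homography estimation. The sketch below uses ORB features and RANSAC in OpenCV to stitch two background images, without the paper's selective projection of foreground regions.

```python
# Hedged sketch: homography-based stitching of two fixed-camera background images.
import cv2
import numpy as np

def stitch_backgrounds(bg_left, bg_right):
    """Warp bg_right into bg_left's frame using ORB feature matches and RANSAC."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(bg_left, None)
    kp2, des2 = orb.detectAndCompute(bg_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:100]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = bg_left.shape[:2]
    warped = cv2.warpPerspective(bg_right, H, (2 * w, h))   # panorama canvas, twice as wide
    warped[:, :w] = bg_left                                  # overlay the reference view
    return warped
```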

An Image Denoising Algorithm Using Multiple Images for Mobile Smartphone Cameras (스마트폰 카메라에서 다중 영상을 이용한 영상 잡음 제거 알고리즘)

  • Kim, Sung-Un
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.9 no.10 / pp.1189-1195 / 2014
  • In this study, we propose an image denoising algorithm for mobile smartphones that exploits the information obtained from multiple images captured in the same environment. We also describe a multi-image registration method suited to smartphone cameras with limited computing power and present an effective denoising algorithm that combines the information from the registered images. The proposed algorithm achieves a much better PSNR than the single-image method, and we verified that it delivers good denoising quality at a feasible speed on Android smartphones.
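
The underlying multi-image principle can be sketched as align-then-average. The code below uses OpenCV's ECC alignment for the registration step, which is only a stand-in for the lightweight registration method the paper designs for phone hardware.

```python
# Hedged sketch: register burst frames to a reference and average them to suppress noise.
import cv2
import numpy as np

def denoise_burst(frames):
    """Align grayscale frames to frames[0] and return their average as a denoised image."""
    ref = frames[0].astype(np.float32)
    acc, count = ref.copy(), 1
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for frame in frames[1:]:
        cur = frame.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)                # Euclidean motion model
        try:
            _, warp = cv2.findTransformECC(ref, cur, warp, cv2.MOTION_EUCLIDEAN, criteria)
        except cv2.error:
            continue                                         # skip frames that fail to align
        aligned = cv2.warpAffine(cur, warp, ref.shape[::-1],
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
        count += 1
    return (acc / count).astype(np.uint8)
```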