• Title/Summary/Keyword: multi-cameras


Design of FPGA Camera Module with AVB based Multi-viewer for Bus-safety (AVB 기반의 버스안전용 멀티뷰어의 FPGA 카메라모듈 설계)

  • Kim, Dong-jin;Shin, Wan-soo;Park, Jong-bae;Kang, Min-goo
    • Journal of Internet Computing and Services / v.17 no.4 / pp.11-17 / 2016
  • In this paper, we propose a multi-viewer system for bus safety that combines multiple HD cameras, AVB (Audio Video Bridging) Ethernet with IP networking, and an FPGA (Xilinx Zynq 702). The AVB (IEEE 802.1BA) system is designed for low latency on the FPGA and transmits HD video and audio signals in real time over the in-vehicle network. The proposed multi-viewer platform multiplexes H.264 video from four wide-angle HD cameras over existing 1 Gbps Ethernet and 2-wire 100 Mbps cables. A low-latency Zynq 702 based H.264 AVC codec design is also proposed to minimize delay in HD video transmission over the car area network. The PSNR (peak signal-to-noise ratio) of the encoding and decoding results of the H.264 AVC codec was analyzed against the JM reference model, and the PSNR values were confirmed by comparing the theoretical results with the hardware results from the Zynq 702 based multi-viewer with multiple cameras. As a result, the proposed AVB multi-viewer platform with multiple cameras can be used for audio and video surveillance around a bus, owing to the low latency of the H.264 AVC codec design.
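The PSNR figure of merit used in the abstract above can be sketched as follows. This is a minimal illustration with synthetic frame data, not output from the paper's Zynq platform:

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference frame and a decoded frame."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Synthetic 8-bit frame and a mildly distorted copy standing in for codec output.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-2, 3, size=frame.shape), 0, 255)
print(round(psnr(frame, noisy), 1))
```

In codec evaluation, such values are typically computed per frame against the reference-model (JM) output and averaged over a test sequence.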

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we explain capturing, postprocessing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures scene depth in real time, its output depth images suffer from noise and lens distortion, and the correlation between the multi-view color images and the depth images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
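The correction-then-conversion idea above can be sketched minimally: filter the noisy TOF depth map, then convert depth to disparity for the stereo pair. This assumes a rectified pinhole model with focal length `focal` and baseline `baseline` (parameter names are illustrative, not from the paper):

```python
import numpy as np

def denoise_depth(depth: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple median filter to suppress shot noise in a TOF depth map."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth, dtype=np.float64)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def depth_to_disparity(depth: np.ndarray, focal: float, baseline: float) -> np.ndarray:
    """disparity = f * b / Z under the standard rectified-pair pinhole model."""
    return focal * baseline / np.maximum(depth, 1e-6)

depth = np.full((5, 5), 2.0)   # metres; flat synthetic scene
depth[2, 2] = 10.0             # single noisy spike
clean = denoise_depth(depth)
disp = depth_to_disparity(clean, focal=500.0, baseline=0.1)
print(clean[2, 2], round(disp[2, 2], 1))
```

The resulting disparity can then seed or constrain the search range of a stereo matcher, which is the role the depth cameras play in the paper.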

Authoring Personal Virtual Studio Using Tangible Augmented Reality (탠저블 증강현실을 활용한 개인용 가상스튜디오 저작)

  • Rhee, Gue-Won;Lee, Jae-Yeol;Nam, Ji-Seung;Hong, Sung-Hoon
    • Korean Journal of Computational Design and Engineering / v.13 no.2 / pp.77-88 / 2008
  • Nowadays, personal users create a variety of multimedia contents and share them with others through various devices over the Internet, as the concept of user created content (UCC) has been widely accepted as a new paradigm in today's multimedia market, breaking the boundary between content providers and consumers. This paradigm shift has also introduced a new business model that makes it possible for users to create their own multimedia contents for commercial purposes. This paper proposes a tangible virtual studio that uses augmented reality to author multimedia contents easily and intuitively for personal broadcasting and personal content generation. It provides a set of tangible interfaces and devices such as visual markers, cameras, movable and rotatable arms carrying the cameras, and a miniaturized set. These offer an easy-to-use interface in an immersive environment and an easy switching mechanism between the tangible and virtual environments. This paper also discusses how to remove inconsistency between real and virtual objects during AR-enabled visualization with a context-adaptable tracking method, which not only adjusts the locations of invisible markers by interpolating the locations of existing reference markers, but also removes the jumping effect of movable virtual objects when their reference changes from one marker to another.

Build a Multi-Sensor Dataset for Autonomous Driving in Adverse Weather Conditions (열악한 환경에서의 자율주행을 위한 다중센서 데이터셋 구축)

  • Sim, Sungdae;Min, Jihong;Ahn, Seongyong;Lee, Jongwoo;Lee, Jung Suk;Bae, Gwangtak;Kim, Byungjun;Seo, Junwon;Choe, Tok Son
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.245-254 / 2022
  • Sensor datasets for autonomous driving are essential now that deep learning approaches are widely used. However, most driving datasets focus on typical environments such as sunny or cloudy weather, and most deal only with color images and lidar. In this paper, we propose a driving dataset with multi-spectral images and lidar captured in adverse weather conditions such as snow, rain, smoke, and dust. The data acquisition system has four types of cameras (color, near-infrared, shortwave infrared, thermal), one lidar, two radars, and a navigation sensor. Ours is the first dataset that covers multi-spectral cameras in adverse weather conditions. The proposed dataset is annotated with 2D semantic labels, 3D semantic labels, and 2D/3D bounding boxes. Many tasks are possible on our dataset, for example object detection and drivable region detection. We also present some experimental results on the adverse weather dataset.

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환;김기영;우운택
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.137-148 / 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras that generates an image-based panoramic 3D virtual environment. Since calibration accuracy worsens as the distance between camera and calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy the problem, the geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and the geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are refined by iteratively reducing the discrepancy between the actual distances and the distances estimated from the extrinsic parameters of every lens. Inter-camera calibration then arranges the multiple cameras in a geometric relationship by applying the Iterative Closest Point (ICP) algorithm to back-projected 3D point clouds. Finally, by repeatedly applying intra- and inter-camera calibration to all lenses of the rotating multi-view cameras, we obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
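The core step inside each ICP iteration used for the inter-camera alignment above is a rigid best-fit between corresponding point sets. A minimal SVD-based (Kabsch) sketch, with synthetic point clouds and known correspondences for brevity (full ICP alternates this step with closest-point matching):

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic cloud rotated 30 degrees about z and shifted; recover the motion.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = cloud @ R_true.T + t_true
R, t = rigid_align(cloud, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In the paper's setting, `src` and `dst` would be the back-projected 3D point clouds of neighbouring multi-view cameras rather than synthetic data.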

A Surveillance System Combining Model-based Multiple Person Tracking and Non-overlapping Cameras (모델기반 다중 사람추적과 다수의 비겹침 카메라를 결합한 감시시스템)

  • Lee Youn-Mi;Lee Kyoung-Mi
    • Journal of KIISE: Computing Practices and Letters / v.12 no.4 / pp.241-253 / 2006
  • In modern societies, a monitoring system is required that automatically detects and tracks persons across several cameras scattered over a wide area. Combining multiple cameras with non-overlapping views and a tracking technique, we propose a method that automatically tracks the target persons in one camera and transfers the tracking information to the other networked cameras through a server, so that the target persons are tracked continuously across the cameras. We use a person model to detect and distinguish the corresponding person and to transfer the person's tracking information. The movement of a tracked person is defined on the FOV lines of the networked cameras, and each tracked person has one of six statuses. The proposed system was tested in several indoor scenarios, achieving an average tracking rate of 91.2% and an average status rate of 96%.
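The status-on-FOV-lines idea can be sketched as a small state machine. The abstract does not name the six statuses, so the names and the distance threshold below are hypothetical stand-ins, not the paper's definitions:

```python
from enum import Enum, auto

class TrackStatus(Enum):
    """Hypothetical six-status set; the paper's actual status names are not
    given in the abstract."""
    ENTERED = auto()
    TRACKED = auto()
    NEAR_FOV_LINE = auto()
    HANDED_OVER = auto()
    OUT_OF_VIEW = auto()
    REACQUIRED = auto()

def update_status(x: float, fov_line: float, visible: bool,
                  status: TrackStatus) -> TrackStatus:
    """Advance a person's status as they approach and cross a camera's FOV line."""
    if not visible:
        # Leaving near the FOV line implies a handoff to the next camera.
        return (TrackStatus.HANDED_OVER if status == TrackStatus.NEAR_FOV_LINE
                else TrackStatus.OUT_OF_VIEW)
    if abs(x - fov_line) < 10.0:  # pixels from the FOV boundary (assumed threshold)
        return TrackStatus.NEAR_FOV_LINE
    if status in (TrackStatus.HANDED_OVER, TrackStatus.OUT_OF_VIEW):
        return TrackStatus.REACQUIRED
    return TrackStatus.TRACKED

s = TrackStatus.ENTERED
for x, vis in [(100, True), (305, True), (310, False), (40, True)]:
    s = update_status(x, fov_line=300.0, visible=vis, status=s)
print(s.name)
```

In the proposed system, the handoff event is the point at which the server forwards the person-model description to the next non-overlapping camera.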

A Method for Surface Reconstruction and Synthesizing Intermediate Images for Multi-viewpoint 3-D Displays

  • Fujii, Mahito;Ito, Takayuki;Miyake, Sei
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.35-40 / 1996
  • In this paper, a method for 3-D surface reconstruction with two real cameras is presented. The method, which combines the extraction of binocular disparity with its interpolation, can be applied to the synthesis of images from virtual viewpoints. The synthesized virtual images are as natural as the real images, even when observed as stereoscopic images. The method opens up many applications, such as synthesizing input images for multi-viewpoint 3-D displays and enhancing the depth impression in 2-D images. We have also developed a video-rate stereo machine able to obtain binocular disparity in 1/30 s with two cameras, and we show the performance of the machine.
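Disparity interpolation for a virtual viewpoint can be sketched as a forward warp of one real view by a fraction of its disparity. This is a minimal 1-D illustration of the principle, not the paper's method (which also fills interpolation gaps):

```python
import numpy as np

def synthesize_intermediate(left: np.ndarray, disparity: np.ndarray,
                            alpha: float) -> np.ndarray:
    """Forward-warp the left image by a fraction alpha of its disparity to
    approximate a viewpoint between the two real cameras (alpha in [0, 1])."""
    h, w = left.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - alpha * disparity[y, x]))  # shift toward the right view
            if 0 <= xv < w:
                out[y, xv] = left[y, x]
    return out

left = np.zeros((1, 8))
left[0, 4] = 1.0             # single bright pixel
disp = np.full((1, 8), 2.0)  # uniform disparity of 2 pixels
mid = synthesize_intermediate(left, disp, alpha=0.5)
print(int(np.argmax(mid)))   # the pixel moves half the disparity
```

Sweeping `alpha` across [0, 1] yields the sequence of intermediate views needed to feed a multi-viewpoint 3-D display.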


Multi-camera based Images through Feature Points Algorithm for HDR Panorama

  • Yeong, Jung-Ho
    • International journal of advanced smart convergence / v.4 no.2 / pp.6-13 / 2015
  • With the spread of various kinds of cameras, such as digital cameras and DSLRs, and a growing interest in high-definition, high-resolution images, methods that synthesize multiple images are being actively studied. High Dynamic Range (HDR) images store exposure values over a much wider numeric range than normal digital images, so they can accurately record the intensity of light in a scene as produced by real-life light sources. This study suggests a feature-point synthesis algorithm that improves the performance of HDR panorama recognition at the recognition and coordination stages by classifying the feature points used for image recognition across multiple frames.
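The wider-range storage that HDR provides typically comes from merging differently exposed frames into one radiance map. A minimal sketch, assuming a linear sensor response (the paper's pipeline instead aligns frames via feature points before any such merge):

```python
import numpy as np

def merge_exposures(images, times):
    """Merge differently exposed frames into one relative-radiance map,
    weighting mid-range (well-exposed) pixels most heavily."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0  # hat weighting, 0 at the extremes
        num += w * (img.astype(np.float64) / t)    # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)

# Same flat scene captured at two exposure times; true relative radiance 10000.
times = [0.01, 0.02]
images = [np.clip(np.full((2, 2), 100.0) * t * 100.0, 0, 255) for t in times]
radiance = merge_exposures(images, times)
print(radiance[0, 0])
```

Real pipelines also recover the camera response curve rather than assuming linearity; the hat weighting simply discounts clipped or noisy pixels.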

Development of Multi-Function Image Splitter (다기능 영상분할기 개발에 관한 연구)

  • Cho, Duk-Sang;Rho, Kwang-Hyun;Han, Min-Hong;Lee, Jae-Il
    • IE interfaces / v.12 no.2 / pp.174-179 / 1999
  • This paper describes the development of a low-cost, miniaturized, multi-function image splitter for an unmanned guard system. The essential equipment of an image splitter system consists of a video system that displays the images from several cameras on one screen, a frame switcher that records each image sequentially to a VTR, and a TBC (Time Base Corrector) that prevents image flickering during switching. Currently, the price of similar systems on the market is high, management and repair are difficult, and bulky space is required. Furthermore, since currently available frame switchers record only 30 frames/s, if eight cameras are installed each camera's image is recorded at only 3.75 frames/s, and consequently some images that may be vital for legal evidence are skipped. In this research, we solved the low-speed recording problem by recording the video screen of split images at 30 frames/s and zooming into each image during playback.
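The 3.75 frames/s figure follows from dividing the recorder's frame budget evenly across the cameras, which is what a sequential frame switcher does:

```python
def per_camera_rate(recorder_fps: float, num_cameras: int) -> float:
    """A sequential frame switcher gives each camera an equal share of the
    recorder's frame budget."""
    return recorder_fps / num_cameras

# The situation described in the abstract: 8 cameras through a 30 frames/s switcher.
print(per_camera_rate(30, 8))
```

Recording the composite split-screen instead keeps every camera at the full 30 frames/s, at the cost of per-camera resolution, which the zoom-on-playback step then trades back.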


Design of Video Processor for Multi-View 3D Display (다시점 3차원 디스플레이용 비디오 프로세서의 설계)

  • 성준호;하태현;김성식;이성주;김재석
    • Journal of Broadcast Engineering / v.8 no.4 / pp.452-464 / 2003
  • In this paper, a multi-view 3D video processor was designed and implemented with several FPGAs for real-time applications. The 3D video processor receives 2D images from up to 16 cameras and converts them to a 3D video format for space-multiplexed 3D display. It can cope with various arrangements of 3D camera systems (or pixel arrays) and various 3D display resolutions. To verify the functions of the 3D video processor, evaluation boards were built with five FPGAs.