• Title/Summary/Keyword: multi-cameras

Development of 3-D Stereo PIV (3차원 스테레오 PIV 개발)

  • Kim Mi-Young;Choi Jang-Woon;Nam Koo-Man;Lee Young-Ho
    • Proceedings of the Korean Society of Visualization Conference
    • /
    • 2002.11a
    • /
    • pp.19-22
    • /
    • 2002
  • A process of 3-D particle image velocimetry, called here '3-D stereo PIV', was developed for the measurement of a sectional field of 3-D complex flows. The method includes camera modeling with a calibrator based on the homogeneous coordinate system, transformation of the oblique-angled image to a rectified image, identification of 2-D velocity vectors by a 2-D cross-correlation equation, stereo matching of the 2-D velocity vectors from the two cameras, calculation of 3-D velocity vectors in the homogeneous coordinate system, and finally 3-D animation as post-processing. In principle, since only two frame images are needed for a single instantaneous analysis of a sectional field of 3-D flow, more valid vectors are obtained than with the previous multi-frame vector algorithm. An experimental system was also built to apply the proposed method: three analog CCD cameras and an Argon-Ion laser (300 mW) for illumination were used to capture the wake flow behind a bluff obstacle.
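
The core of the 2-D vector identification step is window-wise cross-correlation between two frames. The snippet below is a minimal numpy sketch of that idea; the window size, search range, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def piv_window_displacement(frame_a, frame_b, y, x, win=32, search=8):
    """Estimate the 2-D displacement of one interrogation window by
    normalized cross-correlation (illustrative sketch, not the paper's code)."""
    a = frame_a[y:y + win, x:x + win].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + win > frame_b.shape[0] or xx + win > frame_b.shape[1]:
                continue                                   # skip windows outside the image
            b = frame_b[yy:yy + win, xx:xx + win].astype(float)
            b = (b - b.mean()) / (b.std() + 1e-9)
            score = float((a * b).mean())                  # normalized cross-correlation
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx                                # pixel displacement between frames
```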

Water Detection in an Open Environment: A Comprehensive Review

  • Sandhu, Muhammad Abdullah;Amin, Asjad;Qureshi, Muhammad Ali
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.1
    • /
    • pp.1-10
    • /
    • 2023
  • Open surface water body extraction has gained popularity in recent years due to its versatile applications. Multiple techniques are used for water detection depending on the application. Radar modalities such as LADAR, ground-penetrating, synthetic-aperture, and sounding radars are used to detect water. Shortwave-infrared, thermal, optical, and multi-spectral sensors are also widely used to detect water bodies. A stereo camera is another way to detect water, and different methods are applied to stereo camera images, such as deep learning, machine learning, polarization, color variation, and descriptor-based techniques, to segment water and non-water areas. Satellites are also used at a high level to obtain water imagery, and the captured imagery is processed using methods such as feature extraction, thresholding, entropy-based approaches, and machine learning to find water on the surface. In this paper, we summarize the available methods for detecting water areas. The main focus of this survey is water detection, especially in small patches or small areas. The second aim of this survey is the detection of water hazards for unmanned vehicles and offshore navigation.
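
As a concrete instance of the thresholding family mentioned in this review, a common multi-spectral baseline is to threshold the Normalized Difference Water Index, NDWI = (Green - NIR) / (Green + NIR). The sketch below is a generic illustration of that baseline, not code from any of the surveyed papers.

```python
import numpy as np

def water_mask_ndwi(green, nir, threshold=0.0):
    """Segment water pixels from green and near-infrared bands via NDWI.
    Pixels with NDWI above the threshold are labeled as water."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-9)   # avoid division by zero
    return ndwi > threshold                        # boolean water mask
```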

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2366-2395
    • /
    • 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras. These multi-view images are usually obtained by a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that jointly realizes IF and BD based on an integrated energy function optimization. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve than the original formulation. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization problems with respect to the desired HR image and with respect to the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and images captured by our own camera array system demonstrate the effectiveness of the proposed method.
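
The ADMM solver referenced above alternates closed-form sub-problem updates with a scaled dual update. The toy example below applies the same splitting to a generic l1-regularized least-squares problem; it only illustrates the ADMM iteration pattern and is not the paper's MCBD formulation.

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 using the ADMM splitting x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)    # u is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))           # cached for all iterations
    for _ in range(iters):
        x = inv @ (Atb + rho * (z - u))                  # quadratic sub-problem
        z = soft_threshold(x + u, lam / rho)             # proximal step for the l1 term
        u = u + x - z                                    # dual ascent
    return z
```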

Moving Object Tracking Using MHI and M-bin Histogram (MHI와 M-bin Histogram을 이용한 이동물체 추적)

  • Oh, Youn-Seok;Lee, Soon-Tak;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.9 no.1
    • /
    • pp.48-55
    • /
    • 2005
  • In this paper, we propose an efficient moving-object tracking technique for a multi-camera surveillance system. The color CCD cameras used in this system are network cameras with their own IP addresses. Input images are transmitted to the media server through wireless connections among the server, a bridge, and an Access Point (AP). The tracking system forwards the received images through the network to the tracking module, which tracks moving objects in real time using a color-matching method. We compose two sets of cameras, and when an object leaves the field of view (FOV) of one camera, a hand-over is performed so that tracking of the object can continue. During hand-over, we use an MHI (Motion History Information) based on color information together with an M-bin histogram for accurate tracking. The MHI yields the direction and velocity of the object, and this information helps predict its next location. As a result, we obtain better speed and stability than template matching based on the M-bin histogram alone, and we verified this result experimentally.
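
A minimal sketch of the two ingredients named above, a motion-history update and an M-bin color histogram comparison, is given below. The parameter names and the Bhattacharyya-style similarity are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def update_mhi(mhi, silhouette, timestamp, duration):
    """Write the current timestamp where motion occurred and age out old entries."""
    mhi = mhi.copy()
    mhi[silhouette > 0] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi

def mbin_histogram(patch, m=16):
    """M-bin histogram of one color-channel patch, normalized to sum to 1."""
    hist, _ = np.histogram(patch, bins=m, range=(0, 256))
    return hist / (hist.sum() + 1e-9)

def histogram_similarity(h1, h2):
    """Bhattacharyya coefficient: 1.0 means identical distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))
```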

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although the depth data is invisible to the user during 3D video rendering, its accuracy is very important because it determines the quality of the generated virtual view images. Much related work enhances depth by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at both sides. Since depth data is needed for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
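
The boundary-alignment step can be illustrated with a naive joint bilateral filter, where the range weight comes from the guidance (color) image and the spatial weight from pixel distance. This is a slow reference sketch under assumed parameter names, not the authors' real-time implementation.

```python
import numpy as np

def joint_bilateral_depth(depth, gray, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth a warped depth map while keeping edges aligned to the guidance image."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))      # fixed spatial kernel
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(gray.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-((g_win - gray[y, x])**2) / (2 * sigma_r**2))  # guidance-based range weight
            wgt = spatial * rng
            out[y, x] = (wgt * d_win).sum() / (wgt.sum() + 1e-9)
    return out
```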

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.113-115
    • /
    • 2005
  • This paper presents a method for 3D modeling of facial expression from a frontal image and its mirror-reflected multi-image. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive. In addition, the problem of data synchronization among different cameras is avoided. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, facial feature points closely associated with human emotions must be selected; we therefore refer to the Facial Definition Parameters (FDP) and Facial Animation Parameters (FAP) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colorful dot markers are placed on the selected facial feature points to detect the movement of facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted facial feature points, the points are grouped according to the facial part they belong to, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.
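
The geometric basis of the single-camera, two-mirror setup is that a planar mirror turns the real camera into a virtual side-view camera by reflecting points across the mirror plane. Below is a minimal sketch of that reflection; the plane parameters in the example are illustrative, not the paper's calibration values.

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect a 3-D point across the mirror plane n.p + d = 0 (n must be unit length)."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    return p - 2.0 * (np.dot(n, p) + d) * n

# Example: a marker in front of the camera and a vertical mirror at x = 0.3 m.
# The reflected point is where a virtual side-view camera would "see" the marker.
marker = np.array([0.25, 0.05, 0.6])
mirror_normal = np.array([1.0, 0.0, 0.0])            # plane x = 0.3  ->  n.p - 0.3 = 0
print(reflect_point(marker, mirror_normal, -0.3))    # -> [0.35 0.05 0.6]
```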

Web-based Real-time 3D Video Communication System for Reality Teleconferencing

  • Ko, Jung-Hwan;Kim, Dong-Kyu;Hwang, Dong-Chun;Kim, Eun-Soo
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2005.07b
    • /
    • pp.1611-1614
    • /
    • 2005
  • In this paper, a new multi-view 3D video communication system for real-time reality teleconferencing is proposed, using IEEE 1394 digital cameras, an Intel Xeon server system, and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate and number of views. The captured two-view image data is compressed by extracting the disparity between the views and transmitted to a client system through the communication network, where multiple views are synthesized from the received 2-view data using an intermediate view reconstruction technique and displayed on a multi-view 3D display system. Experimental results show that the proposed system can display 16-view 3D images with 8-bit gray levels at a frame rate of 15 fps.
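
The intermediate view reconstruction mentioned above can be sketched as disparity-scaled pixel shifting between the two captured views. The forward-warping below is a simplified illustration with no occlusion or hole handling, not the system's actual implementation.

```python
import numpy as np

def intermediate_view(left, disparity, alpha=0.5):
    """Forward-warp the left view toward the right by a fraction alpha of the disparity.
    alpha = 0 reproduces the left view; alpha = 1 approximates the right view."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disparity[y, x]))   # shift along the baseline
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
    return out    # unfilled pixels remain zero in this simplified sketch
```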

Implementation of Real-Time Multi-Camera Video Surveillance System with Automatic Resolution Control Using Motion Detection (움직임 감지를 사용하여 영상 해상도를 자동 제어하는 실시간 다중 카메라 영상 감시 시스템의 구현)

  • Jung, Seulkee;Lee, Jong-Bae;Lee, Seongsoo
    • Journal of IKEEE
    • /
    • v.18 no.4
    • /
    • pp.612-619
    • /
    • 2014
  • This paper proposes a real-time multi-camera video surveillance system with automatic resolution control using motion detection. In normal operation, it acquires 4 channels of QVGA images, merges them into a single VGA image, and transmits it. When motion is detected, it automatically increases the resolution of the motion-occurring channel to VGA and decreases those of the 3 other channels to QQVGA, and the resulting images are overlaid and transmitted. Thus, the system can magnify and monitor the motion-occurring channel while maintaining the transmission bandwidth and still monitoring all other channels. When synthesized with 0.18 um technology, the maximum operating frequency is 110 MHz, which can theoretically support 4 HD cameras.
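
A software analogue of this resolution-control logic is sketched below: frame differencing flags the active channel, which is then kept at full resolution while the idle channels are downscaled before composing the transmitted frame. Thresholds, scale factors, and function names are illustrative assumptions about the hardware design described above.

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=25, count_thresh=500):
    """Simple frame differencing: count pixels whose change exceeds pixel_thresh."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return int((diff > pixel_thresh).sum()) > count_thresh

def downscale(img, factor):
    """Naive downscaling by subsampling (stand-in for proper filtering in hardware)."""
    return img[::factor, ::factor]

def select_scale_factors(channels_prev, channels_curr):
    """Per-channel scale factors relative to VGA: the motion channel stays full size,
    idle channels drop to QQVGA, mirroring the paper's bandwidth policy."""
    active = [motion_detected(p, c) for p, c in zip(channels_prev, channels_curr)]
    if not any(active):
        return [2] * len(channels_curr)       # all channels at QVGA (half size)
    return [1 if a else 4 for a in active]    # VGA for motion, QQVGA for the rest

# Usage sketch:
# factors = select_scale_factors(prev_frames, curr_frames)
# scaled = [downscale(c, f) for c, f in zip(curr_frames, factors)]
```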

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon;Oh, Hyon-Dong;Huh, Sung-Sik;Park, Bong-Gyun;Ahn, Jong-Sun;Shim, Hyun-Chul;Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.37 no.12
    • /
    • pp.1209-1216
    • /
    • 2009
  • This paper presents pose estimation of a small UAV utilizing visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. Computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper concludes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
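
The pose estimator named above is an extended Kalman filter; the generic predict/update step below shows its structure. The process and measurement models f, h and their Jacobians F, H are placeholders, not the paper's UAV and camera models.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter iteration.
    x, P  : prior state estimate and covariance
    z     : new measurement (e.g. marker observations from the cameras)
    f, F  : process model and its Jacobian evaluated at x
    h, H  : measurement model and its Jacobian evaluated at the predicted state
    Q, R  : process and measurement noise covariances"""
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R                  # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```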

An Experimental Study of Injection Molding for Multi-beam Sensing Lens Using The Change of Gate Geometry (금형 게이트 크기 변화에 따른 멀티빔 센서용 렌즈 사출성형성 향상에 관한 연구)

  • Cho, S.W.;Kim, J.S.;Yoon, K.H.;Kim, J.D.
    • Transactions of Materials Processing
    • /
    • v.20 no.5
    • /
    • pp.333-338
    • /
    • 2011
  • Rapidly developing IT technologies in recent years have raised the demand for high-precision optical lenses used in sensors, digital cameras, cell phones, and optical storage media. Many techniques are required to manufacture high-precision optical lenses, including the multi-beam sensing lenses investigated in the current study. In injection molding of thick lenses, a shrinkage phenomenon often occurs during the process, and this shrinkage is known to be the main reason for lower optical quality. In the present work, a CAE analysis was conducted together with experiments to understand and minimize this phenomenon. In particular, the cross-sectional area of the gate was varied in order to understand the effects of the packing and cooling processes on the final shrinkage pattern. The study demonstrated that a dramatic reduction of the shrinkage could be obtained by increasing the width of the gate.