• Title/Summary/Keyword: multi-view camera

Deep learning-based Multi-view Depth Estimation Methodology of Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.4-7 / 2022
  • Recently, multi-view depth estimation methods that use deep learning networks for 3D scene reconstruction have gained much attention. Multi-view video contents have various characteristics depending on their camera composition, environment, and setting, and it is important to understand these characteristics and apply the proper depth estimation method for high-quality 3D reconstruction. The camera setting determines the physical distance, called the baseline, between camera viewpoints. Our proposed methods focus on choosing the appropriate depth estimation methodology according to the characteristics of the multi-view video content. Empirical results revealed limitations when existing multi-view depth estimation methods were applied to divergent or large-baseline datasets. We therefore verified the necessity of obtaining the proper number of source views and of applying a source-view selection algorithm suited to each dataset's capturing environment. In conclusion, when implementing a deep learning-based depth estimation network for 3D scene reconstruction, the results of this study can serve as a guideline for finding adaptive depth estimation methods.

The Improved Joint Bayesian Method for Person Re-identification Across Different Camera

  • Hou, Ligang;Guo, Yingqiang;Cao, Jiangtao
    • Journal of Information Processing Systems / v.15 no.4 / pp.785-796 / 2019
  • Due to changes in viewpoint, illumination, personal gait, and background, person re-identification across cameras is a challenging task in video surveillance. To address the problem, a novel method called Joint Bayesian across different cameras for person re-identification (JBR) is proposed. Motivated by the superior measurement ability of the Joint Bayesian model, a set of Joint Bayesian matrices is learned from different camera pairs. With the global Joint Bayesian matrix, the proposed method combines the characteristics of multi-camera shooting and person re-identification, improving the precision of the similarity computed between two individuals by learning the transition between two cameras. The method is evaluated on two large-scale re-ID datasets, Market-1501 and DukeMTMC-reID. Rank-1 accuracy increases significantly, by about 3% and 4%, and mean average precision (mAP) improves by about 1% and 4%, respectively.
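The Joint Bayesian similarity at the core of JBR can be sketched in its standard log-likelihood-ratio form. This is a simplified stand-in that drops constant and log-determinant terms and takes the covariances as given rather than learned; it is not the paper's cross-camera extension:

```python
import numpy as np

def joint_bayesian_similarity(x1, x2, S_mu, S_eps):
    """Log-likelihood ratio r(x1, x2): same identity vs. different identity.

    S_mu: covariance of the identity component.
    S_eps: covariance of the within-identity variation.
    Higher r means the pair is more likely the same person.
    """
    d = S_mu.shape[0]
    # covariance of the stacked pair [x1; x2] under the same-identity hypothesis
    intra = np.block([[S_mu + S_eps, S_mu], [S_mu, S_mu + S_eps]])
    inv_intra = np.linalg.inv(intra)
    # under the different-identity hypothesis the cross blocks are zero
    A = np.linalg.inv(S_mu + S_eps) - inv_intra[:d, :d]
    G = inv_intra[:d, d:]
    return float(x1 @ A @ x1 + x2 @ A @ x2 - 2 * x1 @ G @ x2)
```

Learning a separate matrix pair per camera pair, as JBR does, amounts to fitting distinct S_mu and S_eps for each camera transition.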

Design and Implementation of PC-Mechanic Education Application System Using Image Processing (영상처리를 이용한 PC 내부구조 학습 어플리케이션 설계 및 구현)

  • Kim, Won-Jin;Kim, Hyung-Ook;Jo, Sung-Eun;Jang, Soo-Jeong;Moon, Il-Young
    • The Journal of Korean Institute for Practical Engineering Education / v.3 no.2 / pp.93-99 / 2011
  • We introduce an application that uses a multi-touch table to teach the internal structure of a PC for the PC-mechanic certification. These days people increasingly interact through gestures rather than a mouse and keyboard, and multi-touch tables have become popular. We implemented the application on a multi-touch table using 3ds Max and C#, adding graphics and images. The application helps learners prepare for the certification by scaling and dragging components seen through the camera view, and it includes domestic PC-mechanic exam questions.

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method that uses omni-directional images obtained through fisheye lenses mounted on the robots. Conventional multi-robot formation methods often enlarge the camera's angle of view by using a stereo vision system or a reflector-based vision system instead of a general-purpose camera with its narrow angle of view, and the robots share their position information through communication to make up for the lack of image information about the environment. The proposed system instead estimates the regions of the robots using SURF in fisheye images, which carry 360° of image information without merging images. The whole system controls the robot formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy through both simulation and experiment.
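The Lucas-Kanade step can be sketched as a single-window least-squares flow estimate. This is a simplified pure-NumPy stand-in for the pyramidal tracker usually used in practice; the one-window simplification and names are ours:

```python
import numpy as np

def lucas_kanade_flow(img1, img2):
    """Single-window Lucas-Kanade: least-squares estimate of the (vx, vy)
    translation between two grayscale patches (small-motion assumption)."""
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    # spatial gradients (np.gradient returns d/drow then d/dcol) and temporal difference
    Iy, Ix = np.gradient(img1)
    It = img2 - img1
    # solve [Ix Iy] v = -It in the least-squares sense over the whole window
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy) in pixels

# a horizontal ramp shifted right by one pixel has flow (vx, vy) ~ (1, 0)
ramp = np.tile(np.arange(8.0), (8, 1))
print(lucas_kanade_flow(ramp, ramp - 1.0))
```

Applied per tracked robot region, the resulting vectors give the moving directions and velocities the formation controller consumes.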

Feature based Pre-processing Method to compensate color mismatching for Multi-view Video (다시점 비디오의 색상 성분 보정을 위한 특징점 기반의 전처리 방법)

  • Park, Sung-Hee;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2527-2533 / 2011
  • In this paper we propose a new pre-processing algorithm for multi-view video coding that uses a color compensation algorithm based on image features. Multi-view images differ between neighboring frames according to illumination and the differing characteristics of the cameras. To compensate for this color difference, we first model the camera characteristics from each frame's features and then correct the color difference. Corresponding features are extracted from each frame with the Harris corner detector, and the model's characteristic coefficients are estimated with the Gauss-Newton algorithm. The algorithm compensates the R, G, and B components of the target images separately against the reference image. Experimental results on many test images show that the proposed algorithm performed better than a histogram-based algorithm, with up to 14% bit reduction and 0.5-0.8 dB PSNR improvement.
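The per-channel correction can be sketched as a closed-form fit over corresponding feature colors. This is a simplified stand-in: the paper fits its camera model with Gauss-Newton, whereas a plain gain/offset model needs only linear least squares; all names here are ours:

```python
import numpy as np

def fit_color_correction(src_samples, ref_samples):
    """Fit a per-channel gain/offset mapping src -> ref from corresponding
    feature-point colors (e.g. sampled at matched Harris corners)."""
    src = np.asarray(src_samples, dtype=float)
    ref = np.asarray(ref_samples, dtype=float)
    params = []
    for c in range(src.shape[1]):  # independent fit per R, G, B channel
        A = np.stack([src[:, c], np.ones(len(src))], axis=1)
        (gain, offset), *_ = np.linalg.lstsq(A, ref[:, c], rcond=None)
        params.append((gain, offset))
    return params

def apply_color_correction(img, params):
    """Apply the fitted per-channel mapping to a whole H x W x 3 image."""
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = gain * img[..., c] + offset
    return out
```

Because each channel is corrected independently against the reference view, the fit mirrors the abstract's separate compensation of the RGB components.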

A depth-based Multi-view Super-Resolution Method Using Image Fusion and Blind Deblurring

  • Fan, Jun;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Feng, Jing;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.10 / pp.5129-5152 / 2016
  • Multi-view super-resolution (MVSR) aims to estimate a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints (typically by different cameras), and is usually applied in camera array imaging. Given that MVSR is an ill-posed and typically computationally costly problem, we super-resolve multi-view LR images of the original scene via image fusion (IF) and blind deblurring (BD). First, we reformulate the MVSR problem into two easier problems: an IF problem and a BD problem. We solve the IF problem after first computing the depth map of the desired image, and then solve the BD problem, in which the optimization problems with respect to the desired image and with respect to the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Our approach bridges the gap between MVSR and BD, taking advantage of existing BD methods to address MVSR. It is thus appropriate for camera array imaging, where the blur kernel is typically unknown in practice. Experimental results on real and synthetic images demonstrate the effectiveness of the proposed method.
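The ADMM alternation can be illustrated on a toy sparse least-squares problem. This is a generic instance of the splitting, not the paper's fusion/deblurring objective, and the parameter names are ours:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Toy ADMM solver for min 0.5*||Ax - b||^2 + lam*||x||_1, showing the
    alternating updates (quadratic solve / shrinkage / dual ascent) that
    ADMM also applies to deblurring-type subproblems."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))  # cached for repeated solves
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))                                   # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold step
        u = u + x - z                                                    # dual update
    return z
```

The appeal for MVSR/BD is the same as here: each subproblem (a linear solve, a proximal step) is cheap on its own even when the joint objective is not.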

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.13-18 / 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although a time-of-flight (TOF) depth camera measures the scene's depth in real time, its output depth images suffer from noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching guided by the disparity information from the depth cameras showed better performance than the previous method, and we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
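The link between a TOF depth measurement and the stereo disparity that guides matching is the standard rectified pinhole relation d = f·B / Z; a minimal sketch (symbols and names ours):

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a TOF depth map (meters) into a stereo disparity prior (pixels)
    via d = f * B / Z, for a rectified pair with focal length f (pixels) and
    baseline B (meters)."""
    return focal_px * baseline_m / np.asarray(depth_m, dtype=float)

# a point 2 m away, seen by a 1000 px focal length rig with a 10 cm baseline
print(depth_to_disparity(2.0, 1000.0, 0.1))  # 50.0
```

A disparity prior of this form lets the matcher restrict its search range around the TOF estimate, which is how depth-camera guidance helps in occluded or textureless regions.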

Flexible GGOP prediction structure for multi-view video coding (다시점 동영상 부호화를 위한 가변형 다시점GOP 예측 구조)

  • Yoon, Jae-Won;Seo, Jung-Dong;Kim, Yong-Tae;Park, Chang-Seob;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.420-430 / 2006
  • In this paper, we propose a flexible GGOP prediction structure to improve the coding efficiency of multi-view video coding (MVC). In general, the MVC reference software uses a fixed GGOP prediction structure, yet MVC performance depends on the choice of base view and on the number of B-pictures between an I-picture (or P-picture) and the next P-picture. To implement the flexible GGOP prediction structure, the location of the base view is decided according to the global disparities among adjacent sequences, and the number of B-pictures is decided by the camera arrangement, such as the baseline distance between cameras. The proposed method shows better results than the MVC reference software, with a considerable reduction in coded bits of 7.1%.
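One plausible reading of the base-view rule is to pick the view whose total global disparity to the other views is smallest; this is our interpretation for illustration, as the abstract does not give the exact criterion:

```python
import numpy as np

def choose_base_view(disparity):
    """disparity[i][j]: magnitude of the global disparity between views i and j.
    The base view is the one whose summed disparity to all others is minimal,
    i.e. the most 'central' view of the camera arrangement."""
    return int(np.argmin(np.asarray(disparity, dtype=float).sum(axis=1)))

# three views on a line: the middle view (index 1) is the natural base view
print(choose_base_view([[0, 1, 2], [1, 0, 1], [2, 1, 0]]))  # 1
```

A central base view shortens the prediction chains to the outermost views, which is consistent with the bit savings the paper reports.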

Analysis of Image and Development of UV Corona Camera for High-Voltage Discharge Detection (고전압 방전 검출용 자외선 코로나 카메라 개발 및 방전 이미지 분석)

  • Kim, Young-Seok;Shong, Kil-Mok;Bang, Sun-Bae;Kim, Chong-Min;Choi, Myeong-Il
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.25 no.9 / pp.69-74 / 2011
  • In this paper, a UV corona camera was developed using solar-blind filtering and multi-channel plate (MCP) technology to localize the target in the UV image. The camera has a 6.4° × 4.8° field of view, slightly wider than that of a conventional camera so that a broad range of power equipment can be diagnosed, and a distance meter was attached to measure the distance between the camera and the equipment. The developed camera measures the discharge count together with the UV image, and showed no significant difference compared with a commercial camera. In a salt-spray environment the breakdown voltage was lower than in the normal state, and the discharge image grew rapidly.

3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.1
    • /
    • pp.53-61
    • /
    • 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so we may be given various sorts of depth images. Here we deal with generating a 3D surface from several 3D point clouds acquired from a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of several 3D point clouds. Second, the estimated 3D point clouds acquired from two viewpoints are projected onto the same image plane to find correspondences, and registration is conducted by minimizing the errors between them. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from several viewpoints. The proposed method reduces computational complexity by searching for corresponding points in the 2D image plane, and it works effectively even when the precision of the 3D point clouds is relatively low, by exploiting correlation with the neighborhood. Furthermore, the multi-view camera makes it possible to reconstruct an indoor environment from depth and color images taken at several positions. The reconstructed model can be adopted for navigation in and interaction with a virtual environment, as well as for Mediated Reality (MR) applications.
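Once correspondences between the two clouds are fixed, the error-minimizing registration admits a closed form; a sketch of the standard Kabsch/SVD rigid alignment (a generic stand-in, not the paper's exact pipeline, which finds the correspondences by projecting the clouds onto a shared image plane):

```python
import numpy as np

def register_rigid(P, Q):
    """Least-squares rigid alignment R, t with R @ p + t ~ q for paired
    points P, Q (N x 3 each), via the Kabsch/SVD construction."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    # flip the last axis if needed so R is a rotation, not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Searching for the correspondences on the 2D image plane, as the paper does, is what keeps the overall registration cheap; the 3D solve above then costs only one small SVD.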
