• Title/Abstract/Keywords: Camera View


View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk; Ho, Yo-Sung
    • Journal of Multimedia Information System / Vol. 1, No. 1 / pp. 1-10 / 2014
  • In this paper, we propose a new view synthesis technique for coding of multi-view color and depth data in arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. We can therefore enhance the visual quality of the views reconstructed from multiple LDIs compared with those reconstructed from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
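
The reference-selection step described above (grouping camera positions and building one LDI per cluster) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the k-means grouping and the choice of the camera nearest each centroid as the LDI reference are assumptions.

```python
# Illustrative sketch: group arbitrary camera positions into clusters and
# pick one reference camera per cluster for LDI construction. The k-means
# grouping and nearest-to-centroid reference choice are assumptions.
import numpy as np

def choose_ldi_references(cam_positions: np.ndarray, n_refs: int, iters: int = 50):
    """cam_positions: (N, 3) camera centers in world coordinates."""
    rng = np.random.default_rng(0)
    centroids = cam_positions[rng.choice(len(cam_positions), n_refs, replace=False)]
    for _ in range(iters):
        # Assign each camera to its nearest centroid.
        d = np.linalg.norm(cam_positions[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Update centroids; keep the old one if a cluster emptied out.
        for k in range(n_refs):
            if np.any(labels == k):
                centroids[k] = cam_positions[labels == k].mean(axis=0)
    # The reference for each cluster is the real camera nearest its centroid.
    refs = [int(np.linalg.norm(cam_positions - c, axis=1).argmin()) for c in centroids]
    return labels, refs

cams = np.random.rand(12, 3) * 5.0          # 12 cameras, arbitrary arrangement
labels, refs = choose_ldi_references(cams, n_refs=3)
print("cluster of each camera:", labels)
print("reference camera index per LDI:", refs)
```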


COLOR CORRECTION METHOD USING GRAY GRADIENT BAR FOR MULTI-VIEW CAMERA SYSTEM

  • Jung, Jae-Il; Ho, Yo-Sung
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings / 2009 IWAIT / pp. 1-6 / 2009
  • Due to the differing properties of the cameras in a multi-view camera system, the color properties of the captured images can be inconsistent. This inconsistency complicates post-processing such as depth estimation, view synthesis, and compression. In this paper, we propose a method to correct the differing color properties of multi-view images. We display a gray gradient bar on a display device to extract the color sensitivity property of each camera and calculate a look-up table based on that property. The colors in the target image are then converted by a mapping technique that refers to the look-up table. In experiments, the proposed algorithm yields good subjective results and reduces the mean absolute error among the color values of the multi-view images by 72% on average.
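
A minimal sketch of the look-up-table correction described in this abstract, assuming the per-channel responses of the reference and target cameras to the displayed gray gradient have already been measured; the linear interpolation between measured levels is an assumption, not the paper's exact method.

```python
# Sketch of LUT-based color correction from a gray-gradient chart.
# ref_gray/src_gray hold the code values (0..255) the reference and target
# cameras recorded for the same displayed gray levels.
import numpy as np

def build_lut(src_gray: np.ndarray, ref_gray: np.ndarray) -> np.ndarray:
    """Map each target-camera code value to the reference camera's value."""
    # For every code value 0..255, interpolate what the reference camera
    # recorded at the gray level that produced that value in the target.
    levels = np.arange(256)
    return np.interp(levels, src_gray, ref_gray).astype(np.uint8)

def correct_image(img: np.ndarray, luts) -> np.ndarray:
    """Apply one LUT per color channel to an (H, W, 3) uint8 image."""
    out = np.empty_like(img)
    for c in range(3):
        out[..., c] = luts[c][img[..., c]]
    return out

# Toy example: the target camera responds darker than the reference.
gray_levels = np.linspace(0, 255, 16)
src_response = np.clip(gray_levels * 0.85, 0, 255)   # target camera
ref_response = gray_levels                            # reference camera
lut = build_lut(src_response, ref_response)
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
corrected = correct_image(img, [lut, lut, lut])
```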


A Study on the 3D Video Generation Technique using Multi-view and Depth Camera

  • 엄기문; 장은영; 허남호; 이수인
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / IEEK 2005 Fall Conference / pp. 549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system uses 3-view video from a 3-view video camera and depth input from a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from the depth camera and from the multi-view stereo matching technique, respectively, and the two maps are fused to obtain a more reliable depth map. The resulting depth map is used not only to insert a virtual object into the scene based on depth keying, but also to synthesize virtual viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
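
The abstract states only that the two depth sources are fused; below is a minimal sketch of one plausible confidence-weighted fusion. The weighting scheme and the disagreement threshold are assumptions, not the paper's method.

```python
# Sketch of fusing a depth-camera map with a stereo-matching map.
# The confidence-weighted blend below is an illustrative assumption.
import numpy as np

def fuse_depth(d_sensor, d_stereo, conf_sensor, conf_stereo, max_gap=0.3):
    """All inputs are (H, W) floats; depths in meters, confidences in [0, 1]."""
    w = conf_sensor / np.clip(conf_sensor + conf_stereo, 1e-6, None)
    fused = w * d_sensor + (1.0 - w) * d_stereo
    # Where the two sources disagree strongly, trust the higher-confidence one.
    disagree = np.abs(d_sensor - d_stereo) > max_gap
    fused[disagree] = np.where(conf_sensor[disagree] >= conf_stereo[disagree],
                               d_sensor[disagree], d_stereo[disagree])
    return fused
```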


Virtual View-point Depth Image Synthesis System for CGH

  • 김택범; 고민수; 유지상
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 16, No. 7 / pp. 1477-1486 / 2012
  • In this paper, we propose a system for generating CGHs (computer-generated holograms) at various viewpoints using a virtual view-point depth image synthesis technique. In the proposed system, a reliable reference-view depth image is first acquired with a TOF (time-of-flight) depth camera, and the camera parameters of the reference-view cameras are extracted through camera calibration. Once the position of the virtual-view camera is defined, the optimal reference-view cameras are selected by considering their distance from and position relative to the virtual view. The reference camera closest to the virtual-view camera is chosen as the main reference view, and a virtual-view depth image is generated from it. The reference camera positioned on the opposite side of the main reference view is selected as the auxiliary reference view, and another virtual-view depth image is generated from it. Occluded regions appearing in the depth image generated from the main reference view are located and compensated using the depth image generated from the auxiliary reference view. Remaining holes that are not compensated are filled with the smallest value among their neighboring pixels to produce the final virtual-view depth image, from which the CGH is generated. Experiments confirm that the proposed virtual view-point depth image synthesis system outperforms existing methods.
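
A minimal sketch of the compensation and hole-filling steps described above, assuming both reference views have already been warped to the virtual viewpoint; the 3x3 neighborhood and the zero-valued hole marker are assumptions.

```python
# Holes in the depth map warped from the main reference view are filled from
# the auxiliary view's warped map; any remaining holes take the smallest valid
# neighbor, as the abstract describes.
import numpy as np

HOLE = 0.0  # marker for pixels no reference view could fill (assumption)

def compensate_and_fill(d_main: np.ndarray, d_aux: np.ndarray) -> np.ndarray:
    d = d_main.copy()
    holes = d == HOLE
    d[holes] = d_aux[holes]                 # fill occlusions from auxiliary view
    while np.any(d == HOLE):                # remaining holes: min valid neighbor
        filled_any = False
        ys, xs = np.nonzero(d == HOLE)
        for y, x in zip(ys, xs):
            patch = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = patch[patch != HOLE]
            if valid.size:
                d[y, x] = valid.min()       # smallest neighboring value
                filled_any = True
        if not filled_any:
            break                           # nothing left to propagate from
    return d
```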

Sensor Density for Full-View Problem in Heterogeneous Deployed Camera Sensor Networks

  • Liu, Zhimin; Jiang, Guiyan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 12 / pp. 4492-4507 / 2021
  • In camera sensor networks (CSNs), the full-view problem requires that every facing direction of a target (point or intruder) be captured so that the target can be better identified, which makes coverage prediction and sensor density issues more complicated. Most existing research assumes that a large number of homogeneous camera sensors are randomly distributed in a bounded square monitoring region to obtain a full-view rate close to 1. In this paper, we derive a sensor density prediction model for heterogeneously deployed CSNs with an arbitrary full-view rate. To reduce the influence of the boundary effect, we introduce the concepts of the expanded monitoring region and the maximum detection area. In addition, to verify the performance of the proposed sensor density model, we carried out simulation experiments over different scenarios. The simulation results indicate that the proposed model can effectively predict the sensor density for an arbitrary full-view rate.
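
For intuition, the full-view condition can be checked by Monte Carlo simulation as sketched below (the paper derives the density analytically; all parameter values here are illustrative). A point is full-view covered if, for every facing direction, some sensor that senses the point lies within the effective angle theta of that direction, i.e. the circular gaps between covering sensors do not exceed 2*theta.

```python
import numpy as np

def covers(sensor_xy, sensor_dir, point, r, half_fov):
    """Does a sensor at sensor_xy, facing unit vector sensor_dir, sense the point?"""
    v = point - sensor_xy
    dist = np.linalg.norm(v)
    if dist == 0 or dist > r:
        return False
    return np.dot(v / dist, sensor_dir) >= np.cos(half_fov)

def full_view_covered(point, sensors, dirs, r, half_fov, theta):
    # Angles from the point toward every sensor that senses it.
    angles = [np.arctan2(*(s - point)[::-1]) for s, d in zip(sensors, dirs)
              if covers(s, d, point, r, half_fov)]
    if not angles:
        return False
    a = np.sort(np.asarray(angles))
    gaps = np.diff(np.concatenate([a, [a[0] + 2 * np.pi]]))  # circular gaps
    return gaps.max() <= 2 * theta

rng = np.random.default_rng(1)
n, side, r, half_fov, theta = 1000, 10.0, 1.5, np.pi / 3, np.pi / 4
sensors = rng.uniform(0, side, (n, 2))
phis = rng.uniform(0, 2 * np.pi, n)
dirs = np.stack([np.cos(phis), np.sin(phis)], axis=1)
points = rng.uniform(2.0, side - 2.0, (200, 2))   # keep clear of the boundary
rate = np.mean([full_view_covered(p, sensors, dirs, r, half_fov, theta)
                for p in points])
print(f"estimated full-view rate: {rate:.3f}")
```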

Design of Infrared Camera for Extended Field of View

  • 이용춘; 송천호; 김상운; 김영길
    • Korea Institute of Information and Communication Engineering: Conference Proceedings / KIICE 2017 Fall Conference / pp. 699-701 / 2017
  • A long-range observation camera is typically operated by detecting a target in the wide field of view and then recognizing and identifying it in the telephoto (narrow) field of view. Detection and recognition range performance is a key criterion in evaluating military infrared cameras. Increasing the detection range requires narrowing the camera's field of view, and the narrower field of view in turn lowers the probability of finding a target. In this study, we examine a way to ease target search by providing a wide field of view while maintaining detection range performance. We then fabricate an extended-field-of-view infrared camera through M&S and design optimization, and summarize the test results.
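
The range-versus-FOV trade-off described above can be illustrated with a back-of-the-envelope estimate based on the classical Johnson criteria; every parameter value below is an illustrative assumption, not the paper's design data.

```python
# Doubling the focal length halves the field of view and, for a fixed
# detector, doubles the Johnson-criteria detection range.
import math

def fov_deg(sensor_width_mm, focal_mm):
    """Full horizontal field of view of a pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def detection_range_m(target_size_m, pitch_um, focal_mm, cycles=1.0):
    """Johnson criteria: ~1 cycle (2 pixels) across the target for detection."""
    ifov_rad = (pitch_um * 1e-6) / (focal_mm * 1e-3)  # single-pixel IFOV
    return target_size_m / (cycles * 2 * ifov_rad)

sensor_w, pitch, target = 9.6, 15.0, 2.3   # sensor mm, pixel um, target m
for f in (100.0, 200.0):                   # wide vs. narrow field of view
    print(f"f={f:.0f} mm: FOV={fov_deg(sensor_w, f):5.2f} deg, "
          f"detection range={detection_range_m(target, pitch, f) / 1000:.1f} km")
```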


Omni-directional Visual-LiDAR SLAM for Multi-Camera System

  • Zeeshan Javed; Gon-Woo Kim
    • The Journal of Korea Robotics Society / Vol. 17, No. 3 / pp. 353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multi-camera setups and large field-of-view cameras are used to solve such issues; however, a multi-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera-assisted visual simultaneous localization and mapping (vSLAM), we propose a multi-view tracking algorithm that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
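
The absolute trajectory error (ATE) used for evaluation can be computed as in the standard sketch below: closed-form Umeyama/Kabsch alignment of the estimate to the ground truth, followed by a position RMSE. This is the common metric definition, not the authors' evaluation code.

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) associated camera positions (estimate, ground truth)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)                 # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # rotation: est -> gt frame
    t = mu_g - R @ mu_e
    residual = gt - (est @ R.T + t)                   # error after alignment
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

# Sanity check: a rigidly transformed copy of a trajectory has near-zero ATE.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.standard_normal((200, 3)) * 0.05, axis=0)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
est = gt @ Rz.T + np.array([2.0, -1.0, 0.5])
print(f"ATE RMSE: {ate_rmse(est, gt):.6f} m")         # ~0
```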

Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of Information and Communication Convergence Engineering / Vol. 9, No. 2 / pp. 197-200 / 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for a 3DTV system that uses depth information together with the RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model would generate. Experimental results show that the proposed algorithm outperforms conventional methods.
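
A convergence ("toed-in") camera model of the kind named in this abstract can be sketched as follows: virtual cameras spaced along a baseline are each rotated to aim at a common convergence point on the central optical axis. The geometry below is illustrative only, not the paper's implementation.

```python
import numpy as np

def convergence_rig(n_views, baseline, conv_dist):
    """(position, yaw) for n_views cameras toed in toward (0, 0, conv_dist)."""
    xs = (np.arange(n_views) - (n_views - 1) / 2) * baseline
    return [(np.array([x, 0.0, 0.0]), np.arctan2(-x, conv_dist)) for x in xs]

def project(point_w, cam_pos, yaw, f=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection of a world point into one toed-in view."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, -s],
                  [0.0, 1.0, 0.0],
                  [s, 0.0, c]])               # rotation by yaw about the y-axis
    pc = R @ (point_w - cam_pos)              # world -> camera coordinates
    return np.array([f * pc[0] / pc[2] + cx, f * pc[1] / pc[2] + cy])

# The convergence point itself projects to the image center in every view.
for pos, yaw in convergence_rig(n_views=5, baseline=0.065, conv_dist=2.0):
    print(f"cam x={pos[0]:+.3f} m -> {project(np.array([0.0, 0.0, 2.0]), pos, yaw)}")
```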

Virtual portraits from rotating selfies

  • Yongsik Lee; Jinhyuk Jang; Seungjoon Yang
    • ETRI Journal / Vol. 45, No. 2 / pp. 291-303 / 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies. The selfies are taken at the same location while the user spins around. The scene is analyzed using multiple selfies to determine the locations of the camera, subject, and background. Then, a view from a virtual camera is synthesized. We present two use cases. After rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length. Following that, changes in perspective and lens characteristics caused by new compositions and focal lengths are simulated. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device equipped with a lens that has only one focal length. The rendered portraits included compositions that would be photographed with actual lenses. Our proposed algorithms can provide new use cases in which selfie compositions are not limited by a camera's focal length or distance from the camera.
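
The virtual-telephoto effect described above follows from simple pinhole arithmetic: moving the camera back while lengthening the focal length keeps the subject the same size but magnifies the background. A sketch with illustrative numbers (not the paper's data):

```python
def image_size(real_size_m, distance_m, focal_mm):
    return focal_mm * real_size_m / distance_m   # size on the sensor, in mm

subj_h, bg_w = 0.25, 10.0        # head height and background feature size (m)
d_subj, d_bg = 0.6, 30.0         # selfie-arm subject distance, background (m)

f0 = 26.0                                        # phone wide lens (mm, assumed)
for f1 in (26.0, 85.0, 135.0):                   # virtual focal lengths
    # Move the virtual camera back so the subject stays the same size.
    d1 = d_subj * f1 / f0
    bg_scale = (image_size(bg_w, d_bg + (d1 - d_subj), f1)
                / image_size(bg_w, d_bg, f0))
    print(f"f={f1:5.1f} mm: camera at {d1:.2f} m, background appears "
          f"{bg_scale:.2f}x larger")
```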

Development and Test of the Remote Operator Visual Support System Based on Virtual Environment

  • 송태길; 박병석; 최경현; 이상호
    • Korean Journal of Computational Design and Engineering / Vol. 13, No. 6 / pp. 429-439 / 2008
  • With a remotely operated manipulator system, the situation at a remote site is rendered to the operator through a visualized remote image. The operator can then quickly grasp the situation and control the slave manipulator by operating a master input device based on the information in the virtual image. In this study, the remote operator visual support system (ROVSS) was developed to support the viewing of a remote operator so that remote tasks can be performed effectively, and a visual support model based on a virtual environment was built into it. The framework for the system was created on a PC using the Windows API and the library of a 3D graphic simulation tool, ENVISION. To realize the system, an operation test environment for a limited operating site was constructed using an experimental robot operation. A 3D virtual environment was designed to provide accurate information about the rotation of the robot manipulator and the location and distance of the operation tool through real-time synchronization. To measure the efficiency of the visual support, we conducted experiments with four methods: direct view, camera view, virtual view, and camera view plus virtual view. The experimental results show that the camera view plus virtual view method is about 30% more efficient than the camera view alone.