• Title/Summary/Keyword: virtual view rendering

Performance Analysis on View Synthesis of 360 Video for Omnidirectional 6DoF

  • Kim, Hyun-Ho;Lee, Ye-Jin;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.22-24 / 2018
  • The MPEG-I Visual group is actively working on enhancing immersive experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, defined as the case that provides 6DoF within a restricted area, viewing the scene from another viewpoint (another position in space) requires rendering additional viewpoints, called virtual omnidirectional viewpoints. This paper presents a performance analysis of view synthesis, conducted as an exploration experiment (EE) in MPEG-I, from a set of 360 videos providing omnidirectional 6DoF under various conditions: different distances, directions, and numbers of input views. In addition, we compare the subjective quality of images synthesized from one input view and from two input views.

Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.3 no.2 / pp.39-45 / 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by a computer. Since the image a user sees through an augmented reality device is a synthetic composition of a real view and a virtual image, it is important that the computer-generated virtual image harmonizes well with the real-view image. In this paper, we review several computer vision and graphics methods that give users a realistic augmented reality experience. To generate a visually harmonized synthetic image consisting of a real and a virtual image, the computer must know the 3D geometry and environmental information such as lighting and material surface reflectivity, and many computer vision methods aim to estimate these. We introduce approaches to acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods that provide a realistic augmented reality experience.

Novel View Generation Using Affine Coordinates

  • Sengupta, Kuntal;Ohya, Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1997.06a / pp.125-130 / 1997
  • In this paper we present an algorithm for generating new views of a scene, starting from images taken by weakly calibrated cameras. Errors in 3D scene reconstruction are usually reflected in the quality of the generated scene, so we seek a direct method for reprojection. We use dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene, borrowing ideas from the object recognition literature and extending them significantly to solve the reprojection problem. Unlike epipolar-line-intersection algorithms for reprojection, which require at least eight matched points across three images, we need only five matched points. The reprojection theory is combined with hardware-based rendering to achieve fast rendering. We demonstrate novel view generation from stereo pairs for arbitrary locations of the virtual camera. (A sketch of the affine transfer idea follows this entry.)

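The transfer step described above has a compact form: under an affine camera model, a point's affine coordinates with respect to a basis of matched points are the same in every view, so reprojection into a new view only needs the basis points' positions there. Below is a minimal sketch of that idea, assuming a four-point basis plus the transferred point (the paper's five matched points); the least-squares setup and function names are illustrative, and the paper's hardware rendering path is not reproduced.

```python
import numpy as np

def affine_coords(q1, q2, basis1, basis2):
    """Recover the affine coordinates (a, b, c) of one scene point from its
    projections q1, q2 in two reference views, given the projections of four
    basis points in each view (basis1, basis2: 4x2 arrays). Under an affine
    camera, q = b0 + a*(b1-b0) + b*(b2-b0) + c*(b3-b0) holds in every view
    with the same (a, b, c), so the coordinates are view-independent."""
    rows, rhs = [], []
    for q, B in ((q1, basis1), (q2, basis2)):
        B = np.asarray(B, dtype=float)
        rows.append((B[1:] - B[0]).T)          # 2x3 block of basis differences
        rhs.append(np.asarray(q, dtype=float) - B[0])
    A = np.vstack(rows)                        # 4 equations, 3 unknowns
    coeffs, *_ = np.linalg.lstsq(A, np.concatenate(rhs), rcond=None)
    return coeffs

def reproject(coeffs, basis_new):
    """Transfer the point into a new view from the basis projections there."""
    a, b, c = coeffs
    B = np.asarray(basis_new, dtype=float)
    return B[0] + a * (B[1] - B[0]) + b * (B[2] - B[0]) + c * (B[3] - B[0])
```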

View synthesis with sparse light field for 6DoF immersive video

  • Kwak, Sangwoon;Yun, Joungil;Jeong, Jun-Young;Kim, Youngwook;Ihm, Insung;Cheong, Won-Sik;Seo, Jeongil
    • ETRI Journal / v.44 no.1 / pp.24-37 / 2022
  • Virtual view synthesis, which generates novel views that share the characteristics of actually acquired images, is an essential technical component for delivering immersive video with realistic binocular disparity and smooth motion parallax. It is typically achieved by warping the given images to the designated viewing position, blending the warped images, and filling the remaining holes. For 6DoF use cases with large motion, patch-based warping is preferable to conventional per-pixel methods. In that setting, the quality of the synthesized image depends heavily on how the warped images are blended. Based on this observation, we propose a novel blending architecture that exploits the similarity of ray directions and the distribution of depth values (a sketch of this idea follows this entry). Results show that the proposed method synthesizes better views than the well-designed synthesizers used within the MPEG immersive video activity (MPEG-I). Moreover, we describe a GPU-based implementation that synthesizes and renders views in real time, demonstrating applicability to immersive video services.
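
The blending idea above, weighting each warped pixel by how closely its source ray aligns with the target ray and by its depth relative to the nearest candidate, can be written per pixel as in the following sketch. The exponential weights and the parameters `alpha` and `beta` are assumptions for illustration; the paper's patch-based architecture and GPU pipeline are more involved.

```python
import numpy as np

def blend_views(warped, depths, ray_cos, alpha=4.0, beta=8.0):
    """Blend N warped views (N,H,W,3) into one image, assuming simple
    exponential weights; not the paper's exact formulation.
    ray_cos: (N,H,W) cosine of the angle between each source ray and the
             target ray (1.0 = identical direction).
    depths:  (N,H,W) depth of each warped pixel; pixels much farther than
             the per-pixel minimum are down-weighted (likely occluded)."""
    d_min = depths.min(axis=0, keepdims=True)
    # Favor rays pointing the same way, and depths close to the nearest one.
    w = np.exp(alpha * (ray_cos - 1.0)) * np.exp(-beta * (depths - d_min))
    w /= w.sum(axis=0, keepdims=True) + 1e-12      # normalize across views
    return (w[..., None] * warped).sum(axis=0)
```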

An Efficient Perspective Projection using VolumePro™ Hardware (볼륨프로 하드웨어를 이용한 효율적인 투시투영 방법)

  • 임석현;신병석
    • Journal of KIISE: Computer Systems and Theory / v.31 no.3_4 / pp.195-203 / 2004
  • VolumePro is real-time volume rendering hardware for consumer PCs. However, it cannot be used for applications requiring perspective projection, such as virtual endoscopy, because it provides only orthographic projection. Several methods approximate perspective projection by decomposing the volume into slabs and applying successive parallel projections to them, but they are slow, since the entire region of every slab is processed even though much of it does not contribute to the final image. In this paper, we propose an efficient perspective projection method that uses several sub-volumes together with the cropping feature of VolumePro. It reduces rendering time compared with the slab-based method, without degrading image quality, because only the parts contained in the view frustum are processed. (A sketch of the slab decomposition follows this entry.)
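
For context, the following sketch shows the geometry behind both the slab-based approximation and the proposed cropping: the frustum depth range is split into slabs, each slab is rendered with parallel projection and scaled by near/z to mimic perspective, and the crop window can be limited to the frustum cross-section so voxels outside the view are skipped. This is a hypothetical schedule in plain Python, not the VolumePro API.

```python
import numpy as np

def slab_schedule(near, far, fov_deg, n_slabs):
    """Approximate perspective by parallel projection per slab: each slab is
    rendered orthographically, then scaled by near/z so nearer slabs appear
    larger; cropping each sub-volume to the frustum cross-section avoids
    processing voxels that cannot reach the final image. Assumes a symmetric
    square frustum; all names are illustrative."""
    half_tan = np.tan(np.radians(fov_deg) / 2.0)
    z = np.linspace(near, far, n_slabs + 1)        # slab boundaries in depth
    schedule = []
    for z0, z1 in zip(z[:-1], z[1:]):
        zc = 0.5 * (z0 + z1)                       # slab center depth
        schedule.append({
            "depth_range": (z0, z1),
            "scale": near / zc,                    # perspective scaling factor
            "crop_half_width": half_tan * z1,      # frustum extent to crop to
        })
    return schedule

for slab in slab_schedule(near=1.0, far=9.0, fov_deg=60.0, n_slabs=4):
    print(slab)
```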

Development of a PC based Simulator for Excavator Manipulation using Virtual Reality (PC기반의 가상현실을 이용한 굴삭기 조작 시뮬레이터 개발)

  • Lee, Se-Bok;Kim, In-Shik;Cho, Chang-Hee;Kim, Sung-Soo
    • Proceedings of the KSME Conference / 2000.04a / pp.536-541 / 2000
  • A low-cost, PC-based simulator for excavator manipulation has been developed using virtual reality technology. The simulator consists of two joystick input devices, server and client PCs, an excavator kinematics module, and the graphics rendering program Open Inventor. To use two joysticks in the PC Windows environment, multi-threaded programming with the TCP/IP network protocol was used (see the sketch after this entry). To provide a realistic view to the operator, the CAD program Pro/Engineer and a 3D modeller were employed to create the 3D part geometry of the manipulator and the geometry of the virtual environment; these geometries were then transformed and imported into Open Inventor. The simulator will be improved to provide more realistic excavator operator training.

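The input path described above, two joysticks streamed to the simulator through multi-threaded TCP/IP code, can be sketched as follows. The port, wire format, and dummy axis reader are assumptions for illustration; a real setup would read the joystick driver and feed the excavator kinematics module.

```python
import socket
import struct
import threading
import time

HOST, PORT = "127.0.0.1", 5005           # hypothetical server address

def server():
    """Minimal stand-in for the server PC: accept both joystick streams."""
    srv = socket.socket()
    srv.bind((HOST, PORT))
    srv.listen(2)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def handle(conn):
    with conn:
        # One 12-byte record per read; a production reader would buffer.
        while data := conn.recv(12):
            joy_id, x, y = struct.unpack("!Iff", data)
            print(f"joystick {joy_id}: x={x:+.2f} y={y:+.2f}")

def joystick_sender(joy_id, read_axes):
    """One thread per joystick, streaming axis values over TCP at ~50 Hz."""
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(3):               # a few samples for the demo
            sock.sendall(struct.pack("!Iff", joy_id, *read_axes()))
            time.sleep(0.02)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.1)                          # let the server start listening
senders = [threading.Thread(target=joystick_sender,
                            args=(jid, lambda: (0.0, 0.0)))  # dummy axes
           for jid in (0, 1)]
for t in senders: t.start()
for t in senders: t.join()
```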

Performance Analysis on View Synthesis of 360 Videos for Omnidirectional 6DoF in MPEG-I (MPEG-I의 6DoF를 위한 360 비디오 가상시점 합성 성능 분석)

  • Kim, Hyun-Ho;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.2 / pp.273-280 / 2019
  • 360 video is attracting attention as an immersive medium with the spread of VR applications, and the MPEG-I (Immersive) Visual group is actively working on standardization to support immersive media experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, defined as the case that provides 6DoF within a restricted area, viewing the scene from any viewpoint at any position in the space requires rendering the view by synthesizing additional viewpoints, called virtual omnidirectional viewpoints. This paper presents view synthesis results and their analysis, conducted as exploration experiments (EEs) on omnidirectional 6DoF in MPEG-I. Specifically, we report synthesis results under various conditions, such as the distances between the input views and the virtual view to be synthesized, and the number of input views selected from the given set of 360 videos providing omnidirectional 6DoF.

Virtual View Rendering for 2D/3D Freeview Video Generation (2차원/3차원 자유시점 비디오 재생을 위한 가상시점 합성시스템)

  • Min, Dong-Bo;Sohn, Kwang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.22-31 / 2008
  • In this paper, we propose a new approach to efficient multiview stereo matching and virtual view generation, which are key technologies for 3DTV. We propose a semi N-view & N-depth framework to estimate disparity maps efficiently and correctly. The framework reduces redundancy in disparity estimation by using information from neighboring views (a sketch of this idea follows this entry). The proposed method provides the user with 2D/3D free-viewpoint video, and the user can switch between 2D and 3D modes. Experimental results show that the proposed method yields accurate disparity maps and that the synthesized novel views are good enough to provide seamless free-viewpoint video.
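
The redundancy-reduction step, initializing a neighboring view's disparity map from an already-estimated one instead of matching from scratch, can be illustrated with a simple forward warp. This is a minimal stand-in, not the paper's exact framework; occlusion handling is reduced to a disparity z-test.

```python
import numpy as np

def warp_disparity(disp_src, direction=+1):
    """Predict a neighboring view's disparity map by shifting each pixel of
    the source map by its own disparity along the baseline. The neighbor's
    estimation is then only a refinement of this prediction rather than a
    full search. Pixels left at -1 are disoccluded in the target view and
    would be re-estimated or inpainted afterwards."""
    h, w = disp_src.shape
    disp_dst = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            d = disp_src[y, x]
            xt = int(round(x + direction * d))     # target column
            if 0 <= xt < w and d > disp_dst[y, xt]:
                disp_dst[y, xt] = d                # keep the nearest surface
    return disp_dst
```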

Virtual pencil and airbrush rendering algorithm using particle patch (입자 패치 기반 가상 연필 및 에어브러시 가시화 알고리즘)

  • Lee, Hye Rin;Oh, Geon;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.101-109 / 2018
  • Recent improvements in virtual reality and augmented reality technology have enabled many new applications, such as virtual study rooms and virtual architecture rooms. Such virtual worlds require free-hand drawing, for example to write out formulas or to sketch the blueprint of a building. Many viewpoint changes occur as we walk around inside a virtual world; in particular, we often look at the same object from near and from far away. Traditional drawing methods that use a fixed-size image as the drawing unit do not produce acceptable results in this setting, because the strokes become blurred and jagged as the viewing distance varies. We propose a novel method that is robust in environments with frequent magnification and minification, such as virtual reality worlds, and we implemented it on both two-dimensional and three-dimensional devices. Our algorithm produces no jagged or blurred artifacts, regardless of the scaling factor. (A sketch of the scale-adaptive stamping idea follows this entry.)
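
The scale-adaptive idea can be illustrated as below: instead of magnifying a fixed-size brush image, particles are regenerated along the stroke at the current view scale, so the particle count grows with zoom and the stroke stays sharp. The soft-particle model and its parameters are assumptions, not the paper's patch definition.

```python
import numpy as np

def stamp_stroke(canvas, p0, p1, view_scale, base_radius=1.5, density=2.0):
    """Stamp soft particles along a stroke segment, regenerating them at the
    current view scale instead of magnifying a fixed-size brush image.
    canvas: float grayscale image, 1.0 = white paper."""
    p0 = np.asarray(p0, dtype=float) * view_scale
    p1 = np.asarray(p1, dtype=float) * view_scale
    length = np.linalg.norm(p1 - p0)
    n = max(2, int(density * length / base_radius))  # more particles when zoomed in
    h, w = canvas.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for t in np.linspace(0.0, 1.0, n):
        cx, cy = p0 + t * (p1 - p0)                  # particle center
        dist2 = (xx - cx) ** 2 + (yy - cy) ** 2
        alpha = np.exp(-dist2 / (2.0 * base_radius ** 2))  # soft round particle
        canvas *= 1.0 - 0.3 * alpha                  # deposit a bit of ink
    return canvas

paper = np.ones((64, 64))
stamp_stroke(paper, (5, 5), (25, 20), view_scale=2.0)
```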

Directional Hole Filling Method for Virtual View Synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, depth-image-based rendering (DIBR) has been widely used in 3D imaging applications. A virtual view image is created from a known view and its associated depth map to produce a viewpoint that was not captured by a camera. However, disocclusion areas appear because the virtual view is generated by depth-image-based 3D warping. Many hole-filling methods have been proposed to remove these disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting, but they share a weakness: various annoying artifacts appear when filling holes in textured regions. In this paper, we propose a multi-directional extrapolation method that improves hole-filling performance and is effective for holes in front of complex textured backgrounds. The direction-aware filling estimates each hole pixel from neighboring texture pixels along several directions (see the sketch after this entry). Our experiments confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
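
The direction-aware filling can be sketched as follows: each hole pixel scans several directions for the nearest known pixel and prefers background (larger-depth) candidates, so foreground colors do not bleed into the disocclusion. The scoring rule is an assumption for illustration, not the paper's weighting.

```python
import numpy as np

# Candidate fill directions (dx, dy): horizontal, vertical, and diagonal.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def fill_holes(color, depth, hole_mask, max_steps=50):
    """Multi-directional extrapolation as a minimal sketch of direction-aware
    hole filling. depth: larger value = farther (background). Each hole pixel
    takes the color of the best candidate found along the eight directions,
    scoring candidates to favor background pixels found close by."""
    h, w = hole_mask.shape
    out = color.copy()
    for y, x in zip(*np.nonzero(hole_mask)):
        candidates = []
        for dx, dy in DIRS:
            cx, cy = x, y
            for step in range(1, max_steps):
                cx, cy = cx + dx, cy + dy
                if not (0 <= cx < w and 0 <= cy < h):
                    break
                if not hole_mask[cy, cx]:            # first known pixel found
                    candidates.append((depth[cy, cx] / step, color[cy, cx]))
                    break
        if candidates:
            out[y, x] = max(candidates, key=lambda c: c[0])[1]
    return out
```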