• Title/Summary/Keyword: Virtual Two-View Method


VIRTUAL VIEW RENDERING USING MULTIPLE STEREO IMAGES

  • Ham, Bum-Sub;Min, Dong-Bo;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.233-237 / 2009
  • This paper presents a new approach that addresses the quality degradation of a synthesized view when a virtual camera moves forward. Generally, an interpolation technique using only the two neighboring views is used to synthesize a virtual view. Because object size increases as the virtual camera moves forward, most methods handle this purely by interpolation, which produces a degraded, blurred view. We prevent the synthesized view from being blurred by using more cameras in a multiview camera configuration; that is, we apply the super-resolution concept, which reconstructs a high-resolution image from several low-resolution images. Data fusion is performed by geometric warping using the disparities of the multiple images, followed by a deblurring operation. Experimental results show that image quality is further improved, with reduced blur, in comparison with the interpolation method.

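The fusion step described in the abstract can be illustrated with a small NumPy sketch: each reference view is forward-warped toward the virtual viewpoint with its disparity map, the warped samples are averaged, and an unsharp-mask pass stands in for the deblurring stage. The horizontal-shift camera model, the array shapes, and the deblur choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def warp_to_virtual(view, disparity, baseline_ratio):
    """Forward-warp one grayscale view toward the virtual camera.
    Each pixel is shifted horizontally by its disparity scaled by the
    fraction of the baseline covered by the virtual camera."""
    h, w = view.shape
    warped = np.zeros((h, w), dtype=np.float64)
    hits = np.zeros((h, w), dtype=np.float64)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(np.round(xs + baseline_ratio * disparity[y]).astype(int), 0, w - 1)
        np.add.at(warped[y], tx, view[y].astype(np.float64))
        np.add.at(hits[y], tx, 1.0)
    return warped, hits

def fuse_views(views, disparities, ratios):
    """Accumulate all warped references (data fusion), then apply a simple
    unsharp mask as a stand-in for the deblur operation."""
    acc = np.zeros_like(views[0], dtype=np.float64)
    cnt = np.zeros(views[0].shape, dtype=np.float64)
    for v, d, r in zip(views, disparities, ratios):
        w_img, w_cnt = warp_to_virtual(v, d, r)
        acc += w_img
        cnt += w_cnt
    fused = acc / np.maximum(cnt, 1e-6)
    sharp = fused + 0.5 * (fused - gaussian_filter(fused, sigma=1.0))
    return np.clip(sharp, 0, 255)
```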

Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society / v.13 no.5 / pp.43-54 / 2013
  • With the steady development of 3D application techniques, increasingly realistic visuals are available and are used in many applications such as games. In particular, by enabling interaction with 3D objects in virtual environments, 3D graphics have driven substantial progress in augmented reality. This study proposes a 3D user interface for controlling objects in 3D space through a virtual two-view method using only one camera. A homography matrix encoding the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates observed by the single camera, the homography matrix, and the camera projection matrix. This yields more accurate and faster 3D information. Because one camera is used instead of two, the approach reduces the amount of computation, supports real-time processing, and is economical.
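
A hedged sketch of the virtual two-view idea in the abstract above: the homography maps a hand point seen by the single real camera into a synthetic second view, and the real/virtual pair is triangulated with two projection matrices. The matrices `H`, `P1`, and `P2` are placeholders here (the paper derives them from camera calibration and two arbitrary camera positions), and the homography is assumed to be valid for the plane the hand lies on.

```python
import numpy as np
import cv2

def reconstruct_hand_point(pt_2d, H, P1, P2):
    """Lift a single 2D hand coordinate to 3D via a virtual second view.
    pt_2d: (x, y) pixel in the real camera image.
    H: 3x3 homography from the real view to the virtual view.
    P1, P2: 3x4 projection matrices of the real and virtual cameras."""
    # Create the virtual correspondence by mapping the point through H.
    p = np.array([pt_2d[0], pt_2d[1], 1.0])
    q = H @ p
    q = q[:2] / q[2]
    # Triangulate the real/virtual pair into a homogeneous 3D point.
    pts1 = np.array([[pt_2d[0]], [pt_2d[1]]], dtype=np.float64)
    pts2 = np.array([[q[0]], [q[1]]], dtype=np.float64)
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (X[:3] / X[3]).ravel()  # (X, Y, Z) in the world frame
```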

Real-time Virtual View Synthesis using Virtual Viewpoint Disparity Estimation and Convergence Check (가상 변이맵 탐색과 수렴 조건 판단을 이용한 실시간 가상시점 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.1A / pp.57-63 / 2012
  • In this paper, we propose a real-time view interpolation method using virtual viewpoint disparity estimation and a convergence check. For real-time processing, we estimate a disparity map at the virtual viewpoint directly from the stereo images using belief propagation; this requires only one disparity map, whereas conventional methods need two. In the view-synthesis stage, we warp pixels from the reference images to the virtual viewpoint image using the disparity map at the virtual viewpoint. For further acceleration, we use GPU parallel programming with CUDA. As a result, virtual viewpoint images can be interpolated in real time.
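
The view-synthesis stage of the abstract above can be sketched in a few lines of NumPy: with a single disparity map estimated at the virtual viewpoint, both reference images are backward-warped and blended. The belief-propagation estimation and CUDA acceleration from the paper are omitted, and the disparity sign convention for a rectified pair is an assumption.

```python
import numpy as np

def synthesize_virtual_view(left, right, disp_virtual, alpha=0.5):
    """Backward-warp both references into the virtual view using the
    disparity map estimated *at the virtual viewpoint* (one map, not two).
    left, right: (H, W) images; disp_virtual: (H, W) disparities in pixels;
    alpha: normalized virtual camera position between left (0) and right (1)."""
    h, w = disp_virtual.shape
    xs = np.tile(np.arange(w, dtype=np.float64), (h, 1))
    # A virtual pixel with disparity d is sampled at x + alpha*d in the left
    # image and at x - (1 - alpha)*d in the right image.
    x_left = np.clip(np.round(xs + alpha * disp_virtual), 0, w - 1).astype(int)
    x_right = np.clip(np.round(xs - (1.0 - alpha) * disp_virtual), 0, w - 1).astype(int)
    rows = np.arange(h)[:, None]
    from_left = left[rows, x_left]
    from_right = right[rows, x_right]
    # Blend the two warped samples, weighting the nearer reference more heavily.
    return (1.0 - alpha) * from_left + alpha * from_right
```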

RAY-SPACE INTERPOLATION BY WARPING DISPARITY MAPS

  • Mori, Yuji;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.583-587 / 2009
  • In this paper we propose a new method of Depth-Image-Based Rendering (DIBR) for Free-viewpoint TV (FTV). In the proposed method, virtual viewpoint images are rendered by 3D warping instead of estimating view-dependent depth, since depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur with view-dependent depth estimation, such as holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane, which produce artifacts. In this paper, these problems are solved by reconstructing disparity information at the virtual camera position from the two neighboring real cameras. In the experiments, high-quality arbitrary-viewpoint images were obtained.

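A rough NumPy sketch of the disparity-warping stage described above: the two real cameras' disparity maps are forward-warped to the virtual camera position and merged, keeping the nearest surface where they compete. Hole filling, depth-discontinuity handling, and the final texture warping from the paper are omitted, and the rectified-pair sign convention is assumed.

```python
import numpy as np

def warp_disparity_to_virtual(disp_left, disp_right, alpha=0.5):
    """Reconstruct a disparity map at the virtual camera position by
    forward-warping both real cameras' disparity maps.
    alpha: virtual camera position between left (0) and right (1)."""
    h, w = disp_left.shape
    warped_l = np.full((h, w), -1.0)
    warped_r = np.full((h, w), -1.0)
    xs = np.arange(w)
    for y in range(h):
        # Left camera: a pixel at x lands at x - alpha * d in the virtual view.
        xl = np.clip(np.round(xs - alpha * disp_left[y]).astype(int), 0, w - 1)
        np.maximum.at(warped_l[y], xl, disp_left[y])
        # Right camera: a pixel at x lands at x + (1 - alpha) * d.
        xr = np.clip(np.round(xs + (1.0 - alpha) * disp_right[y]).astype(int), 0, w - 1)
        np.maximum.at(warped_r[y], xr, disp_right[y])
    # Merge: keep the larger disparity (nearer surface). Pixels still at -1
    # are holes that a full implementation would inpaint from neighbours.
    return np.maximum(warped_l, warped_r)
```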

A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference / 2005.11a / pp.549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system takes 3-view video from a 3-view camera and depth input from a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique, and the two maps are fused to obtain a more reliable depth map. The resulting depth map is used not only to insert a virtual object into the scene by depth keying but also to synthesize virtual viewpoint images. Preliminary test results are given to show the functionality of the proposed technique.

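The abstract above only states that the depth-camera map and the stereo-matching map are fused; one plausible, clearly hypothetical fusion rule is a confidence-weighted blend with a fallback to the sensor where the two maps disagree strongly:

```python
import numpy as np

def fuse_depth_maps(depth_sensor, depth_stereo, stereo_conf, max_diff=0.1):
    """Fuse the depth-camera map with the stereo-matching map for the
    centre view. Where the two agree, blend them weighted by the stereo
    confidence; where they disagree strongly, fall back to the sensor.
    All inputs are (H, W) arrays; max_diff is a relative tolerance."""
    agree = np.abs(depth_sensor - depth_stereo) < max_diff * depth_sensor
    w = np.clip(stereo_conf, 0.0, 1.0)
    blended = w * depth_stereo + (1.0 - w) * depth_sensor
    return np.where(agree, blended, depth_sensor)
```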

Semiautomatic 3D Virtual Fish Modeling based on 2D Texture

  • Nakajima, Masayuki;Hagiwara, Hisaya;Kong, Wai-Ming;Takahashi, Hiroki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.18-21 / 1996
  • In the field of Virtual Reality, many studies have been reported, especially on generating virtual creatures on computer systems. In this paper we propose an algorithm to automatically generate 3D fish models from 2D images printed in illustrated books, photographs, or hand drawings. First, the 2D fish image is captured with an image scanner. Next, the fish is separated from the background and segmented into several parts, such as the body, anal fin, dorsal fin, pectoral fin, and ventral fin, using the proposed "Active Balloon model". The user then chooses a front-view model and a top-view model from six samples each. A 3D model is automatically generated from the separated body, the fins, and the two chosen view models. The number of patches is reduced, without affecting the accuracy of the generated 3D model, to lower the cost of texture mapping. In this way, a wide variety of 3D fish models can be obtained.


Virtual pencil and airbrush rendering algorithm using particle patch (입자 패치 기반 가상 연필 및 에어브러시 가시화 알고리즘)

  • Lee, Hye Rin;Oh, Geon;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.101-109 / 2018
  • Recent improvements in virtual and augmented reality technologies have enabled new applications such as virtual study rooms and virtual architecture rooms. Such virtual worlds require freehand drawing, for example writing out formulas or sketching blueprints of buildings. By nature, many viewpoint changes occur as we walk around inside a virtual world, and we often view the same object from near and far distances. Traditional drawing methods that use a fixed-size image as the drawing unit do not produce acceptable results, because they yield blurred and jagged strokes as the viewing distance varies. We propose a novel method that is robust in environments with frequent magnification and minification, such as virtual reality worlds. We implemented our algorithm for both two-dimensional and three-dimensional devices, and it produces neither jagged nor blurred artifacts regardless of the scaling factor.
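
As a loose illustration of stroke rendering by stamping particle footprints, the sketch below draws an airbrush stroke by accumulating soft footprints along a segment; scaling the footprint radius with the current view distance is the kind of adaptation the abstract argues for. The Gaussian footprint and the accumulation rule are illustrative choices, not the paper's particle-patch algorithm.

```python
import numpy as np

def stamp_airbrush_stroke(canvas, p0, p1, radius, opacity=0.2, spacing=0.5):
    """Render an airbrush stroke on a float canvas in [0, 1] by stamping
    soft particle footprints along the segment p0 -> p1. In a view-dependent
    setting, `radius` would be scaled by the current viewing distance."""
    h, w = canvas.shape
    length = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    steps = max(int(length / (radius * spacing)), 1)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(steps + 1):
        cx = p0[0] + (p1[0] - p0[0]) * i / steps
        cy = p0[1] + (p1[1] - p0[1]) * i / steps
        footprint = opacity * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                     / (2.0 * (radius / 2.0) ** 2))
        canvas[:] = np.minimum(canvas + footprint, 1.0)  # accumulate ink
    return canvas
```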

The Interactive Modeling Method of Virtual City Scene Based on Building Codes

  • Ding, Wei-long;Zhu, Xiao-jie;Xu, Bin;Xu, Yan;Chen, Kai;Wan, Zang-xin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.74-89 / 2021
  • Driven by the higher-level requirements of urban planning and management and the recent development of the "digital earth" and "digital city", there is an urgent need to establish protocols for constructing three-dimensional digital city models. However, problems remain in current 3D city modeling technology, such as insufficient model precision, unoptimized scenes, and failure to consider the constraints of building codes. To address these points, this paper proposes a method for interactively modeling a virtual city scene based on building codes. First, constraint functions are set up to restrict models so that they adhere to the building codes, and an improved directional bounding-box technique is used to prevent geometric objects from intersecting in the virtual scene. A three-dimensional model invocation strategy converts two-dimensional layouts into a three-dimensional urban scene, and a Leap Motion device is used to interactively place the 3D models. Finally, the scene is designed and constructed using Unity3D. Experiments show that this method not only simulates urban scenes that strictly adhere to building codes but also provides information and decision-making support for urban planning and management.
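
The constraint functions and intersection test mentioned in the abstract can be illustrated with a simple, hypothetical example: a setback-and-height rule standing in for a building code, and an axis-aligned footprint overlap test standing in for the paper's improved directional bounding-box technique. The numeric limits are placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Building:
    x: float       # footprint centre within the plot (m)
    y: float
    width: float   # footprint size (m)
    depth: float
    height: float  # total height (m)

def satisfies_codes(b: Building, plot_width: float, plot_depth: float,
                    setback: float = 3.0, max_height: float = 60.0) -> bool:
    """Constraint function in the spirit of the paper: the footprint must
    stay inside the plot minus the required setback, and the height must
    not exceed the limit."""
    half_w, half_d = b.width / 2.0, b.depth / 2.0
    inside = (setback + half_w <= b.x <= plot_width - setback - half_w and
              setback + half_d <= b.y <= plot_depth - setback - half_d)
    return inside and b.height <= max_height

def footprints_overlap(a: Building, b: Building) -> bool:
    """Axis-aligned stand-in for the directional bounding-box overlap test."""
    return (abs(a.x - b.x) < (a.width + b.width) / 2.0 and
            abs(a.y - b.y) < (a.depth + b.depth) / 2.0)
```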

Confidence Map based Multi-view Image Generation Method from Stereoscopic Images (양안식 영상을 이용한 신뢰도 기반의 다시점 영상 생성 방법)

  • Kim, Do Young;Ho, Yo-Sung
    • Smart Media Journal / v.2 no.4 / pp.27-33 / 2013
  • A multi-view video system provides both a realistic 3D sensation and free-view navigation, but its data volume is too large to transmit, so only two or three view images are sent and intermediate views are generated using depth information. In this paper, we propose a high-quality multi-view image generation method from stereoscopic images. Since stereo matching does not provide accurate disparity values for all pixels, especially in occluded areas, we first propose an occlusion handling method that uses background pixels. We also apply joint bilateral filtering to enhance the disparity map at object boundaries, since boundary errors significantly affect the quality of synthesized images. Finally, virtual view images at intermediate positions are generated using a confidence map to reduce bad-pixel and hole errors. Experimental results show that the proposed method performs better than the conventional method.

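A minimal (unoptimized) sketch of the joint bilateral filtering step mentioned above: the disparity map is smoothed with weights whose range term comes from the guidance (colour) image rather than from the disparity itself, so object boundaries in the image are preserved. Parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_filter(disparity, guide, radius=5, sigma_s=3.0, sigma_r=10.0):
    """Smooth the disparity map while respecting edges in the guide image.
    disparity: (H, W) float map; guide: (H, W) grayscale guidance image."""
    h, w = disparity.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad_d = np.pad(disparity, radius, mode='edge').astype(np.float64)
    pad_g = np.pad(guide, radius, mode='edge').astype(np.float64)
    for y in range(h):
        for x in range(w):
            patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight comes from the *guide* image, not the disparity.
            rng = np.exp(-((patch_g - pad_g[y + radius, x + radius]) ** 2)
                         / (2.0 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * patch_d) / np.sum(wgt)
    return out
```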

Development of a Real-time Sensor-based Virtual Imaging System (센서기반 실시간 가상이미징 시스템의 구현)

  • 남승진;오주현;박성춘
    • Journal of Broadcast Engineering / v.8 no.1 / pp.63-71 / 2003
  • In sports programs, real-time virtual imaging systems have attracted attention as a new technology that can compose information such as team logos, scores, and distances directly onto the playing ground, compensating for the limitations of a general character generator. To synchronize graphics with camera movements, two methods are generally used: attaching sensors to the camera's moving axes, or analyzing the camera video itself. The KBS Technical Research Institute developed the real-time sensor-based virtual imaging system 'VIVA', which uses four sensors on the pan, tilt, zoom, and focus axes and controls a virtual graphics camera in three-dimensional coordinates in real time. In this paper, we introduce the VIVA system and its technology. For accurate camera tracking, we calculate the viewpoint movement caused by zooming from optical principal-point variation data, and we account for field-of-view variation caused not only by zoom but also by focus. Because the system is built on a three-dimensional graphics environment, many useful 3D graphics techniques such as keyframe animation can be used. VIVA was successfully used in both the Busan Asian Games and the 2002 presidential election coverage, and we confirmed that it can be used not only in the field but also in studio programs where the camera works at closer range.
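
As a rough sketch of how pan/tilt/zoom sensor readings can drive a virtual graphics camera of the kind the system above uses, the code below builds intrinsic and extrinsic matrices from the readings. The rotation conventions and the zoom-dependent nodal-point shift are assumptions for illustration, not VIVA's actual calibration.

```python
import numpy as np

def virtual_camera_matrices(pan_deg, tilt_deg, focal_px, cx, cy, nodal_shift=0.0):
    """Build intrinsic (K) and extrinsic ([R|t]) matrices for the virtual
    graphics camera from sensor readings: pan/tilt in degrees, focal length
    in pixels (from a zoom/focus lookup), (cx, cy) principal point, and
    nodal_shift modelling the viewpoint moving along the optical axis with zoom."""
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    # Pan about the vertical axis, then tilt about the camera x axis.
    Rp = np.array([[np.cos(p), 0.0, np.sin(p)],
                   [0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rt = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t),  np.cos(t)]])
    R = Rt @ Rp
    # Zoom-dependent shift of the viewpoint along the optical axis
    # (the paper compensates for this using principal-point variation data).
    t_vec = np.array([0.0, 0.0, nodal_shift])
    K = np.array([[focal_px, 0.0, cx],
                  [0.0, focal_px, cy],
                  [0.0, 0.0, 1.0]])
    extrinsic = np.hstack([R, t_vec.reshape(3, 1)])
    return K, extrinsic  # project world points with K @ extrinsic
```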