• Title/Summary/Keyword: Virtual view

Disparity Refinement near the Object Boundaries for Virtual-View Quality Enhancement

  • Lee, Gyu-cheol;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.10 no.5 / pp.2189-2196 / 2015
  • A stereo matching algorithm is usually used to obtain a disparity map from a pair of images. However, the disparity map obtained by stereo matching contains a lot of noise and many error regions. In this paper, we propose a virtual-view synthesis algorithm that uses disparity refinement to improve the quality of the synthesized image. First, error regions are detected by checking the consistency of the disparity maps. Then, motion information is acquired by applying optical flow to the texture component of the image, which improves the performance of the optical flow, and the occlusion region is found from this motion information. The refined disparity map is finally used to synthesize the virtual-view image. Experimental results show that the proposed algorithm improves the quality of the generated virtual view.
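
A minimal sketch of the consistency check mentioned above, assuming a standard left-right cross-check on the two disparity maps of a stereo pair (the paper's exact criterion and threshold are not given here):

```python
import numpy as np

def consistency_error_mask(disp_left, disp_right, threshold=1.0):
    """Flag disparity error regions with a left-right consistency check.

    disp_left[y, x]  : disparity of the left view (pixel x corresponds to x - d in the right view)
    disp_right[y, x] : disparity of the right view
    Returns a boolean mask that is True where the two maps disagree.
    """
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Position each left pixel lands on when warped into the right view.
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)

    # A consistent pixel should see (almost) the same disparity in both maps.
    diff = np.abs(disp_left - disp_right[ys, x_right])
    return diff > threshold
```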

Virtual View Generation by a New Hole Filling Algorithm

  • Ko, Min Soo;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / v.9 no.3 / pp.1023-1033 / 2014
  • In this paper, an improved hole-filling algorithm that includes a boundary-noise-removal pre-process and can be used for arbitrary virtual-view synthesis is proposed. Boundary noise occurs because of the boundary mismatch between the depth and texture images during the 3D warping process, and it usually causes unusual defects in the generated virtual view. The common hole cannot be recovered by using only the given original view as a reference, and most conventional algorithms generate unnatural views that include constrained parts of the texture. To remove the boundary noise, we first find occlusion regions and expand them into the common-hole region of the synthesized view. Then, we fill the common hole using a spiral weighted average algorithm and a gradient searching algorithm: the spiral weighted average keeps the boundary of each object well by using depth information, while the gradient searching preserves details, and the proposed method combines the strong points of both. A probability mask is also used to reduce the flickering defect that appears around the filled common-hole region. Experimental results show that the proposed algorithm performs much better than conventional algorithms.
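
A minimal sketch of a depth-aware spiral weighted average fill as described above; the spiral scan is approximated by distance-sorted offsets, and the weighting rule and depth convention (smaller value = farther background) are illustrative assumptions rather than the paper's exact formulation (the gradient-searching stage and probability mask are not reproduced):

```python
import numpy as np

def spiral_offsets(radius):
    """Neighbour offsets ordered by distance from the centre,
    a simple stand-in for a true spiral scan."""
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
    return sorted(offs, key=lambda o: o[0] ** 2 + o[1] ** 2)

def fill_hole_pixel(color, depth, hole, y, x, radius=7):
    """Weighted average of nearby non-hole pixels around (y, x).

    Closer neighbours count more, and background neighbours (assumed to have
    smaller depth values) are favoured so that foreground colour does not
    bleed into the common hole and object boundaries are kept.
    """
    h, w = hole.shape
    acc, wsum = np.zeros(3), 0.0
    for dy, dx in spiral_offsets(radius):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and not hole[ny, nx]:
            weight = (1.0 / np.hypot(dy, dx)) * (1.0 / (1.0 + float(depth[ny, nx])))
            acc += weight * color[ny, nx]
            wsum += weight
    return acc / wsum if wsum > 0 else np.zeros(3)
```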

Interactive 3D-View Image Service on Web and Mobile Phone (웹 및 모바일 폰에서의 인터랙티브 3D-View 이미지 서비스 기술)

  • Jeon, Kyeong-Won;Kwon, Yong-Moo;Jo, Sang-Woo;Ki, Jeong-Seok
    • 한국HCI학회:학술대회논문집 / 2007.02a / pp.518-523 / 2007
  • This paper presents a web service and a mobile-phone service for research on a virtual URS (Ubiquitous Robotic Space). We model the URS, locate the robot in the virtual URS on the web and on a mobile phone, and control the robot's view with the mobile phone. The paper addresses the concept of the virtual URS, introduces interaction between the robot in the virtual URS and a human through the web and mobile-phone services, and then presents a case of the mobile-phone service.

Fast Measurement of Eyebox and Field of View (FOV) of Virtual and Augmented Reality Devices Using the Ray Trajectories Extending from Positions on Virtual Image

  • Hong, Hyungki
    • Current Optics and Photonics / v.4 no.4 / pp.336-344 / 2020
  • Exact optical characterization of virtual and augmented reality devices using conventional luminance-measurement methods is a time-consuming process. A new measurement method is proposed to estimate, in a relatively short time, the boundary of the ray trajectories emitted from a specific position on a virtual image. It is assumed that the virtual image can be modeled as being formed in front of the eye and seen through an optical aperture (field stop) that limits the field of view. Circular and rectangular virtual images were investigated. From the estimated ray boundary, optical characteristics such as the viewing direction and the three-dimensional range within which an eye can observe the specified positions of the virtual image were derived. The proposed method provides data that help avoid the unnecessary measurements required by the previously reported method, so it can complement that method and reduce the overall measurement time for optical characteristics.
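
A minimal geometric sketch of the stated model (a virtual image seen through a field stop): an eye position can observe a given point on the virtual image only if the ray connecting them passes through the aperture. The planes, distances, and circular aperture below are illustrative assumptions, not the paper's setup or measured values:

```python
import numpy as np

# Assumed geometry (millimetres): the virtual image lies in the plane z = Z_IMG,
# and a circular aperture (field stop) of radius R_AP is centred on the optical
# axis in the plane z = Z_AP; the eye sits at z < Z_AP looking toward +z.
Z_IMG, Z_AP, R_AP = 2000.0, 20.0, 5.0

def can_see(eye, img_point):
    """True if the ray from `eye` to a virtual-image point clears the aperture."""
    eye, img_point = np.asarray(eye, float), np.asarray(img_point, float)
    t = (Z_AP - eye[2]) / (img_point[2] - eye[2])   # ray parameter at the aperture plane
    hit = eye + t * (img_point - eye)               # intersection with z = Z_AP
    return np.hypot(hit[0], hit[1]) <= R_AP

# The eyebox for an image point is the set of eye positions for which can_see()
# is True; sampling candidate positions on a grid traces its boundary.
print(can_see([0.0, 0.0, 0.0], [100.0, 0.0, Z_IMG]))   # -> True
```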

Virtual Walking Tour System (가상 도보 여행 시스템)

  • Kim, Han-Seob;Lee, Jieun
    • Journal of Digital Contents Society / v.19 no.4 / pp.605-613 / 2018
  • In this paper, we propose a system for walking around the world with virtual reality technology. Although virtual reality users are interested in virtual travel content, conventional virtual travel content offers only a limited space to experience and lacks interactivity. To address these drawbacks of limited realism and limited space in existing content, the system builds a virtual space from Google Street View images. Users get a realistic experience from real street images and can travel the vast area of the world covered by Google Street View. In addition, a virtual reality headset and a treadmill are used so that the user can actually walk in the virtual space, which maximizes interactivity and immersion. We expect this system to contribute to the leisure activities of virtual reality users by allowing natural walking trips from famous tourist spots to mountain roads and alleys.

WALK-THROUGH VIEW FOR FTV WITH CIRCULAR CAMERA SETUP

  • Uemori, Takeshi;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.727-731 / 2009
  • In this paper, we propose a method for generating a free-viewpoint image from multi-viewpoint images taken by cameras arranged in a circle. We previously proposed a free-viewpoint generation method based on the Ray-Space method; however, that method cannot generate a walk-through view seen from a virtual viewpoint located among the objects. The method proposed in this paper generates such a view. It first obtains the positions of the objects using the shape-from-silhouette method and then selects the cameras that acquired the rays needed to generate the virtual image. A free-viewpoint image can be generated by collecting the rays that pass through the focal point of the virtual camera; when a requested ray is not available, it must be interpolated from neighboring rays, so we estimate the depth of the objects from the virtual camera and interpolate the ray information to generate the image. In experiments with virtual sequences captured at every 6 degrees, we placed the virtual camera at a position of the user's choice and successfully generated the image from that viewpoint.
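
A minimal sketch of the ray-interpolation idea: given the 3D surface point at the estimated depth along a requested virtual ray, sample the two real cameras whose recorded rays are angularly closest and blend them. The camera-selection rule, the linear blending, and the 3x4 projection matrices `cam_projections` are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def interpolate_ray(point3d, virtual_center, cam_centers, cam_projections, images):
    """Colour for the virtual ray through `point3d` (the object surface point at
    the estimated depth), blended from the two real cameras whose viewing
    directions are closest to the requested ray."""
    ray_dir = point3d - virtual_center
    ray_dir /= np.linalg.norm(ray_dir)

    # Angle between the requested ray and the ray each real camera recorded.
    angles = []
    for c in cam_centers:
        d = point3d - c
        d /= np.linalg.norm(d)
        angles.append(np.arccos(np.clip(ray_dir @ d, -1.0, 1.0)))
    order = np.argsort(angles)[:2]                      # two closest cameras

    # Blend the two samples, weighting the closer-in-angle camera more.
    w = np.array([angles[order[1]], angles[order[0]]])
    w = w / w.sum() if w.sum() > 0 else np.array([1.0, 0.0])
    colour = np.zeros(3)
    for weight, idx in zip(w, order):
        u, v, s = cam_projections[idx] @ np.append(point3d, 1.0)  # 3x4 projection
        colour += weight * images[idx][int(v / s), int(u / s)]
    return colour
```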

A Study on 3D View Design of Images and Voices Integration for Effective Information Transfer (효과적 정보전달을 위한 영상정보의 3D 뷰 및 음성정보와의 융합 연구)

  • Shin, C.H.;Lee, J.S.
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.1B / pp.35-41 / 2010
  • In this paper, we propose a 3D view design scheme that arranges 2D information in a 3D virtual space together with a flexible interface and voice information. The scheme lets the user view and interact with the 2D images in the 3D virtual space at any time and from any viewpoint, and voice information can easily be attached. It is this simple and efficient arrangement of image and voice information in the 3D virtual space that improves information transfer.

Virtual Viewpoint Image Synthesis Algorithm using Multi-view Geometry (다시점 카메라 모델의 기하학적 특성을 이용한 가상시점 영상 생성 기법)

  • Kim, Tae-June;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.12C / pp.1154-1166 / 2009
  • In this paper, we propose algorithms for generating high-quality virtual intermediate views both on and off the baseline. The proposed algorithm uses depth information together with a 3D warping technique to generate the virtual views: the real 3D coordinates are computed from the depth information and the geometric characteristics of the cameras, and the computed 3D coordinates are projected onto the 2D image plane at an arbitrary camera position, producing the 2D virtual-view image. Experiments show that the virtual view generated on the baseline by the proposed algorithm improves PSNR by at least 0.5 dB, and that occluded regions are covered more efficiently in the virtual views generated off the baseline.
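
A minimal sketch of this back-project-and-reproject (3D warping) step for one pixel, assuming pinhole cameras with intrinsics K, rotation R, and translation t in the convention x_cam = R·X_world + t; the paper's camera model and notation may differ:

```python
import numpy as np

def warp_pixel(u, v, depth, K_ref, R_ref, t_ref, K_virt, R_virt, t_virt):
    """Back-project one reference pixel to 3D using its depth, then project it
    into the virtual camera (convention: x_cam = R @ X_world + t)."""
    # Reference pixel + depth -> 3D point in the reference camera frame.
    p_ref = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    # Reference camera frame -> world frame.
    X_world = R_ref.T @ (p_ref - t_ref)
    # World frame -> virtual camera frame -> image plane.
    p_virt = K_virt @ (R_virt @ X_world + t_virt)
    return p_virt[:2] / p_virt[2]        # (u', v') in the virtual view
```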

Real-time Virtual View Synthesis using Virtual Viewpoint Disparity Estimation and Convergence Check (가상 변이맵 탐색과 수렴 조건 판단을 이용한 실시간 가상시점 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.1A / pp.57-63 / 2012
  • In this paper, we propose a real-time view interpolation method that uses virtual-viewpoint disparity estimation and a convergence check. For real-time processing, we estimate a disparity map at the virtual viewpoint directly from the stereo images using the belief propagation method; this requires only one disparity map, whereas conventional methods need two. In the view-synthesis part, we warp pixels from the reference images to the virtual-viewpoint image using the disparity map at the virtual viewpoint. For real-time acceleration, we use high-speed GPU parallel programming with CUDA. As a result, we can interpolate virtual-viewpoint images in real time.
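
A minimal sketch of the synthesis step, using a disparity map defined at the virtual viewpoint to backward-warp and blend the two reference views. The sign convention (a virtual pixel at column x samples the left view at x + α·d and the right view at x − (1 − α)·d, with α the virtual camera's position between left and right) and the distance-based blending are assumptions; the belief-propagation estimation, convergence check, and CUDA acceleration are not reproduced:

```python
import numpy as np

def synthesize_virtual_view(left, right, disp_virtual, alpha=0.5):
    """Backward-warp both reference images with the virtual-viewpoint disparity
    map and blend them.  `alpha` in [0, 1] places the virtual camera between
    the left (0) and right (1) cameras."""
    h, w = disp_virtual.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Source column of each virtual pixel in the two reference views.
    x_left = np.clip(np.round(xs + alpha * disp_virtual).astype(int), 0, w - 1)
    x_right = np.clip(np.round(xs - (1.0 - alpha) * disp_virtual).astype(int), 0, w - 1)

    # Blend the two warped views, weighting the nearer camera more.
    blended = (1.0 - alpha) * left[ys, x_left] + alpha * right[ys, x_right]
    return blended.astype(left.dtype)
```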

Hole Filling Method Considering Directionality for Virtual View Point Synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, depth-image-based rendering (DIBR) has been widely used in the 3D imaging field. A virtual-view image is created from a known view and its associated depth map in order to produce a viewpoint that was not captured by a camera. However, disocclusion areas occur because the virtual viewpoint is created by depth-image-based 3D warping. Many hole-filling methods have been proposed to remove such disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting, but these methods introduce various annoying artifacts when filling holes in textured regions. To solve this problem, this paper proposes a multi-directional extrapolation method that improves hole-filling performance and is effective when the hole lies over a background region with complex texture. The directional hole-filling method estimates each hole pixel from the texture pixel values of its neighbors along the considered directions. Experiments confirm that the proposed method fills the hole regions generated by virtual-view synthesis more effectively.
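
A minimal sketch of directional hole filling: each common-hole pixel is estimated from the nearest valid texture pixel found along several search directions, weighted by distance. The direction set and the 1/distance weighting are illustrative assumptions; the paper's multi-directional extrapolation and its handling of complex texture may differ:

```python
import numpy as np

# Assumed set of search directions (the 8 neighbour directions).
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def fill_hole_directional(color, hole, max_steps=50):
    """Fill each hole pixel by extrapolating along several directions: the
    nearest valid pixel in each direction contributes with weight 1/distance."""
    h, w = hole.shape
    out = color.astype(float).copy()
    for y, x in zip(*np.nonzero(hole)):
        acc, wsum = np.zeros(3), 0.0
        for dy, dx in DIRECTIONS:
            for step in range(1, max_steps + 1):
                ny, nx = y + dy * step, x + dx * step
                if not (0 <= ny < h and 0 <= nx < w):
                    break
                if not hole[ny, nx]:
                    weight = 1.0 / step
                    acc += weight * out[ny, nx]
                    wsum += weight
                    break
        if wsum > 0:
            out[y, x] = acc / wsum
    return out.astype(color.dtype)
```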