• Title/Summary/Keyword: Image-Based Rendering


Development of an Interactive Virtual Reality Service based on 360 degree VR Image (360도 파노라마 영상 기반 대화형 가상현실 서비스 구축)

  • Kang, Byoung-Gil;Ryu, Seuc-Ho;Lee, Wan-Bok
    • Journal of Digital Convergence / v.15 no.11 / pp.463-470 / 2017
  • Currently, virtual reality content based on VR images is in the spotlight because it can be created and utilized easily. However, because VR images lack interaction, their applications and usability are limited. To overcome this problem, we propose a new method in which a 360-degree panorama image and a game engine are combined to build a high-resolution interactive VR service that runs in real time. In particular, since the background image, represented as a panorama, is pre-generated through heavy offline rendering, it can be used to provide an immersive VR service with a relatively small amount of run-time computation, even on a low-performance device. To show the effectiveness of the proposed method, an interactive game set in a virtual zoo environment was implemented, illustrating that the method improves user interaction and the sense of immersion.
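The run-time trick this abstract describes, sampling a pre-rendered panorama instead of re-rendering the scene, reduces to a direction-to-pixel lookup. A minimal Python sketch, assuming an equirectangular 360-degree image (the paper does not specify the projection or any function names):

```python
import math

def panorama_uv(direction, width, height):
    """Map a 3D view direction to pixel coordinates in an
    equirectangular 360-degree panorama (hypothetical helper;
    the engine-side lookup used in the paper is not specified)."""
    x, y, z = direction
    r = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / r, y / r, z / r
    # Longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    lon = math.atan2(x, z)
    lat = math.asin(y)
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v
```

Because the lookup is a few trigonometric operations per pixel, the heavy rendering cost stays offline, which is the point made in the abstract.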

Development of the integrated management simulation system for the target correction (표적 수정이 가능한 사용자 개입 통합 관리 모의 시스템 개발)

  • Park, Woosung;Oh, TaeWon;Park, TaeHyun;Lee, YongWon;Kim, Kibum;Kwon, Kijeong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.7 / pp.600-609 / 2017
  • We designed an integrated target management system that enables the final target to be selected manually or automatically from the seeker's sensor image. The integrated system was developed as two parts: the air vehicle system and the ground system. The air vehicle system simulates the motion dynamics and the sensor image of the air vehicle, and the ground system is composed of a target template image module and a ground control center module. The flight maneuver of the air vehicle is based on pseudo six-degree-of-freedom equations of motion and proportional navigation guidance. The sensor image module was developed using a known infrared (IR) image rendering method and was verified by comparing its rendered images to those of a commercial software package. The ground control center module includes a user interface that can display information flexibly to meet user needs. Finally, we verified the integrated system with a simulated target impact mission, confirming the final target change and the shoot-down result of the user's intervention.
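The proportional navigation guidance named in this abstract commands lateral acceleration proportional to the line-of-sight (LOS) rotation rate. A hedged sketch with an assumed navigation gain (the paper does not give its constants):

```python
import math

def los_rate(bearing_prev, bearing_now, dt):
    """Line-of-sight rate from two successive bearings (rad), with the
    difference wrapped into (-pi, pi] to survive the 0/2*pi seam."""
    d = (bearing_now - bearing_prev + math.pi) % (2 * math.pi) - math.pi
    return d / dt

def pn_accel(nav_gain, closing_speed, lam_dot):
    """Proportional navigation: a_cmd = N * Vc * lambda_dot.
    The gain N is typically chosen in the 3-5 range."""
    return nav_gain * closing_speed * lam_dot
```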

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
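The switching rule described in this abstract, a 3D model up close or during interaction and image-based representations farther out, can be sketched as a simple selector. The thresholds and the intermediate band below are illustrative assumptions, not values from the paper:

```python
def select_representation(distance, depth_threshold, interacting, has_3d=True):
    """Pick a representation per the described switching rule: full 3D
    when interacting or when the object's internal depth would be
    perceptible, then billboard, then environment map at far range.
    The 10x band separating billboard from environment map is an
    assumed placeholder."""
    if interacting and has_3d:
        return "3d_model"
    if distance < depth_threshold:
        return "3d_model"
    if distance < 10 * depth_threshold:   # assumed intermediate band
        return "billboard"
    return "environment_map"
```

A scene-graph traversal would call such a selector per object per frame, swapping node types as the camera moves.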


Enhanced High Contrast Image Rendering Method Using Visual Properties for Sharpness Perception (시각 선명도 감각 특성을 이용한 개선된 고명암 대비 영상 렌더링 기법)

  • Lee, Geun-Young;Lee, Sung-Hak;Kwon, Hyuk-Ju;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.8 / pp.669-679 / 2013
  • When an image is converted from HDR (high dynamic range) to LDR (low dynamic range), a tone mapping process is an essential component. Many TMOs (tone mapping operators) have been motivated by human vision, which has a lower physical luminance range than a real scene. The representative human-vision property that motivates TMOs is local adaptation. However, TMOs ultimately compress image information such as contrast and saturation, and this compression causes defects in image quality. In this paper, in order to compensate for the degradation caused by TMOs, a visual-acuity-based edge stop function is proposed to apply this property of human vision to base-detail separation. In addition, using the CSF (contrast sensitivity function), which represents the relationship among spatial frequency, contrast sensitivity, and luminance, a sharpness filter is designed and adaptively applied to the detail layer according to the surround luminance.
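Base-detail separation with an edge stop function works like a range-weighted (bilateral-style) filter: pixels across a strong edge receive near-zero weight, so edges stay in the base layer and halos are avoided. A 1-D Python sketch, with a plain Gaussian standing in for the paper's acuity-based edge stop function (sigma and radius are illustrative):

```python
import math

def edge_stop(diff, sigma):
    """Gaussian edge-stopping weight: large intensity differences
    (edges) get low weight, so they are kept out of the smoothing."""
    return math.exp(-(diff * diff) / (2 * sigma * sigma))

def base_detail_1d(signal, sigma_r, radius=2):
    """Split a 1-D signal into base + detail using an edge-stopping
    range filter; detail = signal - base is what a sharpness filter
    would then be applied to."""
    base = []
    for i, center in enumerate(signal):
        wsum, vsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = edge_stop(signal[j] - center, sigma_r)
            wsum += w
            vsum += w * signal[j]
        base.append(vsum / wsum)
    detail = [s - b for s, b in zip(signal, base)]
    return base, detail
```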

Neural Relighting using Specular Highlight Map (반사 하이라이트 맵을 이용한 뉴럴 재조명)

  • Lee, Yeonkyeong;Go, Hyunsung;Lee, Jinwoo;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.87-97 / 2020
  • In this paper, we propose a novel neural relighting method that infers a relit rendering image based on a user-guided specular highlight map. The proposed network utilizes as a backbone a pre-trained neural renderer learned from rendered images of a 3D scene under various lighting conditions. We jointly optimize a 3D light position and its associated relit image by back-propagation, so that the difference between the base image and the relit image matches the user-guided specular highlight map. The proposed method has the advantage of explicitly inferring the 3D light position while providing the 2D screen-space interface that artists prefer. The performance of the proposed network was measured under conditions for which ground truth can be established; the average error of the light position estimates is 0.11 with respect to the normalized 3D scene size.
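The optimization this abstract describes, differentiating through a renderer to recover a 3D light position from a highlight target, can be sketched with a toy analytic renderer and finite-difference gradients standing in for the pre-trained network and its back-propagation. Everything here, including `highlight_center`, is a hypothetical stand-in, not the paper's model:

```python
def highlight_center(light):
    """Toy forward renderer: for a flat mirror at z = 0 viewed from
    straight above, the specular highlight sits directly under the
    light (a drastic simplification of a neural renderer)."""
    lx, ly, lz = light
    return (lx, ly)

def loss(light, target_xy):
    """Squared screen-space distance between the rendered highlight
    and the user-guided highlight position."""
    hx, hy = highlight_center(light)
    tx, ty = target_xy
    return (hx - tx) ** 2 + (hy - ty) ** 2

def fit_light(target_xy, light, lr=0.5, steps=200, eps=1e-4):
    """Gradient descent on the 3D light position, with forward
    finite differences in place of back-propagation."""
    light = list(light)
    for _ in range(steps):
        for k in range(3):
            bumped = list(light)
            bumped[k] += eps
            g = (loss(bumped, target_xy) - loss(light, target_xy)) / eps
            light[k] -= lr * g
    return light
```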

3D Modeling of Lacus Mortis Pit Crater with Presumed Interior Tube Structure

  • Hong, Ik-Seon;Yi, Yu;Yu, Jaehyung;Haruyama, Junichi
    • Journal of Astronomy and Space Sciences / v.32 no.2 / pp.113-120 / 2015
  • When humans explore the Moon, lunar caves will be an ideal base, providing shelter from the hazards of radiation, meteorite impacts, and extreme diurnal temperature differences. In order to ascertain the existence of caves on the Moon, it is best to visit the Moon in person. The Google Lunar X Prize (GLXP) competition recently started to attempt lunar exploration missions. One of the competing groups plans to land on a pit of Lacus Mortis and determine whether a cave exists inside this pit. In this pit, there is a ramp from the entrance down to the interior, which enables a rover to approach the inner region of the pit. In this study, under the assumption that a cave exists in this pit, a 3D model was developed based on optical image data. Since this model simulates the actual terrain, renderings of the model agree well with the image data. Furthermore, 3D printing of this model will enable more rigorous investigations and could also be used to publicize lunar exploration missions with ease.

Arbitrary View Images Generation Using Panoramic-Based Image Morphing For Large-Scale Scenes (대규모 환경에서 파노라믹 기반 영상 모핑을 이용한 임의 시점의 영상 생성)

  • Jeong, Jang-Hyun;Joo, Myung-Ho;Kang, Hang-Bong
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.185-188 / 2005
  • In image-based rendering, various modeling techniques have been studied that reconstruct 3D imagery using only images projected onto a plane. Light Field Rendering and the Lumigraph, which use a 4D plenoptic function, are techniques that generate images at new viewpoints from multiple images. These methods allow users to navigate a virtual world and can construct a 3D environment from only 2D information. Concentric Mosaics, Plenoptic Stitching, and Sea of Images are techniques that use light fields to let users navigate various environments. In particular, Takahashi presented research on navigation in large-scale environments such as city streets: panoramic images are acquired along a single path, and images at new viewpoints are then generated with a light-field method. However, in a large-scale environment the range of paths a user can travel is very wide, and panoramic images must be captured densely along the path, so the amount of data becomes large and image acquisition is difficult. Because of this drawback, the network load can increase when reference images are transmitted. In this paper, based on Takahashi's method, we propose a method for rendering images at arbitrary viewpoints using panoramic image morphing. Panoramic images are acquired at relatively large intervals, intermediate images are generated by panoramic image morphing, and arbitrary views are then generated using Takahashi's method. With a small number of panoramic images, we were able to reconstruct comparatively good images at arbitrary viewpoints.
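The intermediate-view step, morphing between two panoramas captured at wide intervals, is linear interpolation of corresponding features plus a cross-dissolve, with the extra care that panorama azimuths wrap at the 0/2π seam. A sketch under the assumption that feature correspondences are given (the paper's actual warp is not specified):

```python
import math

def lerp_angle(a, b, t):
    """Interpolate two panorama azimuths along the shorter arc, so
    features that straddle the 0/2*pi seam morph correctly."""
    d = (b - a + math.pi) % (2 * math.pi) - math.pi
    return (a + t * d) % (2 * math.pi)

def cross_dissolve(i_a, i_b, t):
    """Blend the intensities of the two warped panoramas at the same
    morph parameter t in [0, 1]."""
    return (1 - t) * i_a + t * i_b
```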


Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.147-153 / 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information of a model object is usually necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, we do not require any three-dimensional model, only images of the model object at some locations, to render views according to the motion of the video camera, which is calculated by an SFM algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view interpolation algorithm is applied, rather than a 3D ray-tracing method, to obtain views of the model at viewpoints different from those of the model views. In order to obtain novel views in a way that agrees with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, on the basis of 3D information recovered from the video images and model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, limitations of the method and subjects for future research are discussed.
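The geometric core of this approach, weak-perspective projection for the SFM step and linear blending of matched model views, can be sketched as follows (matrix layout and parameter names are illustrative assumptions):

```python
def weak_perspective(point3d, scale, rot2x3, trans):
    """Weak-perspective (scaled-orthographic) projection: rotate,
    drop depth, scale uniformly, then translate in the image plane."""
    x, y, z = point3d
    u = scale * (rot2x3[0][0] * x + rot2x3[0][1] * y + rot2x3[0][2] * z) + trans[0]
    v = scale * (rot2x3[1][0] * x + rot2x3[1][1] * y + rot2x3[1][2] * z) + trans[1]
    return u, v

def interpolate_views(pts_a, pts_b, t):
    """Linear view interpolation of matched image points: under
    (scaled-) orthographic projection, blending matched points gives
    an approximately valid in-between view with no 3D ray tracing."""
    return [((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
            for a, b in zip(pts_a, pts_b)]
```

The interpolation parameter t would be driven per frame by the motion parameters recovered from the video, as the abstract describes.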


The Improvement of Meshwarp Algorithm for Rotational Pose Transformation of a Front Facial Image (정면 얼굴 영상의 회전 포즈 변형을 위한 메쉬워프 알고리즘의 개선)

  • Kim, Young-Won;Phan, Hung The;Oh, Seung-Taek;Jun, Byung-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.425-428 / 2002
  • In this paper, we propose a new image-based rendering (IBR) technique that can perform rotational transformation from a single frontal face image. To produce horizontal rotation without a 3D geometric model, a set of standard meshes is prepared for the frontal, half-profile (left and right), and profile (left and right) face images of a specific person. For an arbitrary person to be transformed, only the mesh for the frontal image is created; the remaining side-view reference meshes are generated automatically from the standard mesh set. To produce a convincing rotation effect, we propose an invertible meshwarp algorithm, an improvement of the conventional two-pass meshwarp algorithm that allows the overlap and reversal of control points that can occur during rotational transformation. Using this algorithm, we performed rotational pose transformations on frontal face images of various people of different ages and genders and obtained fairly natural results.
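The core of a meshwarp is piecewise-linear resampling between control-point positions, and the distinguishing property of the invertible variant described above is that destination control points are allowed to cross during rotation. A 1-D sketch of that resampling (the actual two-pass 2-D algorithm and mesh format are not reproduced here):

```python
def warp_scanline(src, ctrl_src, ctrl_dst):
    """Resample a scanline so pixels at ctrl_src land at ctrl_dst,
    linearly between control points. Segments whose destination
    points are reversed (folded over) are simply traversed backwards,
    which is the tolerance the invertible variant adds."""
    out = [0.0] * len(src)
    for i in range(len(ctrl_src) - 1):
        s0, s1 = ctrl_src[i], ctrl_src[i + 1]
        d0, d1 = ctrl_dst[i], ctrl_dst[i + 1]
        for x in range(min(d0, d1), max(d0, d1) + 1):
            u = 0.0 if d1 == d0 else (x - d0) / (d1 - d0)
            sx = s0 + u * (s1 - s0)        # source position for pixel x
            j0 = int(sx)
            frac = sx - j0
            j1 = min(j0 + 1, len(src) - 1)
            out[x] = (1 - frac) * src[j0] + frac * src[j1]
    return out
```

A two-pass meshwarp applies this kind of resampling along rows and then along columns of the image.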


Video-based Stained Glass

  • Kang, Dongwann;Lee, Taemin;Shin, Yong-Hyeon;Seo, Sanghyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2345-2358 / 2022
  • This paper presents a method to generate stained-glass animation from video input. The method initially segments an input video volume into several regions, considered as fragments of glass, by mean-shift segmentation. However, the segmentation predominantly results in over-segmentation, producing many tiny segments in highly textured areas. In practice, assembling very tiny or very large glass fragments is avoided to ensure structural stability in stained-glass manufacturing. Therefore, we use low-frequency components in the segmentation to prevent over-segmentation, and we subdivide segmented regions that are oversized. The subdivision must be coherent between adjacent frames to prevent temporal artefacts such as flickering and the shower-door effect. To subdivide regions coherently in time, we obtain a panoramic image from the segmented regions of the input frames, subdivide it using a weighted Voronoi diagram, and then project the subdivided regions back onto the input frames. To render a stained-glass fragment for each coherent region, we select the best-matching glass fragment for the region from a dataset of real stained-glass fragment images and transfer its color and texture to the region. Finally, applying lead came at the boundaries of the regions in each frame yields a temporally coherent stained-glass animation.
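The subdivision step above uses a weighted Voronoi diagram, whose per-pixel decision is simply "nearest site under a weighted distance". A small sketch using an additively weighted (power-diagram-style) distance; the site layout and weighting scheme are illustrative, as the paper does not specify them:

```python
def weighted_voronoi_label(point, sites):
    """Assign a pixel to the nearest weighted Voronoi site, using a
    power-style distance d^2 - w so larger weights claim larger cells.
    Ties keep the first site encountered."""
    px, py = point
    best, best_d = None, None
    for idx, (sx, sy, w) in enumerate(sites):
        d = (px - sx) ** 2 + (py - sy) ** 2 - w
        if best_d is None or d < best_d:
            best, best_d = idx, d
    return best
```

Running this over every pixel of an oversized region partitions it into cells whose sizes can be steered by the weights, which is how fragment sizes are kept in a plausible range.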