• Title/Summary/Keyword: VR 이미지


Development of 4D System Linking AR and 3D Printing Objects for Construction Project (AR과 3D 프린팅 객체를 연계한 건설공사 4D 시스템 구성 연구)

  • Park, Sang Mi;Kim, Hyeon Seung;Moon, Hyoun Seok;Kang, Leen Seok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.41 no.2
    • /
    • pp.181-189
    • /
    • 2021
  • In order to increase the practical usability of virtual reality (VR)-based BIM objects on the construction site, the difference between the virtual image and the real image should be resolved; when the objects are applied to the construction schedule management function, the image gap between virtual completion and actual completion also needs to be reduced. To solve this problem, this study develops a prototype 4D model in which augmented reality (AR) and 3D printing technologies are linked, and verifies the practical usability of a 4D model that links the two technologies. When a schedule simulation is implemented by combining a 3D-printed output with an AR object, it can provide more intuitive, tangible image-based schedule information than a simple VR-based 4D model. This study therefore attempts a methodology and system development for an AR implementation in which subsequent activities are simulated in the 4D model using markers placed on the 3D-printed outputs.
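
As a rough illustration of the schedule-linkage idea described above, the sketch below maps hypothetical AR marker IDs (attached to 3D-printed parts) to construction activities and decides which activities should be overlaid as virtual objects for a given simulation date. The marker IDs, activity names, and dates are invented for illustration; marker detection and AR rendering are not shown.

```python
from datetime import date

# Hypothetical mapping from AR marker IDs (attached to 3D-printed parts)
# to construction activities and their planned start/finish dates.
SCHEDULE = {
    101: ("Foundation", date(2021, 3, 1), date(2021, 3, 20)),
    102: ("Columns",    date(2021, 3, 21), date(2021, 4, 10)),
    103: ("Girders",    date(2021, 4, 11), date(2021, 5, 5)),
}

def activities_to_overlay(detected_marker_ids, simulation_date):
    """Return (activity, status) pairs for the markers detected on the
    printed model, so the AR layer can show subsequent/ongoing work."""
    overlays = []
    for marker_id in detected_marker_ids:
        name, start, finish = SCHEDULE[marker_id]
        if simulation_date < start:
            status = "not started"
        elif simulation_date <= finish:
            status = "in progress"
        else:
            status = "completed"
        overlays.append((name, status))
    return overlays

if __name__ == "__main__":
    # e.g. markers 101 and 102 were detected on the 3D-printed output
    print(activities_to_overlay([101, 102], date(2021, 4, 1)))
```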

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.3
    • /
    • pp.384-390
    • /
    • 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor in realism and liveliness. The recommended reverberation time for a given space is specified by a standard called RT60 (Reverberation Time 60 dB). In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification is limited when trained on color information alone because of the similarity of interior structures, so a deep learning-based depth estimation technique is used to obtain spatial depth information. Based on RT60, 10 scene classes were constructed, and model training and evaluation were conducted. Finally, the proposed SCR + DNet (Scene Classification for Reverb + Depth Net) classifier achieves higher performance than conventional CNN classifiers, with 92.4% accuracy.
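
The following is a minimal PyTorch sketch of a two-stream color + depth scene classifier in the spirit of the SCR + DNet model described above. The layer sizes, fusion scheme, and input resolution are assumptions made for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TwoStreamSceneClassifier(nn.Module):
    """Toy two-stream classifier: one branch for the RGB image, one for a
    predicted depth map, fused before a 10-way (RT60-based) scene label."""
    def __init__(self, num_classes=10):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.color_branch = branch(3)   # RGB input
        self.depth_branch = branch(1)   # predicted depth input
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb, depth):
        feat = torch.cat([self.color_branch(rgb),
                          self.depth_branch(depth)], dim=1)
        return self.head(feat)

if __name__ == "__main__":
    model = TwoStreamSceneClassifier()
    rgb = torch.randn(2, 3, 128, 128)
    depth = torch.randn(2, 1, 128, 128)
    print(model(rgb, depth).shape)  # torch.Size([2, 10])
```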

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.4
    • /
    • pp.1-7
    • /
    • 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results. (A minimal distance-based sketch of this switching rule follows this entry.)

  • PDF
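
Below is a minimal sketch of a distance-based representation-switching rule of the kind the scene-graph entry above describes: the full 3D model up close or during interaction, a billboard at an intermediate range, and an environment map at far range. The threshold values are illustrative placeholders, not values from the paper.

```python
def select_representation(distance, interacting,
                          depth_threshold=5.0, far_threshold=50.0):
    """Toy LOD selection for a mixed 3D/image-based scene graph.
    Thresholds are hypothetical; the paper derives its switching
    distance from when the object's internal depth becomes perceptible."""
    if interacting:
        return "3D model"          # interaction always prefers geometry, if it exists
    if distance < depth_threshold:
        return "3D model"          # internal depth is perceptible at close range
    if distance < far_threshold:
        return "billboard"
    return "environment map"

if __name__ == "__main__":
    for d, act in [(2.0, False), (20.0, False), (100.0, False), (100.0, True)]:
        print(d, act, "->", select_representation(d, act))
```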

A proposed image stitching method for web-based panoramic virtual reality for Hoseo Cyber Museum (호서 사이버 박물관: 웹기반의 파노라마 비디오 가상현실에 대한 효율적인 이미지 스티칭 알고리즘)

  • Khan, Irfan;Soo, Hong Song
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.2
    • /
    • pp.893-898
    • /
    • 2013
  • It has always been a dream to recreate the experience of a particular place. Panoramic virtual reality has been interpreted as a kind of technology for creating virtual environments in which the viewing angle can be maneuvered and the path of view selected within a dynamic scene. In this paper we examine an efficient method for registration and stitching of captured images. Two approaches are studied. First, dynamic programming is used to spot the ideal key points, these points are matched to merge adjacent images together, and image blending is then applied for smooth color transitions. In the second approach, FAST and SURF detection are used to find distinct features in the images, a nearest-neighbor algorithm is used to match corresponding features, and the homography is estimated from the matched key points using RANSAC. The paper also covers automatically choosing (recognizing and comparing) the images to be stitched.
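
The second approach described above (feature detection, nearest-neighbor matching, and RANSAC homography estimation) can be sketched with OpenCV as follows. ORB is used here as a freely available stand-in for the FAST/SURF detectors named in the abstract, and seam blending is omitted, so this is only an approximation of the paper's pipeline; the input file names are hypothetical.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Estimate the homography mapping img2 onto img1 and warp it into a
    shared canvas (naive overlay instead of proper seam blending)."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    # Nearest-neighbour matching of binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust homography estimation with RANSAC
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (2 * w, h))
    canvas[0:h, 0:w] = img1
    return canvas

if __name__ == "__main__":
    left = cv2.imread("left.jpg")     # hypothetical input files
    right = cv2.imread("right.jpg")
    cv2.imwrite("stitched.jpg", stitch_pair(left, right))
```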

A Study on the Photorealism of Digital Architectural Rendering Images (디지털 건축 렌더링 이미지의 포토리얼리즘에 대한 고찰)

  • Kim, Jong Konk
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.2
    • /
    • pp.238-246
    • /
    • 2018
  • The production of hyper-realistic digital rendering images has become possible thanks to radical improvements in recent digital rendering and CGI (Computer-Generated Imagery) software technologies. The photorealism of digital architectural rendering images deserves further study and discussion, in that architectural visualization has become a foundation for other fields that use digital rendering technology, such as the film, game, and VR industries. The principles for achieving photorealism in digital architectural rendering images were re-defined, and their detailed elements were analyzed through a theoretical analysis of previous studies. Four principles were drawn from architectural rendering images produced with newly developed technologies: physically accurate lighting calculation, accurate representation of object geometry, realistic materials and textures, and the characteristics of photography. The sub-elements of these four principles are categorized as either essential or selective for photorealistic imagery, and the randomness of the selective elements could explain the variety of photorealistic architectural rendering styles.

From Broken Visions to Expanded Abstractions (망가진 시선으로부터 확장된 추상까지)

  • Hattler, Max
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.697-712
    • /
    • 2017
  • In recent years, film and animation for cinematic release have embraced stereoscopic vision and the three-dimensional depth it creates for the viewer. The maturation of consumer-level virtual reality (VR) technology simultaneously spurred a wave of media productions set within 3D space, ranging from computer games to pornographic videos to the Academy Award-nominated animated VR short film Pearl. All of these works rely on stereoscopic fusion through stereopsis, that is, the perception of depth produced by the brain from left and right images with the amount of binocular parallax that corresponds to our eyes. They aim to emulate normal human vision. Within more experimental practices, however, a fully rendered 3D space might not always be desirable. In my own abstract animation work, I tend to favour 2D flatness and the relative obfuscation of spatial relations it affords, as this underlines the visual abstraction I am pursuing. Not being able to immediately understand what is in front and what is behind can strengthen the desired effects. In 2015, Jeffrey Shaw challenged me to create a stereoscopic work for Animamix Biennale 2015-16, which he co-curated. This prompted me to question how stereoscopy, rather than hyper-defining space within three dimensions, might itself be used to achieve a confusion of spatial perception, and in turn, how abstract and experimental moving image practices can benefit from stereoscopy to open up new visual and narrative opportunities, if used in ways that break with, or go beyond, stereoscopic fusion. Noteworthy works which exemplify a range of non-traditional, expanded approaches to binocular vision are discussed below, followed by a brief introduction of the stereoscopic animation loop III=III which I created for Animamix Biennale. The techniques employed in these works might serve as a toolkit for artists interested in exploring a more experimental, expanded engagement with stereoscopy.

VR, AR Simulation and 3D Printing for Shoulder and Elbow Practice (VR, AR 시뮬레이션 및 3D Printing을 활용한 어깨와 팔꿈치 수술실습)

  • Lim, Wonbong;Moon, Young Lae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.12
    • /
    • pp.175-179
    • /
    • 2016
  • Recent advances in medical imaging technology have made possible surgical simulation that is helpful for diagnosis, operation planning, and education. Improvements in medical imaging have led to the availability of high-definition images and three-dimensional (3D) visualization, which allows a better understanding in the surgical and educational fields. The real human field of view is stereoscopic; therefore, with 2D images alone, a stereoscopic reconstruction process in the surgeon's head is necessary. To reduce this burden, 3D images have been used. 3D images enhance visualization and give the surgeon a significantly shorter time to judgment in complex situations. Based on 3D image data sets, virtual medical simulations such as virtual endoscopy, surgical planning, and real-time interaction have become possible. This article describes the principles and recent applications of newer imaging techniques, with special attention directed towards medical 3D reconstruction techniques. Recent advances in CT, MR, and other imaging modalities have resulted in exciting new solutions and possibilities for shoulder imaging. In particular, three-dimensional (3D) images derived from medical devices provide advanced information. This presentation describes the principles and potential applications of 3D imaging techniques, simulation, and printing in shoulder and elbow practice.

The Study for Securing Reproducibility of Experimental Method in Papers of Spatial Image Evaluation - Focusing on the Papers of Domestic Journals - (공간 이미지 평가 연구에서 실험방식의 재현성 확보 방안 - 국내 학회지 게재 논문을 중심으로 -)

  • Mun, Jae-Eun;Kim, Jong-Ha
    • Korean Institute of Interior Design Journal
    • /
    • v.26 no.4
    • /
    • pp.93-102
    • /
    • 2017
  • The study of space through image evaluation has reached the stage of measuring emotion, with the development of IT technology and the advent of the VR/AR era. Many studies have tried to evaluate space and measure emotion in an objective form by providing spatial images in various forms, such as photographs, sketches, and CG. For these studies to be used as objective data with logical relevance, it is necessary to describe in detail the method of collecting the data used in the experiment, the characteristics of the test procedure, and the method of analysis. This study is a basic study for constructing a database by systematically organizing the attributes of subjects, the experimental methods, and the experimental procedures. From the viewpoint of securing the objectivity and reproducibility of a paper, we analyzed the spatial image evaluation process focusing on 1) the evaluation subjects, 2) the experimental method, and 3) the analysis standard. It is necessary to examine whether the object of evaluation, the form of the image, and the method of providing it meet the purpose of the study. In addition, the size and order of the images, the viewing time, and the interval (break time) should be varied according to gender and according to whether a group or individual test method is used.

IFX : FEM/CFD visualization system for Desktop-Immersive environment collaborative work (IFX : 데스크탑 - 몰입 환경 간 협업을 위한 FEM/CFD 가시화 시스템)

  • Yun, Hyun-Joo;Wundrak, Stefan;Jo, Hyun-Jei
    • 한국HCI학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.661-666
    • /
    • 2007
  • In recent product development processes, designers, developers, and decision makers increasingly adopt virtual reality technology when reviewing FEM and CFD simulation results. An immersive virtual reality environment helps analyze the simulation results of a model accurately and effectively. By presenting images at or beyond the actual scale of the data, and in great detail, the immersive VR environment can give users visual satisfaction through a level of realism that cannot be experienced in a desktop environment alone. However, compared to the desktop environment, it suffers from lower resolution, the inconvenience of wearing stereo glasses, an HMD (Head Mounted Display), or data gloves in a dark room, and cybersickness, characterized by nausea, visual fatigue, and disorientation, all of which make prolonged use difficult. Data review in the desktop environment allows high-resolution image analysis, but the weaker sense of depth lowers the realism of the reviewed data. To compensate for these problems, this paper presents an FEM/CFD visualization system that enables collaboration between the desktop and virtual reality environments. The system does not merely visualize the analysis data in the immersive VR environment; it also provides the same 3D interface structure as the desktop system. Therefore, the same post-processing operations for analyzing the simulation results can be carried out in real time across the systems used by remote users connected over a network. (A minimal sketch of such command synchronization follows this entry.)

  • PDF
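
The collaborative aspect described above, with the same post-processing operations propagating in real time between desktop and immersive viewers, might be illustrated by broadcasting each operation as a small message to every connected peer. The message format, command names, and host names below are purely hypothetical; the paper does not specify its protocol.

```python
import json
import socket

def broadcast_command(peers, command, **params):
    """Send one post-processing command (e.g. a cut-plane or iso-surface
    update) to every connected peer so that desktop and immersive viewers
    stay synchronized. Message format is illustrative only."""
    message = (json.dumps({"command": command, "params": params}) + "\n").encode()
    for peer in peers:
        peer.sendall(message)

if __name__ == "__main__":
    # Hypothetical peers: one desktop viewer, one immersive (CAVE/HMD) viewer
    peers = [socket.create_connection(("desktop.local", 9000)),
             socket.create_connection(("cave.local", 9000))]
    broadcast_command(peers, "set_cut_plane", normal=[0, 0, 1], offset=0.25)
    broadcast_command(peers, "set_isosurface", field="pressure", value=101325.0)
```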

Omnidirectional Environmental Projection Mapping with Single Projector and Single Spherical Mirror (단일 프로젝터와 구형 거울을 활용한 전 방향프로젝션 시스템)

  • Kim, Bumki;Lee, Jungjin;Kim, Younghui;Jeong, Seunghwa;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.1
    • /
    • pp.1-11
    • /
    • 2015
  • Researchers have developed virtual reality environments to provide audiences with more visually immersive experiences than previously possible. One of the most popular solutions for building an immersive VR space is the multi-projection technique. However, using multiple projectors requires a large space, a high cost, and accurate geometric calibration among the projectors. This paper presents a novel omnidirectional projection system with a single projector and a single spherical mirror. We designed a simple and intuitive calibration system to define the shape of the environment and the relative position of the mirror and projector. For successful image projection, our optimized omnidirectional image generation step solves the image distortion produced by the spherical mirror and a calibration problem caused by unknown parameters such as the shape of the environment and the relative position between the mirror and the projector. Additionally, focus correction is performed to improve the quality of the projection. The experimental results show that our method can generate an optimized image, given a normal panoramic image, for omnidirectional projection in a rectangular space.
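
A small geometric sketch of the core optical step in such a system, tracing a projector ray to the spherical mirror and reflecting it, is given below. It covers only the ray-sphere intersection and the mirror reflection r = d - 2(d·n)n; the calibration and distortion-correction pipeline of the paper is not reproduced, and the setup values are hypothetical.

```python
import numpy as np

def reflect_off_sphere(origin, direction, center, radius):
    """Trace a projector ray to a spherical mirror and return the hit point
    and the reflected direction, or None if the ray misses the mirror."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # ray misses the mirror
    t = (-b - np.sqrt(disc)) / 2.0        # nearest intersection
    hit = origin + t * d
    n = (hit - center) / radius           # outward surface normal
    reflected = d - 2.0 * np.dot(d, n) * n
    return hit, reflected

if __name__ == "__main__":
    # Hypothetical setup: projector at the origin, 15 cm mirror 1 m ahead
    hit, r = reflect_off_sphere(np.array([0.0, 0.0, 0.0]),
                                np.array([0.0, 0.0, 1.0]),
                                np.array([0.0, 0.0, 1.0]), 0.15)
    print(hit, r)   # head-on ray reflects straight back
```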