• Title/Summary/Keyword: VR images

Search Results: 135

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.384-390 / 2020
  • The reverberation applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The recommended reverberation time for a given space is specified by the RT60 (Reverberation Time 60 dB) standard. In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification is limited when training on color information alone because of the structural similarity of interior spaces, so a deep-learning-based depth estimation technique is used to incorporate spatial depth information. Based on RT60, 10 scene classes were constructed, and model training and evaluation were conducted. The proposed SCR+DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, outperforming conventional CNN classifiers.
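
As a rough illustration of the dual-branch color-plus-depth idea described in this abstract, the following PyTorch sketch fuses an RGB branch and a predicted-depth branch before a 10-class scene head. The layer sizes, fusion scheme, and input resolution are assumptions for illustration, not the authors' SCR+DNet architecture.

```python
import torch
import torch.nn as nn

class ColorDepthSceneClassifier(nn.Module):
    """Dual-branch CNN: one branch for the color image, one for the
    predicted depth map, fused before a 10-class scene classifier
    (hypothetical layer sizes, not the paper's SCR+DNet)."""
    def __init__(self, num_classes=10):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.color_branch = branch(3)   # RGB input
        self.depth_branch = branch(1)   # single-channel predicted depth
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),   # 10 RT60-based scene classes
        )

    def forward(self, rgb, depth):
        feats = torch.cat([self.color_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(feats)

# Example forward pass with dummy tensors
model = ColorDepthSceneClassifier()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 224, 224))
print(logits.shape)   # torch.Size([1, 10])
```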

VR Visualization of Casting Flow Simulation (주물 유동해석의 VR 가시화)

  • Park, Ji-Young;Suh, Ji-Hyun;Kim, Sung-Hee;Kim, Myoung-Hee
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.813-816 / 2008
  • In this research we present a method to reconstruct a casting flow simulation result as a 3D model and visualize it on a VR display. First, numerical analysis of heat flow is performed using existing commercial CAE simulation software; in this process the shape of the original design model is approximated by a regular rectangular grid. The filling ratio and temperature of each voxel are recorded over a predefined number of steps, from the pouring of the molten metal into the mold until the mold is entirely filled. Next, we reconstruct the casting as voxels using the simulation result as input. The color of each voxel is determined by mapping its temperature and filling ratio to colors at each step as the flow proceeds. The reconstructed model is visualized on the Projection Table, a horizontal-type VR display that provides active stereoscopic images.
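
A minimal sketch of the kind of per-voxel color mapping described above: temperature drives a blue-to-red ramp and the filling ratio drives opacity. The temperature range and the color ramp are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def voxel_color(temperature_c, fill_ratio, t_min=600.0, t_max=1500.0):
    """Map a voxel's temperature to a blue-to-red ramp and use its filling
    ratio as opacity (RGBA). The temperature range in degrees C is an
    assumed example, not a value from the paper."""
    t = float(np.clip((temperature_c - t_min) / (t_max - t_min), 0.0, 1.0))
    r, g, b = t, 0.2 * (1.0 - abs(2.0 * t - 1.0)), 1.0 - t
    alpha = float(np.clip(fill_ratio, 0.0, 1.0))   # empty voxels stay transparent
    return np.array([r, g, b, alpha])

# Example: a partially filled voxel of molten metal at 1200 degrees C
print(voxel_color(1200.0, 0.6))
```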

The Influence of VR Color Image for Color Psychotherapy (색채심리치료를 위한 VR색채영상의 영향)

  • Hong, Gee Yun;Lee, OnSeok
    • The Journal of the Korea Contents Association / v.17 no.10 / pp.376-384 / 2017
  • Color therapy is a method of diagnosing and treating psychological problems using the unique vibration and frequency of each color. The perception of color is strongly subjective; a favorite color, for example, can have a personally stabilizing effect. Because color also influences emotion in various ways, brain waves can be recorded to evaluate these emotional responses. In this research, before viewing two kinds of VR color images, red and blue, participants responded to a stress questionnaire (BEPSI-K) via a mobile terminal. We then analyzed in real time four emotional indicators measurable from brain waves and assessed each user's emotional state. In addition, the stress questionnaire was used to quantify the degree of emotion, and a comparative analysis was made of how color images presented in 3D virtual reality affect changes in emotion.

Full-field Distortion Measurement of Virtual-reality Devices Using Camera Calibration and Probe Rotation (카메라 교정 및 측정부 회전을 이용한 가상현실 기기의 전역 왜곡 측정법)

  • Yang, Dong-Geun;Kang, Pilseong;Ghim, Young-Sik
    • Korean Journal of Optics and Photonics / v.30 no.6 / pp.237-242 / 2019
  • A compact virtual-reality (VR) device with a wide field of view provides users with a more realistic experience and a comfortable fit, but VR lens distortion is inevitable, and the amount of distortion must be measured for correction. In this paper, we propose two different full-field distortion-measurement methods that take the characteristics of VR devices into account. The first measures distortion from multiple images based on camera calibration, a well-known technique for correcting camera-lens distortion. The other measures lens distortion at multiple measurement points by rotating a camera. The proposed methods are verified by measuring the lens distortion of Google Cardboard, as a representative commercial VR device, and comparing the measurement results with a simulation using the nominal values.
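
For the first, calibration-based approach, the sketch below shows how radial and tangential distortion coefficients can be estimated from multiple checkerboard captures with OpenCV's standard calibration routine. The checkerboard geometry and the image folder are assumptions for illustration, not details from the paper.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry (assumed; the paper does not specify the target)
pattern = (9, 6)  # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("vr_lens_captures/*.png"):  # hypothetical capture folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# calibrateCamera returns the intrinsics and the distortion coefficients
# (k1, k2, p1, p2, k3) of the combined camera-plus-VR-lens system.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("Distortion coefficients:", dist.ravel())
```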

MPEG-DASH based 3D Point Cloud Content Configuration Method (MPEG-DASH 기반 3차원 포인트 클라우드 콘텐츠 구성 방안)

  • Kim, Doohwan;Im, Jiheon;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.24 no.4 / pp.660-669 / 2019
  • Recently, with the development of 3D scanning devices and multi-dimensional camera arrays, research continues on techniques for handling three-dimensional data in application fields such as AR (Augmented Reality)/VR (Virtual Reality) and autonomous driving. In particular, in the AR/VR field, content that expresses 3D video as point data has appeared, but it requires a larger amount of data than conventional 2D images. Therefore, in order to serve 3D point cloud content to users, technological developments such as highly efficient encoding/decoding, storage, and transfer are required. In this paper, a V-PCC bitstream created with the V-PCC encoder proposed by the MPEG-I (MPEG-Immersive) V-PCC (Video-based Point Cloud Compression) group is organized into segments as defined by the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard. In addition, in order to provide the user with information about the 3D coordinate system, a depth information parameter is additionally defined in the signaling message. We then design a verification platform to verify the technology proposed in this paper and confirm it in terms of the proposed algorithm.
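
To make the segment-based configuration concrete, the sketch below assembles a minimal MPEG-DASH MPD for a V-PCC geometry track using Python's standard XML library. The `vpcc:depthRange` attribute and its namespace are hypothetical placeholders for the depth-information signaling parameter described in the abstract, not the paper's or the standard's actual syntax.

```python
import xml.etree.ElementTree as ET

# Minimal DASH MPD skeleton for a segmented V-PCC stream.
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "xmlns:vpcc": "urn:example:vpcc",      # placeholder namespace
    "type": "static",
    "mediaPresentationDuration": "PT30S",
})
period = ET.SubElement(mpd, "Period")
aset = ET.SubElement(period, "AdaptationSet", {
    "mimeType": "video/mp4",               # V-PCC substreams are carried as video tracks
    "vpcc:depthRange": "0 1024",           # hypothetical 3D coordinate-range hint
})
rep = ET.SubElement(aset, "Representation", {"id": "vpcc-geometry", "bandwidth": "5000000"})
ET.SubElement(rep, "SegmentTemplate", {
    "media": "geometry_$Number$.m4s",
    "initialization": "geometry_init.mp4",
    "duration": "2000", "timescale": "1000",   # 2-second segments
})
print(ET.tostring(mpd, encoding="unicode"))
```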

Object VR-based 2.5D Virtual Textile Wearing System : Viewpoint Vector Estimation and Textile Texture Mapping (오브젝트 VR 기반 2.5D 가상 직물 착의 시스템 : 시점 벡터 추정 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan;Kwak, No-Yoon
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.19-26 / 2008
  • This paper presents an object VR (Virtual Reality)-based 2.5D virtual textile wearing system using viewpoint vector estimation and textile texture mapping, which allows a user to view the virtually dressed object from a 360 degree viewpoint. The proposed system can virtually apply a new textile pattern selected by the user to the clothing region segmented from multiview 2D images of a clothing model captured for object VR, and display the virtual wearing appearance three-dimensionally from any viewpoint around the object. Regardless of the color or intensity of the model clothes, the system can change the textile pattern while preserving the illumination and shading properties of the selected clothing region, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual items or entire outfits. The system offers high practicality and an easy-to-use interface, as it allows real-time processing in various digital environments, creates comparatively natural and realistic virtual wearing results, and supports semi-automatic processing that reduces manual work to a minimum. By simulating the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments, the system can motivate the creative activity of designers and, by helping purchasers with decision-making, promote B2B or B2C e-commerce.
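
The shading-preserving retexturing described above can be approximated by modulating the new textile swatch with the luminance of the original clothing region. The sketch below illustrates that idea under assumed inputs; it is not the authors' exact mapping.

```python
import numpy as np

def retexture_region(clothes_rgb, mask, textile_rgb):
    """Swap the textile pattern inside a segmented clothing region while
    keeping the original shading (simple luminance modulation, assumed).

    clothes_rgb : HxWx3 uint8 image of the dressed model
    mask        : HxW boolean array marking the clothing region
    textile_rgb : small hxwx3 uint8 swatch of the new textile pattern
    """
    # Per-pixel luminance of the original clothes acts as a shading map,
    # normalized so the average brightness inside the region is ~1.0.
    luminance = clothes_rgb.mean(axis=2, keepdims=True) / 255.0
    shading = luminance / max(float(luminance[mask].mean()), 1e-6)

    # Tile the swatch over the whole image, then modulate it by the shading map.
    h, w = clothes_rgb.shape[:2]
    reps = (h // textile_rgb.shape[0] + 1, w // textile_rgb.shape[1] + 1, 1)
    tiled = np.tile(textile_rgb, reps)[:h, :w].astype(np.float32)

    out = clothes_rgb.astype(np.float32)
    out[mask] = np.clip(tiled[mask] * shading[mask], 0, 255)
    return out.astype(np.uint8)
```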

Study on the Emotional Response of VR Contents Based on Photorealism: Focusing on 360 Product Image (실사 기반 VR 콘텐츠의 감성 반응 연구: 360 제품 이미지를 중심으로)

  • Sim, Hyun-Jun;Noh, Yeon-Sook
    • Science of Emotion and Sensibility / v.23 no.2 / pp.75-88 / 2020
  • With the development of information technology, various methods for efficient information delivery have emerged as the delivery of product information has moved from offline and 2D to online and 3D. These attempts not only deliver product information in an online space where no physical product exists but also play a crucial role in diversifying and revitalizing online shopping by providing virtual experiences to consumers. A 360 product image is a form of photorealistic VR in which the subject is rotated and photographed so that it can be viewed from all sides in three dimensions. It has attracted considerable attention because it can deliver richer information about an object than conventional still photography. A 360 product image is influenced by various production factors, and accordingly the responses of users differ; however, because the technology is relatively new, related research is still insufficient. Therefore, this study aimed to identify how user responses vary with the type of product and the number of source images used in producing a 360 product image. To this end, representative products from the product groups frequently found in online shopping malls were selected to produce 360 product images, and an experiment was conducted with 75 users. Emotional responses to the 360 product images were analyzed through an experimental questionnaire based on the semantic classification method. The results of this study can be used as basic data for understanding consumer sensitivity to 360 product images.

A Study on Distance Estimation in Virtual Space According to Change of Resolution of Static and Dynamic Image (가상현실공간에서 정적 및 동적 이미지의 해상도 변화에 따른 거리추정에 관한 연구)

  • Ryu, Jae-Ho
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.109-119 / 2011
  • Virtual reality (VR) technology has been used for architectural presentation and as a simulation tool in industry. High immersion and intuitive visual information are great merits for design evaluation and environmental simulation in virtual environments. However, the distortion of distance perception in VR remains a serious problem when accurate distance representation is strictly required. Distance estimation is especially important when virtual environments are used as a presentation tool for evaluating space design or planning in architecture: if there is a perceptual discrepancy between the space as built in reality and the space as represented in VR, accurate design evaluation or design modification is difficult during the design development stage. In this paper, we carried out experiments on distance estimation in an immersive virtual environment to identify the relevant factors and their influence. Our hypothesis was that a lack of information for the user in VR causes distance estimates to differ from those in the real world, because users are usually comfortable moving fast and over long distances in VR compared with moving slowly and over short distances in real space. A basic experiment supported the hypothesis that this lack of information makes subjects estimate walked distances in VR as shorter than the same distances in reality. Among the factors likely to affect distance estimation in VR, we also examined the influence of image resolution, testing the effect of resolution degradation under both static and dynamic image conditions. The results showed that resolution is closely related to distance estimation; for example, subjects underestimated distance at the lower resolution. We also found that the method used to create the lower-resolution images could affect the subjects' visual perception.

Immersive Multichannel Display for Virtual Reality (가상현실을 위한 몰입형 멀티채널 디스플레이)

  • Kim, Sang Youn;Im, Sung Min;Kim, Do Yoon;Lee, Jae Hyub
    • Journal of Korea Society of Digital Industry and Information Management / v.6 no.3 / pp.131-139 / 2010
  • Virtual reality (VR) technology allows users to experience the sensation of looking at and feeling real objects, and it enables users to experience phenomena in a virtual environment that are difficult to reproduce in the real world. A multichannel display is a virtual reality system that generates high-quality images and guarantees a wide viewing angle using multiple projectors. In this work, we present a multichannel display system with a resolution of 4096 × 1536. We implement an automatic geometric and color calibration method to compensate for image distortion and color mismatch. The results show that the proposed system provides continuous and seamless images.
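
One common way to realize the geometric part of such a calibration is to pre-warp each projector channel with a homography so its output lands exactly on its assigned tile of the blended canvas. The sketch below shows that idea for a single 2048 × 1536 tile of the 4096 × 1536 canvas; the corner correspondences are hypothetical, and the paper's actual calibration pipeline may differ.

```python
import cv2
import numpy as np

# Where the four corners of this projector's frame were measured to land on
# the 4096x1536 canvas (hypothetical numbers), versus the projector's own
# pixel corners. This projector nominally covers the left 2048x1536 tile.
measured_on_canvas = np.float32([[12, 20], [2035, 8], [2042, 1528], [5, 1530]])
projector_corners = np.float32([[0, 0], [2048, 0], [2048, 1536], [0, 1536]])

# Homography mapping canvas coordinates to projector pixels.
H, _ = cv2.findHomography(measured_on_canvas, projector_corners)

def prewarp_channel(canvas_image):
    """Resample the canvas so that each projector pixel displays the canvas
    content of the spot it physically hits, i.e. geometric correction."""
    return cv2.warpPerspective(canvas_image, H, (2048, 1536))
```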

Fast Measurement of Eyebox and Field of View (FOV) of Virtual and Augmented Reality Devices Using the Ray Trajectories Extending from Positions on Virtual Image

  • Hong, Hyungki
    • Current Optics and Photonics / v.4 no.4 / pp.336-344 / 2020
  • Exact optical characterization of virtual and augmented reality devices using conventional luminance-measurement methods is a time-consuming process. A new measurement method is proposed to estimate, in a relatively short time, the boundary of the ray trajectories emanating from a specific position on the virtual image. It is assumed that the virtual image can be modeled as being formed in front of the eye and seen through an optical aperture (field stop) that limits the field of view. Circular and rectangular virtual images were investigated. From the estimated ray boundary, optical characteristics such as the viewing direction and the three-dimensional range within which an eye can observe the specified positions of the virtual image were derived. The proposed method can provide useful data for avoiding the unnecessary measurements required by the previously reported method, and can therefore complement that method to reduce the overall measurement time of the optical characteristics.
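
Under the stated model of a virtual image formed at a fixed distance in front of the eye, the angular field of view subtended by the image follows from simple pinhole geometry. The sketch below computes it for illustrative numbers that are not taken from the paper.

```python
import math

def angular_fov(image_width_m, image_distance_m):
    """Angular field of view subtended by a virtual image of the given width
    formed at the given distance in front of the eye (simple pinhole geometry;
    the numbers used below are illustrative, not from the paper)."""
    return 2.0 * math.degrees(math.atan((image_width_m / 2.0) / image_distance_m))

# Example: a 1.0 m wide virtual image formed 1.5 m in front of the eye
print(f"FOV = {angular_fov(1.0, 1.5):.1f} degrees")   # ~36.9 degrees
```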