• Title/Summary/Keyword: VR sound


A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun;Kim, HyungGi
    • International Journal of Contents
    • /
    • v.16 no.4
    • /
    • pp.68-77
    • /
    • 2020
  • VR (Virtual Reality) contents make the audience perceive virtual space as real through a virtual Z axis: the separation between the viewer's eyes creates a sense of depth that cannot be produced in 2D. This visual change has created a need for corresponding changes in the sound and sound sources inserted into VR contents. However, studies on increasing immersion in VR contents are still focused mostly on the scientific and visual fields, because composing and producing VR sound requires expertise in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering directs the sound effects, script sound, and background music according to the storyboard organized by the director, so it has difficulty reflecting changes in user interaction or in time and space; its advantage is that the sound effects, script sound, and background music are produced on one track and no coding phase is needed. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music as separate files. It can increase immersion by reflecting user interaction and time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following method was devised to produce the sound for the VR contents "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interaction, identifying the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sound' and 'individual sound'. Third, interaction coding is carried out for the sound effects, script sound, and background music produced from the simultaneous and individual sound categories. Then the contents are completed by applying the sound to the video. Through this process, sound-quality inhibitors such as noise cancelling can be removed while producing sound that fits the user interaction and the time and space.
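The classification step described in this abstract can be sketched as a small routine; the data layout and names below are illustrative assumptions, not taken from the paper:

```python
# Sketch of the abstract's second step: splitting storyboard sound cues into
# 'simultaneous' sounds (always played together) and 'individual' sounds
# (triggered by user interaction). All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class SoundCue:
    name: str
    kind: str          # 'effect', 'script', or 'background'
    interactive: bool  # True if the cue depends on user interaction

def classify_cues(cues):
    """Return (simultaneous, individual) cue lists."""
    simultaneous = [c for c in cues if not c.interactive]
    individual = [c for c in cues if c.interactive]
    return simultaneous, individual

cues = [
    SoundCue("forest_ambience", "background", interactive=False),
    SoundCue("door_creak", "effect", interactive=True),
    SoundCue("narration_intro", "script", interactive=False),
]
always_on, on_demand = classify_cues(cues)
print([c.name for c in always_on])  # ['forest_ambience', 'narration_intro']
print([c.name for c in on_demand])  # ['door_creak']
```

The individual cues would then each get their own interaction-coded trigger, while the simultaneous cues can be mixed on a shared track.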

Reality Enhancement Method of Virtual Reality Based Simulator by Mutual Synergy Effect between Stereoscopic Image and Three-Dimensional Sound (입체영상과 3차원음향의 상호 상승효과에 의한 가상현실기반 시뮬레이터 현실감 증대방법)

  • Yim, Jeong-Bin;Kim, Hyeon-Ra
    • Journal of Navigation and Port Research
    • /
    • v.27 no.2
    • /
    • pp.145-153
    • /
    • 2003
  • The presence-feeling enhancement method of a Virtual Reality (VR) simulator is proposed in this paper. The method increases realistic human feeling through the mutual synergy effect between stereoscopic images and three-dimensional (3D) sound. To test the influence of the mutual synergy effect, a subjective assessment with five university students was carried out using a VR ship simulator with a PC monitor and LCD shutter glasses. It is found that the averaged scale value of image naturalness increases by 0.5, from $I_{nat}$ = 3.1 to 3.6, when blending stereoscopic images with 3D sound, and the averaged score of sound localization increases by 10%, from $A_{SL}$ = 70~75% to $A_{SL}$ = 80~85%, when blending 3D sound with stereoscopic images. In conclusion, the results show that the proposed method can increase the feeling of presence in the VR simulator.

A Study on Visual and Auditory Inducement of VR Image Contents and the Inducement Components for Immersion Improvement (몰입감 향상을 위한 VR 영상 콘텐츠의 시청각 유도와 구성요소에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.14 no.11
    • /
    • pp.495-500
    • /
    • 2016
  • Since 2016, the VR market has been growing rapidly. The most critical emerging issue in the VR market is VR contents, because making techniques and various VR contents must be developed to satisfy users' immersion and interaction as much as possible. Therefore, this study focused on VR image contents, examined domestic and foreign cases of the components of visual and auditory inducement that keep and improve immersion, and thereby tried to find the right direction for visual and auditory inducement. As a result, the visual and auditory components of inducement were found to be shooting, editing, lighting, stitching, graphics, effects, voice actor's narration, dubbing, character voice, background sound, and sound effects; the technical and content components were found to be shooting technique, editing technique, lighting, stitching, graphics and effects, sound and sound effects, theatrical direction based on mise-en-scene, lines and narration of characters, and movements of characters and objects. For VR image contents, not only visual and auditory components but also technical and content components are necessary to improve immersion. Continued research on them will be necessary in the future.

A Study for Change of Audio Data according to Rotation Degree of VR Video (VR 영상의 회전각도에 따른 오디오 데이터 변화에 관한 연구)

  • Ko, Eun-Ji;Yang, Ji-Hee;Kim, Young-Ae;Park, Goo-Man;Kim, Seong-Kweon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.6
    • /
    • pp.1135-1142
    • /
    • 2017
  • In this paper, we propose an algorithm that can automatically mix the screen and sound by tracking the change of the audio data according to the screen change, so that realistic sound can be implemented in a personal broadcasting service. Since personal broadcasting is often transmitted live, real-time mixing should be convenient. Through experiments, it was confirmed that the sound pressure changes over a wide range in the high-frequency band related to speech clarity as the rotation angle of the screen changes. Regression analysis of the sound-pressure changes at 2 kHz, 4 kHz, and 8 kHz showed attenuation slopes of -1.17, -2.0, and -2.44 for the respective frequencies. These experimental results can be applied to VR services, and this study is expected to provide useful data for the implementation of personal broadcasting services.
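The reported regression can be read as a simple per-band linear model. The sketch below uses the paper's slope values, but the zero intercept and the rotation units are assumptions made purely for illustration:

```python
# Linear attenuation model implied by the abstract's regression: the
# sound-pressure change per unit of screen rotation has a steeper (more
# negative) slope at higher frequencies. Slopes are the paper's values;
# the zero intercept and rotation units are illustrative assumptions.
SLOPES = {2000: -1.17, 4000: -2.0, 8000: -2.44}  # band (Hz) -> slope

def predicted_spl_change(freq_hz, rotation):
    """Predicted sound-pressure change after `rotation` units of rotation."""
    return SLOPES[freq_hz] * rotation

# Higher bands attenuate faster, matching the observation that the
# clarity-related high-frequency band changes over a wide range.
for f in sorted(SLOPES):
    print(f, predicted_spl_change(f, 3))
```

A mixer could use such a per-band model to compensate high-frequency loss as the viewer rotates away from the sound source.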

Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5936-5954
    • /
    • 2018
  • Eight of the ten largest technology companies in the world have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has allowed technology related to mobile VR to achieve remarkable growth in both academia and industry. The importance of reproducing acoustic expression so that users experience something more realistic is therefore increasing, because auditory cues can enhance the perception of a complicated surrounding environment without the visual system in VR. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound rendering pipeline that can reproduce reality-like sound but requires high computation costs. The proposed system is verified on an FPGA platform, with special focus on hardware architectural designs for low power and real-time operation. The results show that the proposed system on an FPGA can downmix a maximum of 5 sources at a real-time rate (52 FPS) with a low power consumption of 382 mW. Furthermore, the 3D sound generated with the proposed system was verified with satisfactory sound-quality results via user evaluation.
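As a software analogue of the downmixing stage, the sketch below pans several mono sources to stereo with constant-power gains and sums them; this is a generic technique for illustration, not the paper's hardware design:

```python
import math

def downmix_stereo(sources, azimuths_deg):
    """Constant-power pan each mono source by azimuth (-90..+90 degrees),
    then sum everything into one stereo pair (the downmix)."""
    n = len(sources[0])
    left, right = [0.0] * n, [0.0] * n
    for src, az in zip(sources, azimuths_deg):
        pan = (az + 90.0) / 180.0 * (math.pi / 2)  # 0 (hard left) .. pi/2 (hard right)
        gl, gr = math.cos(pan), math.sin(pan)      # gl^2 + gr^2 == 1 (constant power)
        for i, s in enumerate(src):
            left[i] += gl * s
            right[i] += gr * s
    return left, right

# A centered source splits its energy equally between the two channels.
l, r = downmix_stereo([[1.0, 0.5]], [0.0])
```

Per-source gain-and-sum is what makes the stage parallelizable, which is presumably why a fixed hardware pipeline can sustain real-time rates.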

Development of the VR Simulation System for the Study of Driver's Perceptive Response (운전자 인지반응 연구를 위한 VR 시뮬레이션 시스템 개발)

  • Jang, Suk;Kwon, Seong-Jin;Chun, Jee-Hoon;Cho, Ki-Yong;Suh, Myung-Won
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.13 no.2
    • /
    • pp.149-156
    • /
    • 2005
  • In this paper, a VR (Virtual Reality) simulation system is developed to analyze the driver's perceptive response to the ASV (Advanced Safety Vehicle), a next-generation vehicle equipped with various warning systems. For this purpose, the VR simulation system consists of a VR database, a vehicle dynamic model, a graphic/sound system, and a driving system. The VR database, which generates 3D graphic and sound information, is organized for driving reality. Mathematical models of vehicle dynamic analysis are constructed to represent the dynamic behavior of a vehicle. The driving system and the graphic/sound system provide the driver with the operation of a vehicle and feedback on the driving situation, and a real-time simulation algorithm synchronizes the vehicle dynamic model with the VR database. To check the validity of the developed system, a simple scenario is applied to investigate the driver's perceptive response time and vehicle acceleration in an emergency situation. It is confirmed that the proposed system is useful and helpful for designing the FVCWS (Forward Vehicle Collision Warning System).
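The response-time measurement that such a scenario relies on can be sketched as a small timer; the class and method names below are hypothetical, not from the paper:

```python
import time

class ResponseTimer:
    """Measure the interval between a warning event (e.g. from a forward
    collision warning system) and the driver's first control input.
    Names and structure are illustrative."""
    def __init__(self):
        self._warning_at = None

    def warning_issued(self, t=None):
        self._warning_at = time.monotonic() if t is None else t

    def driver_reacted(self, t=None):
        if self._warning_at is None:
            return None  # no pending warning
        t = time.monotonic() if t is None else t
        return t - self._warning_at

timer = ResponseTimer()
timer.warning_issued(t=12.0)         # warning displayed at t = 12.0 s
rt = timer.driver_reacted(t=12.8)    # perceptive response time of about 0.8 s
```

In a real simulator the timestamps would come from the driving system's input loop rather than being passed in explicitly.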

Real-Time 3D Sound Rendering System Implementation for Virtual Reality Environment (VR 환경을 위한 실시간 음장 재현 시스템 구현)

  • Chae, Soo-Bok;Bhang, Seung-Beum;Hwang, Shin;Ko, Hee-Dong;Kim, Soon-Hyob
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.11a
    • /
    • pp.222-227
    • /
    • 2000
  • This paper describes the implementation of a sound-field reproduction system for rendering 3D sound in real time in a VR system. The sound field can be controlled using two speakers or headphones. Sound-field control simulates the sound field with a ray-tracing technique, extracts the sound-field parameters of the virtual space, and renders the sound-field effects in real time while applying the parameters to the sound source. The system was implemented on a machine with a Pentium-II 333 MHz processor, 128 MB of RAM, and a SoundBlaster sound card. Finally, the listener experiences the 3D sound field through two speakers or headphones.
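The idea of extracting the delays and gains of a virtual space and applying them to the source can be illustrated with a 1D image-source sketch; this is a drastic simplification of ray tracing, and the room geometry and absorption value are assumed:

```python
SPEED_OF_SOUND = 343.0  # m/s

def early_reflections(src, lst, room_len, absorption=0.3):
    """Direct path plus the two first-order wall reflections of a 1D 'room'
    of length room_len, via image sources. Returns (delay_s, gain) taps that
    could parameterize a simple reverberator. Values are illustrative."""
    images = [
        (src, 1.0),                              # direct sound
        (-src, 1.0 - absorption),                # reflection off the wall at x = 0
        (2 * room_len - src, 1.0 - absorption),  # reflection off the wall at x = room_len
    ]
    taps = []
    for pos, refl_gain in images:
        dist = abs(pos - lst)
        taps.append((dist / SPEED_OF_SOUND,        # propagation delay
                     refl_gain / max(dist, 1e-6))) # 1/r spreading loss
    return taps

# Source at 2 m, listener at 5 m, 10 m room: the direct tap arrives first.
for delay, gain in early_reflections(2.0, 5.0, 10.0):
    print(round(delay * 1000, 2), "ms  gain", round(gain, 3))
```

A full ray tracer generalizes this to 3D geometry and higher reflection orders; the extracted taps are what get convolved with the source in real time.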


A Study on Improving the Reality of the Vehicle Simulator Based on the Virtual Reality (가상현실 기반의 차량 시뮬레이터의 현실감 향상에 관한 연구)

  • Choi Young-Il;Kwon Seong-Jin;Jang Suk;Kim Kyu-Hee;Cho Ki-Yong;Suh Myung-Won
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.28 no.8 s.227
    • /
    • pp.1116-1124
    • /
    • 2004
  • Recently, vehicle simulators have been developed with VR (Virtual Reality) systems. A VR system must provide a vehicle simulator with natural interaction, sufficient immersion, and realistic images, and must present the driver with a realistic driving situation. To achieve this, it is important to obtain fast and uniform rendering performance regardless of the complexity of the virtual world. In this paper, the factors that improve the reality of a VR-based vehicle simulator are investigated. For this purpose, modeling and rendering methods that offer improved performance for complex VR applications, such as the 3D road model, are implemented and verified. We then experiment on the influence of graphic and sound factors on the driver, and analyze each result for improving reality, including the driver's viewport, the form of texture, the lateral distance of side objects, and the sound effects. These factors are evaluated on a driving system constructed for qualitative analysis. The research results can be used to improve the reality of a VR-based vehicle simulator.

VR Journalism's Image Text Analysis - Based on The New York Times' &lt;The Displaced&gt; (VR(Virtual Reality) 저널리즘의 영상텍스트 분석 - 뉴욕타임즈의 &lt;난민(THE DISPLACED)&gt;을 중심으로)

  • Park, Man Su;Han, Dong Sub
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.9
    • /
    • pp.173-183
    • /
    • 2017
  • In this research, an analysis of the VR journalism piece &lt;The Displaced&gt; by The New York Times was carried out. The image analysis of &lt;The Displaced&gt; was done through the frames of angle, shot (size, length, movement), and limited user-directed interaction (point, sound). The results are as follows. First, the direction was based on normal and low angles. Second, it was confirmed that the shooting was done in the order of medium, full, and long shots. Third, with regard to shot length, most direction was done through long takes. Fourth, most images consisted of fixed shots. Lastly, user-directed interaction was limited; it may be separated into two aspects, sound and the movement of independent free agents, through which users were guided from a free point of view of realistic situations toward a directed point of view. This research may serve as foundational research for the further advancement of in-depth discussion of VR journalism.

A Study on a Real-Time 3D Sound Rendering System for Virtual Reality Environment (VR 환경에 적합한 실시간 음장 재현 시스템에 관한 연구)

  • Chae SooBok;Bhang SungBeum;Hwang Shin;Ko HeeDong;Kim SoonHyob
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.353-356
    • /
    • 2000
  • This paper describes the implementation of a sound-field reproduction system for rendering 3D sound in real time in a VR system. The sound field can be controlled using two speakers or headphones. Sound-field control simulates the sound field with a ray-tracing technique, extracts the sound-field parameters of the virtual space, and renders the sound-field effects in real time while applying the parameters to the sound source. The system was implemented on a machine with a Pentium-II 333 MHz processor, 128 MB of RAM, and a Sound Blaster sound card. Finally, the listener experiences the 3D sound field through two speakers or headphones.
