• Title/Summary/Keyword: sound rendering


Effect on Audio Play Latency for Real-Time HMD-Based Headphone Listening (HMD를 이용한 오디오 재생 기술에서 Latency의 영향 분석)

  • Son, Sangmo;Jo, Hyun;Kim, Sunmin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2014.10a / pp.141-145 / 2014
  • The maximum acceptable delay of audio data processing is investigated for rendering virtual sound-source directions in a real-time head-tracking environment under headphone listening. An angular mismatch of less than 3.7 degrees must be maintained to keep the desired sound-source directions virtually fixed while listeners rotate their heads in the horizontal plane. The angular mismatch is proportional to the head-rotation speed and the data-processing delay. For a head rotation of 20 degrees/s, a relatively slow head movement, a total data-processing delay of less than 63 ms should be targeted.

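The relation stated in the abstract reduces to a simple proportionality between head-rotation speed and processing delay. The short sketch below only illustrates that relation using the figures quoted above (3.7°, 20°/s, 63 ms); the function name and the extra delay values are assumptions for the example, not the authors' analysis.

```python
# A minimal sketch of the proportional relation stated in the abstract:
# angular mismatch (deg) = head-rotation speed (deg/s) x processing delay (s).
# The 3.7 deg threshold, 20 deg/s speed, and 63 ms delay are the figures quoted
# in the abstract; everything else is illustrative only.

def angular_mismatch(rotation_speed_deg_s: float, delay_s: float) -> float:
    """Angular error accumulated before the rendered source direction is updated."""
    return rotation_speed_deg_s * delay_s

if __name__ == "__main__":
    speed = 20.0                     # deg/s, the "relatively slow" head rotation
    for delay_ms in (20, 63, 100, 185):
        mismatch = angular_mismatch(speed, delay_ms / 1000.0)
        verdict = "within" if mismatch < 3.7 else "exceeds"
        print(f"{delay_ms:4d} ms -> {mismatch:5.2f} deg ({verdict} the 3.7 deg limit)")
```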

A Sound Externalization Method for Realistic Audio Rendering in a Headphone Listening Environment (헤드폰 청취환경에서의 실감 오디오 재현을 위한 음상 외재화 기법)

  • Kim, Yong-Guk;Chun, Chan-Jun;Kim, Hong-Kook;Lee, Yong-Ju;Jang, Dae-Young;Kang, Kyeong-Ok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.1-8 / 2010
  • In this paper, a sound externalization method is proposed for out-of-the-head localization in a headphone listening environment. In order to reduce the timbre distortion caused by conventional methods that use a measured head-related transfer function (HRTF) or early reflections, the proposed method integrates a model-based HRTF with reverberation. In addition, decorrelation and spectral notch filtering are included to improve frontal externalization performance. To evaluate the performance of the proposed externalization method, subjective listening tests are conducted using different types of sound sources, such as white noise, sound effects, speech, and music. The test results show that the proposed method localizes sound sources farther away from the head than the conventional method.
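
As a rough illustration of the structure described above (HRTF filtering combined with reverberation), the sketch below convolves a mono signal with a pair of placeholder impulse responses and mixes in a decaying reverb tail. It is not the proposed model-based HRTF, decorrelation, or notch-filtering chain; all signals and parameters are invented for the example.

```python
import numpy as np

# Sketch: binaural (left/right) filtering of a mono source plus a simple reverb
# tail. The "HRIRs" here are synthetic placeholders, not a model-based HRTF.

fs = 48_000
rng = np.random.default_rng(0)

def placeholder_hrir(delay_samples: int, length: int = 128) -> np.ndarray:
    """Toy HRIR: a delayed impulse followed by a short noisy decay (illustrative)."""
    h = np.zeros(length)
    h[delay_samples] = 1.0
    tail_len = length - delay_samples
    h[delay_samples:] += 0.05 * rng.standard_normal(tail_len) * np.exp(-np.arange(tail_len) / 16.0)
    return h

def simple_reverb(x: np.ndarray, decay_s: float = 0.3) -> np.ndarray:
    """Exponentially decaying noise tail convolved with the dry signal."""
    n = int(decay_s * fs)
    tail = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.1 * fs))
    return np.convolve(x, tail)[: len(x)]

def externalize(mono: np.ndarray, wet: float = 0.3) -> np.ndarray:
    """Return a stereo (2, N) signal: HRIR-filtered dry path plus shared reverb."""
    left = np.convolve(mono, placeholder_hrir(3))[: len(mono)]
    right = np.convolve(mono, placeholder_hrir(9))[: len(mono)]
    rev = simple_reverb(mono)
    return np.stack([(1 - wet) * left + wet * rev,
                     (1 - wet) * right + wet * rev])

stereo = externalize(rng.standard_normal(fs))   # 1 s of noise as a test signal
```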

Design of Acoustic Source Array Using the Concept of Holography Based on the Inverse Boundary Element Method (역 경계요소법에 기초한 음향 홀로그래피 개념에 따른 음원 어레이 설계)

  • Cho, Wan-Ho;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea / v.28 no.3 / pp.260-267 / 2009
  • Forming a desired complex sound field precisely over a designated region, one of the important objectives of array systems, is very difficult. To solve this problem, a filter design method is suggested that employs an inverse method based on acoustical holography with the boundary element method. In acoustical holography used for source identification, measured field data are employed to reconstruct the vibro-acoustic parameters on the source surface. In the analogous source-array design problem, the desired field data at specific points in the sound field are set as constraints, and the volume velocities at the surface points of the source plane become the source signals that satisfy the desired sound field. In the filter design, the constraints for the desired sound field are set first, and the array source and the given space are modelled with boundary elements. Then, the desired source parameters are calculated inversely, in a way similar to the holographic source identification method. As a test example, a target field comprising a quiet region and a plane-wave propagation region was realized simultaneously using an array of 16 loudspeakers.
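
The inverse step described above can be illustrated as a least-squares problem: given a transfer matrix from the array sources to the target field points and a desired pressure distribution, solve for the source strengths. The sketch below uses a free-field monopole Green's function as a stand-in for the paper's BEM-computed transfer functions; the geometry, frequency, and target field are assumptions made only for the example.

```python
import numpy as np

# Illustrative inverse source-strength calculation: desired pressures at field
# points act as constraints, and the source strengths are found by least squares.
# A free-field monopole Green's function replaces the paper's BEM model here.

k = 2 * np.pi * 500 / 343.0                       # wavenumber at 500 Hz

def greens(src: np.ndarray, fld: np.ndarray) -> np.ndarray:
    """Free-field monopole transfer matrix G[field_point, source] (simplified)."""
    r = np.linalg.norm(fld[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# 16 sources on a line (the paper's array size; the layout itself is assumed).
sources = np.stack([np.linspace(-0.75, 0.75, 16), np.zeros(16), np.zeros(16)], axis=1)
# Target points 2 m away: a plane-wave region (x < 0) and a quiet region (x >= 0).
field_pts = np.stack([np.linspace(-1, 1, 40), np.full(40, 2.0), np.zeros(40)], axis=1)
desired = np.where(field_pts[:, 0] < 0, np.exp(-1j * k * field_pts[:, 0]), 0.0)

G = greens(sources, field_pts)
q, *_ = np.linalg.lstsq(G, desired, rcond=None)    # pseudo-inverse solution
print("residual:", np.linalg.norm(G @ q - desired))
```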

A Study on Interactive Media Art Based on Voice Recognition - Focusing on the Sound-Visual Interactive Installation "Water Music" (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.354-359 / 2008
  • This audio-visual interactive installation combines a video projection and digital interface technology with recognition of the viewer's voice. The viewer can interact with the computer-generated moving images growing on the screen by blowing or making sounds. This symbiotic audio-visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer are Visual C++ and the DirectX SDK. To create the water waves, full-3D rendering technology and a particle system were used.


Amplitude Panning Algorithm for Virtual Sound Source Rendering in the Multichannel Loudspeaker System (다채널 스피커 환경에서 가상 음원을 생성하기 위한 레벨 패닝 알고리즘)

  • Jeon, Se-Woon;Park, Young-Cheol;Lee, Seok-Pil;Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / v.30 no.4 / pp.197-206 / 2011
  • In this paper, we propose a virtual sound-source panning algorithm for multichannel systems. Recently, high-definition (HD) and ultra-high-definition (UHD) video formats have been adopted in multimedia applications, providing higher-resolution pictures and wider viewing angles. Audio formats likewise need to generate a wider sound field and more immersive sound effects, but the conventional stereo system cannot satisfy the desired sound quality in the latest multimedia systems. Therefore, various multichannel systems with improved sound-field generation have been proposed. In multichannel systems, conventional panning algorithms suffer from acoustic problems regarding the directivity and timbre of the virtual sound source. To solve these problems in arbitrarily positioned multichannel loudspeaker systems, we propose a virtual sound-source panning algorithm that uses multiple vector bases with non-negative amplitude panning gains. The proposed algorithm can easily be controlled through a gain-control function to localize the virtual sound source accurately, and it is applicable to both symmetric and asymmetric loudspeaker layouts. Its localization performance is evaluated by subjective tests against conventional amplitude panning algorithms, e.g., VBAP and MDAP, in symmetric and asymmetric layouts.
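
For context, the sketch below implements conventional two-dimensional VBAP, one of the baselines named in the abstract, for a single loudspeaker pair. The proposed multiple-vector-base algorithm itself is not reproduced, and the angles used are arbitrary.

```python
import numpy as np

# Minimal 2D VBAP for one loudspeaker pair: express the source direction as a
# non-negative combination of the two speaker direction vectors, then apply
# constant-power normalization to the resulting gains.

def unit(az_deg: float) -> np.ndarray:
    a = np.radians(az_deg)
    return np.array([np.cos(a), np.sin(a)])

def vbap_pair_gains(src_az: float, spk_az: tuple) -> np.ndarray:
    """Gains for one pair: solve L^T g = p, clamp to >= 0, normalize power."""
    L = np.stack([unit(spk_az[0]), unit(spk_az[1])])   # rows = speaker unit vectors
    g = np.linalg.solve(L.T, unit(src_az))
    g = np.clip(g, 0.0, None)                          # keep amplitude gains non-negative
    return g / np.linalg.norm(g)                       # constant-power normalization

# Virtual source at 20 degrees rendered with the speaker pair at 0 and 45 degrees.
print(vbap_pair_gains(20.0, (0.0, 45.0)))
```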

A Technique of Applying Embedded Sensors to Intuitive Adjustment of Image Filtering Effect in Smart Phone (스마트폰에서 이미지 필터링 효과의 직관적 조정을 위한 내장센서의 적용 기법)

  • Kim, Jiyeon;Kwon, Sukmin;Jung, Jongjin
    • Journal of Korea Multimedia Society / v.18 no.8 / pp.960-967 / 2015
  • In this paper, we propose a user-interface technique based on embedded sensors for smartphone apps. In particular, we implement an avatar-generation application that applies image filtering to photos on a smartphone. In the application, the embedded sensors serve as an intuitive user interface for adjusting the image-filtering effect in real time, after the system has produced the initial filtering effect for the avatar, until the user is satisfied. Rather than simple typed adjustment of parameter values, this technique provides a new, intuitive, emotion-driven adjustment method for image-filtering applications. The proposed technique can use sound levels from the embedded microphone to adjust the key values of the sketch-filter effect when the user makes a sound. Similarly, it can use coordinate values from the embedded acceleration sensor to adjust the masking values of the oil-painting filter effect, and brightness values from the embedded light sensor to adjust the masking values of the sharpening filter effect. Finally, we implement the image-filtering application and evaluate the efficiency and effectiveness of the proposed technique.
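
The underlying mapping can be illustrated as a simple rescaling from a sensor's value range onto a filter-parameter range. In the sketch below the sensor ranges, parameter names, and example values are assumptions for illustration, not the paper's actual settings or any platform sensor API.

```python
# Hedged sketch: map a raw embedded-sensor reading onto a filter-parameter range
# so the effect strength follows the sensor in real time. All ranges and names
# below are invented for the example.

def map_sensor_to_param(value: float, sensor_range: tuple, param_range: tuple) -> float:
    """Linearly rescale a sensor value into a filter-parameter range, clamped."""
    lo, hi = sensor_range
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    p_lo, p_hi = param_range
    return p_lo + t * (p_hi - p_lo)

# e.g. microphone level (0..32767) driving the sketch filter's "key" value (1..10)
key_value = map_sensor_to_param(12000, (0, 32767), (1, 10))
# e.g. ambient light (0..1000 lux) driving a sharpening mask size (3..15)
mask_size = round(map_sensor_to_param(420, (0, 1000), (3, 15)))
print(key_value, mask_size)
```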

The Realtime method of 3D Sound Rendering for Virtual Reality : Complexity Reduction of Scene and Sound Sources (장면 및 음원 복잡도 축소에 의한 3차원 사운드 재현의 실시간화 기법)

  • Seong SukJeong;Yi JeongSeon;Oh SuJin;Nam YangHee
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.550-552 / 2005
  • In virtual reality applications where realistic reproduction matters, research has aimed to increase presence and immersion by presenting users with a high-quality graphical environment and providing immediate feedback to their interactions. Using vision and hearing together is effective for conveying presence and spatial impression, but research on 3D sound rendering that reflects the characteristics of the virtual space is still at an early stage both domestically and abroad. To render 3D sound that conveys presence and spatial impression, the propagation, reflection, and reverberation of sound sources must be recomputed as the user interacts. However, tracing sound propagation paths and testing collisions against every polygon making up the space in order to compute reflections is impractical in virtual reality applications where real-time performance is essential, so the computational load must be reduced to guarantee real-time operation. In this paper, to render 3D sound in a complex virtual space containing many sound sources, we propose an algorithm that reorganizes the space into an audio scene graph holding only the minimum information needed for sound-scene computation and applies source reduction and clustering to the many sources, thereby reproducing 3D sound effects in real time.

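The source reduction and clustering idea summarized above can be sketched as grouping nearby sources and replacing each group with a representative at its weighted centroid. The greedy distance-threshold clustering below is an assumption made for illustration; it is not the paper's algorithm.

```python
import numpy as np

# Rough sketch of source clustering: merge sources within a radius of a seed
# source and replace each group with one representative at the gain-weighted
# centroid, carrying the summed gain.

def cluster_sources(positions: np.ndarray, gains: np.ndarray, radius: float = 1.0):
    """Greedily merge sources within `radius` metres of a cluster seed."""
    clusters = []                                    # list of (centroid, summed gain)
    remaining = list(range(len(positions)))
    while remaining:
        seed = remaining.pop(0)
        members = [seed] + [i for i in remaining
                            if np.linalg.norm(positions[i] - positions[seed]) < radius]
        remaining = [i for i in remaining if i not in members]
        w = gains[members]
        centroid = (positions[members] * w[:, None]).sum(0) / w.sum()
        clusters.append((centroid, w.sum()))
    return clusters

rng = np.random.default_rng(1)
pos = rng.uniform(-5, 5, size=(20, 3))               # 20 sources in a 10 m cube
print(len(cluster_sources(pos, np.ones(20))), "clusters from 20 sources")
```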

A Study on Real-Time 3D Sound Rendering for Virtual Reality Environment (VR환경에 알맞은 실시간 음장구현에 관한 연구)

  • Chae, Soo-Bok;Bhang, Seung-Beum;Shin, Hwang;Ko, Hee-Dong;Kim, Soon-Hyob
    • Proceedings of the IEEK Conference / 2000.09a / pp.197-200 / 2000
  • This paper concerns the implementation of a system for real-time sound presentation in VR systems. Using two speakers or headphones, it consists of two parts: sound-image control and sound-field control. The sound-image control part localizes the position of each sound source, while the sound-field control part simulates the sound field using a ray-tracing technique, extracts the sound-field parameters of the virtual space, and applies them to the sound sources while rendering sound-field effects in real time. The system was implemented on a Pentium-II 333 MHz machine. Finally, the listener experiences the 3D sound field through two speakers or headphones.

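As a simplified stand-in for the ray-tracing step described above, the sketch below computes the delay and 1/r attenuation of the direct path and the six first-order wall reflections in a rectangular room using image sources. The image-source shortcut and the room geometry are assumptions for the example, not the paper's method.

```python
import numpy as np

# Simplified extraction of sound-field parameters from room geometry: first-order
# image sources in a box-shaped room give the delay and spreading loss of each
# early reflection (a stand-in for full ray tracing).

C = 343.0                                                # speed of sound, m/s

def first_order_reflections(src, lis, room):
    """Return (delay_s, gain) for the direct path and the six wall reflections."""
    src, lis, room = map(np.asarray, (src, lis, room))
    paths = [src]                                        # direct path
    for axis in range(3):
        for wall in (0.0, room[axis]):                   # mirror source across each wall
            img = src.copy()
            img[axis] = 2 * wall - src[axis]
            paths.append(img)
    out = []
    for p in paths:
        d = np.linalg.norm(p - lis)
        out.append((d / C, 1.0 / max(d, 1e-6)))          # delay and 1/r spreading loss
    return out

for delay, gain in first_order_reflections([1, 2, 1.5], [4, 3, 1.5], [6, 5, 3]):
    print(f"{delay*1000:6.2f} ms  gain {gain:.3f}")
```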

A Study on Realistic Sound Reproduction for UHDTV (UHDTV를 위한 실감 오디오 재현 기술)

  • Jang, Daeyoung;Seo, Jeongil;Lee, Yong Ju;Yoo, Jae-Hyoun;Park, Taejin;Lee, Taejin
    • Journal of Broadcast Engineering / v.20 no.1 / pp.68-81 / 2015
  • Owing to the latest developments in components and media-processing technologies, UHDTV, the successor to HDTV, is expected to become reality soon. Accordingly, audio technology, which currently provides 5.1-channel surround sound in the home, must consider what services should be offered with the advent of the UHDTV era. In fact, however, the 5.1-channel audio market is struggling due to the difficulty of installing and maintaining multiple speakers in a home. Meanwhile, the movie sound market, which long relied on 5.1- and 7.1-channel formats, has changed as Dolby ATMOS, IOSONO, AURO3D, and others have been launched one after another, introducing hybrid audio technologies that include ceiling and object-based sounds. This object-based audio technology is certain to be introduced into the home-theater and broadcast audio markets, and the change is expected to bring pioneering technological advances and market growth beyond the inflexible channel-based audio market. In this paper, we investigate a realistic audio solution suitable for UHDTV, introduce the hybrid audio technologies expected to serve UHDTV, describe the hybrid audio content format and reproduction methods in the home, and consider the future prospects of realistic audio.

A 3D Audio Broadcasting Terminal for Interactive Broadcasting Services (대화형 방송을 위한 3차원 오디오 방송단말)

  • Park Gi Yoon;Lee Taejin;Kang Kyeongok;Hong Jinwoo
    • Journal of Broadcast Engineering / v.10 no.1 s.26 / pp.22-30 / 2005
  • We implement an interactive 3D audio broadcasting terminal that synthesizes an audio scene according to the request of a user. The audio scene structure is described by the MPEG-4 AudioBIFS specification. The user updates scene attributes, and the terminal synthesizes the corresponding sound images in 3D space. The terminal supports the MPEG-4 Audio top nodes and some visual nodes. Instead of using sensor nodes and ROUTE elements, we predefine node-type-specific user interfaces to support BIFS commands for field replacement. We employ sound spatialization, directivity/shape modeling, and reverberation effects for 3D audio rendering and realistic feedback to user inputs. We also introduce a virtual concert program as an application scenario of the interactive broadcasting terminal.
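
The interaction pattern described above, replacing a field of a scene node and re-synthesizing the scene, can be illustrated with a toy scene graph. The sketch below is not MPEG-4 AudioBIFS syntax and not the authors' terminal; node names and fields are invented for the example.

```python
# Toy illustration of a field-replacement command on an audio scene graph:
# the user changes one attribute of a named node, and the scene is re-rendered.

scene = {
    "Singer": {"position": (0.0, 0.0, 2.0), "gain": 1.0},
    "Guitar": {"position": (1.5, 0.0, 2.5), "gain": 0.8},
}

def render_scene() -> None:
    """Stand-in for spatialization: just report what would be rendered."""
    for name, attrs in scene.items():
        print(f"rendering {name} at {attrs['position']} with gain {attrs['gain']}")

def replace_field(node: str, field: str, value) -> None:
    """Apply a field-replacement command, then trigger re-rendering."""
    scene[node][field] = value
    render_scene()

replace_field("Singer", "position", (-1.0, 0.0, 2.0))   # user moves the singer left
```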