• Title/Summary/Keyword: sound information

Sound Synthesis of Gayageum using TMS320C6713 DSK (TMS320C6713 DSK 를 이용한 가야금 사운드 합성)

  • Cho, Sang-Jin;Oh, Hoon;Chong, Ui-Pil
    • Proceedings of the IEEK Conference / 2005.11a / pp.435-438 / 2005
  • In this paper, we implemented a system known as a sound engine for a musical synthesizer and synthesized the sound of the Gayageum using a TMS320C6713 DSK. The sound engine consists of two parts: the synthesis algorithm and the processor. We improved a digital-waveguide physical model as the synthesis algorithm and used the TMS320C6713 as the processor. The excitation signals that shape the timbre are stored in memory, and when the input parameters are supplied, the sound engine synthesizes the Gayageum sound. The experimental results show that the synthesized sounds are very similar to the real ones.
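As a rough illustration of the digital-waveguide approach described above, the following Karplus-Strong-style sketch drives a delay-line loop with a noise burst. The pitch, decay factor, and noise excitation are assumptions for illustration only, not the authors' stored excitation signals or their TMS320C6713 sound engine.

```python
# Minimal Karplus-Strong / digital-waveguide string sketch (illustrative only,
# not the paper's TMS320C6713 implementation). The noise burst stands in for
# the excitation signal stored in memory; pitch and decay are assumed values.
import numpy as np

def pluck(f0=220.0, fs=44100, dur=2.0, decay=0.996):
    delay_len = int(round(fs / f0))            # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay_len)  # excitation loaded into the loop
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        out[n] = buf[0]
        # one-pole averaging loss filter closes the waveguide loop
        new_sample = decay * 0.5 * (buf[0] + buf[1])
        buf = np.roll(buf, -1)                 # O(N) per sample; fine for a sketch
        buf[-1] = new_sample
    return out

samples = pluck(f0=196.0)  # roughly a low open-string pitch (assumed)
```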

The Design and Study of Virtual Sound Field in Music Production

  • Wang, Yan
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.83-91 / 2017
  • In this paper, we propose a comprehensive approach to adjusting the virtual sound field with different kinds of devices and software, both early in music production and in its later stages. The basic process of music production includes composing, arranging, and recording at the pre-production stage, as well as mixing and mastering at the post-production stage. At the initial stage of creation, it should be checked whether the design of the virtual sound field, the choice of tone, and the instruments used in the arrangement match the virtual sound field required for the final work. During the later recording, mixing, and mastering, the virtual sound field should be adjusted in fine detail. This study also analyzes how effector parameters can be applied to the design and adjustment of the virtual sound field, making it a source of creative material.
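To make the point about effector parameters concrete, the short sketch below uses a Schroeder-style comb-filter reverb in which delay times and feedback gains act as the parameters that shape a virtual space; this is a generic illustration, not the devices, plug-ins, or workflow evaluated in the paper.

```python
# Tiny Schroeder-style reverb: delay times, feedback gains, and the dry/wet
# mix act as the "effector parameters" that shape a virtual sound field
# (generic sketch, not tied to any specific production tool).
import numpy as np

def comb(x, delay, feedback):
    y = np.copy(x)
    for n in range(delay, len(y)):
        y[n] += feedback * y[n - delay]
    return y

def virtual_room(x, fs=44100, room_size=0.03, decay=0.5, wet=0.3):
    # four parallel comb filters with slightly detuned delays
    delays = [int(fs * room_size * k) for k in (1.00, 1.13, 1.27, 1.41)]
    reverb = sum(comb(x, d, decay) for d in delays) / len(delays)
    return (1.0 - wet) * x + wet * reverb

impulse = np.zeros(44100)
impulse[0] = 1.0
small_room = virtual_room(impulse, room_size=0.02, decay=0.4)
large_hall = virtual_room(impulse, room_size=0.06, decay=0.7)
```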

Real-time Sound Control Method Based on Reflection and Diffraction of Sound in Virtual Environment (가상 환경에서 사운드의 반사와 회절을 이용한 실시간 소리 제어 방법)

  • Park, Soyeon;Park, Seong-A;Kim, Jong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.269-271 / 2021
  • In this paper, we propose a method for representing, in real time in a virtual environment, characteristics of sound as it behaves in the real world: sound waves, sound flow, and the diffraction of sound. Our approach determines whether an obstacle lies in the path from the position where a sound is played and, if one exists, computes a new sound position produced by reflection and diffraction at the obstacle. In this process, ray tracing is used to detect collisions with the obstacle, the vectors bent by those collisions are used to compute the loudness heard beyond the obstacle, and the loudness is attenuated according to the number of colliding rays. Diffraction produced with the proposed method reproduces, in real time, the diffraction patterns obtained with physics-based approaches; the diffraction pattern changes with the obstacle, and the loudness is adjusted naturally as a result. These experiments recovered, almost identically, characteristics such as the spreading of sound observed in the real world.
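The ray-casting idea can be sketched in two dimensions: fan rays out from the source toward the listener, count how many hit an obstacle, and attenuate the level by the blocked fraction. The circular obstacle and the linear attenuation rule below are assumptions for illustration, not the paper's exact formulation.

```python
# 2-D sketch of ray-based occlusion: the fraction of rays blocked by an
# obstacle attenuates the perceived level (circle obstacle and linear
# attenuation are illustrative assumptions, not the paper's method).
import numpy as np

def ray_hits_circle(origin, direction, center, radius):
    # solve |origin + t*direction - center| = radius for the nearest t >= 0
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    return disc >= 0 and (-b - np.sqrt(disc)) >= 0

def occluded_gain(source, listener, center, radius, n_rays=32, spread=0.3):
    to_listener = listener - source
    base = np.arctan2(to_listener[1], to_listener[0])
    blocked = 0
    for a in np.linspace(-spread, spread, n_rays):
        d = np.array([np.cos(base + a), np.sin(base + a)])
        if ray_hits_circle(source, d, center, radius):
            blocked += 1
    return 1.0 - blocked / n_rays   # more blocked rays -> quieter sound

gain = occluded_gain(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                     np.array([5.0, 0.5]), 1.0)
```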

Stress Detection and Classification of Laying Hens by Sound Analysis

  • Lee, Jonguk;Noh, Byeongjoon;Jang, Suin;Park, Daihee;Chung, Yongwha;Chang, Hong-Hee
    • Asian-Australasian Journal of Animal Sciences / v.28 no.4 / pp.592-598 / 2015
  • Stress adversely affects the wellbeing of commercial chickens and imposes an economic cost on the industry that cannot be ignored. In this paper, we first develop an inexpensive, non-invasive, automatic online-monitoring prototype that uses sound data to notify producers of a stressful situation in a commercial poultry facility. The proposed system is structured hierarchically with three binary-classifier support vector machines. First, it selects an optimal acoustic feature subset from the sound emitted by the laying hens. The detection and classification module then detects stress from changes in the sound and classifies it into subsidiary types, such as physical stress from changes in temperature and mental stress from fear. Finally, an experimental evaluation was performed using real sound data from an audio-surveillance system. The stress detection accuracy reached 96.2%, and validation of the classification model confirmed an average classification accuracy of 96.7% with satisfactory recall and precision.
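The hierarchical structure can be sketched with scikit-learn as two stacked binary SVMs (the paper uses three): one detects stress versus normal sound, the other splits detected stress into physical versus mental. The random placeholder features and labels below stand in for the paper's optimal acoustic feature subset and real farm recordings.

```python
# Hierarchical binary SVM sketch: stage 1 detects stress, stage 2 classifies
# its type. Features and labels are random placeholders, not the paper's
# acoustic feature subset or its audio-surveillance data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))          # placeholder acoustic feature vectors
is_stress = rng.integers(0, 2, 300)     # 1 = stress, 0 = normal
stress_type = rng.integers(0, 2, 300)   # 1 = mental (fear), 0 = physical (temperature)

detector = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, is_stress)
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(
    X[is_stress == 1], stress_type[is_stress == 1])

def predict(features):
    sample = features.reshape(1, -1)
    if detector.predict(sample)[0] == 0:
        return "normal"
    return "mental stress" if classifier.predict(sample)[0] else "physical stress"

print(predict(X[0]))
```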

Optimal Acoustic Sound Localization System Based on a Tetrahedron-Shaped Microphone Array (정사면체 마이크로폰 어레이 기반 최적 음원추적 시스템)

  • Oh, Sangheon;Park, Kyusik
    • Journal of KIISE / v.43 no.1 / pp.13-26 / 2016
  • This paper proposes a new sound localization algorithm that can improve localization performance based on a tetrahedron-shaped microphone array. A sound localization system estimates the directional information of a sound source from the time delay of arrival (TDOA) between microphone pairs in a microphone array. To obtain the directional information of a sound source in three dimensions, the system requires at least three microphones. If one of the microphones fails to detect an adequate signal level, the system cannot produce a reliable estimate. This paper proposes a tetrahedron-shaped sound localization system with a coordinate transform method, adding one microphone to the previously known triangular-shaped system to provide more robust and reliable sound localization. To verify the performance of the proposed algorithm, a real-time simulation was conducted and the results were compared with the previously known triangular-shaped system. The simulation results show that the proposed tetrahedron-shaped system outperforms the triangular-shaped system by more than 46% in maximum sound-source detection.
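A minimal far-field version of the TDOA idea is sketched below: with a known tetrahedral microphone layout and noise-free delays relative to one reference microphone, the source direction follows from a small least-squares solve. The array size, the plane-wave assumption, and the synthetic delays are simplifications, not the paper's coordinate-transform algorithm.

```python
# Far-field TDOA direction estimate for a tetrahedral microphone array
# (least-squares sketch with synthetic, noise-free delays).
import numpy as np

C = 343.0  # speed of sound in m/s

# regular tetrahedron, ~10 cm across (assumed layout)
mics = 0.05 * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

def simulate_tdoas(direction):
    direction = direction / np.linalg.norm(direction)
    arrival = -(mics @ direction) / C        # plane-wave arrival times
    return arrival[1:] - arrival[0]          # delays relative to microphone 0

def estimate_direction(tdoas):
    baselines = mics[1:] - mics[0]           # (r_i - r_0)
    rhs = -C * tdoas                         # (r_i - r_0)·u = -c * tau_i
    u, *_ = np.linalg.lstsq(baselines, rhs, rcond=None)
    return u / np.linalg.norm(u)

true_direction = np.array([0.5, 0.3, 0.8])
estimate = estimate_direction(simulate_tdoas(true_direction))
```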

Phase Characteristics of Approximated Head-related Transfer Functions (HRTFs) Using IIR Filters on Sound Localization

  • Kanazawa, Kenichi;Hasegawa, Hiroshi;Kasuga, Masao;Matsumoto, Shuichi;Koike, Atsushi;Yamamoto, Hideo
    • Proceedings of the IEEK Conference / 2000.07a / pp.237-240 / 2000
  • We have proposed a simple method based on IIR filters for realizing sound image localization. However, the nonlinear phase characteristics of the IIR filters used for sound image localization decrease the localization accuracy. In this paper, we investigate the influence of these phase characteristics on sound localization. Head-related transfer functions (HRTFs) of a dummy head are approximated by IIR filters. We carried out a sound image localization experiment with two-loudspeaker reproduction using the approximated HRTFs. The errors obtained from the experiments were then compared with theoretical values estimated from the phase shifts of the IIR filters. As a result, the nonlinear phase characteristics of the IIR filters had little influence on localization in the horizontal plane.
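The phase issue the abstract refers to can be inspected numerically: compare an IIR filter's unwrapped phase with the best-fitting linear phase (a pure delay) and examine the residual. The low-order Butterworth filter below is an arbitrary stand-in, not a fitted HRTF.

```python
# Measure how far an IIR filter's phase deviates from linear phase, the kind
# of nonlinearity the paper relates to localization error. The filter is an
# arbitrary low-order IIR, not an approximated HRTF.
import numpy as np
from scipy.signal import freqz, group_delay, iirfilter

b, a = iirfilter(4, 0.3, btype="low", ftype="butter")
w, h = freqz(b, a, worN=1024)
phase = np.unwrap(np.angle(h))

# best-fitting linear phase (pure delay plus constant offset)
linear_fit = np.polyval(np.polyfit(w, phase, 1), w)
phase_error = phase - linear_fit            # deviation from linear phase, radians

w_gd, gd = group_delay((b, a), w=1024)      # group delay in samples
print("max deviation from linear phase (rad):", np.max(np.abs(phase_error)))
```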

Synthesis of 3D Sound Movement by Embedded DSP

  • Komata, Shinya;Sakamoto, Noriaki;Kobayashi, Wataru;Onoye, Takao;Shirakawa, Isao
    • Proceedings of the IEEK Conference / 2002.07a / pp.117-120 / 2002
  • A single-DSP implementation of 3D sound movement is described. Using a real-time 3D acoustic image localization algorithm, an efficient approach is devised for synthesizing 3D sound movement by interpolating only two parameters, "delay" and "gain". Based on this algorithm, real-time 3D sound synthesis is performed on a commercially available 16-bit fixed-point DSP with a computational load of 65 MIPS and a memory space of 9.6k words, which demonstrates that the algorithm can be used even for mobile applications.
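The "delay and gain only" interpolation can be sketched block by block: each block is written into the output at a delay and gain linearly interpolated between the start and end positions. Integer delays, linear ramps, and the block size are simplifications of the fixed-point DSP implementation.

```python
# Move a sound image by interpolating only delay and gain between two
# positions, block by block (integer delays and linear ramps are
# simplifications; the paper targets a 16-bit fixed-point DSP).
import numpy as np

def render_movement(x, fs, delay_start, delay_end, gain_start, gain_end, block=256):
    y = np.zeros(len(x) + int(max(delay_start, delay_end) * fs) + block)
    n_blocks = len(x) // block
    for b in range(n_blocks):
        frac = b / max(n_blocks - 1, 1)
        gain = (1 - frac) * gain_start + frac * gain_end                 # interpolated gain
        delay = int(((1 - frac) * delay_start + frac * delay_end) * fs)  # in samples
        start = b * block
        y[start + delay:start + delay + block] += gain * x[start:start + block]
    return y

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
moved = render_movement(tone, fs, delay_start=0.001, delay_end=0.01,
                        gain_start=1.0, gain_end=0.3)
```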

Low Power DSP Implementation of 3D Sound Localization

  • Sakamoto, Noriaki;Kobayashi, Wataru;Onoye, Takao;Shirakawa, Isao
    • Proceedings of the IEEK Conference / 2000.07a / pp.253-256 / 2000
  • This paper describes a DSP implementation of a real-time 3D sound localization algorithm on a low-power embedded DSP. A distinctive feature of this implementation is that the audible frequency band is divided into three subbands, in accordance with the sound reflection and diffraction phenomena through different media from a sound source to the human ears, and a specific implementation procedure for 3D sound localization is devised in each subband so as to operate in real time at a low clock frequency of 50 MHz on a 16-bit fixed-point DSP. Thus our DSP implementation can provide a listener with 3D sound effects through headphones at low cost and low power consumption.
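The three-way band split can be sketched with ordinary low-, band-, and high-pass filters at assumed crossover frequencies; the per-band processing is reduced here to placeholder gains, where the real algorithm would apply band-specific localization cues.

```python
# Split the audible band into three subbands (assumed crossovers at 500 Hz and
# 4 kHz) as a stand-in for per-band 3D localization processing; the per-band
# gains below are placeholders for band-specific cues.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
low_sos = butter(4, 500, btype="low", fs=fs, output="sos")
mid_sos = butter(4, [500, 4000], btype="band", fs=fs, output="sos")
high_sos = butter(4, 4000, btype="high", fs=fs, output="sos")

def localize_subbands(x, band_gains=(1.0, 0.8, 0.5)):
    bands = [sosfilt(sos, x) for sos in (low_sos, mid_sos, high_sos)]
    # a real implementation would apply band-specific delay/level/filter cues here
    return sum(g * b for g, b in zip(band_gains, bands))

noise = np.random.randn(fs)        # one second of test noise
rendered = localize_subbands(noise)
```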

3D Acoustic Image Localization Algorithm by Embedded DSP

  • Kobayashi, Wataru;Sakamoto, Noriaki;Onoye, Takao;Shirakawa, Isao
    • Proceedings of the IEEK Conference / 2000.07a / pp.264-267 / 2000
  • This paper describes a real-time 3D sound localization algorithm implemented on a low-power embedded DSP. The algorithm first divides the audible frequency band into three subbands, based on an analysis of the sound reflection and diffraction effects through different media from a sound source to the human ears, and then devises a specific procedure for 3D sound localization in each subband so as to operate in real time on a low-power embedded DSP. The algorithm aims to provide a listener with 3D sound effects through headphones at low cost and low power consumption.

Aurally Relevant Analysis by Synthesis - VIPER a New Approach to Sound Design -

  • Daniel, Peter;Pischedda, Patrice
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2003.05a / pp.1009-1009 / 2003
  • VIPER, a new tool for the VIsual PERception of sound quality and for sound design, is presented. A requirement for visualizing sound quality is a signal analysis that models the information processing of the ear. The first step of the signal processing implemented in VIPER calculates an auditory spectrogram with a filter bank adapted to the time and frequency resolution of the human ear. The second step removes redundant information by extracting time and frequency contours from the auditory spectrogram, in analogy to the contours of the visual system. In a third step, the contours and/or the auditory spectrogram can be resynthesized, confirming that only aurally relevant information was extracted. The visualization of the contours in VIPER makes it possible to intuitively grasp the important components of a signal. The contributions of parts of a signal to the overall quality can easily be auralized by editing and resynthesizing the contours or the underlying auditory spectrogram. Resynthesis of the time contours alone allows, for example, impulsive components to be auralized separately from tonal components. Further processing of the contours determines tonal parts in the form of tracks. Audible differences between two versions of a sound can be inspected visually in VIPER with the help of auditory distance spectrograms. Applications are shown for the sound design of several car interior noises.
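The general analysis-then-contour-extraction idea can be loosely imitated with an ordinary spectrogram and per-frame peak picking as crude frequency contours; VIPER's ear-model filter bank and its resynthesis stage are omitted, and all parameters below are assumptions.

```python
# Loose imitation of an "analysis then contour extraction" front end: an STFT
# spectrogram plus per-frame spectral peak picking as crude frequency contours
# (VIPER's auditory filter bank and resynthesis are not reproduced here).
import numpy as np
from scipy.signal import find_peaks, spectrogram

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)

freqs, times, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)

contours = []
for frame in S.T:                               # one spectrum per time frame
    peaks, _ = find_peaks(frame, height=frame.max() * 0.1)
    contours.append(freqs[peaks])               # prominent component frequencies

print(contours[0])                              # expect bins near 440 Hz and 1320 Hz
```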
