• Title/Summary/Keyword: Sound Signal Generation


A comprehensive design cycle for car engine sound: from signal processing to software component to be integrated in the audio system of the vehicle

  • Orange, Francois; Boussard, Patrick
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2012.04a / pp.208-209 / 2012
  • This paper describes a comprehensive process and range of design tools and components for providing improved perception of engine sound in mass-production vehicles through the generation of finely tuned engine harmonics.

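The entry above concerns generating finely tuned engine harmonics; its actual toolchain is not detailed here, but the underlying idea can be sketched as additive synthesis of engine orders. Everything below (function name, orders, gains) is illustrative, not the authors' implementation:

```python
import numpy as np

def engine_harmonics(rpm, orders, gains, fs=44100, duration=2.0):
    """Synthesize engine-order harmonics for a constant engine speed.

    rpm    : engine speed in revolutions per minute
    orders : engine orders to synthesize (order 2 is the firing
             frequency of a 4-cylinder, 4-stroke engine)
    gains  : linear amplitude per order
    """
    t = np.arange(int(fs * duration)) / fs
    f0 = rpm / 60.0                      # rotation frequency in Hz
    signal = np.zeros_like(t)
    for order, gain in zip(orders, gains):
        signal += gain * np.sin(2 * np.pi * order * f0 * t)
    return signal / max(np.max(np.abs(signal)), 1e-12)  # normalize

# Dominant orders of a 4-cylinder engine at 3000 rpm, falling gains:
s = engine_harmonics(3000, orders=[2, 4, 6], gains=[1.0, 0.5, 0.25])
```

In a production tool the gains and orders would be tuned per operating point (rpm, load) rather than fixed as here.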

Study on the Development of Integrated Vibration and Sound Generator (휴대폰용 일체형 음향 및 진동 발생장치 개발을 위한 연구)

  • 신태명; 안진철
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.13 no.11 / pp.875-881 / 2003
  • The received signal of a mobile phone is normally sensed through two independent means: sound generation by a speaker and vibration generation by a vibration motor. To meet consumer demand for lighter and smaller mobile phones, this research designs and develops an integrated vibration and sound generating device. To this end, the optimal shapes of the voice coil, the permanent magnet, and the vibration plate are designed, and the excitation force applied to the vibration system of the new device is estimated and verified through theoretical analysis, computer simulation, and experiments on an expanded model. In addition, the vibration performance of the device is compared with that of an existing vibration motor; from this overall process, a method and procedure for analyzing the vibration performance of an integrated vibration and sound generating device are established.

Software-based Simple Lock-in Amplifier and Built-in Sound Card for Compact and Cost-effective Terahertz Time-domain Spectroscopy System

  • Yu-Jin Nam; Jisoo Kyoung
    • Current Optics and Photonics / v.7 no.6 / pp.683-691 / 2023
  • A typical terahertz time-domain spectroscopy system requires large, expensive, and heavy hardware such as a lock-in amplifier and a function generator. In this study, we replaced the lock-in amplifier and the function generator with a single sound card built into a typical desktop computer to significantly reduce the system's size, weight, and cost. The sound card serves two purposes: generating the 1 kHz chopping signal and acquiring the raw data. A custom software lock-in method (a Python program that removes noise from the raw data) was developed and successfully extracted THz time-domain signals with a signal-to-noise ratio of ~40,000 (the intensity ratio between the peak and the average noise level). The built-in sound card with the software lock-in method showed performance comparable to the hardware-based method.
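The abstract above describes replacing a hardware lock-in amplifier with software demodulation of the sound-card signal. The lock-in principle — multiply the raw signal by a reference at the chopping frequency and low-pass the quadrature products — can be sketched as follows. This is a minimal NumPy illustration, not the authors' actual program:

```python
import numpy as np

def software_lockin(raw, fs, f_ref=1000.0, tau=0.05):
    """Software lock-in: demodulate `raw` at f_ref and low-pass the
    quadrature products with a moving average of length tau seconds."""
    t = np.arange(len(raw)) / fs
    x = raw * np.cos(2 * np.pi * f_ref * t)   # in-phase product
    y = raw * np.sin(2 * np.pi * f_ref * t)   # quadrature product
    n = max(int(tau * fs), 1)
    kernel = np.ones(n) / n                   # moving-average low-pass
    X = np.convolve(x, kernel, mode='same')
    Y = np.convolve(y, kernel, mode='same')
    return 2 * np.sqrt(X**2 + Y**2)           # recovered amplitude

# A 0.3-amplitude tone at the chopping frequency, buried in noise:
rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
sig = 0.3 * np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.standard_normal(fs)
amp = software_lockin(sig, fs)                # stays near 0.3
```

The narrow effective bandwidth of the averaging filter is what rejects noise away from the reference frequency, just as in a hardware lock-in.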

A Study on the Design of Inaudible Acoustic Signal in Acoustic Communications and Positioning System (음향 통신 및 위치측정 시스템에서의 비가청 음향 신호 설계에 관한 연구)

  • Oh, Jongtaek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.191-197 / 2017
  • With the ubiquitous use of smartphones, many smartphone applications have been developed; in particular, data communication and position measurement technologies requiring no additional equipment have been developed using acoustic signals. However, the smartphone hardware limits the choice of acoustic signal frequency, and because of non-linearity in the electronic circuits of sound generation devices, audible sound from the speaker is unavoidable. This causes critical difficulty for commercial system deployment. In this paper, a simulation technique that calculates the power of the acoustic signal audible to humans is applied to several types of acoustic signals to evaluate their loudness. These results can be consulted when designing acoustic communication or positioning systems so that the signals remain inaudible to humans.
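The abstract does not specify which loudness-simulation technique is used, but a common stand-in for estimating how audible a tone is to humans is the IEC 61672 A-weighting curve. A minimal sketch, under that assumption:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz): a rough model
    of human sensitivity to a tone, referenced to 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0    # +2.0 dB normalizes 1 kHz to 0

print(round(a_weighting_db(1000), 1))     # ≈ 0.0
print(round(a_weighting_db(19000), 1))    # negative: attenuated vs. 1 kHz
```

Note that A-weighting understates the steep rise of the hearing threshold near 20 kHz, which is one reason a dedicated simulation (as in the paper) is preferable for near-ultrasonic signal design.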

Enhanced Sound Signal Based Sound-Event Classification (향상된 음향 신호 기반의 음향 이벤트 분류)

  • Choi, Yongju; Lee, Jonguk; Park, Daihee; Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering / v.8 no.5 / pp.193-204 / 2019
  • The explosion of data resulting from improvements in sensor technology and computing performance has become the basis for analyzing situations in industrial fields, and attempts to detect events from such data are increasing. In particular, sound signals collected from sensors are used as important information for classifying events in various application fields, since they allow field information to be collected efficiently at relatively low cost. However, the performance of sound-event classification in the field cannot be guaranteed unless noise is removed; to implement a practically applicable system, robust performance must be guaranteed under various noise conditions. In this study, we propose a system that classifies sound events after generating an enhanced sound signal with a deep learning algorithm. Specifically, to remove noise from the sound signal itself, enhanced sound data is generated using SEGAN, which applies a VAE technique to a GAN structure. Then, an end-to-end sound-event classification system is designed that feeds the enhanced sound signal directly into a CNN without a data conversion step. The performance of the proposed method was verified experimentally using sound data obtained from industrial fields, yielding F1 scores of 99.29% (railway industry) and 97.80% (livestock industry).
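The paper's full pipeline (SEGAN enhancement followed by a trained CNN) is far larger than fits here, but the end-to-end idea — raw waveform in, class scores out, with no hand-crafted feature conversion — can be sketched with a toy, untrained forward pass. All layer sizes and weights below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, stride=4):
    """Strided valid 1-D convolution; w has shape (out_channels, kernel)."""
    k = w.shape[1]
    n = (len(x) - k) // stride + 1
    out = np.empty((w.shape[0], n))
    for i in range(n):
        out[:, i] = w @ x[i * stride : i * stride + k]
    return np.maximum(out, 0.0)               # ReLU

def classify(waveform, n_classes=3):
    """Toy end-to-end model: raw samples -> conv -> global average
    pooling -> linear layer -> unnormalized class scores."""
    w1 = rng.standard_normal((8, 64)) * 0.1   # 8 filters (random, untrained)
    h = conv1d(waveform, w1)                  # (8, frames)
    feat = h.mean(axis=1)                     # global average pooling
    w2 = rng.standard_normal((n_classes, 8)) * 0.1
    return w2 @ feat

scores = classify(rng.standard_normal(16000))  # one second at 16 kHz
```

In the paper the enhanced (denoised) waveform from SEGAN, not the noisy one, is what enters such a classifier.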

Underwater Acoustic Research Trends with Machine Learning: Passive SONAR Applications

  • Yang, Haesang; Lee, Keunhwa; Choo, Youngmin; Kim, Kookhyun
    • Journal of Ocean Engineering and Technology / v.34 no.3 / pp.227-236 / 2020
  • Underwater acoustics, the domain that addresses phenomena related to the generation, propagation, and reception of sound waves in water, has been applied mainly in research on sound navigation and ranging (SONAR) systems for underwater communication, target detection, investigation of marine resources, environment mapping, and the measurement and analysis of sound sources in water. The main objective of remote sensing based on underwater acoustics is to indirectly acquire information on underwater targets of interest from acoustic data. Meanwhile, highly advanced data-driven machine-learning techniques are being used in various ways to extract information from acoustic data. The related theoretical background is introduced in the first part of this paper (Yang et al., 2020). This paper reviews machine-learning applications in passive SONAR signal-processing tasks, including target detection/identification and localization.

Directional Characteristics of Parametric Loudspeakers in Near-field (파라메트릭 스피커의 근접음장 방향성 특성연구)

  • Ju, Hyeong-Sick; Kim, Yang-Hann
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2005.11a / pp.545-550 / 2005
  • A parametric loudspeaker is a device that generates highly directional sound using ultrasound. Because a parametric loudspeaker can be used to focus sound within a limited space, it is important to study its characteristics in the near field. The mechanism of audible sound generation in a parametric loudspeaker is explained by nonlinear interaction of the ultrasonic waves and is modeled by the KZK equation, a nonlinear wave equation that accounts for attenuation, nonlinearity, and diffraction. To measure the directional characteristics of the parametric loudspeaker precisely, a method was devised to reduce the spurious signal that taints the measured signal. With this method, directivity patterns of the parametric loudspeaker were measured and compared with the approximate solution and with piston sources.

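The KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation cited in the abstract above is commonly written for the sound pressure p, propagation coordinate z, and retarded time τ as:

```latex
\frac{\partial^2 p}{\partial z\,\partial\tau}
= \frac{c_0}{2}\,\nabla_\perp^2 p
+ \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial\tau^3}
+ \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial\tau^2}
```

The three right-hand terms model diffraction, thermoviscous absorption, and nonlinearity, matching the three effects named in the abstract; here c₀ is the small-signal sound speed, δ the diffusivity of sound, β the nonlinearity coefficient, and ρ₀ the ambient density. (This is the standard textbook form; the paper's exact notation may differ.)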

A study on the application of residual vector quantization for vector quantized-variational autoencoder-based foley sound generation model (벡터 양자화 변분 오토인코더 기반의 폴리 음향 생성 모델을 위한 잔여 벡터 양자화 적용 연구)

  • Seokjin Lee
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.243-252 / 2024
  • Among the Foley sound generation models that have recently begun to be studied, sound generation using the Vector Quantized-Variational AutoEncoder (VQ-VAE) structure together with a generation model such as PixelSNAIL is an important research subject. Meanwhile, in the field of deep-learning-based acoustic signal compression, residual vector quantization is reported to be more suitable than the conventional VQ-VAE structure. In this paper, we therefore study whether residual vector quantization can be effectively applied to Foley sound generation. To this end, we apply residual vector quantization to the conventional VQ-VAE-based Foley sound generation model and, in particular, derive a model that remains compatible with existing generation models such as PixelSNAIL and does not increase computational resource consumption. To evaluate the model, an experiment was conducted using DCASE 2023 Task 7 data. The results show that the proposed model improves the Fréchet audio distance by about 0.3. The performance gain was nevertheless limited, which is believed to be due to the reduced time-frequency resolution adopted to avoid increasing computational resource consumption.
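Residual vector quantization, the technique the paper grafts onto the VQ-VAE, can be sketched in a few lines: each stage quantizes the residual left by the previous stage, and the reconstruction is the sum of the chosen codewords. The codebooks below are random for illustration; in practice they are learned:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization of one vector x.

    Returns the per-stage codeword indices and the reconstruction
    (the sum of the selected codewords)."""
    residual = x.astype(float)
    indices, recon = [], np.zeros_like(residual)
    for cb in codebooks:                        # cb: (n_codes, dim)
        i = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(i)                       # nearest codeword
        recon += cb[i]
        residual = residual - cb[i]             # pass leftover onward
    return indices, recon

rng = np.random.default_rng(1)
x = rng.standard_normal(4)
codebooks = [rng.standard_normal((16, 4)) for _ in range(3)]
idx, recon = rvq_encode(x, codebooks)
err = float(np.linalg.norm(x - recon))          # residual after 3 stages
```

With learned codebooks each stage shrinks the residual, so a few small codebooks can match the fidelity of one very large single-stage codebook.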

Headphone-based multi-channel 3D sound generation using HRTF (HRTF를 이용한 헤드폰 기반의 다채널 입체음향 생성)

  • Kim Siho; Kim Kyunghoon; Bae Keunsung; Choi Songin; Park Manho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.1 / pp.71-77 / 2005
  • In this paper we implement a headphone-based 5.1-channel three-dimensional (3D) sound generation system using the HRTF (Head-Related Transfer Function). Each mono source in the 5.1-channel signal is localized at its virtual location by binaural filtering with the corresponding HRTFs, and a reverberation effect is added for spatialization. To reduce the computational burden, we reduce the number of taps in the HRTF impulse response and model the early reverberation with several tens of impulses extracted from the full impulse sequences. We modified the HRTF spectrum by weighting the front-back spectral difference to reduce the front-back confusion caused by a non-individualized HRTF database. An informal listening test confirmed that the implemented 3D sound system generates lively and rich 3D sound compared with simple stereo or 2-channel down-mixing.
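The core binaural-filtering step — convolving a mono source with the left- and right-ear head-related impulse responses (HRIRs) for its virtual direction — can be sketched as follows. The toy HRIRs here are illustrative only; a real system, like the paper's, uses a measured HRTF database:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Localize a mono source by convolving it with the HRIRs of the
    desired direction; returns a (2, n) stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs for a source on the listener's left: the right-ear path is
# delayed (interaural time difference) and attenuated (head shadow).
hrir_l = np.zeros(32); hrir_l[0] = 1.0
hrir_r = np.zeros(32); hrir_r[20] = 0.5      # ~0.45 ms delay at 44.1 kHz
mono = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100)
out = binaural_render(mono, hrir_l, hrir_r)
```

Truncating the HRIR tap count, as the paper does, directly shortens these convolutions and hence the per-channel cost.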

Vibration Stimulus Generation using Sound Detection Algorithm for Improved Sound Experience (사운드 실감성 증진을 위한 사운드 감지 알고리즘 기반 촉각진동자극 생성)

  • Ji, Dong-Ju; Oh, Sung-Jin; Jun, Kyung-Koo; Sung, Mee-Young
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.158-162 / 2009
  • Sound effects accompanied by appropriate tactile stimuli can strengthen realism; for example, gunfire in games and movies is more impressive when accompanied by vibration effects. On the same principle, adding vibration information to existing sound files and generating vibration effects through haptic interfaces during playback can augment the sound experience. In this paper, we propose a method to generate vibration information by analyzing the sound. The vibration information consists of vibration patterns and their timing within a sound file. Adding this information manually is labor-intensive, so we propose a sound detection algorithm that searches for the moments when specific sounds occur in a sound file, together with a method to create vibration effects at those moments. The detection algorithm compares the frequency characteristics of specific sounds and finds moments with similar frequency characteristics within a sound file; its detection ratio was 98% for five kinds of gunfire. We also developed a GUI-based vibration pattern editor to easily perform the sound search and vibration generation.

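The frequency-characteristic comparison described above can be sketched as template matching on magnitude spectra: frames whose spectrum is sufficiently similar (by cosine similarity) to the template sound's spectrum are flagged as event moments. All parameters below (frame size, hop, threshold) are illustrative, not the paper's settings:

```python
import numpy as np

def detect_events(signal, template, fs, frame=1024, hop=512, thresh=0.9):
    """Return the times (s) of frames whose magnitude spectrum has
    cosine similarity above `thresh` with the template's spectrum."""
    ref = np.abs(np.fft.rfft(template[:frame]))
    ref = ref / np.linalg.norm(ref)
    times = []
    for start in range(0, len(signal) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame]))
        norm = np.linalg.norm(spec)
        if norm == 0:
            continue                          # skip silent frames
        if float(spec @ ref) / norm > thresh:  # cosine similarity
            times.append(start / fs)
    return times

# A toy "gunshot": a decaying 600 Hz burst placed inside silence.
fs = 8000
t = np.arange(1024) / fs
bang = np.sin(2 * np.pi * 600 * t) * np.exp(-t * 30)
sig = np.zeros(fs * 2)
sig[4096:4096 + 1024] += bang
hits = detect_events(sig, bang, fs)           # includes ~0.512 s
```

The detected times would then drive the vibration-pattern playback; matching on magnitude spectra makes the search insensitive to the exact phase alignment of the event.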