• Title / Abstract / Keyword: Voice, Sound

Search Results: 336

Development of a sow voice analysis system for forecasting parturition time (임신돈의 분만시기 예측을 위한 음성 분석 시스템 개발)

  • Chang, Dong Il;Lim, Zung Taek
    • Korean Journal of Agricultural Science
    • /
    • v.27 no.2
    • /
    • pp.107-116
    • /
    • 2000
  • Voice characteristics of sows were analyzed to predict parturition time. Recordings were analyzed with an oscilloscope and Sound Forge, and the results showed that the frequency and amplitude of sow vocalizations were in the ranges of 30~2,500 Hz and -35~-75 dB. According to the sound analysis, about 85% of all vocalizations recorded during the eight days before delivery occurred within the three days prior to the delivery day, and about 46% within the eight hours prior to delivery. Forecasting the delivery time from the number of vocalizations showed promise, since the count increased as delivery approached. The forecasting success rate was 100% for both one day and six hours prior to the actual delivery. (A minimal event-counting sketch follows this entry.)

  • PDF
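The approach described above amounts to counting vocalization events that fall in the reported 30~2,500 Hz band and watching the count rise toward parturition. The sketch below illustrates that idea only; the filter order, window length, and detection threshold are assumptions (the threshold is a placeholder within the reported -75 to -35 dB range), and the function is hypothetical rather than taken from the paper.

```python
# Hypothetical sketch: count sow vocalization events by band-pass filtering
# to the 30-2,500 Hz range reported in the abstract and thresholding the
# short-time level in dB relative to full scale. Not the authors' code.
import numpy as np
from scipy.signal import butter, filtfilt

def count_vocalizations(x, fs, band=(30.0, 2500.0), thresh_db=-60.0, win_s=0.05):
    """Return the number of detected vocalization events in signal x."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)

    win = int(win_s * fs)
    n = len(y) // win
    frames = y[: n * win].reshape(n, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    level_db = 20.0 * np.log10(rms)

    active = level_db > thresh_db
    # Count rising edges so one long call is one event, not many windows.
    return int(np.sum(np.diff(active.astype(int)) == 1))

# Usage (assumed 16 kHz mono recording scaled to [-1, 1]):
# n_events = count_vocalizations(x, fs=16000)
```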

Analysis of Subjective Sound Quality Characteristics for the HVAC using the Design of Experiments : Sharp, Annoy (실험계획법을 이용한 차량공조시스템의 음질 특성 분석)

  • Yun, Tae-Kun;Sim, Hyun-Jin;Lee, Jung-Youn;Oh, Jae-Eung;Kim, Sung-Soo
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2005.05a
    • /
    • pp.634-637
    • /
    • 2005
  • Because human hearing is sensitive and complex, a subjective index is required to evaluate sound quality appropriately in each situation. Looking only at the overall level in the frequency range of interest, however, does not reveal which parts of the spectrum drive the evaluation. In this study, design of experiments is used: the frequency range is divided into 12 bands, and sound quality, expressed as sharpness and annoyance, is evaluated while the level of each band is modified (increased, decreased, or left unchanged). The design-of-experiments approach greatly reduces the number of required experiments, and main-effect analysis of each band shows what kind of influence its level change has on sharpness and annoyance, so the frequency bands that govern the perceived sound quality can be identified. With these results, physical level changes in an arbitrary frequency band can be related to listener sensitivity. (A small main-effects sketch is given after this entry.)

  • PDF
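A minimal sketch of the main-effect analysis described above, under assumptions: each of the 12 frequency bands is treated as a two-level factor (+1 = level raised, -1 = lowered) and the response is a subjective rating such as annoyance. The design matrix and ratings are random placeholders, not the paper's design or data.

```python
# Main-effect screening for 12 frequency-band factors (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_runs = 12, 24

X = rng.choice([-1, 1], size=(n_runs, n_bands))   # stand-in for a fractional design
y = rng.normal(size=n_runs)                        # stand-in for subjective ratings

# Main effect of band j: mean response when raised minus mean when lowered.
main_effects = np.array([
    y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
    for j in range(n_bands)
])

ranking = np.argsort(-np.abs(main_effects))
print("Bands ranked by influence on the rating:", ranking)
```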

An ACLMS-MPC Coding Method Integrated with ACFBD-MPC and LMS-MPC at 8kbps bit rate. (8kbps 비트율을 갖는 ACFBD-MPC와 LMS-MPC를 통합한 ACLMS-MPC 부호화 방식)

  • Lee, See-woo
    • Journal of Internet Computing and Services
    • /
    • v.19 no.6
    • /
    • pp.1-7
    • /
    • 2018
  • This paper presents an 8 kbps ACLMS-MPC (Amplitude Compensation and Least Mean Square - Multi Pulse Coding) method that integrates ACFBD-MPC (Amplitude Compensation Frequency Band Division - Multi Pulse Coding) and LMS-MPC (Least Mean Square - Multi Pulse Coding). It uses V/UV/S (Voiced/Unvoiced/Silence) switching, amplitude compensation of the multi-pulses in each pitch interval, and approximate synthesis of unvoiced segments using specific frequencies in order to reduce distortion of the synthesized waveform. When integrating these methods, it is important to keep the bit rate of the voiced and unvoiced excitation at 8 kbps while reducing the distortion of the speech waveform. With the bit rate held at 8 kbps, the speech waveform can be synthesized efficiently by restoring individual pitch intervals from the multi-pulses of a representative interval. The ACLMS-MPC method was implemented and its SNR was evaluated under 8 kbps coding conditions. The SNR of ACLMS-MPC was 15.0 dB for female voices and 14.3 dB for male voices, an improvement of 0.3~1.8 dB for male voices and 0.3~1.6 dB for female voices over the existing MPC, ACFBD-MPC, and LMS-MPC. The method is expected to apply to low-bit-rate speech coding such as cellular or Internet telephony. Future work will evaluate the sound quality of a 6.9 kbps coding method that compensates the amplitude and position of the multi-pulse excitation simultaneously. (A short SNR sketch follows this entry.)
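The SNR figures quoted above compare the original and synthesized waveforms. A minimal sketch of such a computation is shown below; the exact framing or weighting used in the paper is not reproduced, and the toy signals are placeholders.

```python
# Overall waveform SNR in dB: original signal energy over synthesis-error energy.
import numpy as np

def snr_db(original: np.ndarray, synthesized: np.ndarray) -> float:
    error = original - synthesized
    return 10.0 * np.log10(np.sum(original ** 2) / (np.sum(error ** 2) + 1e-12))

# Usage with a toy tone and a slightly distorted copy (placeholder data):
t = np.linspace(0, 1, 8000, endpoint=False)
x = np.sin(2 * np.pi * 200 * t)
x_hat = x + 0.05 * np.random.default_rng(0).normal(size=x.size)
print(f"SNR = {snr_db(x, x_hat):.1f} dB")
```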

The Correlation between Speech Intelligibility and Acoustic Measurements in Children with Speech Sound Disorders (말소리장애 아동의 말명료도와 음향학적 측정치 간 상관관계)

  • Kang, Eunyeong
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.6 no.4
    • /
    • pp.191-206
    • /
    • 2018
  • Purpose : This study investigated the correlation between speech intelligibility and acoustic measurements of speech sounds produced by children with speech sound disorders and by children without any diagnosed speech sound disorder. Methods : A total of 60 children with and without speech sound disorders participated. Speech samples were obtained by having the subjects speak meaningful words. Acoustic measurements were analyzed on a spectrogram using the Multi-Speech 3700 program, and speech intelligibility was determined by listeners' perceptual judgment. Results : Children with speech sound disorders had significantly lower speech intelligibility than those without. The intensity of the vowel /u/, the duration of the vowel /ɯ/, and the second formant of the vowel /ɯ/ differed significantly between the groups, while voice onset time did not. Acoustic measurements correlated with speech intelligibility. Conclusion : The speech intelligibility of children with speech sound disorders was affected by intensity, word duration, and formant frequency, so it is useful to complement perceptual evaluation of speech intelligibility with acoustic measurements in clinical settings. (A correlation sketch is given after this entry.)
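A minimal sketch of the correlation analysis described above: Pearson correlation between per-child intelligibility scores and acoustic measures such as vowel intensity, word duration, and second formant. All variable names and values are placeholders, not the study's data.

```python
# Pearson correlation between intelligibility and acoustic measures (toy data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_children = 60

intelligibility = rng.uniform(40, 100, n_children)           # percent correct
acoustic = {
    "vowel_intensity_dB": intelligibility * 0.3 + rng.normal(0, 3, n_children),
    "word_duration_ms":  -intelligibility * 2.0 + rng.normal(600, 40, n_children),
    "F2_Hz":              intelligibility * 5.0 + rng.normal(1200, 80, n_children),
}

for name, values in acoustic.items():
    r, p = pearsonr(intelligibility, values)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```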

The Sound Analysis of <Tale of Tales> (<이야기 속의 이야기> 사운드 분석)

  • Mok, Hae-Jung
    • Cartoon and Animation Studies
    • /
    • s.20
    • /
    • pp.87-104
    • /
    • 2010
  • Animation, like film, creates meaning and emotion by combining image and sound. <Tale of Tales>, directed by Yuri Norstein, is a good text for analyzing animation sound in that it combines image with varied music and sound effects. This study focuses on how sound functions to create meaning in the film. Sound is generally categorized into dialogue, music, and sound effects, and animation has its own characteristics in each category: the voice for dialogue is created to match the image of the character, rhythm is especially important, and sound effects in animation can be said to mimic not just sound but also movement. The analysis is based on these three sound elements and on the concepts of point of listening, subjective sound, and the sound bridge. Subjective sound built on the point of listening of the wolf and the baby gives the main characters a special position in the text. The repetitive combination of sound and image, the linguistic and annotative function of sound effects, and the comparatively conventional use of music and sound effects together heighten the film's emotional impact and readability.

  • PDF

A Study on the Gender and Age Classification of Speech Data Using CNN (CNN을 이용한 음성 데이터 성별 및 연령 분류 기술 연구)

  • Park, Dae-Seo;Bang, Joon-Il;Kim, Hwa-Jong;Ko, Young-Jun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.11-21
    • /
    • 2018
  • This study categorizes voices using deep learning. It reviews neural-network-based sound classification studies and proposes an improved neural network for voice classification. Related work addressed urban sound classification but showed poor performance with a shallow network. In this paper, the voice data are first preprocessed and feature values are extracted; the features are then fed both into the previous sound classification network and into the proposed network, and the classification performance of the two networks is compared. The proposed network is organized deeper and wider so that learning proceeds better. Classification accuracy was 84.8 percent for the network from related studies and 91.4 percent for the proposed network, about 6 percentage points higher. (A minimal CNN sketch is given after this entry.)
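A minimal sketch of a CNN voice classifier of the kind described above. The architecture, the input shape (e.g. 40 MFCCs x 128 frames), and the number of classes (2 genders x 4 age bands = 8) are assumptions for illustration, not the network proposed in the paper.

```python
# Toy CNN over spectral features for joint gender/age-group classification.
import torch
import torch.nn as nn

class VoiceCNN(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, 40, 128) feature maps
        return self.classifier(self.features(x).flatten(1))

# Usage with a random batch standing in for MFCC features:
model = VoiceCNN()
logits = model(torch.randn(4, 1, 40, 128))
print(logits.shape)                        # torch.Size([4, 8])
```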

A Study on the Adequacy of a Pistol Sound Source for Spatial Evaluation (공간평가를 위한 피스톨음원의 적정성에 관한 연구)

  • Shon, Jang-Ryul;Kim, Jung-Joong
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.15 no.3 s.96
    • /
    • pp.320-328
    • /
    • 2005
  • The ultimate goal of architectural acoustics is to deliver sound effectively in spaces suited to their intended purpose, which requires methods that quantify how sound propagates through a room and evaluate parameters such as reverberation and definition. This study examines the characteristics of an impulsive source (a pistol shot) and of an MLS signal reproduced through an omnidirectional loudspeaker among indoor acoustic measurement methods, and measures reverberation time (RT60) and definition (C80, D50) according to ISO 3382 in three spaces (a classroom, a hall, and a gymnasium). The RT60 values obtained from the two sources agree within measurement error, but D50 and C80 vary irregularly with the pistol source, which is judged problematic. Deviations also appeared when the measurements were repeated at fixed distances from the source. Finally, when the sound pressure level of the source was varied by about 10 dB, the acoustic parameters showed no noticeable difference. (A sketch of the parameter computation follows this entry.)
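The parameters compared above (RT60, C80, D50) are commonly derived from a measured impulse response; a simplified sketch of that derivation is shown below. It omits the octave-band filtering and averaging required by ISO 3382, and the toy impulse response is a placeholder.

```python
# Simplified room-acoustic parameters from an impulse response h sampled at fs.
import numpy as np

def room_parameters(h: np.ndarray, fs: float):
    e = h ** 2
    # Schroeder backward integration for the energy decay curve, in dB.
    edc = np.cumsum(e[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)

    # RT60 from the -5 dB to -25 dB slope (T20), extrapolated to 60 dB of decay.
    t = np.arange(len(h)) / fs
    i5, i25 = np.argmax(edc_db <= -5), np.argmax(edc_db <= -25)
    slope = (edc_db[i25] - edc_db[i5]) / (t[i25] - t[i5])      # dB per second
    rt60 = -60.0 / slope

    # Clarity C80 and definition D50 from early/late energy ratios.
    n80, n50 = int(0.080 * fs), int(0.050 * fs)
    c80 = 10.0 * np.log10(np.sum(e[:n80]) / np.sum(e[n80:]))
    d50 = np.sum(e[:n50]) / np.sum(e)
    return rt60, c80, d50

# Usage with a toy exponentially decaying noise impulse response (~1 s decay):
fs = 16000
t = np.arange(int(1.0 * fs)) / fs
h = np.random.default_rng(0).normal(size=t.size) * np.exp(-6.9 * t)
print(room_parameters(h, fs))
```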

Study on the Development of Integrated Vibration and Sound Generator (휴대폰용 일체형 음향 및 진동 발생장치 개발을 위한 연구)

  • 신태명;안진철
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.13 no.11
    • /
    • pp.875-881
    • /
    • 2003
  • The received signal of a mobile phone is normally sensed through two independent means: the sound of a speaker and the vibration of a vibration motor. As an improvement to meet consumer demand for weight reduction and miniaturization of mobile phones, this research carries out the design and development of an integrated vibration and sound generating device. For this purpose, the optimal shapes of the voice coil, the permanent magnet, and the vibration plate are designed, and the excitation force applied to the vibration system of the new device is estimated and verified through theoretical analysis, computer simulation, and experiments with an enlarged model. In addition, the vibration performance of the device is compared with that of the existing vibration motor, and through this overall process the method and procedure for analyzing the vibration performance of an integrated vibration and sound generating device are established.

Implementation of an Effective Rule Base System for the Change of Korean Vocal Sound (한국어 음운 변동 처리를 위한 효율적인 Rule Base System의 구성)

  • 이규영;이상범
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.12
    • /
    • pp.9-18
    • /
    • 1991
  • In this paper, a rule-based method for handling Korean phonological change is proposed. The method can be used to bridge the gap between the written symbols (Hangul) and the spoken sounds of Korean in speech processing research. Rules describing phonological change are rearranged into a rule base indexed on final consonants, following the standard pronunciation rules. The proposed rule base system is simplified for implementing the sound changes, and it is useful for building a syllable-level database of phonetic values for Korean speech processing. (A tiny rule-base sketch follows this entry.)

  • PDF
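A tiny illustration of such a final-consonant-indexed rule base: decompose each Hangul syllable into initial/vowel/final jamo arithmetically, apply a rule keyed on the current final and the next initial, and recompose. Only one sample rule (nasal assimilation, e.g. 국물 → 궁물) is shown; the rule set, coverage, and indices are assumptions for illustration, not the paper's rule base.

```python
# Toy pronunciation rule base keyed on final consonants.
BASE = 0xAC00                     # code point of 가

def decompose(syllable: str):
    idx = ord(syllable) - BASE
    return idx // (21 * 28), (idx // 28) % 21, idx % 28   # initial, vowel, final

def compose(initial: int, vowel: int, final: int) -> str:
    return chr(BASE + (initial * 21 + vowel) * 28 + final)

# Jamo indices used by the sample rule (standard Unicode ordering).
FINAL_G_GROUP = {1, 2, 24}        # ㄱ, ㄲ, ㅋ as final consonants
FINAL_NG = 21                     # ㅇ as final consonant
INITIAL_NASALS = {2, 6}           # ㄴ, ㅁ as initial consonants

def apply_nasal_assimilation(word: str) -> str:
    """Final ㄱ/ㄲ/ㅋ becomes ㅇ before an initial ㄴ or ㅁ."""
    jamos = [list(decompose(s)) for s in word]
    for cur, nxt in zip(jamos, jamos[1:]):
        if cur[2] in FINAL_G_GROUP and nxt[0] in INITIAL_NASALS:
            cur[2] = FINAL_NG
    return "".join(compose(*j) for j in jamos)

print(apply_nasal_assimilation("국물"))   # -> 궁물
```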

Research on Machine Learning Rules for Extracting Audio Sources in Noise

  • Kyoung-ah Kwon
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.3
    • /
    • pp.206-212
    • /
    • 2024
  • This study presents five selection rules for training algorithms to extract audio sources from noise. The five rules are Dynamics, Roots, Tonal Balance, Tonal-Noisy Balance, and Stereo Width, and the suitability of each rule for sound extraction was determined by spectrogram analysis using various types of sample sources, such as environmental sounds, musical instruments, and the human voice, as well as white, brown, and pink noise combined with sine waves. The training domain of the algorithm covers both melody and beat, and with these rules the algorithm can analyze which specific audio sources are contained in a given noisy signal and extract them. The results are expected to improve the accuracy of audio source extraction and to enable automated sound clip selection, providing a new methodology for sound processing and audio source generation using noise. (A sketch of two such spectrogram-based measures follows this entry.)
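Two of the rules listed above, Dynamics and Tonal-Noisy Balance, can be approximated from a signal and its spectrogram as crest factor and spectral flatness, respectively. The sketch below uses those stand-in definitions; they are assumptions for illustration, not the metrics defined in the study.

```python
# Approximations of two selection rules: "Dynamics" as crest factor and
# "Tonal-Noisy Balance" as mean spectral flatness (tonal -> low, noisy -> high).
import numpy as np
from scipy.signal import spectrogram

def dynamics(x: np.ndarray) -> float:
    """Crest factor in dB: peak level relative to RMS level."""
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(np.max(np.abs(x)) / (rms + 1e-12))

def tonal_noisy_balance(x: np.ndarray, fs: float) -> float:
    """Mean spectral flatness over time: geometric mean / arithmetic mean."""
    _, _, S = spectrogram(x, fs, nperseg=1024)
    S = S + 1e-12
    flatness = np.exp(np.mean(np.log(S), axis=0)) / np.mean(S, axis=0)
    return float(np.mean(flatness))

# Usage: a sine wave (tonal) versus white noise (noise-like), placeholder signals.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).normal(size=fs)
print(dynamics(tone), tonal_noisy_balance(tone, fs))
print(dynamics(noise), tonal_noisy_balance(noise, fs))
```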