• Title/Summary/Keyword: Speech sound

Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia; Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1590-1609 / 2021
  • Laughter is one of the most important nonverbal sounds that humans generate and a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise in modeling it. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure comprises three main sub-processes: (1) analysis, which extracts the log-magnitude spectrogram from the laughter database; (2) generative model training; and (3) the synthesis stage, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modelling ability of long short-term memory (LSTM) and gated recurrent unit (GRU) networks and the ability of convolutional neural networks (CNN) to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
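
A minimal sketch of the analysis stage described above, assuming librosa for feature extraction; the file name, sample rate, and STFT parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import librosa

# Analysis stage: extract the log-magnitude spectrogram that the
# AE/VAE models are trained on. "laugh.wav" is a hypothetical input.
y, sr = librosa.load("laugh.wav", sr=16000)
stft = librosa.stft(y, n_fft=1024, hop_length=256)   # complex STFT
log_mag = np.log1p(np.abs(stft))                     # log-magnitude features

# Objective evaluation: the paper reports RMSE between reference and
# reconstructed spectrograms; a straightforward definition would be:
def rmse(ref: np.ndarray, est: np.ndarray) -> float:
    return float(np.sqrt(np.mean((ref - est) ** 2)))
```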

Investigation of acoustic performances of the creative convergence classrooms in elementary schools

  • A-Hyeon Jo; Chan-Hoon Haan
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.285-297 / 2023
  • The present study investigates the acoustic performance of creative convergence classrooms in Korea, introduced through the school space innovation project and used by elementary school students under the age of 9. To this end, the acoustic performance of three creative convergence classrooms was measured. The measured parameters were background noise level, Reverberation Time (RT), D50, Speech Transmission Index (STI), and Inter-Aural Cross Correlation (IACC). Transmission Loss (TL) and standardized level difference (DnT) were also measured to analyze the sound insulation performance of the walls. In addition, the noise level was measured under different opening conditions of the classroom doors and windows. As a result, the background noise level averaged 28.0 dB(A) to 32.8 dB(A) when the air conditioner was not operating, and the RT did not exceed 0.6 s. IACC differed across desk layouts, with high values along the center line and at seats near the sound source; in particular, the highest IACC was measured at center-line seats squarely facing the source. Regarding noise levels under the different opening conditions, the standards were exceeded when all windows, or the windows and doors facing the corridor, were open.
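
The paper measures reverberation time directly; for orientation, RT is commonly estimated at the design stage with Sabine's formula, RT60 = 0.161 V / A. A small sketch with invented room dimensions, not the surveyed classrooms:

```python
# Sabine's formula: RT60 = 0.161 * V / A, with V the room volume (m^3)
# and A the total sound absorption (m^2 sabins).
def sabine_rt60(volume_m3: float, absorption_sabins: float) -> float:
    return 0.161 * volume_m3 / absorption_sabins

# Illustrative 7 m x 9 m x 2.7 m classroom with 40 m^2 sabins of absorption:
volume = 7 * 9 * 2.7
print(f"estimated RT60: {sabine_rt60(volume, 40.0):.2f} s")  # ~0.68 s
```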

An Arrangement Method of Voice and Sound Feedback According to the Operation Method: For Interaction with Domestic Appliances

  • Hong, Eun-ji; Hwang, Hae-jeong; Kang, Youn-ah
    • Journal of the HCI Society of Korea / v.11 no.2 / pp.15-22 / 2016
  • The ways of interacting with digital appliances are becoming more diverse. Users can control appliances using a remote control or a touch screen, and appliances can give users feedback in various ways, such as sound, voice, and visual signals. However, there is little research on which output method should provide feedback for a given user input method. In this study, we designed an experiment to identify how to appropriately match the output methods - voice and sound - to the input methods - voice and button. We built four interaction types from the two input methods and two output methods, and compared their usability, perceived satisfaction, preference, and suitability. Results reveal that the output method affects the ease of use and perceived satisfaction of the input method. Voice input with sound feedback was rated more satisfying than voice input with voice feedback, whereas keying input with voice feedback was rated more satisfying than keying input with sound feedback. Keying input was more dependent on the output method than voice input. We also found that the feedback method of an appliance determines the perceived appropriateness of the interaction.

On a Pitch Alteration Method by Time-axis Scaling Compensated with the Spectrum for High Quality Speech Synthesis

  • Bae, Myung-Jin; Lee, Won-Cheol; Im, Sung-Bin
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.89-95 / 1995
  • Waveform coding is concerned with preserving the waveform shape of the speech signal through a redundancy reduction process. In speech synthesis, waveform coding with high sound quality is mainly used for synthesis-by-analysis. However, since its parameters are not separated into excitation and vocal tract parameters, it is difficult to apply waveform coding to synthesis-by-rule. To do so, a pitch alteration technique is required for prosody control. In this paper, we propose a new pitch alteration method that changes the pitch period in waveform coding by scaling the time axis and compensating the spectrum. It is a time-frequency domain method in which the phase components of the waveform are preserved, with a spectral distortion of 2.5 % or less for a 50 % pitch change.
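
A sketch of the core idea, assuming NumPy/SciPy: resampling the waveform scales the time axis, which shifts the pitch by the inverse factor. The paper's spectrum compensation step, which keeps the spectral envelope in place, is omitted here:

```python
import numpy as np
from scipy.signal import resample

def scale_time_axis(x: np.ndarray, alpha: float) -> np.ndarray:
    """Resample x to alpha times its length; pitch scales by 1/alpha."""
    return resample(x, int(len(x) * alpha))

# Example: alpha = 2/3 shortens each pitch period, raising the pitch by
# 50 % (the pitch-change condition evaluated in the paper). Without the
# spectrum compensation, formants shift upward along with the pitch.
```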

A Case of Interpretation for Audiological Evaluation in a Preschool Child with Mild-to-Moderately Severe Asymmetric Ski-Slope Sensorineural Hearing Loss

  • Kim, Na-Yeon; So, Won-Seop; Ha, Ji-Wan; Heo, Seung-Deok
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology / v.11 no.1 / pp.9-14 / 2017
  • Children produce and acquire the phonological system from birth to 8 years of age. A child with hearing loss has great difficulty hearing sounds, and the resulting problems in auditory perception can cause limited speech acquisition, delayed language development, and communication disorders. It also affects learning and social and emotional development. Early detection and diagnosis of hearing loss are therefore important for intervention. However, hearing loss may be difficult to detect if it is slight and/or appears only at some frequencies, and in such cases it is often difficult to provide aural intervention. The goal of this study is to discuss the interpretation of the audiological evaluation in a case of mild-to-moderately severe asymmetric ski-slope sensorineural hearing loss, analyze the communication problems, and consider audiological and speech-language pathological rehabilitation.

Implementation of Korean Vowel 'ㅏ' Recognition based on Common Feature Extraction of Waveform Sequences

  • Roh, Wonbin; Lee, Jongwoo
    • KIISE Transactions on Computing Practices / v.20 no.11 / pp.567-572 / 2014
  • In recent years, computing and networking technologies have developed, communication equipment has become smaller, and mobility has increased. In addition, the demand for easy-to-use speech recognition has grown. This paper proposes a method of recognizing the Korean phoneme 'ㅏ'. A phoneme is the smallest unit of sound and plays a significant role in speech recognition, but precise phoneme recognition faces many obstacles because pronunciation varies widely. We propose a simple and efficient method for recognizing the Korean vowel 'ㅏ' based on common features extracted from 'ㅏ' waveform sequences, which is simpler than previous, more complex methods. The experimental results indicate that the method recognizes 'ㅏ' with more than 90 percent accuracy.
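
The abstract does not specify the extracted common features, so the following is only a hypothetical illustration of how a waveform segment could be scored against a stored template, using normalized cross-correlation:

```python
import numpy as np

def normalized_correlation(segment: np.ndarray, template: np.ndarray) -> float:
    """Score in [-1, 1]; higher means the segment resembles the template."""
    s = (segment - segment.mean()) / (segment.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    n = min(len(s), len(t))
    return float(np.dot(s[:n], t[:n]) / n)

# A segment would be accepted as the vowel when the score exceeds a
# tuned threshold, e.g. normalized_correlation(seg, template) > 0.9.
```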

A Comparative Study of Relative Distances among English Front Vowels Produced by Korean and American Speakers

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.5 no.4 / pp.99-107 / 2013
  • The purpose of this study is to examine the relative distances among English front vowels in a message produced by 47 Korean and American speakers, in order to better teach the pronunciation of English vowels to Korean learners. A Praat script was developed to collect the first and second formant values (F1 and F2) of eight words in each sound file, recorded from an internet speech archive. Euclidean distances were then measured for three vowel pairs: [i-ɛ], [i-ɪ], and [ɛ-æ]. The pair [i-ɛ] was set as the reference, from which the relative distances of the other two pairs were expressed in percent, so that vowel sounds could be compared across speakers with different vocal tract lengths. Results show that the F1 values of the front vowels produced by both the Korean and American speakers increased from the high front vowel to the low front vowel, with differences among the groups. The Korean speakers generally produced the front vowels with smaller jaw openings than the American speakers. Secondly, the relative distance of the high front vowel pair [i-ɪ] differed significantly between the Korean and American speakers, while that of the low front vowel pair [ɛ-æ] did not. Finally, the Korean speakers at the higher proficiency level produced front vowels with higher F1 values than those at the lower level. The author concludes that Korean speakers should produce the high front vowels distinctively by securing a sufficient relative distance between their formant values. Further studies should examine how strongly Korean speakers' English proficiency correlates with the relative distances of comparable productions of the target words.
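
A sketch of the relative-distance computation the abstract describes: Euclidean distances in the (F1, F2) plane, expressed as a percentage of the reference pair [i-ɛ]. The formant values below are invented for illustration:

```python
import numpy as np

def distance(v1, v2):
    """Euclidean distance between two (F1, F2) points in Hz."""
    return float(np.hypot(v1[0] - v2[0], v1[1] - v2[1]))

# Hypothetical (F1, F2) averages in Hz, one point per vowel:
vowels = {"i": (270, 2290), "ɪ": (390, 1990), "ɛ": (530, 1840), "æ": (660, 1720)}

ref = distance(vowels["i"], vowels["ɛ"])     # reference pair [i-ɛ]
for a, b in (("i", "ɪ"), ("ɛ", "æ")):
    rel = 100 * distance(vowels[a], vowels[b]) / ref
    print(f"[{a}-{b}]: {rel:.1f} % of [i-ɛ]")
```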

The Characteristics of the Vocalization of Female News Anchors

  • Kyon, Doo-Heon; Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.30 no.7 / pp.390-395 / 2011
  • This paper studies common voice parameters through analysis of the voices of each station's main female anchors on weekday evening news, together with the differences in voice and sound among stations. Six voice parameters were analyzed: basic pitch, tone of the first formant and pitch ratio, degree of closeness by pitch bandwidth, sentence-closing type based on the average pitch position within the pitch bandwidth, average speech rate, and acoustic tone from the energy distribution across frequency bands. The analysis showed that the anchors of each station had distinctive voice and phonation characteristics in all respects except speech rate, and that the stations' sound systems also differed. The analyzed values and results can serve as reference criteria for the phonation characteristics of Korean female news anchors.
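
A hedged sketch of extracting one of the listed parameters, basic pitch (F0), assuming librosa's pYIN tracker; the file name and pitch search range are assumptions:

```python
import numpy as np
import librosa

y, sr = librosa.load("anchor.wav", sr=None)   # hypothetical recording
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

mean_f0 = np.nanmean(f0)                      # basic pitch
f0_range = np.nanmax(f0) - np.nanmin(f0)      # pitch bandwidth
print(f"mean F0: {mean_f0:.1f} Hz, range: {f0_range:.1f} Hz")
```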

The Comparison of the Acoustic and Aerodynamic Characteristics of PROVOX® Voice and Esophageal Voice Produced by the Same Laryngectomee

  • Pyo, H.Y.; Choi, H.S.; Lim, S.E.; Choi, S.H.
    • Speech Sciences / v.5 no.1 / pp.121-139 / 1999
  • Our experimental subject was a laryngectomee who had undergone total laryngectomy with PROVOX® insertion and had learned esophageal speech after surgery, so he could produce both PROVOX® voice and esophageal voice. With this subject's productions, we compared the acoustic and aerodynamic characteristics of the two voices under the same physical conditions in the same person. As a result, the fundamental frequency of esophageal voice was 137.2 Hz, and that of PROVOX® voice was 97.5 Hz. PROVOX® voice showed lower jitter, shimmer, and NHR than esophageal voice, meaning that PROVOX® voice had the better voice quality. In the spectrographic analysis, formants and pseudoformants were formed more distinctly in esophageal voice, while several temporal acoustic features, such as VOT and closure duration, were closer to normal voice in PROVOX® voice. During sentence utterance, esophageal voice showed longer pause or silence durations than PROVOX® voice. The maximum phonation time and mean flow rate of PROVOX® voice were much longer and larger than those of esophageal voice, but the mean and range of sound pressure level, the subglottic pressure, and the voice efficiency were similar between the two voices. The glottal resistance of esophageal voice was much larger than that of PROVOX® voice, which in turn was larger than that of normal voice.
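
For orientation, local jitter, one of the perturbation measures compared above, is the mean absolute difference between consecutive pitch periods divided by the mean period. A sketch with invented period values:

```python
import numpy as np

def local_jitter(periods: np.ndarray) -> float:
    """Jitter (local): mean |T[i+1] - T[i]| divided by mean T."""
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))

# Illustrative periods (s) around 137 Hz, i.e. T of roughly 7.3 ms:
periods = np.array([7.2, 7.5, 7.1, 7.6, 7.3]) * 1e-3
print(f"jitter (local): {100 * local_jitter(periods):.2f} %")
```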

Implementation of a TTS Engine for Natural Voice

  • Cho Jung-Ho; Kim Tae-Eun; Lim Jae-Hwan
    • Journal of Digital Contents Society / v.4 no.2 / pp.233-242 / 2003
  • A TTS (Text-To-Speech) system is a computer-based system that should be able to read any text aloud. Producing natural-sounding output requires broad linguistic knowledge and a great deal of time and effort. Furthermore, the sound patterns of English are highly variable, and handling them requires phonemic and morphological analysis; maintaining consistent patterns is very difficult. To address these problems, we present a system based on phonemic analysis of vowels and consonants. By analyzing phonological variations frequently found in spoken English, we derived phonemic contexts that trigger the multilevel application of the corresponding phonological processes, which consist of phonemic and allophonic rules. The result is a rule base of phonemes and an engine that economizes system resources. The proposed system can be used not only in communication systems but also in office automation and other applications.
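
A toy illustration of the rule-based letter-to-phoneme idea the abstract describes, using longest-match spelling rules; the rule table is invented and far smaller than the paper's rule data, which also covers allophonic context:

```python
# Invented mini rule table: spelling chunk -> phoneme string.
RULES = {"tion": "SH-AH-N", "ch": "CH", "sh": "SH", "ee": "IY",
         "a": "AE", "e": "EH", "i": "IH", "o": "AA", "u": "AH",
         "t": "T", "n": "N", "s": "S", "m": "M", "c": "K", "h": "HH"}

def to_phonemes(word: str) -> list[str]:
    phones, i = [], 0
    while i < len(word):
        for length in range(min(4, len(word) - i), 0, -1):
            chunk = word[i:i + length]      # try longest chunk first,
            if chunk in RULES:              # so "tion" wins over "t"
                phones.extend(RULES[chunk].split("-"))
                i += length
                break
        else:
            i += 1                          # no rule: skip the letter
    return phones

print(to_phonemes("nation"))  # ['N', 'AE', 'SH', 'AH', 'N'] (toy output)
```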
