• Title/Summary/Keyword: Speech sound


Search for Optimal Data Augmentation Policy for Environmental Sound Classification with Deep Neural Networks (심층 신경망을 통한 자연 소리 분류를 위한 최적의 데이터 증대 방법 탐색)

  • Park, Jinbae;Kumar, Teerath;Bae, Sung-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.854-860
    • /
    • 2020
  • Deep neural networks have shown remarkable performance in various areas, including image classification and speech recognition. The variety of data generated by augmentation plays an important role in improving the performance of a neural network: transforming the data during augmentation exposes the network to more diverse forms of input and thus helps it generalize. In the traditional field of image processing, not only have new augmentation methods been proposed to improve performance, but methods have also been explored for finding an optimal augmentation policy that can change according to the dataset and network structure. Inspired by this prior work, this paper searches for an optimal augmentation policy in the field of sound data. We carried out many experiments that randomly combined various augmentation methods, such as adding noise, pitch shifting, and time stretching, to empirically determine which combination is most effective. As a result, by applying the optimal data augmentation policy, we achieved improved classification accuracy on the environmental sound classification dataset (ESC-50).
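The random combination of waveform augmentations described above can be sketched in a few lines of pure Python. This is a minimal illustration, not the paper's implementation: the function names, the noise bound, and the stretch-rate range are all assumptions, and pitch shifting is omitted because it needs a real DSP library.

```python
import random

def add_noise(wave, snr=0.05, rng=None):
    """Additive uniform noise; `snr` is an illustrative amplitude bound."""
    rng = rng or random.Random(0)
    return [s + rng.uniform(-snr, snr) for s in wave]

def time_stretch(wave, rate=1.2):
    """Naive time stretch by linear interpolation (rate > 1 shortens)."""
    n = max(1, int(len(wave) / rate))
    out = []
    for i in range(n):
        pos = i * rate
        lo = min(int(pos), len(wave) - 1)
        hi = min(lo + 1, len(wave) - 1)
        frac = pos - lo
        out.append(wave[lo] * (1 - frac) + wave[hi] * frac)
    return out

def random_policy(wave, rng=None):
    """Randomly compose the available augmentations, as in a policy search."""
    rng = rng or random.Random(0)
    ops = [lambda w: add_noise(w, 0.05, rng),
           lambda w: time_stretch(w, rng.uniform(0.8, 1.25))]
    for op in rng.sample(ops, rng.randint(1, len(ops))):
        wave = op(wave)
    return wave
```

A policy search would evaluate many such random compositions on a validation set and keep the one with the highest classification accuracy.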

The perception and production of Korean vowels by Egyptian learners (이집트인 학습자의 한국어 모음 지각과 산출)

  • Benjamin, Sarah;Lee, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.23-34
    • /
    • 2021
  • This study aims to discuss how Egyptian learners of Korean perceive and categorize Korean vowels, how Koreans perceive the Korean vowels these learners pronounce, and how the learners' Korean vowel categorization affects their perception and production of Korean vowels. In Experiment 1, 53 Egyptian learners were asked to listen to Korean test words pronounced by Koreans and to choose the words they had heard from among 4 confusable words. In Experiment 2, 117 sound files (13 test words × 9 Egyptian learners) recorded by Egyptian learners were played to Korean listeners, who were asked to select the words they had heard from among 4 confusable words. The results show that "new" Korean vowels, which have no categorizable counterparts in Egyptian Arabic, easily formed new categories and were therefore well identified in perception and relatively well pronounced, although some of them were poorly produced. By contrast, Egyptian learners distinguished "similar" Korean vowels poorly in perception, yet their pronunciation of these vowels was relatively well identified by native Koreans. Based on these results, we argue that the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM) explain L2 speech perception well but are insufficient to explain L2 speech production, and therefore need to be revised and extended to cover L2 speech production.

Spectral moment analysis of distortion errors in alveolar fricatives in Korean children (치조 마찰음 왜곡 오류 유무에 따른 아동 발화 적률분석 비교)

  • Yunju Han;Do Hyung Kim;Ja Eun Hwang;Dae-Hyun Jang;Jae Won Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.33-40
    • /
    • 2024
  • This study investigated acoustic features via spectral moment analysis, comparing accurate articulations of alveolar fricatives with distortions such as dentalization, palatalization, and lateralization. A retrospective analysis was conducted on speech samples from 61 children (mean age: 5.6±1.5 years; 19 females, 42 males) using the Assessment of Phonology & Articulation for Children (APAC) and the Urimal-Test of Articulation and Phonology I (U-TAP I). Spectral moment analysis was applied to 169 speech samples. The results revealed that the center of gravity of accurate articulations was higher than that of palatalization, while that of palatalization was lower than that of dentalization. The variance of dentalization was higher than that of both accurate articulations and palatalization. The skewness of both dentalization and palatalization was higher than that of accurate articulations. The kurtosis of palatalization was higher than that of both accurate articulations and dentalization. No significant differences were observed for fricative position (initial, medial) or tense type (plain, tense) across any of the spectral moment variables for each distortion type. This study confirmed distinct patterns in center of gravity, variance, skewness, and kurtosis depending on the type of alveolar fricative distortion. The objective values provided in this study will serve as foundational data for diagnosing alveolar fricative distortions in children with speech sound disorders.
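The four spectral moments used in the study are standard magnitude-weighted statistics of a spectrum. A minimal sketch, assuming the input is one frame's bin frequencies and magnitudes (the paper's exact analysis settings are not given here):

```python
def spectral_moments(freqs, mags):
    """First four spectral moments of a magnitude spectrum.
    `freqs` are bin frequencies (Hz), `mags` their magnitudes."""
    total = sum(mags)
    # Center of gravity (first moment): magnitude-weighted mean frequency.
    cog = sum(f * m for f, m in zip(freqs, mags)) / total
    # Variance: second central moment about the center of gravity.
    var = sum(((f - cog) ** 2) * m for f, m in zip(freqs, mags)) / total
    sd = var ** 0.5
    # Skewness: third standardized moment (spectral tilt asymmetry).
    skew = sum(((f - cog) ** 3) * m for f, m in zip(freqs, mags)) / (total * sd ** 3)
    # Excess kurtosis: fourth standardized moment minus 3 (normal shape = 0);
    # drop the "- 3.0" if raw kurtosis is wanted instead.
    kurt = sum(((f - cog) ** 4) * m for f, m in zip(freqs, mags)) / (total * sd ** 4) - 3.0
    return cog, var, skew, kurt
```

For a symmetric spectrum the skewness is zero, which is a convenient sanity check for any implementation of these formulas.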

An Audio-Visual Teaching Aid (AVTA) with Scrolling Display and Speech to Text over the Internet

  • Davood Khalili;Chung, Wan-Young
    • Proceedings of the IEEK Conference
    • /
    • 2003.07c
    • /
    • pp.2649-2652
    • /
    • 2003
  • In this paper, an Audio-Visual Teaching Aid (AVTA) for use in a classroom and over the Internet is presented. The system, which was designed and tested, consists of a wireless microphone system, speech-to-text conversion software, a noise filtering circuit, and a computer. An IBM-compatible PC with a sound card, a network interface card, a web browser, and a voice and text messenger service was used to provide slightly delayed text and voice over the Internet for remote learning, while providing scrolling text from a real-time lecture in a classroom. The motivation for designing this system was to aid Korean students who may have difficulty with listening comprehension while having fairly good reading ability. The application of this system is twofold: it helps students in a class view and listen to a lecture, and it serves as a vehicle for remote access (audio and text) to a classroom lecture. The project provides a simple and low-cost solution for remote learning and also gives a student access to the classroom in emergency situations when the student cannot attend a class. In addition, such a system allows the student to capture a teacher's lecture in audio and text form without needing to be present in class or to take extensive notes. This system will therefore help students in many ways.


A Study on Stable Motion Control of Humanoid Robot with 24 Joints Based on Voice Command

  • Lee, Woo-Song;Kim, Min-Seong;Bae, Ho-Young;Jung, Yang-Keun;Jung, Young-Hwa;Shin, Gi-Soo;Park, In-Man;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.1
    • /
    • pp.17-27
    • /
    • 2018
  • We propose a new approach to controlling a biped robot's motion based on iterative learning of voice commands, aimed at the implementation of a smart factory. Real-time processing of the speech signal is very important for high-speed, precise automatic voice recognition. Recently, voice recognition has been used for intelligent robot control, artificial life, wireless communication, and IoT applications. In order to extract valuable information from the speech signal, make decisions, and obtain results, the data needs to be manipulated and analyzed. The basic method used for extracting features of the voice signal is to find the mel-frequency cepstral coefficients (MFCCs). MFCCs are coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The reliability of voice commands for controlling the biped robot's motion is illustrated by computer simulation and experiments on a biped walking robot with 24 joints.
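The MFCC definition quoted above (log power spectrum on a mel scale, followed by a cosine transform) can be sketched for a single frame. This is a simplified illustration under assumed parameters, not the authors' pipeline: real implementations also apply pre-emphasis, windowing, and liftering, and use far more filters.

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_from_power_spectrum(power, sample_rate, n_filters=8, n_coeffs=4):
    """MFCCs for one frame: mel filterbank -> log -> DCT-II (a sketch)."""
    n_bins = len(power)
    bin_hz = [i * (sample_rate / 2.0) / (n_bins - 1) for i in range(n_bins)]
    lo, hi = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    # Filter center frequencies equally spaced on the mel scale.
    centers = [mel_to_hz(lo + k * (hi - lo) / (n_filters + 1))
               for k in range(n_filters + 2)]
    energies = []
    for k in range(1, n_filters + 1):
        left, mid, right = centers[k - 1], centers[k], centers[k + 1]
        e = 0.0
        for f, p in zip(bin_hz, power):
            if left < f < right:  # triangular weight peaking at `mid`
                w = (f - left) / (mid - left) if f <= mid else (right - f) / (right - mid)
                e += w * p
        energies.append(math.log(max(e, 1e-10)))
    # DCT-II of the log filterbank energies gives the cepstral coefficients.
    return [sum(e * math.cos(math.pi * c * (k + 0.5) / n_filters)
                for k, e in enumerate(energies))
            for c in range(n_coeffs)]
```

The nonlinear mel warping is what concentrates resolution at low frequencies, where speech carries most of its discriminative information.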

An Acoustic Study on the Pronunciation of English [kw] Sequences by Korean EFL Students

  • Kim, Jung-Eun;Cho, Mi-Hui
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.193-206
    • /
    • 2002
  • The aim of this study is to find out, based on acoustic evidence, how the labiovelar onglide /w/ in English kwV sequences that have minimal pairs with kV sequences is pronounced differently among Korean EFL learners. This study identifies the /w/ sound in English kwV sequences through spectrograms and examines the duration ratios of each segment in kwV words to compare the patterns of an English native speaker with those of Korean speakers of English. In the spectrographic analyses, complete deletion of /w/ and partial pronunciation of /w/, dubbed [kʷ], were identified, as well as target-appropriate production of /w/. The general production patterns with respect to the duration ratios in English [kw] sequence words showed that the subjects who produced /w/ had duration ratio patterns similar to those of the native speaker, in that the vowel duration ratio in kwV sequences was shorter than that in kV sequences. By contrast, the subjects who deleted [w] had a long ratio for the onset [kʰ], while the speaker with a partial pronunciation of /w/ had a long ratio for the following vowel.


A Study on Speaker Identification Parameters Using the Difference and Correlation Coefficient of Digit Sound Spectra (숫자음의 스펙트럼 차이값과 상관계수를 이용한 화자인증 파라미터 연구)

  • Lee, Hoo-Dong;Kang, Sun-Mee;Chang, Moon-Soo;Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.131-142
    • /
    • 2004
  • A speaker identification system basically functions by comparing the spectral energy of an individual's production model with that of an input signal. This study aimed to develop a new speaker identification system from two parameters derived from the spectral energy of numeric sounds: the difference sum and the correlation coefficient. A narrow-band spectrogram yielded more stable spectral energy across time than a wide-band one. We collected empirical data from four male speakers and tested the speaker identification system. The subjects produced 18 combinations of three-digit numeric sounds ten times each. Five productions of each three-digit number were statistically averaged to make a model for each speaker; the remaining five productions were then tested on the system. Results showed that when the threshold for the absolute difference sum was set to 1200, none of the speakers could pass the system, while everybody could pass when it was set to 2800. The minimum correlation coefficient that allowed everyone to pass was 0.82, while a coefficient of 0.95 rejected everyone. Thus, both threshold levels can be adjusted to the needs of a speaker identification system, which is a desirable direction for further study.
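The two parameters above are straightforward to compute from a pair of spectral energy contours. A minimal sketch follows; the threshold values echo the ranges discussed in the abstract, but the acceptance rule combining the two parameters is an assumption of this sketch, and the function names are illustrative.

```python
def abs_difference_sum(model, signal):
    """Sum of absolute spectral differences between a speaker model and input."""
    return sum(abs(m - s) for m, s in zip(model, signal))

def correlation_coefficient(model, signal):
    """Pearson correlation between the two spectral energy contours."""
    n = len(model)
    mx = sum(model) / n
    my = sum(signal) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(model, signal))
    vx = sum((a - mx) ** 2 for a in model) ** 0.5
    vy = sum((b - my) ** 2 for b in signal) ** 0.5
    return cov / (vx * vy)

def verify(model, signal, diff_threshold=2800, corr_threshold=0.82):
    """Accept the claimed speaker only if both parameters pass their
    thresholds (assumed combination rule, not from the paper)."""
    return (abs_difference_sum(model, signal) <= diff_threshold
            and correlation_coefficient(model, signal) >= corr_threshold)
```

Tightening either threshold (a lower difference sum or a higher correlation) trades false acceptances for false rejections, which is exactly the adjustment the authors describe.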


Development of Electrical Stimulator for Auditory Stimulation (청각 자극용 전기자극기 개발)

  • Heo, Seung-Deok;Jung, Dong-Keun;Kim, Lee-Suk;Kim, Gwang-Nyeon;Kang, Myung-Koo;Kim, Jae-Ryong;Kim, Gi-Ryon
    • Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.201-211
    • /
    • 2004
  • This paper introduces the development of an electrical stimulator for auditory stimulation. The electrical stimulator is useful in neurotological diagnosis, audiological evaluation, candidate selection for cochlear implantation, optimal device selection, and decision making on MAP strategy for persons with severe-to-profound hearing impairment. The development was based on sound parameters of auditory brainstem responses and auditory electrophysiological characteristics, such as effective firing of the auditory nerve and the recording of evoked potentials during the neuronal refractory period. In addition, the pulse parameters could be adjusted by programming, allowing more varied electrical-stimulation evoked response audiometry. Using the stimulator, electrical square pulses were applied to the promontory, and electrically evoked auditory brainstem responses and electrically evoked middle latency responses were successfully recorded in cats.


A Study of English Consonants Identified by College Students (대학생들의 영어자음 인지 연구)

  • Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.139-151
    • /
    • 2005
  • Previous studies have shown that Korean students have difficulty identifying some English consonants that are not in the Korean sound inventory. The aim of this study was to examine the rate at which 130 college students correctly identified English consonants, in order to find out which English consonants were difficult for the students to perceive. The subjects' task was to identify one member of each minimal pair played in a quiet laboratory classroom. The 100 minimal pairs consisted of syllables with various onsets or codas: stops, fricatives, affricates, liquids, and nasals. Results were as follows. First, the average score of the English-major group was significantly higher than that of the non-English-major group. Second, the two groups showed a similar distribution in the rank order of minimal pairs sorted by accuracy rate. Third, the accuracy rate systematically decreased as each score range decreased. Fourth, the students showed higher accuracy in perceiving liquids than the stop-fricative contrast. Fifth, the accuracy score in onset position was higher than in coda position. Finally, the students still had problems telling voiced consonants from voiceless ones, especially in coda position. It would be desirable to extend the present research to middle or high school students to fundamentally resolve these listening problems.


An Acoustic Study of Korean and English Voiceless Sibilant Fricatives

  • Sung, Eun-Kyung;Cho, Yun-Jeong
    • Phonetics and Speech Sciences
    • /
    • v.2 no.3
    • /
    • pp.37-46
    • /
    • 2010
  • This study investigates acoustic characteristics of English and Korean voiceless sibilant fricatives as they appear before the three vowels /i/, /ɑ/ and /u/. Three measurements (duration, center of gravity, and major spectral peak) are employed to compare the acoustic properties and vowel effect for each fricative sound. This study also investigates whether Korean sibilant fricatives are acoustically similar to the English voiceless alveolar fricative /s/ or to the palato-alveolar /ʃ/. The results show that in the duration of frication noise, English /ʃ/ is the longest and Korean lax /s/ the shortest of the four sounds. It is also observed that English alveolar /s/ has the highest value, whereas Korean /s/ shows the lowest value, in the frequency of the center of gravity. In terms of major spectral peak, English /s/ shows the highest frequency while English /ʃ/ shows the lowest. In addition, evidence indicates a strong vowel effect on the fricative sounds of both languages, although the vowel effect patterns of the two languages are inconsistent. For instance, in the major spectral peak, both Korean lax /s/ and tense /s*/ show significantly higher frequencies before the vowel /ɑ/ than before the other vowels, whereas both English /s/ and /ʃ/ exhibit significantly higher frequencies before the vowel /i/ than before the other vowels. These results indicate that Korean sibilant fricatives are acoustically distinct from both English /s/ and /ʃ/.
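Of the three measurements used here, two can be read off a segmented frame directly. A minimal sketch with illustrative names (the segmentation itself, done by hand in acoustic studies like this, is assumed as input):

```python
def frication_duration(onset_s, offset_s):
    """Duration of the frication noise in milliseconds, given the
    hand-segmented onset and offset times in seconds."""
    return (offset_s - onset_s) * 1000.0

def major_spectral_peak(freqs, mags):
    """Frequency (Hz) of the largest magnitude in the spectrum."""
    peak_mag, peak_freq = max(zip(mags, freqs))
    return peak_freq
```

The center of gravity is the remaining measurement: the magnitude-weighted mean frequency of the same spectrum, i.e. the first spectral moment.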
