• Title/Summary/Keyword: Speech Rates


Temporal Variation Due to Tense vs. Lax Consonants in Korean

  • Yun, Il-Sung
    • Speech Sciences / v.11 no.3 / pp.23-36 / 2004
  • Many languages show inverse durational variation between a preceding vowel and a following voiced/voiceless (lax/tense) consonant. This study investigated the likely effects of phoneme type (tense vs. lax) on the timing structure (duration of syllable, word, phrase, and sentence) of Korean. Three rates of speech (fast, normal, slow) were applied to stimuli containing the target word /a-Ca/, where /C/ is one of /p, p', pʰ/. The type (tense/lax) of /C/ caused marked inverse durational variation in the two syllables /a/ and /Ca/ and highly different durational ratios between them. Words with /p', pʰ/ were significantly longer than those with /p/, which contrasts with many other languages where such pairs of words have a similar duration. The differentials between words remained up to the phrase and sentence level, but in general the higher linguistic units did not differ statistically within each level. Thus, the phrase is suggested as a compensatory unit for phoneme type effects in Korean. Different rates did not affect the general tendency. The distribution of time variation (from normal to fast and slow) across the two syllables (/a/ and /Ca/) was also observed.
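The central measure here is the durational ratio between the first syllable /a/ and the second syllable /Ca/ under each rate condition. A minimal sketch of that computation over hypothetical hand-labeled segment times (labels, times, and data layout are illustrative assumptions, not the paper's materials):

```python
from collections import defaultdict

# Hypothetical hand-labeled intervals: (rate, consonant_type, syllable, start_s, end_s)
labels = [
    ("normal", "lax",   "a",  0.000, 0.112),
    ("normal", "lax",   "Ca", 0.112, 0.251),
    ("normal", "tense", "a",  0.000, 0.089),
    ("normal", "tense", "Ca", 0.089, 0.270),
]

durations = defaultdict(dict)
for rate, ctype, syllable, start, end in labels:
    durations[(rate, ctype)][syllable] = end - start

# Durational ratio /a/ : /Ca/ per rate and consonant type
for (rate, ctype), d in durations.items():
    ratio = d["a"] / d["Ca"]
    print(f"{rate:>6} {ctype:>5}: /a/={d['a']:.3f}s /Ca/={d['Ca']:.3f}s ratio={ratio:.2f}")
```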


A Robust Speech Recognition Method Combining the Model Compensation Method with the Speech Enhancement Algorithm (음질향상 기법과 모델보상 방식을 결합한 강인한 음성인식 방식)

  • Kim, Hee-Keun; Chung, Yong-Joo; Bae, Keun-Seung
    • Speech Sciences / v.14 no.2 / pp.115-126 / 2007
  • There have been many research efforts to improve the performance of speech recognizers in noisy conditions. Among them, model compensation methods and speech enhancement approaches have been widely used. In this paper, we propose to combine the two approaches to further improve recognition rates in noisy speech recognition. For speech enhancement, the minimum mean square error short-time spectral amplitude (MMSE-STSA) estimator is adopted, and parallel model combination (PMC) and Jacobian adaptation (JA) are used as the model compensation approaches. The experimental results show that the hybrid approach, which applies the model compensation methods to the enhanced speech, produces better results than using either of the two approaches alone.
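The hybrid scheme applies model compensation to a recognizer whose input has already been enhanced by MMSE-STSA. The abstract does not give the compensation formulas; as an illustration, below is the standard log-add approximation of PMC in the log filter-bank domain, with all variable names and the noise estimate being hypothetical. Jacobian adaptation linearizes the same clean-plus-noise mapping around a reference noise condition, so only the noise difference needs to be applied at run time.

```python
import numpy as np

def pmc_log_add(clean_mean_logfb: np.ndarray,
                noise_mean_logfb: np.ndarray,
                gain: float = 1.0) -> np.ndarray:
    """Log-add approximation of Parallel Model Combination (PMC).

    Both inputs are HMM state means in the log filter-bank domain;
    the compensated mean models clean speech plus additive noise.
    """
    return np.log(gain * np.exp(clean_mean_logfb) + np.exp(noise_mean_logfb))

# Hypothetical usage: compensate every state mean of a clean-trained model
# with a residual-noise mean estimated from the *enhanced* signal's
# non-speech frames, mirroring the "enhance first, then compensate" ordering.
clean_means = np.random.randn(3, 24)       # 3 states x 24 log filter-bank dims
noise_mean = np.full(24, -2.0)             # residual-noise estimate
compensated = np.array([pmc_log_add(m, noise_mean) for m in clean_means])
```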


Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command (음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현)

  • Shim, Byoung-Kyun; Han, Sung-Hyun
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.20 no.2 / pp.207-213 / 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which an interactive voice recognition technique is applied. The speech commands are uttered as sentential connected words and transmitted through the wireless remote control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system, in which acoustic models are trained from speech utterances recorded with a close-talking microphone. To improve the performance of the baseline system, the acoustic models are then adapted to the spectral characteristics of different microphones and to the environmental mismatch between close-talking and distant speech. We illustrate the performance of the developed speech recognition system by experiments; the results show that the average recognition rate of the proposed system is above approximately 95%.
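The abstract does not name the adaptation algorithm. One standard way to reduce the microphone and channel mismatch it describes is cepstral mean and variance normalization (CMVN) of the front-end features; the sketch below is that generic technique, not necessarily the authors' method:

```python
import numpy as np

def cmvn(features: np.ndarray) -> np.ndarray:
    """Per-utterance cepstral mean and variance normalization.

    features: (num_frames, num_coeffs) array of MFCCs.
    Removing the per-coefficient mean cancels the convolutional (channel)
    bias, and scaling to unit variance reduces the spectral mismatch
    between close-talking and distant microphones.
    """
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8
    return (features - mean) / std

# Hypothetical usage with MFCCs from any front end
mfcc = np.random.randn(300, 13)    # 300 frames x 13 coefficients
mfcc_normalized = cmvn(mfcc)
```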

Word-boundary and rate effects on upper and lower lip movements in the articulation of the bilabial stop /p/ in Korean

  • Son, Minjung
    • Phonetics and Speech Sciences / v.10 no.1 / pp.23-31 / 2018
  • In this study, we examined how the upper and lower lips articulate to produce labial /p/. Using electromagnetic midsagittal articulography, we collected flesh-point tracking movement data from eight native speakers of Seoul Korean (five females and three males). Individual articulatory movements in /p/ were examined in terms of minimum vertical upper lip position, maximum vertical lower lip position, and the corresponding vertical upper lip position aligned with maximum vertical lower lip position. Using linear mixed-effects models, we tested two factors (word boundary [across-word vs. within-word] and speech rate [comfortable vs. fast]) and their interaction, treating subjects as random effects. The results are summarized as follows. First, maximum lower lip position varied with word boundary and speech rate, but no interaction was detected; in particular, it was lower (i.e., less constricted, more reduced) in the fast-rate and across-word conditions. Second, minimum upper lip position, as well as the upper lip position measured at the time of maximum lower lip position, varied only with word boundary, being consistently lower in the across-word condition. We provide further empirical evidence that lower lip movement is sensitive to both word boundary (a linguistic factor) and speech rate (a paralinguistic factor); this supports the traditional idea that the lower lip is an actively moving articulator. The sensitivity of upper lip movement to word boundary, however, counters the traditional idea that the upper lip is merely the target area and therefore immobile. Taken together, the lip aperture gesture, which takes both upper and lower lip vertical movements into account, is a better indicator than the traditional approach that distinguishes a movable articulator from the target place. With respect to speech rate, the results pattern with cross-linguistic lenition-related allophonic variation, which is known to be more sensitive to fast rate.
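The analysis described (fixed effects of word boundary and speech rate plus their interaction, with subjects as random effects) corresponds to a linear mixed-effects model. A minimal sketch with statsmodels on synthetic data; the column names and values are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 8 * 2 * 2 * 10  # 8 subjects x 2 boundaries x 2 rates x 10 repetitions
df = pd.DataFrame({
    "subject": np.repeat([f"S{i}" for i in range(8)], n // 8),
    "boundary": np.tile(np.repeat(["across_word", "within_word"], n // 16), 8),
    "rate": np.tile(["comfortable", "fast"], n // 2),
    "max_ll_y": rng.normal(0, 1, n),  # maximum vertical lower-lip position (arbitrary units)
})

# Fixed effects: boundary, rate, and their interaction; random intercept per subject
model = smf.mixedlm("max_ll_y ~ boundary * rate", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```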

Aerodynamic and acoustic characteristics of Clear Speech in patients with Parkinson's disease (파킨슨 환자의 클리어 스피치 전후 음향학적 공기역학적 특성)

  • Shin, Hee Baek; Ko, Do-Heung
    • Phonetics and Speech Sciences / v.9 no.3 / pp.67-74 / 2017
  • An increase in speech intelligibility has been found for Clear Speech compared to conversational speech. Clear Speech is characterized by decreased articulation rates and increased frequency and length of pauses. The objective of the present study was to investigate the immediate improvement in speech intelligibility in 10 patients with Parkinson's disease (age range: 46 to 75 years) using Clear Speech. The experiment was performed with the Phonatory Aerodynamic System 6600 after the participants read the first sentence of the Sanchaek passage and the "List for Adults 1" of the Sentence Recognition Test (SRT) in casual speech and in Clear Speech. Acoustic and aerodynamic parameters that affect speech intelligibility were measured, including mean F0, F0 range, intensity, speaking rate, mean airflow rate, and respiratory rate. For the Sanchaek passage, the use of Clear Speech resulted in significant differences in mean F0, F0 range, speaking rate, and respiratory rate compared with casual speech. For the SRT list, significant differences were seen in mean F0, F0 range, and speaking rate. Based on these findings, it is claimed that speech intelligibility can be affected by adjusting breathing and tone in Clear Speech. Future studies should identify the benefits of Clear Speech through auditory-perceptual studies and evaluate programs that use Clear Speech to increase intelligibility.
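Mean F0 and F0 range can be derived from a pitch track. A minimal sketch using librosa's pYIN tracker; the file name, pitch bounds, and parameters are assumptions and do not reflect the study's Phonatory Aerodynamic System setup:

```python
import numpy as np
import librosa

# Hypothetical recording of one read sentence
y, sr = librosa.load("clear_speech_sentence.wav", sr=None)

# pYIN fundamental-frequency track; unvoiced frames come back as NaN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
voiced_f0 = f0[~np.isnan(f0)]

mean_f0 = voiced_f0.mean()
f0_range = voiced_f0.max() - voiced_f0.min()
duration_s = len(y) / sr

print(f"mean F0: {mean_f0:.1f} Hz, F0 range: {f0_range:.1f} Hz, duration: {duration_s:.2f} s")
```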

Variable Quad Rate ADPCM for Efficient Speech Transmission and Real Time Implementation on DSP (효율적인 음성신호의 전송을 위한 4배속 가변 변환율 ADPCM기법 및 DSP를 이용한 실시간 구현)

  • 한경호
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.18 no.1 / pp.129-136 / 2004
  • In this paper, we propose a quad-rate variable ADPCM coding method for efficient speech transmission, with real-time processing implemented on a TMS320C6711 DSP. The modified ADPCM with four coding rates, 16 kbps, 24 kbps, 32 kbps, and 40 kbps, is applied to windows of speech samples to transmit good-quality speech with few data bits, and real-time encoding and decoding are implemented on the DSP. The zero-crossing rate (ZCR) is used to identify the influence of noise on the speech signal and to decide the rate-change threshold. For noise-dominant segments, low coding rates are applied to minimize data bits; for speech-dominant segments, high coding rates are applied to enhance speech quality. Since silent periods take up more than half of the signal in most speech telecommunication, speech quality close to that of 40 kbps coding can be obtained at a comparably low bit rate, as shown by simulations and experiments. The TMS320C6711 DSK board, with 128 K of flash memory and a performance of 1,333 MIPS, meets the requirements for real-time implementation of the proposed coding algorithm.
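The rate-selection idea (zero-crossing rate decides which of the four ADPCM rates a window receives) can be sketched as follows; at 8 kHz sampling, 2 to 5 bits per sample correspond to the 16 to 40 kbps rates above. The window length, thresholds, and rate mapping are illustrative assumptions, and the ADPCM quantizer itself is omitted:

```python
import numpy as np

RATES_KBPS = {2: 16, 3: 24, 4: 32, 5: 40}   # bits per sample -> kbps at 8 kHz

def zero_crossing_rate(window: np.ndarray) -> float:
    """Fraction of adjacent sample pairs whose signs differ."""
    return float(np.mean(np.signbit(window[:-1]) != np.signbit(window[1:])))

def select_bits(window: np.ndarray,
                energy_floor: float = 1e-4,
                zcr_high: float = 0.25) -> int:
    """Pick ADPCM bits/sample for one window.

    Low-energy or high-ZCR (noise-like) windows get fewer bits;
    voiced, speech-dominant windows get more bits.
    """
    energy = np.mean(window ** 2)
    zcr = zero_crossing_rate(window)
    if energy < energy_floor:          # silence
        return 2
    if zcr > zcr_high:                 # noise-like / unvoiced-dominant
        return 3
    if energy < 10 * energy_floor:     # weak speech
        return 4
    return 5                           # strong voiced speech

# Hypothetical usage on 20 ms windows of 8 kHz speech
fs, win = 8000, 160
signal = np.random.randn(fs)           # stand-in for a speech signal
for start in range(0, len(signal) - win + 1, win):
    bits = select_bits(signal[start:start + win])
    rate_kbps = RATES_KBPS[bits]       # encode this window at rate_kbps
```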

A Comparative Study on the Speech Rate of Advanced Korean(L2) Learners and Korean Native Speakers in Conversational Speech (자유 대화에서의 한국어 원어민 화자와 한국어 고급 학습자들의 발화 속도 비교)

  • Hong, Minkyoung
    • Journal of Korean language education / v.29 no.3 / pp.345-363 / 2018
  • The purpose of this study is to compare the speech rate of advanced Korean (L2) learners and Korean native speakers in spontaneous utterances. Specifically, the current study investigated how the two groups' speech patterns differ according to utterance length. Eight advanced Korean (L2) learners and eight Korean native speakers participated in this study. The data were collected by recording their conversations, and physical measurements (speaking rate, articulation rate, pauses, and several types of speech disfluency) were taken on 120 utterances extracted from 12 of the 16 participants. The findings show that the advanced Korean learners' speech pattern is similar to that of native Koreans in short utterances. In long utterances, however, the two groups show different patterns: while the articulation rate of Korean native speakers increased in long utterances, that of Korean learners decreased. This suggests that the frequency of speech disfluency factors may have affected this result.
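Speaking rate and articulation rate differ only in whether pause time is excluded from the denominator. A minimal sketch with hypothetical values:

```python
def speaking_rate(num_syllables: int, total_duration_s: float) -> float:
    """Syllables per second including pause time."""
    return num_syllables / total_duration_s

def articulation_rate(num_syllables: int, total_duration_s: float,
                      pause_duration_s: float) -> float:
    """Syllables per second excluding pause time."""
    return num_syllables / (total_duration_s - pause_duration_s)

# Hypothetical long utterance: 42 syllables over 12.5 s, with 3.1 s of pauses
print(speaking_rate(42, 12.5))           # ~3.36 syllables/s
print(articulation_rate(42, 12.5, 3.1))  # ~4.47 syllables/s
```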

A study on the voice onset times of the Seoul Corpus males in their twenties (서울 코퍼스 20대 남성의 성대진동 개시시간 연구)

  • Lee, Yuri; Yoon, Kyuchul
    • Phonetics and Speech Sciences / v.8 no.4 / pp.1-8 / 2016
  • The purpose of this work is to examine the voice onset times (VOTs) of the three types of plosives produced by the Seoul Corpus male speakers in their twenties. In addition, factors known to affect VOT were analyzed, including the place and manner of articulation, speaker, location in the word, type of following vowel, and speech rate calculated from three consecutive words. Many of the findings agreed with those from earlier studies on Korean and other languages, and some new observations were made.
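VOT is the interval from the stop burst release to the onset of voicing. A minimal sketch computing mean VOT per plosive type from hypothetical annotation times (corpus querying and landmark detection are not shown):

```python
from statistics import mean

# Hypothetical landmarks per token: (plosive_label, burst_release_s, voicing_onset_s)
tokens = [
    ("p",  1.204, 1.216),   # lenis
    ("p'", 2.871, 2.879),   # fortis
    ("ph", 4.032, 4.101),   # aspirated
    ("p",  5.530, 5.545),
]

vot_by_type = {}
for label, burst, voicing in tokens:
    vot_ms = (voicing - burst) * 1000.0
    vot_by_type.setdefault(label, []).append(vot_ms)

for label, vots in vot_by_type.items():
    print(f"/{label}/: mean VOT = {mean(vots):.1f} ms over {len(vots)} tokens")
```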

Multi-channel input-based non-stationary noise canceller for mobile devices (이동형 단말기를 위한 다채널 입력 기반 비정상성 잡음 제거기)

  • Jeong, Sang-Bae; Lee, Sung-Doke
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.945-951 / 2007
  • Noise cancellation is essential for devices that use speech as an interface. In real environments, speech quality and recognition rates are degraded by additive noises picked up near the microphone. In this paper, we propose a noise cancellation algorithm based on stereo microphones. The advantage of using multiple microphones is that the direction information of the target source can be exploited. The proposed noise canceller is based on the Wiener filter. To estimate the filter, the noise and target-speech frequency responses must be known; they are estimated by spectral classification in the frequency domain. The performance of the proposed algorithm is compared with that of the well-known Frost algorithm and the generalized sidelobe canceller (GSC) with an adaptation mode controller (AMC). As performance measures, the perceptual evaluation of speech quality (PESQ), the most widely used objective speech quality method, and speech recognition rates are adopted.
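The canceller is built on the Wiener filter, whose per-frequency gain is S(f)/(S(f)+N(f)) given speech and noise power spectra. The sketch below is a simplified single-channel version in which the noise spectrum is assumed to come from leading noise-only frames; the paper instead estimates these spectra by spectral classification of the stereo input:

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_denoise(noisy: np.ndarray, fs: int,
                   noise_frames: int = 10, nperseg: int = 512) -> np.ndarray:
    """Apply a per-frequency Wiener gain to the STFT of a noisy signal.

    The noise power spectrum is estimated from the first `noise_frames`
    frames, assumed to contain noise only.
    """
    f, t, Z = stft(noisy, fs=fs, nperseg=nperseg)
    noise_psd = np.mean(np.abs(Z[:, :noise_frames]) ** 2, axis=1, keepdims=True)
    noisy_psd = np.abs(Z) ** 2
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)   # spectral-subtraction estimate
    gain = speech_psd / (speech_psd + noise_psd + 1e-12)  # Wiener gain S/(S+N)
    _, enhanced = istft(gain * Z, fs=fs, nperseg=nperseg)
    return enhanced[:len(noisy)]

# Hypothetical usage: 1 s of 16 kHz "speech" in white noise
fs = 16000
noisy = np.random.randn(fs)
clean_estimate = wiener_denoise(noisy, fs)
```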

Performance Evaluation of Frame Erasure Concealment Algorithms in VoIP Coders (VoIP 코더들의 프레임손실은닉 알고리즘 성능평가)

  • Han, Seung-Ho; Moon, Kwang; Han, Min-Soo
    • Proceedings of the KSPS conference / 2004.05a / pp.235-238 / 2004
  • Frame erasures cause speech quality degradation in wireless communication networks and packet networks, and the degradation becomes worse when consecutive frame erasures occur. Speech coders therefore include a frame erasure concealment (FEC) mechanism to compensate for erased frames, and it is meaningful to evaluate the performance of these FEC mechanisms under the frame erasures that occur in communication networks. In this paper, various frame erasure patterns are designed, and the FEC algorithms of several speech coders are evaluated and analyzed with the Perceptual Evaluation of Speech Quality (PESQ). It is found that performance varies with the frame erasure type, frame erasure rate, and utterance length.
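An evaluation of this kind can be simulated by erasing frames of a decoded signal at a target rate and scoring the result with PESQ. A minimal sketch, assuming the PyPI `pesq` package and zero insertion in place of a real coder's FEC; the file path is hypothetical:

```python
import numpy as np
from scipy.io import wavfile
from pesq import pesq  # PyPI package "pesq" (ITU-T P.862), assumed available

def erase_frames(signal: np.ndarray, fs: int, erasure_rate: float,
                 frame_ms: float = 20.0, seed: int = 0) -> np.ndarray:
    """Zero out randomly chosen frames at the given erasure rate.

    A real coder would run its frame erasure concealment (FEC) here
    instead of inserting silence.
    """
    rng = np.random.default_rng(seed)
    frame_len = int(fs * frame_ms / 1000)
    degraded = signal.copy()
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        if rng.random() < erasure_rate:
            degraded[start:start + frame_len] = 0.0
    return degraded

# Hypothetical usage: load an 8 kHz reference utterance and score several erasure rates
fs, ref = wavfile.read("reference_8k.wav")
ref = ref.astype(np.float32)
for rate in (0.03, 0.05, 0.10):
    deg = erase_frames(ref, fs, rate)
    print(rate, pesq(fs, ref, deg, "nb"))   # narrow-band PESQ score
```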
