• Title/Abstract/Keyword: Speech rate


음성인식프로그램을 이용한 무후두 음성의 말 명료도와 병적 음성의 수술 전후 개선도 측정 (Speech Intelligibility of Alaryngeal Voices and Pre/Post Operative Evaluation of Voice Quality using the Speech Recognition Program(HUVOIS))

  • 김한수;최성희;김재인;임재열;최홍식
    • 대한후두음성언어의학회지 / Vol. 15, No. 2 / pp.92-97 / 2004
  • Background and Objectives: The purpose of this study was to evaluate objectively the pre- and postoperative voice quality and the intelligibility of alaryngeal voices using the speech recognition program HUVOIS. Materials and Methods: Two laryngologists and one speech pathologist rated 'G', 'R', and 'B' of the GRBAS scale and rated speech intelligibility on the NTID rating scale from a standard paragraph. Acoustic estimates such as jitter, shimmer, and HNR were also obtained with Lx Speech Studio. Results: The speech recognition rate did not differ significantly between pre- and postoperative pathological voice samples, although voice quality (G, B) and acoustic values (jitter, HNR) improved significantly after surgery. Among the alaryngeal voices, the reed-type electrolarynx 'Moksori' scored highest in both speech intelligibility and speech recognition rate, whereas esophageal speech scored lowest. Speech intelligibility and speech recognition rate were correlated in alaryngeal voices, but not in pathological voices. Conclusion: This study did not demonstrate that the speech recognition program HUVOIS, used over the telephone network, is an objective and efficient method for supplementing the subjective GRBAS scale.
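The jitter and shimmer measures used above are, in their common "local" form, mean absolute cycle-to-cycle differences normalized by the mean. A minimal sketch of that standard definition (assuming pitch periods and peak amplitudes have already been extracted; not necessarily the exact formulas used by Lx Speech Studio):

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter (%) and shimmer (%) from cycle-to-cycle data.

    periods: pitch-period lengths of consecutive glottal cycles.
    amplitudes: peak amplitudes of the same cycles.
    """
    periods = np.asarray(periods, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    # Mean absolute difference between adjacent cycles, relative to the mean.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100.0
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100.0
    return jitter, shimmer
```

A perfectly periodic voice gives 0% for both; pathological voices typically show elevated values.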

음성 패킷을 이용한 채널의 에러 정보 전달 (Transmission of Channel Error Information over Voice Packet)

  • 박호종;차성호
    • 한국음향학회지 / Vol. 21, No. 4 / pp.394-400 / 2002
  • In digital voice communication, if the transmission error rate of outgoing voice packets is known, overall communication quality can be improved by adapting the compression to the channel conditions. Current mobile and Internet communication, however, provide no protocol that reports the transmission error information of voice packets. To resolve this, this paper proposes a method that embeds channel error information in the voice packets themselves and delivers it in real time. The proposed embedding method exploits the correlation among the pulse positions of the ACELP (algebraic code-excited linear prediction) codevector, which prevents quality degradation from the embedded side information and lowers the false detection rate. The performance of the proposed method was measured on various speech data, confirming that it causes almost no degradation in speech quality while achieving satisfactory detection capability and false detection rate.

발표 담화에서의 한국어 모어 화자와 한국어 학습자의 말하기 유창성 비교 연구 -발화 속도, 휴지, 담화표지를 중심으로- (A Comparative Study on Oral Fluency Between Korean Native Speakers and L2 Korean Learners in Speech Discourse - With Focus on Speech Rate, Pause, and Discourse Markers)

  • 이진;정진경
    • 한국어교육 / Vol. 29, No. 4 / pp.137-168 / 2018
  • The purpose of this study is to lay the groundwork for a more objective evaluation of oral fluency by comparing the speech patterns of Korean native speakers and L2 Korean learners. To this end, the study analyzed speech materials from the 21st Century Sejong spoken corpus and the Korean learner corpus, comparing the oral fluency of native speakers and learners in terms of speech rate, pause, and discourse markers. The results show that the patterns of Korean learners differ from those of native speakers in all three aspects; even as learners' proficiency increases, they do not reach the oral fluency level of native speakers. Finally, based on these results, suggestions are offered for setting evaluation criteria for the oral fluency of Korean learners.

주파수 변화율을 이용한 음성과 음악의 구분 (Speech and Music Discrimination Using Spectral Transition Rate)

  • 양경철;방용찬;조선호;육동석
    • 한국음향학회지 / Vol. 28, No. 3 / pp.273-278 / 2009
  • A frequency analysis of speech and music shows that most musical instruments are designed to sustain sound at particular frequencies, whereas in speech the articulation process produces gradual frequency changes. This paper proposes a method for discriminating speech from music using this difference in spectral dynamics: the spectral transition rate is used as the feature that separates the two. In SMD (speech/music discrimination) experiments based on the proposed STR (spectral transition rate), the method showed relatively high performance at a faster response time than existing algorithms.
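The abstract does not give the paper's exact STR formula; one plausible formulation measures the average frame-to-frame spectral movement, so that sustained instrument tones score low and articulating speech scores high. A sketch under that assumption:

```python
import numpy as np

def spectral_transition_rate(spectrogram):
    """Mean frame-to-frame spectral change.

    spectrogram: 2-D array-like, frames x frequency bins (e.g. log
    magnitudes).  Sustained musical tones change little between frames
    and yield low values; speech, whose spectrum moves with
    articulation, yields higher ones.
    """
    spectrogram = np.asarray(spectrogram, dtype=float)
    diffs = np.diff(spectrogram, axis=0)          # change between frames
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```

A speech/music discriminator would then threshold this value (or feed it to a classifier) per analysis window.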

한국 표준어 연속음성에서의 억양구와 강세구 자동 검출 (Automatic Detection of Intonational and Accentual Phrases in Korean Standard Continuous Speech)

  • 이기영;송민석
    • 음성과학 / Vol. 7, No. 2 / pp.209-224 / 2000
  • This paper proposes a method for automatically detecting intonational and accentual phrases in standard Korean continuous speech. Pauses longer than 150 ms are used to detect intonational phrases, and accentual phrases are extracted from the intonational phrases by analyzing syllables and pitch contours. The speech data for the experiment consist of seven male and two female voices reading the fable 'The Ant and the Grasshopper' and a newspaper article, 'Manmulsang', at normal speed in standard Korean. The results show an average detection rate of 95% for intonational phrases and 73% for accentual phrases. This implies that continuous speech can be segmented into smaller units (i.e., prosodic phrases) using prosodic information, so that the target of speech recognition can be narrowed down to words or phrases within continuous speech.
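A pause-based split of this kind can be sketched as follows. The 150 ms threshold comes from the paper; the frame size, the energy threshold, and the energy-based silence test are assumptions of this sketch:

```python
def intonational_phrases(frame_energy, frame_ms=10.0, pause_ms=150.0, thresh=0.01):
    """Split a frame sequence into phrases at pauses longer than pause_ms.

    frame_energy: per-frame short-time energy.
    Returns (start_frame, end_frame) pairs of speech stretches separated
    by runs of low-energy frames lasting at least pause_ms.
    """
    min_frames = int(round(pause_ms / frame_ms))
    silent = [e < thresh for e in frame_energy]
    phrases, start, run = [], None, 0
    for i, s in enumerate(silent):
        if s:
            run += 1
            # A run of silence long enough to count as a pause closes
            # the current phrase at the frame where the run began.
            if run == min_frames and start is not None:
                phrases.append((start, i - min_frames + 1))
                start = None
        else:
            run = 0
            if start is None:
                start = i
    if start is not None:
        phrases.append((start, len(silent)))
    return phrases
```

With 10 ms frames, 20 consecutive silent frames (200 ms) split the signal, while a 100 ms dip does not.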

A Fixed Rate Speech Coder Based on the Filter Bank Method and the Inflection Point Detection

  • Iem, Byeong-Gwan
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 16, No. 4 / pp.276-280 / 2016
  • A fixed-rate speech coder based on a filter bank and a non-uniform sampling technique is proposed. The non-uniform sampling is achieved by detecting inflection points (IPs). A speech block is band-passed by the filter bank, the subband signals are processed by the IP detector, and the detected IP patterns are compared with entries in an IP database. For each subband signal, the address of the closest database member and the energy of the IP pattern are transmitted over the channel. At the receiver, the decoder recovers the subband signals from the received addresses and energy information and reconstructs the speech via filter bank summation. As a result, the coder achieves a fixed data rate, in contrast to existing speech coders based on non-uniform sampling. Computer simulation confirms the usefulness of the proposed technique: below a 20 kbps data rate, its signal-to-noise ratio (SNR) performance is comparable to that of uniformly sampled pulse code modulation (PCM).
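The paper does not specify its IP detector in the abstract; a common way to locate inflection points in a sampled signal is a sign change in the discrete second difference, sketched here purely as an illustration:

```python
import numpy as np

def inflection_points(x):
    """Sample indices near which the discrete curvature changes sign.

    x: 1-D array-like signal.  The second difference d2[i] approximates
    the curvature at sample i + 1; a sign flip between d2[i] and
    d2[i + 1] marks an inflection near sample i + 2.
    """
    d2 = np.diff(np.asarray(x, dtype=float), 2)
    s = np.sign(d2)
    return [i + 2 for i in range(len(s) - 1) if s[i] * s[i + 1] < 0]
```

For a cubic, which has a single true inflection, the detector flags exactly one sample near the sign change of the second derivative.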

다화자잡음이 말더듬의 비율과 말속도에 미치는 영향 (The Noise Effect on Stuttering and Overall Speech Rate: Multi-talker Babble Noise)

  • 박진;정인기
    • 말소리와 음성과학 / Vol. 4, No. 2 / pp.121-126 / 2012
  • This study examines how the frequency of stuttering changes when adults who stutter are exposed to one type of background noise, multi-talker babble. Eight American English-speaking adults who stutter participated. Each subject read sentences aloud under three speaking conditions: typical solo reading (TSR), typical choral reading (TCR), and multi-talker babble noise reading (BNR). Speech fluency was computed as the percentage of syllables stuttered (%SS), and speaking rate was also assessed to examine whether it changed significantly under each condition. Participants read more fluently during both BNR and TCR than during TSR, and showed no significant change in speaking rate across the three conditions. The effect of multi-talker babble noise on the frequency of stuttering is discussed, along with further speculation.
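The two measures reported above are simple ratios; as a sketch of the field-standard definitions (not code from the study):

```python
def fluency_measures(stuttered_syllables, total_syllables, duration_s):
    """Percentage of syllables stuttered (%SS) and overall speaking rate.

    stuttered_syllables: count of syllables produced with a stutter.
    total_syllables: all syllables in the reading.
    duration_s: reading time in seconds.
    Returns (%SS, rate in syllables per second).
    """
    pss = 100.0 * stuttered_syllables / total_syllables
    rate = total_syllables / duration_s
    return pss, rate
```

A 100-syllable passage with 5 stuttered syllables read in 50 s gives 5 %SS at 2 syllables/s.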

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 8, No. 2 / pp.105-110 / 2008
  • Human beings recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that do the same with combined information. In this paper, five emotions (neutral, happiness, anger, surprise, sadness) are recognized from the speech signal and the facial image, and a multimodal method is proposed that fuses the two recognition results. Emotion recognition from the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal stage fuses the two results using a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal yields a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the facial image or the speech signal alone.

음성의 묵음구간 검출을 통한 DTW의 성능개선에 관한 연구 (A Study on the Improvement of DTW with Speech Silence Detection)

  • 김종국;조왕래;배명진
    • 음성과학 / Vol. 10, No. 4 / pp.117-124 / 2003
  • Speaker recognition is the technology that confirms a speaker's identity using the characteristics of speech. It is classified into speaker identification, which picks the speaker out of a preregistered group, and speaker verification, which verifies the identity a speaker claims. As services over the telephone network become widespread, extracting speaker information from speech to confirm individual identity has become one of the most efficient technologies. Several problems, however, must be solved for real applications. First, a safe method is needed to reject impostors, since recognition is not performed only for preregistered customers. Second, the characteristics of speech change over time, which severely degrades the recognition rate and inconveniences users as the number of required utterances increases. Third, characteristics common across speakers cause false recognition results. Silence intervals included within the speech also lower the identification rate. In this paper, we propose improving the identification rate by removing the silence intervals before running the identification algorithm. Speech regions are detected using the zero-crossing rate and the signal energy, which locate the start and end points of the speech, and the DTW algorithm is then applied. As a result, the proposed method achieves about a 3% higher recognition rate than the conventional method.
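The energy and zero-crossing-rate endpoint detection described above might be sketched as follows; the thresholds and the per-frame layout are illustrative assumptions, not the paper's values:

```python
import numpy as np

def trim_silence(frames, energy_thresh=0.01, zcr_thresh=0.4):
    """Drop leading and trailing frames judged silent.

    frames: 2-D array-like, frames x samples.  A frame counts as speech
    if its short-time energy or its zero-crossing rate is high (high ZCR
    catches low-energy unvoiced fricatives at word edges).
    """
    frames = np.asarray(frames, dtype=float)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    speech = (energy > energy_thresh) | (zcr > zcr_thresh)
    idx = np.flatnonzero(speech)
    if idx.size == 0:
        return frames[:0]
    return frames[idx[0]:idx[-1] + 1]
```

The trimmed frame sequence (or features computed from it) would then be passed to the DTW matcher, so that silence no longer contributes to the alignment cost.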

로컬 프레임 속도 변경에 의한 데이터 증강을 이용한 트랜스포머 기반 음성 인식 성능 향상 (Improving transformer-based speech recognition performance using data augmentation by local frame rate changes)

  • 임성수;강병옥;권오욱
    • 한국음향학회지 / Vol. 41, No. 2 / pp.122-129 / 2022
  • This paper proposes a method for improving the performance of a transformer-based speech recognizer through data augmentation that locally adjusts the frame rate. First, the start time and length of the segment to augment are chosen at random from the original speech data. The frame rate of the selected segment is then changed to a new rate using linear interpolation. Experiments on the Wall Street Journal and LibriSpeech speech data show that, although convergence takes longer than the baseline, recognition accuracy improves in most cases. To further improve performance, various parameters such as the length and rate of the modified segment were optimized. The proposed method shows relative performance improvements of 11.8% and 14.9% over the baseline on the Wall Street Journal and LibriSpeech data, respectively.
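The local re-interpolation step can be sketched as below. In the proposed augmentation the segment start, length, and rate factor are drawn at random per utterance; here they are plain parameters for clarity, and this is a sketch of the idea rather than the authors' code:

```python
import numpy as np

def stretch_segment(x, start, length, factor):
    """Re-interpolate x[start:start+length] at a new local rate.

    factor > 1 speeds the segment up (fewer samples); factor < 1 slows
    it down.  The surrounding signal is left untouched, so only a local
    stretch of the utterance changes rate.
    """
    x = np.asarray(x, dtype=float)
    seg = x[start:start + length]
    new_len = max(2, int(round(length / factor)))
    # Linear interpolation onto the new, uniformly spaced time axis.
    t_new = np.linspace(0, length - 1, new_len)
    seg_new = np.interp(t_new, np.arange(length), seg)
    return np.concatenate([x[:start], seg_new, x[start + length:]])
```

Applying this with a random segment and a factor drawn from a small range (e.g. 0.8 to 1.2) per training utterance yields locally speed-perturbed copies of the data.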