• Title/Abstract/Keyword: Automatic Speech Analysis

Search results: 74 items

음성인식 기반 응급상황관제 (Emergency dispatching based on automatic speech recognition)

  • 이규환;정지오;신대진;정민화;강경희;장윤희;장경호
    • 말소리와 음성과학 / Vol. 8, No. 2 / pp.31-39 / 2016
  • In emergency dispatching at the 119 Command & Dispatch Center, inconsistencies between the 'standard emergency aid system' and the 'dispatch protocol', both of which are mandatory to follow, reduce the efficiency of the dispatcher's work. If an emergency dispatch system uses automatic speech recognition (ASR) to process the dispatcher's protocol speech during case registration, it can instantly extract and provide the information required by the 'standard emergency aid system', making rescue commands more efficient. For this purpose, we developed a 400,000-word Korean large-vocabulary continuous speech recognition system for the emergency dispatch system, with vocabulary drawn from the news, SNS, blog, and emergency rescue domains. The acoustic model was trained on 1,300 hours of telephone (8 kHz) speech, and the language model on a 13 GB text corpus. From a transcribed corpus of 6,600 real telephone calls, call logs with the emergency rescue command class and the identified major symptom were extracted and linked to the rescue activity log and the National Emergency Department Information System (NEDIS). ASR is applied to the dispatcher's repetition utterances about patient information, and the emergency patient information is extracted based on the Levenshtein distance between the ASR result and the template information. Experimental results show a speech recognition word error rate of 9.15% and an emergency response detection performance of 95.8% for the emergency dispatch system.
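
The patient-information matching step above relies on the Levenshtein distance between the ASR output and template entries. Below is a minimal sketch of that matching, assuming hypothetical template strings; it is an illustration, not the paper's implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b (single-row dynamic programming)."""
    dp = list(range(len(b) + 1))          # dp[j] = distance(a[:i], b[:j])
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

# Hypothetical symptom templates from the 'standard emergency aid system'
templates = ["심정지", "호흡곤란", "교통사고"]
repetition_utterance_asr = "호흡 곤란"   # hypothetical ASR result

# Choose the template with the smallest edit distance to the ASR result
best = min(templates, key=lambda t: levenshtein(repetition_utterance_asr, t))
print(best)  # -> 호흡곤란
```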

구개열 환자 발음 판별을 위한 특징 추출 방법 분석 (Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients)

  • 김성민;김우일;권택균;성명훈;성미영
    • 정보과학회 논문지 / Vol. 42, No. 11 / pp.1372-1379 / 2015
  • This paper presents experiments analyzing the performance of feature extraction methods that can be used to automatically distinguish the disordered speech of cleft palate patients from that of normal speakers. The study is a foundational step in developing a software system for automatic recognition and restoration of disordered speech, pursued to improve the welfare of people with speech disorders. The speech data used in the experiments are Korean monosyllables covering 14 basic consonants, 5 complex consonants, and 7 vowels, collected from three groups: normal speakers, cleft palate patients, and simulated patients. Features were extracted with three methods, LPCC, MFCC, and PLP; recognition training was performed with GMM acoustic models, and recognition experiments were then conducted on the collected monosyllable data. The results show that MFCC was overall the best feature extraction method for distinguishing the disordered speech of cleft palate patients from normal speech. These results are expected to inform research on automatically recognizing and restoring the inaccurate pronunciation of cleft palate patients and on tools for measuring the severity of cleft palate speech disorders.
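
A minimal sketch of the MFCC-plus-GMM pipeline the study found most effective, assuming librosa and scikit-learn; the file names and GMM size are hypothetical, and the paper's actual training setup may differ.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path: str) -> np.ndarray:
    """Frame-level MFCCs for one monosyllable recording."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

# Hypothetical file lists for the two classes
normal_feats = np.vstack([mfcc_features(f) for f in ["normal_01.wav"]])
cleft_feats  = np.vstack([mfcc_features(f) for f in ["cleft_01.wav"]])

# One GMM acoustic model per class
gmm_normal = GaussianMixture(n_components=8).fit(normal_feats)
gmm_cleft  = GaussianMixture(n_components=8).fit(cleft_feats)

def classify(path: str) -> str:
    """Label a recording by the class model with higher average log-likelihood."""
    x = mfcc_features(path)
    return "normal" if gmm_normal.score(x) > gmm_cleft.score(x) else "cleft"
```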

4세 말소리발달 선별검사 개발과 한국어말소리분석도구(Korean Speech Sound Analysis Tool, KSAT)의 활용 (Developing the speech screening test for 4-year-old children and application of Korean speech sound analysis tool (KSAT))

  • 김수진;장기완;장문수
    • 말소리와 음성과학 / Vol. 16, No. 1 / pp.49-55 / 2024
  • This study developed a three-sentence repetition screening test for assessing the speech sound development of 4-year-old children and provides norms for comparison with peers. The screening test was administered to 48 children, 24 in the first half and 24 in the second half of age four. The screening results correlated at .7 with an existing speech sound disorder assessment. Phonological development indices and error patterns obtained from the screening test were compared between the younger and older 4-year-old groups; the older group's development index was higher, but the difference was not statistically significant. All analyses used the Korean Speech Sound Analysis Tool (KSAT), and automatic analysis results were compared with a clinician's manual analysis; agreement between the automatic and manual error-pattern analyses was 93.63%. The significance of this study is that it presents speech sound norms for 4-year-olds on a three-sentence repetition screening test at the elicited-sentence level and examines the applicability of the KSAT in clinical and research settings.
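
The 93.63% figure above is a percent agreement between automatic and manual error-pattern codings. A minimal sketch of how such item-level agreement can be computed, with hypothetical codings (not KSAT output):

```python
# Hypothetical error-pattern codings per item: automatic vs. manual
auto   = ["stopping", "fronting", None, "gliding", None]
manual = ["stopping", "fronting", None, "deaffrication", None]

# Count items where the two analyses assign the same code (including "no error")
matches = sum(a == m for a, m in zip(auto, manual))
agreement = 100 * matches / len(auto)   # 80.00 here; 93.63 in the study
print(f"{agreement:.2f}%")
```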

파킨슨병 환자 대상 조음교대운동의 음향적 분석 (An Acoustic Analysis of Diadochokinesis in Patients with Parkinson's Disease)

  • 강영애;박현영;구본석
    • 말소리와 음성과학 / Vol. 5, No. 4 / pp.3-15 / 2013
  • The acoustic analysis of diadochokinesis (DDK) has been used to evaluate dysarthria; however, no automated method for this evaluation has been available. The aim of this study was to introduce a new automated program for measuring DDK tasks and to apply it to patients with idiopathic Parkinson's disease (IPD). Forty-seven patients with IPD and a healthy control group of twenty participants were selected, with every DDK task recorded three times. Twenty-five acoustic parameters were implemented in the program; the relevant parameters, DDK timing, pitch-related, and intensity measures, were analyzed by two-way ANOVA. Significant differences between the groups were found in DDK timing, pitch-related parameters, and intensity parameters. The findings indicate that the pitch of the control group was more stable than that of the IPD group. Although the patients with IPD showed higher intensity values, this was attributable to the weakness of the IPD group, who could not control their speech within a single breath.
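
The paper's 25-parameter program is not public; as one hedged illustration of a DDK timing measure, the sketch below estimates syllable repetition rate from energy-envelope peaks using librosa. The file name and peak-picking thresholds are assumptions.

```python
import librosa

def ddk_rate(path: str) -> float:
    """Rough /pa-ta-ka/ repetition rate: RMS-energy peaks per second."""
    y, sr = librosa.load(path, sr=16000)
    hop = 160                                    # 10 ms hop at 16 kHz
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]
    # Peaks in the energy envelope approximate syllable nuclei
    peaks = librosa.util.peak_pick(rms, pre_max=5, post_max=5,
                                   pre_avg=10, post_avg=10,
                                   delta=0.01, wait=10)
    return len(peaks) / (len(y) / sr)            # syllables per second
```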

영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석 (An acoustical analysis of synchronous English speech using automatic intonation contour extraction)

  • 이서배
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.97-105 / 2015
  • This research focuses on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two speaking modes (solo vs. synchronous) by 28 native speakers of English (12 women and 16 men). Synchronous speech is found to be slower than solo speech, and women are found to speak more slowly than men; the effect of speaking mode on speech rate is larger than that of gender, with no interaction between the two factors. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope), and Pd (pitch point distance) than solo speech, again with no interaction between speaking mode and gender. Analysis of sentence-level features reveals that synchronous speech has smaller Sr (sentence-level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch), and MinNr (normalized minimum pitch), but greater Min (minimum pitch) and Sd (sentence duration), than solo speech. It is also shown that the higher the Mid (median pitch), MaxNr, and MinNr in the solo mode, the more they are reduced in the synchronous mode. Max, Min, and Mid show greater speaker discriminability than the other features.
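
A minimal sketch of extracting an F0 contour and computing a few of the sentence-level features named above (Sd, Sr, Mid), assuming librosa's pyin tracker and a hypothetical audio file; the paper's own extraction pipeline may differ.

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical file

# Probabilistic YIN F0 contour; unvoiced frames come back as NaN
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]                            # keep voiced frames only

features = {
    "Sd":  len(y) / sr,               # sentence duration (s)
    "Sr":  f0.max() - f0.min(),       # sentence-level pitch range (Hz)
    "Mid": float(np.median(f0)),      # median pitch (Hz)
}
```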

화자식별 기반의 AI 음성인식 서비스에 대한 사이버 위협 분석 (Cyber Threats Analysis of AI Voice Recognition-based Services with Automatic Speaker Verification)

  • 홍천호;조영호
    • 인터넷정보학회논문지 / Vol. 22, No. 6 / pp.33-40 / 2021
  • Automatic speech recognition (ASR) is a technology that analyzes human speech as an audio signal and automatically converts it into a character string for understanding. Early speech recognition began with recognizing a single word and has evolved to recognizing sentences of two or more words. High recognition rates in real-time voice interaction maximize the convenience of natural information delivery and are expanding the range of applications. On the other hand, as speech recognition technology is widely deployed, concern about related cyber attacks and threats is also growing. A review of prior work shows active research on the technology itself, such as devising automatic speaker verification (ASV) techniques and improving their accuracy, but few diverse, in-depth analyses of cyber attacks and threats against the speaker verification built into real-world voice recognition services. This study proposes a cyber attack model that bypasses voice authentication by manipulating voice frequency and speed, targeting AI voice recognition services equipped with automatic speaker verification, and analyzes the resulting cyber threats through experiments on the speaker verification systems of commercial smartphones. We thereby aim to raise awareness of the severity of these threats and stimulate research interest in effective countermeasures.
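
The attack model manipulates voice frequency and speed; a minimal sketch of those two signal transformations with librosa and soundfile, using hypothetical file names. This illustrates only the manipulations, not the paper's full attack procedure.

```python
import librosa
import soundfile as sf

y, sr = librosa.load("enrolled_speaker.wav", sr=16000)  # hypothetical file

# Shift pitch by +2 semitones and speed speech up by 10%,
# the kind of frequency/tempo manipulation the attack model varies
y_pitch = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)
y_fast  = librosa.effects.time_stretch(y_pitch, rate=1.1)

sf.write("manipulated.wav", y_fast, sr)  # feed to the ASV system under test
```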

텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구 (A Study of Speech Control Tags Based on Semantic Information of a Text)

  • 장문수;정경채;강선미
    • 음성과학 / Vol. 13, No. 4 / pp.187-200 / 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for people with disabilities, and more. However, the sound quality of speech synthesizers has not yet reached a level that satisfies users. Existing synthesizers generate rhythm only from interval cues such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so they struggle to reproduce natural human rhythm even when built on a large speech database. One remedy is to select rhythm from higher-level linguistic processing. This paper proposes a method for generating tags that control rhythm by analyzing the meaning of a sentence together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes sentence meaning with situational information such as the preceding sentence, the conversational situation, and the relationships among the participants. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of SFG meaning analysis and speech waveform analysis.
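
The paper's SSCT format is not reproduced here; as a hedged sketch of the general idea, the snippet below wraps a sentence in an SSML-style prosody tag chosen by a hypothetical semantic label. The label set and prosody values are assumptions, not the SSCT specification.

```python
# Hypothetical mapping from a semantic/situational label to prosody settings
PROSODY = {
    "question": {"pitch": "+10%", "rate": "95%"},
    "command":  {"pitch": "-5%",  "rate": "100%"},
    "neutral":  {"pitch": "+0%",  "rate": "100%"},
}

def tag_sentence(text: str, label: str) -> str:
    """Wrap a sentence in an SSML-like prosody tag chosen by its meaning."""
    p = PROSODY.get(label, PROSODY["neutral"])
    return f'<prosody pitch="{p["pitch"]}" rate="{p["rate"]}">{text}</prosody>'

print(tag_sentence("Could you open the window?", "question"))
```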


Shapes of Vowel F0 Contours Influenced by Preceding Obstruents of Different Types - Automatic Analyses Using Tilt Parameters -

  • Jang, Tae-Yeoub
    • 음성과학 / Vol. 11, No. 1 / pp.105-116 / 2004
  • The fundamental frequency of a vowel is known to be affected by the identity of the preceding consonant; the general agreement is that strong consonants trigger higher F0 than weak consonants. However, there has been disagreement about the shape of these segmentally affected F0 contours: some studies report that contour shapes are differentiated by consonant type, while others regard this observation as misleading. This research attempts to resolve the controversy by investigating the shapes and slopes of F0 contours in Korean word-level speech data produced by four male speakers. Instead of relying entirely on traditional human intuition and judgment, I employed an automatic F0 contour analysis technique known as tilt parameterisation (Taylor 2000). After necessary manipulation of the F0 contour of each data token, the various parameters are collapsed into a single tilt value that directly indicates the shape of the contour. The result, in terms of statistical inference, shows that it is not viable to conclude that consonant type is significantly related to F0 contour shape. A supplementary measurement was also made to see whether the slope of each contour carries meaningful information; unlike the shapes themselves, slopes appear more useful in practice for consonantal differentiation, although confirmation through further refined experiments is required.
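
Tilt parameterisation collapses a rise-fall F0 event into a single value in [-1, +1]. Below is a minimal sketch following the commonly cited formulation of Taylor's tilt, which averages an amplitude term and a duration term; the example numbers are hypothetical.

```python
def tilt(a_rise: float, a_fall: float, d_rise: float, d_fall: float) -> float:
    """Overall tilt of an F0 event (Taylor 2000): +1 pure rise, -1 pure fall."""
    amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall))
    dur = (d_rise - d_fall) / (d_rise + d_fall)
    return (amp + dur) / 2

# A contour rising 30 Hz over 120 ms then falling 10 Hz over 60 ms
print(tilt(30.0, 10.0, 0.120, 0.060))  # ~0.417: a rise-dominated shape
```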


Intra-and Inter-frame Features for Automatic Speech Recognition

  • Lee, Sung Joo;Kang, Byung Ok;Chung, Hoon;Lee, Yunkeun
    • ETRI Journal / Vol. 36, No. 3 / pp.514-517 / 2014
  • In this paper, alternative dynamic features for speech recognition are proposed. The goal of this work is to improve speech recognition accuracy by deriving representations of distinctive dynamic characteristics from the speech spectrum. The work was inspired by two temporal dynamics of the speech signal: the highly non-stationary nature of speech, and the inter-frame change of the speech spectrum. We adopt a sub-frame spectrum analyzer to capture very rapid spectral changes within a speech analysis frame. In addition, we attempt to measure spectral fluctuations in a more complex manner than traditional dynamic features such as delta or double-delta. To evaluate the proposed features, speech recognition tests were conducted in smartphone environments. The experimental results show that feature streams simply combined with the proposed features improve the recognition accuracy of a hidden Markov model-based speech recognizer.
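
The paper's sub-frame analyzer is not publicly specified; for context, the sketch below computes the conventional delta and double-delta dynamics it is compared against, using librosa and a hypothetical audio file.

```python
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)      # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Conventional inter-frame dynamics used as the baseline comparison
delta  = librosa.feature.delta(mfcc, order=1)     # first-order (delta)
delta2 = librosa.feature.delta(mfcc, order=2)     # second-order (double-delta)

# Stack static + dynamic streams into one feature matrix (39 x frames)
features = np.vstack([mfcc, delta, delta2])
```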

Differentiation of Aphasic Patients from the Normal Control Via a Computational Analysis of Korean Utterances

  • Kim, HyangHee;Choi, Ji-Myoung;Kim, Hansaem;Baek, Ginju;Kim, Bo Seon;Seo, Sang Kyu
    • International Journal of Contents / Vol. 15, No. 1 / pp.39-51 / 2019
  • Spontaneous speech provides rich information about the linguistic characteristics of individuals, so computational analysis of speech can make the evaluation of patients' speech more efficient. This study aims to provide a method for differentiating persons with and without aphasia based on language usage. Ten aphasic patients and matched normal controls participated; all were asked to describe a set of given words. Their utterances were linguistically processed and compared, and computational analyses ranging from PCA (Principal Component Analysis) to machine learning were conducted to select the relevant linguistic features and, based on them, classify the two groups. Function words, not content words, turned out to be the main differentiator: the most viable discriminators were demonstratives, function words, sentence-final endings, and postpositions. The machine learning classification model was quite accurate (90%) and impressively stable. This study is noteworthy as the first attempt to use computational analysis to characterize word usage patterns in Korean aphasic patients, thereby discriminating them from the normal group.
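
A minimal sketch of the PCA-then-classifier idea on function-word frequency features, assuming scikit-learn; the feature values, classifier choice, and component count are hypothetical, not the study's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical per-speaker frequencies of functional categories:
# [demonstratives, function words, sentence-final endings, postpositions]
X = np.array([[0.02, 0.31, 0.12, 0.18],   # aphasic speakers...
              [0.01, 0.28, 0.10, 0.15],
              [0.06, 0.45, 0.20, 0.30],   # ...and normal controls
              [0.07, 0.48, 0.22, 0.33]])
y = np.array([1, 1, 0, 0])                # 1 = aphasia, 0 = control

# Reduce the feature space, then fit a linear classifier on the components
clf = make_pipeline(PCA(n_components=2), LogisticRegression()).fit(X, y)
print(clf.predict([[0.03, 0.30, 0.11, 0.16]]))  # likely classified as aphasic
```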