• Title/Abstract/Keyword: speech sound

Search results: 625 items (processing time 0.021 s)

원인을 모르는 말소리장애의 하위유형 분류 및 진단 표지에 관한 문헌 고찰 (A literature review on diagnostic markers and subtype classification of children with speech sound disorders)

  • 이루다;김수진
    • 말소리와 음성과학
    • /
• Vol. 14, No. 2
    • /
    • pp.87-99
    • /
    • 2022
  • To develop a Korean version of a diagnostic marker system, it is first necessary to comprehensively review the markers that have been used in domestic research on speech sound disorders. This literature review examined the studies that have been conducted in Korea on children with speech sound disorders of unknown origin. Korean researchers have used a variety of variables as markers to characterize children with speech sound disorders, including markers related to surface speech characteristics as well as markers of other accompanying characteristics. The review confirmed that domestic attention has so far been concentrated on speech-specific markers; some markers may warrant close study from multiple angles because of their influence, while others may require more attention because of the limited number of studies. This review discusses the need for more comprehensive research on the distinctive characteristics of children with speech sound disorders of unknown origin, and directions for future research on the subtype classification and diagnostic markers of speech sound disorders. We also propose potential diagnostic markers and an assessment battery for subclassifying speech sound disorders.

Discrete Wavelet Transform을 이용한 음성 추출에 관한 연구 (A Study Of The Meaningful Speech Sound Block Classification Based On The Discrete Wavelet Transform)

  • 백한욱;정진현
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1999년도 하계학술대회 논문집 G
    • /
    • pp.2905-2907
    • /
    • 1999
  • Meaningful speech sound block classification provides very important information for speech recognition. The classification technique described here is based on the DWT (discrete wavelet transform), which provides a faster algorithm and a useful, compact solution for the pre-processing stage of speech recognition. The algorithm is applied to unvoiced/voiced classification and denoising.

  • PDF
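The abstract gives no implementation details; as a rough illustration of the general idea (not the authors' method), here is a minimal Python sketch in which a single-level Haar DWT splits a frame into low and high bands, and the high-band energy fraction separates tone-like (voiced) from noise-like (unvoiced) frames. All names and the 0.25 threshold are hypothetical.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: approximation (low band) and detail (high band)."""
    x = x[: len(x) // 2 * 2]                      # force even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # low-pass half-band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # high-pass half-band
    return a, d

def classify_frame(frame, threshold=0.25):
    """Label a frame by the fraction of energy in the high (detail) band."""
    a, d = haar_dwt(frame)
    hf = np.sum(d**2) / (np.sum(a**2) + np.sum(d**2))
    return "unvoiced" if hf > threshold else "voiced"

fs = 8000
t = np.arange(fs // 50) / fs                      # one 20 ms frame
voiced = np.sin(2 * np.pi * 150 * t)              # low-frequency, tone-like
rng = np.random.default_rng(0)
unvoiced = rng.standard_normal(len(t))            # broadband, noise-like

print(classify_frame(voiced), classify_frame(unvoiced))
```

A voiced frame concentrates its energy in the low band, so its high-band fraction stays far below the threshold, while white-noise-like frames split energy roughly evenly between the two bands.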

Acoustic Analysis of Speech Disorder Associated with Motor Aphasia - A Case Report -

  • Ko, Myung-Hwan;Kim, Hyun-Ki;Kim, Yun-Hee
    • 음성과학
    • /
• Vol. 7, No. 1
    • /
    • pp.97-107
    • /
    • 2000
  • Motor aphasia is a condition frequently caused by an insult to the territory of the left middle cerebral artery and is usually accompanied by a large lesion involving Broca's area and the adjacent motor and premotor areas. Therefore, a patient with motor aphasia commonly shows articulatory disturbances due to failure of the motor programming of speech sounds. Objective assessment and treatment of phonologic programming is one of the important aspects of speech therapy in aphasic patients. We analyzed the speech disorders accompanying motor aphasia in a 45-year-old man using a computerized sound spectrograph, Visi-Pitch®, and the Multi-Dimensional Voice Program®. We concluded that a computerized speech analysis system is a useful tool to visualize and quantitatively analyse the severity and progression of dysarthria, and the effect of speech therapy.

  • PDF

하모닉 구조를 이용한 두 명의 동시 발화 화자의 위치 추정 (Two Simultaneous Speakers Localization using harmonic structure)

  • 김현경;임성길;이현수
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2005년도 추계 학술대회 발표논문집
    • /
    • pp.121-124
    • /
    • 2005
  • In this paper, we propose a sound localization algorithm for two simultaneous speakers. Because speech is a wide-band signal, there are many frequency sub-bands in which the two speech sounds are mixed. In some sub-bands, however, one speech sound is more dominant than the other. In such sub-bands, the dominant speech sound suffers little interference from the other speech or from noise. In speech, the overtones of the fundamental frequency have large amplitudes; these are called the 'harmonic structure' of speech. Sub-bands belonging to the harmonic structure are more likely to be dominant. Therefore, the proposed localization algorithm is based on the harmonic structure of each speaker. First, the sub-bands that belong to the harmonic structure of each speech signal are selected. Then the two speakers are localized using the selected sub-bands. Simulation results show that localization using the selected sub-bands is more efficient and precise than localization using all sub-bands.

  • PDF
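The sub-band selection step can be illustrated with a toy example (Python; the two fundamental frequencies and all names are hypothetical, not the paper's configuration): each speaker is modeled as a few harmonics, and the FFT bins near the harmonics of a given fundamental are the sub-bands in which that speaker is expected to dominate the mixture.

```python
import numpy as np

fs, n = 8000, 2048
t = np.arange(n) / fs
f0_a, f0_b = 120.0, 210.0        # hypothetical fundamentals of the two speakers

# Model each "speaker" as the first five harmonics of its fundamental.
sig_a = sum(np.sin(2 * np.pi * f0_a * k * t) / k for k in range(1, 6))
sig_b = sum(np.sin(2 * np.pi * f0_b * k * t) / k for k in range(1, 6))

win = np.hanning(n)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec_a = np.abs(np.fft.rfft(sig_a * win))   # each speaker's own spectrum
spec_b = np.abs(np.fft.rfft(sig_b * win))

def harmonic_bins(f0, n_harm=5, tol=10.0):
    """Boolean mask of FFT bins within `tol` Hz of the first harmonics of f0."""
    mask = np.zeros(len(freqs), dtype=bool)
    for k in range(1, n_harm + 1):
        mask |= np.abs(freqs - k * f0) < tol
    return mask

bins_a, bins_b = harmonic_bins(f0_a), harmonic_bins(f0_b)
# In A's harmonic sub-bands speaker A dominates the energy, and vice versa,
# so localization cues taken from those bins are attributable to one speaker.
print(spec_a[bins_a].sum() > spec_b[bins_a].sum(),
      spec_b[bins_b].sum() > spec_a[bins_b].sum())
```

In a real system the fundamentals would first have to be estimated from the mixture, and the selected sub-bands would then feed a localization stage such as cross-correlation between microphones.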

A DSP Implementation of Subband Sound Localization System

  • Park, Kyusik
    • The Journal of the Acoustical Society of Korea
    • /
• Vol. 20, No. 4E
    • /
    • pp.52-60
    • /
    • 2001
  • This paper describes a real-time implementation of a subband sound localization system on a floating-point DSP, the TI TMS320C31. The system determines the two-dimensional location of an active speaker in a closed-room environment with real noise present. The system consists of a two-microphone array connected to the TI DSP hosted by a PC. The implemented sound localization algorithm is subband CPSP, an improved version of the traditional CPSP (Cross-Power Spectrum Phase) method. The algorithm first splits the input speech signal into an arbitrary number of subbands using subband filter banks and calculates the CPSP in each subband. It then averages the CPSP results over the subbands and computes a source location estimate. The proposed algorithm has an advantage over CPSP in that it minimizes the overall estimation error in source location by confining band-dominant noise to its own subband. As a result, it makes it possible to set up a robust real-time sound localization system. For real-time simulation, the input speech is captured using two microphones and digitized by the DSP at a sampling rate of 8192 Hz, 16 bits/sample. The source location is then estimated once per second to satisfy real-time computational constraints. The performance of the proposed system was confirmed by several real-time simulations of speech at distances of 1 m, 2 m, and 3 m with various speech source locations, and it shows over 5% accuracy improvement in source location estimation.

  • PDF

한국어 CV단음절의 음소합성 (The Phoneme Synthesis of Korean CV Mono-Syllables)

  • 안점영;김명기
    • 한국통신학회논문지
    • /
• Vol. 11, No. 2
    • /
    • pp.93-100
    • /
    • 1986
  • Korean CV monosyllables composed of the consonant phonemes /ㄱ, ㄷ, ㅂ, ㅈ/ and their corresponding tense and aspirated counterparts, together with the vowel phonemes /ㅏ, ㅓ, ㅗ, ㅜ, ㅣ/, were analyzed by the partial autocorrelation (PARCOR) method, and the syllables were then synthesized by phoneme synthesis with appropriate control of the analyzed parameters. The analysis showed that consonant duration was longest for aspirated consonants and shortest for tense consonants, and the gains of these sounds showed a similar pattern. The vowel pitch period was longest after lax consonants and became shorter after tense and aspirated consonants. Consonant phonemes were synthesized by controlling the duration and gain of the aspirated sounds, and vowel phonemes were synthesized by controlling the pitch and duration of vowels following lax consonants. CV monosyllables were synthesized by concatenating the consonant and vowel phonemes. The experimental results showed generally good synthesized speech quality and confirmed the feasibility of formulating the rules needed for phoneme synthesis of Korean speech.

  • PDF
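The PARCOR coefficients used above are a by-product of the Levinson-Durbin recursion on the signal's autocorrelation. The sketch below (Python; hypothetical names, not the authors' code) computes the reflection (PARCOR) coefficients and checks the stability condition |k| < 1 that makes them convenient synthesis parameters.

```python
import numpy as np

def parcor(x, order):
    """Reflection (PARCOR) coefficients via the Levinson-Durbin recursion."""
    # Autocorrelation lags r[0..order].
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)                    # prediction polynomial, a[0] = 1
    a[0] = 1.0
    err = r[0]                                 # prediction error energy
    ks = []
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        ks.append(k)
        a_prev = a[1:i].copy()
        a[1:i] = a_prev + k * a_prev[::-1]     # update lower-order coefficients
        a[i] = k
        err *= 1 - k * k                       # error shrinks at every order
    return np.array(ks)

fs = 8000
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.01 * np.random.default_rng(0).standard_normal(1024)
ks = parcor(x, 8)
print(np.all(np.abs(ks) < 1))                  # all PARCORs inside (-1, 1)
```

Because every reflection coefficient of a positive-definite autocorrelation lies strictly inside (-1, 1), quantizing or interpolating PARCORs (as rule-based synthesis does) cannot destabilize the synthesis filter.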

Fillers in the Hong Kong Corpus of Spoken English (HKCSE)

  • Seto, Andy
    • 아시아태평양코퍼스연구
    • /
• Vol. 2, No. 1
    • /
    • pp.13-22
    • /
    • 2021
  • The present study employed an analytical framework characterised by a synthesis of quantitative and qualitative analyses, using specially designed computer software, SpeechActConc, to examine speech acts in business communication. The naturally occurring data from the audio recordings and the prosodic transcriptions of the business sub-corpora of the HKCSE (prosodic) are manually annotated with a speech act taxonomy to find the frequency of fillers, the co-occurring patterns of fillers with other speech acts, and the linguistic realisations of fillers. The discoursal function of fillers to sustain the discourse or to hold the floor has diverse linguistic realisations, ranging from a sound (e.g. 'uhuh') and a word (e.g. 'well') to sounds (e.g. 'um er') and words, namely a phrase ('sort of') or a clause (e.g. 'you know'). Some are even combinations of sound(s) and word(s) (e.g. 'and um', 'yes er um', 'sort of erm'). Among the top five frequent linguistic realisations of fillers, 'er' and 'um' are the most common, found in all six genres with relatively high percentages of occurrence. The remaining more frequent realisations consist of a clause ('you know'), a word ('yeah') and a sound ('erm'). These common forms are syntactically simpler than the less frequent realisations found in the genres. The co-occurring patterns of fillers and other speech acts are diverse; the speech acts that more commonly co-occur with fillers include informing and answering. The findings show that fillers are not only frequently used by speakers in spontaneous conversation but are also mostly realised as sounds or non-linguistic forms.

다이폰을 이용한 한국어 문자-음성 변환 시스템의 설계 및 구현 (Design and Implementation of Korean Text-to-Speech System)

  • 정준구
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1994년도 제11회 음성통신 및 신호처리 워크샵 논문집 (SCAS 11권 1호)
    • /
    • pp.91-94
    • /
    • 1994
  • This paper is a study on the design and implementation of a Korean text-to-speech system. The parameter synthesis method is chosen for speech synthesis, and the PARCOR coefficient, one of the LPC analysis parameters, is used as the acoustic parameter. We use the diphone as the synthesis unit, since it captures the basic naturalness of human speech. The diphone database consists of 1,228 PCM files. The LPC synthesis method has the defect that the clarity of synthesized speech declines when synthesizing unvoiced sounds. In this paper, we improve the clarity of the synthesized speech by using the residual signal as the excitation signal for unvoiced sounds. In addition, to improve naturalness, we control the prosody of the synthesized speech by controlling the energy and pitch patterns. The synthesis system is implemented on a PC/486 and uses a 70 Hz-4.5 kHz band-pass filter for speech input/output, an amplifier, and a TMS320C30 DSP board.

  • PDF
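The residual-excitation idea can be sketched as follows (Python; an illustrative toy, not the paper's system): inverse-filtering the signal with its LPC polynomial A(z) yields the residual, and driving the all-pole synthesis filter 1/A(z) with that residual reconstructs the signal exactly, which is why residual excitation preserves clarity that a simple pulse/noise excitation loses.

```python
import numpy as np

def lpc(x, order):
    """LPC polynomial a (a[0] = 1) via the autocorrelation method."""
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[1:i][::-1]) / err
        a_prev = a[1:i].copy()
        a[1:i] = a_prev + k * a_prev[::-1]
        a[i] = k
        err *= 1 - k * k
    return a

def synthesize(e, a):
    """All-pole synthesis filter 1/A(z) driven by excitation e."""
    p = len(a) - 1
    y = np.zeros(len(e) + p)                   # p leading zeros = zero initial state
    for n in range(len(e)):
        y[n + p] = e[n] - a[1:] @ y[n : n + p][::-1]
    return y[p:]

rng = np.random.default_rng(0)
t = np.arange(2048) / 8000
x = np.sin(2 * np.pi * 180 * t) + 0.05 * rng.standard_normal(2048)

a = lpc(x, 10)
e = np.convolve(x, a)[: len(x)]                # residual: e[n] = sum_j a[j] x[n-j]
x_hat = synthesize(e, a)                       # residual-excited synthesis
print(np.allclose(x, x_hat))                   # reconstruction exact up to rounding
```

In a practical synthesizer the residual is stored (or modeled) per diphone, and prosody control then modifies the pitch and energy of the excitation before it drives 1/A(z).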

T자형 복도 공간의 비상 방송용 확성기 배치별 음압 레벨과 음성 명료도 비교 (Comparison of Sound Pressure Level and Speech Intelligibility of Emergency Broadcasting System at T-junction Corridor Space)

  • 정정호;이성찬
    • 한국화재소방학회논문지
    • /
• Vol. 33, No. 1
    • /
    • pp.105-112
    • /
    • 2019
  • In this study, room acoustic simulation was used to examine whether emergency broadcast sound is delivered clearly and evenly in a T-junction corridor space. The sound absorption of the corridor space and the installation positions and spacing of the emergency loudspeakers were varied, and the resulting distributions of sound pressure level and speech transmission index (STI, RASTI) were compared. The simulation results show that, for clear speech transmission, it is best to install the emergency loudspeakers about 10 m away from the center of the T-junction. Narrowing the 25 m installation spacing of the NFSC allows clearer emergency broadcast sound of sufficient loudness to be delivered more evenly.

An acoustic and perceptual investigation of the vowel length contrast in Korean

  • Lee, Goun;Shin, Dong-Jin
    • 말소리와 음성과학
    • /
• Vol. 8, No. 1
    • /
    • pp.37-44
    • /
    • 2016
  • The goal of the current study is to investigate how a sound change is reflected in production and in perception, and what the effect of lexical frequency is on the loss of sound contrasts. Specifically, the current study examined whether vowel length contrasts are retained in Korean speakers' productions, and whether Korean listeners can distinguish vowel length minimal pairs in perception. Two production experiments and two perception experiments investigated this. For the production tests, twelve Korean native speakers in their 20s and 40s completed a read-aloud task as well as a map task. The results showed that, regardless of age group, all Korean speakers produced vowel length contrasts with small but significant differences in the read-aloud test. Interestingly, the difference between long and short vowels disappeared in the map task, indicating that the speech mode affects the production of vowel length contrasts. For the perception tests, thirty-three Korean listeners completed a discrimination test and a forced-choice identification test. The results showed that Korean listeners still have the perceptual sensitivity to distinguish the lexical meanings of vowel length minimal pairs. We also found that identification accuracy was affected by word frequency, with higher identification accuracy for high- and mid-frequency words than for low-frequency words. Taken together, the current study demonstrates that the speech mode (read-aloud vs. spontaneous) affects the production of a sound undergoing a language change, and that word frequency affects the sound change in speech perception.