• Title/Summary/Keyword: speech file

Search Results: 31, Processing Time: 0.018 seconds

The Acoustic Analysis of Korean Read Speech - with respect to the prosodic phrasing - (한국어 낭독체 문장의 음향분석 -바람과 햇님의 운율구 생성을 중심으로-)

  • Sung Chuljae
    • Proceedings of the KSPS conference / 1996.02a / pp.157-172 / 1996
  • This study suggests a theoretical methodology for analyzing prosodic patterns in Korean read speech. Engineering work relevant to phonetic study has focused on the importance of prosodic phrasing, which may play a major role in analyzing phonetic databases. Before establishing the prosodic phrase as the prosodic unit, we should describe the features of the boundary signal in a target sentence. With this in mind, the general characteristics of read speech and ToBI (Tones and Break Indices), currently the prevailing prosodic labelling system, are presented as a first step. The concrete analysis was carried out on the Korean version of the fable 'The North Wind and the Sun', in which about 25 prosodic units were identified perceptually by 5 subjects. Having established the various kinds of information that can be used to decide boundary positions systematically, we can proceed to the next step, viz. the acoustic analysis of prosodic units. The most important task for improving the naturalness of synthetic speech may be, first, detecting the boundary signals in the speech file and, accordingly, re-establishing them within the raw text (a brief illustrative sketch follows this entry).

  • PDF
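
A minimal sketch of the boundary-detection step mentioned in the abstract above, assuming a mono recording at a hypothetical path; it treats silent stretches found by librosa's energy-based splitter as candidate prosodic-unit boundaries, a simplification of the paper's perceptual and acoustic criteria:

    # Candidate prosodic-boundary detection via pause (silence) detection.
    # Simplified: real prosodic phrasing also uses F0 and duration cues.
    import librosa

    y, sr = librosa.load("read_speech.wav", sr=None)  # hypothetical file

    # Non-silent intervals; the gaps between them are pause candidates.
    intervals = librosa.effects.split(y, top_db=30)

    for end_prev, start_next in zip(intervals[:-1, 1], intervals[1:, 0]):
        pause_ms = (start_next - end_prev) / sr * 1000
        if pause_ms > 100:  # pauses longer than 100 ms as boundary candidates
            print(f"candidate boundary at {end_prev / sr:.2f}s "
                  f"({pause_ms:.0f} ms pause)")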

Design and Implementation of a Speech Synthesis Engine and a Plug-in for Internet Web Page (인터넷 웹페이지의 음성합성을 위한 엔진 및 플러그-인 설계 및 구현)

  • Lee, Hee-Man;Kim, Ji-Yeong
    • The Transactions of the Korea Information Processing Society / v.7 no.2 / pp.461-469 / 2000
  • In this paper, the design and implementation of a Netscape plug-in and a speech synthesis engine that generates speech sounds from the text of web pages are described. The steps for generating speech sound from a web page are as follows: the speech synthesis plug-in is activated when Netscape finds the audio/xesp MIME data type embedded in the browsed web page; the HTML file referenced in the EMBED HTML tag is downloaded from the referenced URL and sent to the commander object located in the plug-in; the speech synthesis engine control tags and the text characters are extracted from the downloaded HTML document by the commander object; and the synthesized speech sounds are generated by the speech synthesis engine. The speech synthesis engine interprets the command streams from the commander object, calling member functions that process the speech segment data in the data banks. The commander object and the speech synthesis engine are designed as independent objects to enhance flexibility and portability (a brief illustrative sketch follows this entry).

  • PDF
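
A loose sketch of what the commander object's extraction step might look like; the control-tag name below is hypothetical, since the paper does not publish its tag vocabulary. It separates engine control tags from the speakable text of a downloaded HTML document:

    # Separate hypothetical TTS control tags from speakable text in HTML.
    from html.parser import HTMLParser

    class CommanderParser(HTMLParser):
        """Collects plain text plus <voice .../> control tags (assumed name)."""
        def __init__(self):
            super().__init__()
            self.commands = []   # engine control directives
            self.text = []       # characters to be synthesized

        def handle_starttag(self, tag, attrs):
            if tag == "voice":   # assumed control tag, not from the paper
                self.commands.append(dict(attrs))

        def handle_data(self, data):
            if data.strip():
                self.text.append(data.strip())

    parser = CommanderParser()
    parser.feed("<html><body><voice rate='fast'/>Hello web speech.</body></html>")
    print(parser.commands, parser.text)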

Research on Construction of the Korean Speech Corpus in Patient with Velopharyngeal Insufficiency (구개인두부전증 환자의 한국어 음성 코퍼스 구축 방안 연구)

  • Lee, Ji-Eun;Kim, Wook-Eun;Kim, Kwang Hyun;Sung, Myung-Whun;Kwon, Tack-Kyun
    • Korean Journal of Otorhinolaryngology-Head and Neck Surgery / v.55 no.8 / pp.498-507 / 2012
  • Background and Objectives: We aimed to develop a Korean version of the velopharyngeal insufficiency (VPI) speech corpus system. Subjects and Method: After developing a 3-channel simultaneous speech recording device capable of recording nasal/oral and normal compound speech separately, voice data were collected from VPI patients aged more than 10 years, with or without a history of operation or prior speech therapy. These were compared with a control group in which VPI was simulated by inserting a 3-French Nelaton tube via both nostrils through the nasopharynx and pulling the soft palate anteriorly to varying degrees. Three transcribers took part: a speech therapist transcribed the voice files into text, a second transcriber graded speech intelligibility and severity, and a third tagged the types and onset times of misarticulations. The database was composed of three main tables covering (1) the speaker's demographics, (2) the condition of the recording system and (3) the transcripts. All of these were interfaced with the Praat voice analysis program, which enables the user to extract exact transcribed phrases for analysis. Results: In the simulated VPI group, the higher the severity of VPI, the higher the nasalance score obtained. In addition, we could verify the vocal energy that characterizes hypernasality and compensation in the nasal/oral and compound sounds spoken by VPI patients, as opposed to that which characterizes the normal control group. Conclusion: With the Korean version of the VPI speech corpus system, patients' common difficulties and speech tendencies in articulation can be objectively evaluated. By comparing these data with those of normal voices, the mispronunciations and dysarticulations of patients with VPI can be corrected (a brief illustrative sketch of the nasalance computation follows below).
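
A minimal sketch of the nasalance computation implied by the Results above, assuming the nasal and oral channels are available as two separate mono WAV files (hypothetical paths); nasalance is conventionally the nasal acoustic energy divided by the total nasal-plus-oral energy:

    # Nasalance score from separately recorded nasal and oral channels.
    import numpy as np
    import librosa

    nasal, sr = librosa.load("nasal_channel.wav", sr=None)  # hypothetical files
    oral, _ = librosa.load("oral_channel.wav", sr=sr)

    nasal_energy = float(np.sum(nasal.astype(np.float64) ** 2))
    oral_energy = float(np.sum(oral.astype(np.float64) ** 2))

    # Conventional definition: nasal / (nasal + oral), as a percentage.
    nasalance = 100.0 * nasal_energy / (nasal_energy + oral_energy)
    print(f"nasalance: {nasalance:.1f}%")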

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services / v.24 no.2 / pp.11-18 / 2023
  • Recently, in the field of Speech Emotion Recognition (SER), many studies have been conducted to improve accuracy using voice features and modeling. In addition to modeling studies that improve the accuracy of existing voice emotion recognition, various studies using voice features are being conducted. In this paper, voice files are separated by time interval in a time-series manner, motivated by the fact that vocal emotion is related to the flow of time. After separating the voice files, we propose a model that classifies the emotions of speech data by extracting the speech features Mel spectrogram, Chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstral coefficients (MFCC) and applying them to a recurrent neural network model used for sequential data processing. In the proposed method, voice features were extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compared and analyzed the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset (a brief illustrative sketch of the pipeline follows below).
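
A minimal sketch of the described pipeline under stated assumptions (hypothetical file path, 1-second chunks, 4 emotion classes): the file is split into fixed-length chunks, the five named feature types are extracted per chunk with librosa, and the resulting sequence is fed to a small Keras GRU classifier:

    # Chunk a voice file, extract Mel/Chroma/ZCR/RMS/MFCC per chunk,
    # then classify the chunk sequence with a GRU.
    import numpy as np
    import librosa
    import tensorflow as tf

    y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical file
    chunk = sr  # 1-second chunks (assumed; the paper's interval may differ)

    def chunk_features(seg):
        mel = librosa.feature.melspectrogram(y=seg, sr=sr).mean(axis=1)
        chroma = librosa.feature.chroma_stft(y=seg, sr=sr).mean(axis=1)
        zcr = librosa.feature.zero_crossing_rate(seg).mean(axis=1)
        rms = librosa.feature.rms(y=seg).mean(axis=1)
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13).mean(axis=1)
        return np.concatenate([mel, chroma, zcr, rms, mfcc])

    seq = np.stack([chunk_features(y[i:i + chunk])
                    for i in range(0, len(y) - chunk + 1, chunk)])

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(None, seq.shape[1])),  # variable-length sequences
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(4, activation="softmax"),  # assumed 4 classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    print(model.predict(seq[np.newaxis]).shape)  # untrained; shapes only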

Voice Message System Supporting Massive Outbound Call (대량의 발신 호를 지원하는 음성 메시지 시스템)

  • Kim Jeonggon
    • MALSORI / no.49 / pp.77-94 / 2004
  • In this paper, a new voice message system supporting massive outbound calls is proposed. The basic idea of the proposed system is to perform all text-to-speech conversion and the mixing of text with attached music files in advance, and to store the results of this pre-processing in a cache server connected to the IVR. The new system is optimized for massive outbound calling by distributing the web-server load caused by server-side scripts, which access the database and generate dynamic VoiceXML documents, across the client module and the server module of the web server. The proposed voice message system was test-deployed at a domestic voice message application service provider, and it is shown that it reduced the response latency problem of the test-bed voice message system (a brief illustrative sketch follows this entry).

  • PDF
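
A minimal sketch of the pre-processing and caching idea described above; synthesize() below is a placeholder standing in for the paper's TTS conversion and music mixing, and the directory cache stands in for its cache server. Messages are rendered once, keyed by a content hash, and every outbound call reuses the cached file:

    # Pre-render voice messages once; serve outbound calls from a cache.
    import hashlib
    from pathlib import Path

    CACHE = Path("voice_cache")  # stands in for the paper's cache server
    CACHE.mkdir(exist_ok=True)

    def synthesize(text: str) -> bytes:
        """Placeholder for TTS plus music mixing (not the paper's engine)."""
        return text.encode("utf-8")  # dummy audio payload

    def cached_message(text: str) -> Path:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        path = CACHE / f"{key}.pcm"
        if not path.exists():  # pre-process once, reuse for every call
            path.write_bytes(synthesize(text))
        return path

    # Massive outbound calling reuses the same pre-rendered file.
    for _ in range(3):
        print(cached_message("Your bill is ready."))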

Design of pitch parameter search architecture for a speech coder using dual MACs (Dual MAC을 이용한 음성 부호화기용 피치 매개변수 검색 구조 설계)

  • 박주현;심재술;김영민
    • Journal of the Korean Institute of Telematics and Electronics A / v.33A no.5 / pp.172-179 / 1996
  • In this paper, QCELP (Qualcomm Code-Excited Linear Prediction), the vocoder algorithm of CDMA (Code Division Multiple Access), was analyzed, and a pitch parameter search architecture for a 16-bit programmable DSP (digital signal processor) for QCELP was designed. Because the parameter search is accelerated by a high-speed DSP using two MACs, the speech codec specification for digital cellular can be satisfied. A FIFO (first-in, first-out) memory was also implemented using a register file to improve data access time. The DSP was designed with the ASIC design tool COMPASS using a top-down design methodology, making it possible to cope with rapid changes in the mobile communication market (a brief illustrative sketch follows this entry).

  • PDF
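
A minimal sketch of the kind of pitch-lag search that dominates a QCELP encoder's workload, run on a synthetic 100 Hz signal; the lag range is an assumption, and this plain autocorrelation loop only illustrates the multiply-accumulate-heavy inner product that the paper's dual-MAC datapath accelerates:

    # Autocorrelation pitch-lag search; the dot product is the MAC-heavy step.
    import numpy as np

    sr = 8000
    t = np.arange(sr // 10) / sr         # 100 ms of synthetic "speech"
    frame = np.sin(2 * np.pi * 100 * t)  # 100 Hz voiced signal

    best_lag, best_score = 0, -np.inf
    for lag in range(20, 148):           # assumed lag range at 8 kHz
        # A dual-MAC datapath can retire two of these products per cycle.
        score = np.dot(frame[lag:], frame[:-lag])
        if score > best_score:
            best_lag, best_score = lag, score

    print(f"pitch lag: {best_lag} samples ({sr / best_lag:.1f} Hz)")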

Learning French Intonation with a Base of the Visualization of Melody (억양의 시각화를 통한 프랑스어의 억양학습)

  • Lee, Jung-Won
    • Speech Sciences / v.10 no.4 / pp.63-71 / 2003
  • This study experiments with learning French intonation based on the visualization of melody, a technique first employed in the early sixties to re-educate people with communication disorders. Here, however, the visualization of melody is applied to foreign language learning, and it produced successful results in many ways, especially in learning foreign intonation. We used PitchWorks to visualize some French intonation samples and experimented with intonation learning based on bitmap pictures projected on a screen. The students could see the melody curve while listening to the sentences. We observed great achievement on the part of the students in learning intonation, as verified by the results of this experiment. The students were much more motivated and showed greater improvement in recognizing intonation contours than when learning by hearing alone. However, the lack of animation in the bitmap files could reduce the experiment to boring pattern practice. It would be better to use a sound analyser designed for pitch analysis, such as PitchWorks, since students can then actually see their own fluctuating intonation visualized on the screen (a brief illustrative sketch follows this entry).

  • PDF
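
A minimal sketch of melody visualization in the same spirit, substituting librosa's pYIN F0 tracker and matplotlib for PitchWorks (the file path is hypothetical); learners would watch the plotted contour while listening to the utterance:

    # Visualize the F0 (intonation) contour of an utterance as a static image.
    import librosa
    import matplotlib.pyplot as plt

    y, sr = librosa.load("phrase_francaise.wav", sr=None)  # hypothetical file
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

    times = librosa.times_like(f0, sr=sr)
    plt.plot(times, f0)          # the melody curve the learners follow
    plt.xlabel("time (s)")
    plt.ylabel("F0 (Hz)")
    plt.title("Intonation contour")
    plt.savefig("contour.png")   # a static picture, like the paper's bitmap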

Design of a dedicated DSP core for speech coder using dual MACs (Dual MAC를 이용한 음성 부호화기용 DSP Core 설계에 관한 연구)

  • 박주현
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.137-140 / 1995
  • In this paper, QCELP, the vocoder algorithm of CDMA, was analyzed, and a 16-bit programmable DSP core for QCELP was designed. By using two MACs in the DSP, a low-power DSP can be implemented and the parameter computation time reduced. A FIFO memory was also implemented using a register file to improve data access time. The DSP was designed with the logic synthesis tool COMPASS using a top-down design methodology, making it possible to cope with rapid changes in the mobile communication market.

  • PDF

Implementation of Speech Recognition and Flight Controller Based on Deep Learning for Control to Primary Control Surface of Aircraft

  • Hur, Hwa-La;Kim, Tae-Sun;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information / v.26 no.9 / pp.57-64 / 2021
  • In this paper, we propose a device that can control the primary control surfaces of an aircraft by recognizing speech commands. The command set consists of 19 commands, and a learning model is constructed from a total of 2,500 data samples. The model is a CNN built with the Sequential API of TensorFlow-based Keras, and features are extracted from the training speech files with the MFCC algorithm. The model consists of two convolution layers for feature recognition and a fully connected classifier composed of two dense layers. Accuracy on the validation dataset was 98.4%, and performance evaluation on the test dataset showed an accuracy of 97.6%. In addition, a Raspberry Pi-based control device was designed and implemented, and its operation was confirmed to be normal. In the future, the system can be used as a virtual training environment in the fields of speech-recognition-based automatic flight and aviation maintenance (a brief illustrative sketch of the model follows below).
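
A minimal sketch of the described classifier; the MFCC input shape is an assumption (the abstract does not give exact dimensions). Two convolution layers handle feature recognition and two dense layers form the fully connected classifier for the 19 commands:

    # CNN for 19 speech commands on MFCC inputs, per the described layout.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(40, 32, 1)),  # 40 MFCCs x 32 frames (assumed)
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),    # first dense layer
        tf.keras.layers.Dense(19, activation="softmax"),  # 19 command classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()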

A Study on the Signal Processing for Content-Based Audio Genre Classification (내용기반 오디오 장르 분류를 위한 신호 처리 연구)

  • 윤원중;이강규;박규식
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.271-278 / 2004
  • In this paper, we propose a content-based audio genre classification algorithm that automatically classifies a query audio clip into one of five genres (Classic, Hiphop, Jazz, Rock, Speech) using digital signal processing. From a 20-second query audio file, the signal is segmented into 23 ms frames with a non-overlapping Hamming window, and 54-dimensional feature vectors, including spectral centroid, rolloff, flux, LPC, and MFCC, are extracted from each query clip. For classification, k-NN, Gaussian, and GMM classifiers are used. To choose optimum features from the 54-dimensional feature vectors, SFS (Sequential Forward Selection) is applied to select a 10-dimensional optimum feature subset, which is then used by the genre classification algorithm. The experimental results verify the superior performance of the proposed method, which achieves a success rate near 90% for genre classification, a 10%~20% improvement over previous methods. For an actual user environment, where the feature vector is extracted from a random interval of the query audio, the method shows an overall 80% success rate, except in the extreme cases of the beginning and ending portions of the query file (a brief illustrative sketch follows below).
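
A minimal sketch of the front end plus a k-NN classifier under simplified assumptions: librosa stands in for the paper's feature extractor, a reduced feature set (centroid, rolloff, MFCC) replaces the full 54-dimensional vector, and random placeholder values stand in for real training data:

    # Frame-level features from a query clip, then k-NN genre classification.
    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    y, sr = librosa.load("query.wav", sr=22050, duration=20.0)  # hypothetical
    n = int(0.023 * sr)  # 23 ms non-overlapping Hamming-windowed frames

    kw = dict(n_fft=n, hop_length=n, window="hamming")
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, **kw)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, **kw)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, **kw)
    features = np.vstack([centroid, rolloff, mfcc]).mean(axis=1)  # clip-level

    genres = ["Classic", "Hiphop", "Jazz", "Rock", "Speech"]
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(50, features.size))  # placeholder training set
    y_train = rng.integers(0, 5, size=50)

    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print(genres[int(knn.predict(features[np.newaxis])[0])])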