• Title/Summary/Keyword: Speech Learning Model


Performance comparison evaluation of speech enhancement using various loss functions (다양한 손실 함수를 이용한 음성 향상 성능 비교 평가)

  • Hwang, Seo-Rim;Byun, Joon;Park, Young-Cheol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.2
    • /
    • pp.176-182
    • /
    • 2021
  • This paper evaluates and compares the performance of Deep Neural Network (DNN)-based speech enhancement models trained with various loss functions. We used a complex-valued network that can exploit the phase information of speech as the baseline model. As loss functions, we consider two basic losses, the Mean Squared Error (MSE) and the Scale-Invariant Source-to-Noise Ratio (SI-SNR), and two perceptually motivated losses, the Perceptual Metric for Speech Quality Evaluation (PMSQE) and the Log Mel Spectra (LMS). The performance comparison was carried out through objective evaluation and listening tests on outputs obtained with various combinations of the loss functions. Test results show that combining a perceptual loss with MSE or SI-SNR improves overall performance, and that the perceptual losses, even when they yielded lower objective scores, performed better in the listening test.
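
The SI-SNR term that the paper combines with MSE can be written compactly. Below is an illustrative NumPy sketch (not the authors' implementation); the combination weight `alpha` is an assumption, not a value from the paper:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-Invariant Source-to-Noise Ratio in dB (higher is better)."""
    est, ref = est - est.mean(), ref - ref.mean()
    # Project the estimate onto the reference to obtain the scaled target.
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def combined_loss(est, ref, alpha=0.5):
    """Weighted sum of MSE and negative SI-SNR (both minimized); alpha is a free choice."""
    return alpha * np.mean((est - ref) ** 2) - (1 - alpha) * si_snr(est, ref)

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(si_snr(noisy, clean))  # roughly 17 dB for this noise level
```

Because SI-SNR is a "higher is better" score, it enters the loss with a negative sign so that both terms are minimized together.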

A Comparative Study on Intonation between Korean, French and English: a ToBI approach

  • Lee, Jung-Won
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.89-110
    • /
    • 2002
  • Intonation is very difficult to describe, and it is even more difficult to compare intonation across languages because their intonation systems differ. This paper aims to compare several intonation phenomena in Korean, French, and English. As a description tool I use ToBI (Tones and Break Indices), a prosodic transcription model originally proposed by Pierrehumbert (1980). In the first part, I summarize the different ToBI systems, namely K-ToBI (Korean ToBI), F-ToBI (French ToBI), and the original ToBI (English ToBI), in order to compare the prosodic differences among the three languages. In the second part, I analyze tokens recorded by Korean, French, and American speakers in the different languages, to show the difficulties of learning another language and to find the prosodic cues needed to pronounce other languages correctly. The point of comparison in this study is the Accentual Phrase (AP) in Korean and French and the intermediate phrase (ip) in English, which for convenience I call the 'subject phrase' in this study.


Study of Machine-Learning Classifier and Feature Set Selection for Intent Classification of Korean Tweets about Food Safety

  • Yeom, Ha-Neul;Hwang, Myunggwon;Hwang, Mi-Nyeong;Jung, Hanmin
    • Journal of Information Science Theory and Practice
    • /
    • v.2 no.3
    • /
    • pp.29-39
    • /
    • 2014
  • In recent years, several studies have proposed making use of the Twitter micro-blogging service to track various trends in online media and discussion. In this study, we specifically examine the use of Twitter to track discussions of food safety in the Korean language. Given the irregularity of keyword use in most tweets, we focus on selecting an optimal machine-learning classifier and feature set to classify the collected tweets. We build classifier models using the Naive Bayes, Naive Bayes Multinomial, Support Vector Machine, and Decision Tree algorithms, all of which show good performance. To select an optimal feature set, we construct a basic feature set as a standard for performance comparison, against which further test feature sets can be evaluated. Experiments show that precision and F-measure are best when using a Naive Bayes Multinomial classifier with a test feature set defined by extracting the Substantive, Predicate, Modifier, and Interjection parts of speech.
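
As a rough illustration of the multinomial Naive Bayes classifier described above, here is a from-scratch sketch (not the authors' setup); the English example tokens merely stand in for POS-filtered Korean morphemes:

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""
    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d}
        self.counts = defaultdict(Counter)      # label -> word frequency table
        self.priors = Counter(labels)           # label -> document count
        for d, y in zip(docs, labels):
            self.counts[y].update(d)
        return self

    def predict(self, doc):
        n = sum(self.priors.values())
        def log_posterior(y):
            c, total = self.counts[y], sum(self.counts[y].values())
            lp = math.log(self.priors[y] / n)
            for w in doc:
                lp += math.log((c[w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.counts, key=log_posterior)

# Toy documents: tokens stand in for POS-filtered morphemes from tweets.
docs = [["recall", "contaminated", "food"], ["recall", "warning", "egg"],
        ["enjoy", "tasty", "lunch"], ["tasty", "dinner", "restaurant"]]
labels = ["alert", "alert", "chat", "chat"]
clf = MultinomialNB().fit(docs, labels)
print(clf.predict(["contaminated", "egg"]))  # → alert
```

The multinomial model treats each document as a bag of token counts, which is why restricting the feature set to content-bearing parts of speech can help.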

Text-Independent Speaker Identification System Using Speaker Decision Network Based on Delayed Summing (지연누적에 기반한 화자결정회로망이 도입된 구문독립 화자인식시스템)

  • 이종은;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.2
    • /
    • pp.82-95
    • /
    • 1998
  • In this paper, we propose a text-independent speaker identification system whose classifier is composed of two parts: one that calculates the degree of likeness of each speech frame to each speaker, and one that selects the most probable speaker over the entire speech duration. The first part is realized using an RBFN that is self-organized through learning, and in the second part the speaker is determined using a combination of MAXNET and delayed summings. We use features from the linear speech production model and features from fractal geometry. Closed-set speaker identification experiments on 13 homogeneous male speakers show that the proposed techniques can achieve an identification ratio of 100% as the number of delays increases.
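
The delayed-summing decision stage can be sketched as follows. The per-frame scores here are synthetic stand-ins for RBFN likeness values; the speaker count, frame count, and score margin are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_speakers, n_frames, true_speaker = 3, 200, 1
# Synthetic per-frame likeness scores: the true speaker is favoured only
# slightly on average, so any single frame is an unreliable vote.
scores = rng.normal(0.0, 1.0, (n_frames, n_speakers))
scores[:, true_speaker] += 0.6

def decide(scores, n_delays):
    """Sum the first n_delays frame scores, then pick the MAXNET-style winner."""
    return int(np.argmax(scores[:n_delays].sum(axis=0)))

print(decide(scores, 200))  # → 1: accumulation over many frames recovers the speaker
```

This mirrors the paper's observation that identification accuracy rises as the number of delays (accumulated frames) increases.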


Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.672-685
    • /
    • 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize these signals in seven EEG frequency bands individually, in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s, which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than the other EEG frequency bands. To reinforce our findings, the above work has been compared with models based on the gated recurrent unit (GRU), the convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% higher average accuracy in the alpha band and 74.54% lower average NPT than the CNN. The maximum accuracy of the GRU was 8.34% less than that of the LSTM network. Deep networks performed better than the traditional classifiers.
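
For readers unfamiliar with the recurrence inside an LSTM, a single-cell time step in NumPy looks roughly like this. The dimensions and weights are hypothetical, not those of the paper's network:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W:(4H,D), U:(4H,H), b:(4H,), gates stacked [i, f, o, g]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g            # forget old state, write gated candidate
    h_new = o * np.tanh(c_new)       # expose a gated view of the cell state
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 32, 16                        # feature and hidden sizes: arbitrary choices
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.standard_normal((10, D)):   # unroll over 10 EEG feature frames
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # → (16,)
```

The gated cell state is what lets the network carry information across an EEG window, which GRUs approximate with fewer gates and CNNs lack entirely.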

Recognition of Restricted Continuous Korean Speech Using Perceptual Model (인지 모델을 이용한 제한된 한국어 연속음 인식)

  • Kim, Seon-Il;Hong, Ki-Won;Lee, Haing-Sei
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.3
    • /
    • pp.61-70
    • /
    • 1995
  • In this paper, PLP cepstral coefficients, which are close to human perceptual characteristics, were extracted over a spread time area to capture temporal features. Phonemes were recognized by an artificial neural network, whose learning resembles that of humans, and the resulting phoneme strings were matched by Markov models, which are well suited to sequences. Phoneme recognition for continuous Korean speech was performed using speech blocks in which unequal numbers of speech frames were gathered. Each block was parameterized using 7th-order PLPs, PTP, zero-crossing rate, and energy, which the neural network used as inputs. For phoneme recognition, 100 utterances of 10 Korean sentences, each sentence pronounced five times by two male speakers, were used, and a maximum recognition rate of 94.4% was obtained. Sentences were then recognized using Markov models generated from the phoneme strings recognized in the earlier stage; this recognition was carried out on 200 utterances, each sentence pronounced 10 times by the two speakers, and a sentence recognition rate of 92.5% was obtained.
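
The sentence-level matching of recognized phoneme strings with Markov models can be sketched as a bigram scorer. The romanised phoneme strings below are invented placeholders, not the paper's data:

```python
import math
from collections import defaultdict

def train_markov(seqs):
    """Bigram transition model over phoneme strings with add-one smoothing."""
    counts, symbols = defaultdict(lambda: defaultdict(int)), set()
    for s in seqs:
        symbols.update(s)
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    V = len(symbols)
    def logprob(seq):
        lp = 0.0
        for a, b in zip(seq, seq[1:]):
            total = sum(counts[a].values())
            lp += math.log((counts[a][b] + 1) / (total + V))
        return lp
    return logprob

# Invented romanised phoneme strings standing in for recognized sentences.
models = {"annyeong": train_markov(["annyeong", "anyeong"]),
          "gamsa": train_markov(["gamsa", "gamsaa"])}

def recognize(phonemes):
    """Pick the sentence whose Markov model scores the phoneme string highest."""
    return max(models, key=lambda k: models[k](phonemes))

print(recognize("anyong"))  # → annyeong, despite phoneme-level errors
```

Scoring whole strings against per-sentence transition models is what lets sentence recognition (92.5%) tolerate individual phoneme errors from the first stage.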


Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.55-63
    • /
    • 2020
  • Voice conversion can be applied to various voice-processing applications and can also play an important role in data augmentation for speech recognition. Conventional methods perform voice conversion through a speech-synthesis architecture, with the Mel filter bank as the main parameter. Mel filter bank features are well suited to fast neural-network computation, but they cannot be converted into a high-quality waveform without the aid of a vocoder, and they are not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice-conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
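
The attention mechanism between source and target spectral components reduces to scaled dot-product attention. A minimal NumPy sketch follows; the frame counts and feature size are assumptions:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))   # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
src = rng.standard_normal((120, 64))   # source spectral frames (sizes invented)
tgt = rng.standard_normal((100, 64))   # target-side queries
out, w = attention(tgt, src, src)
print(out.shape)  # → (100, 64): one mixed source vector per target frame
```

Each target frame attends over all source frames at once, which is what gives the transformer its speed advantage over frame-by-frame recurrent conversion.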

Essential technical and intellectual abilities for autonomous mobile service medical robots

  • Rogatkin, Dmitry A.;Velikanov, Evgeniy V.
    • Advances in robotics research
    • /
    • v.2 no.1
    • /
    • pp.59-68
    • /
    • 2018
  • Autonomous mobile service medical robots (AMSMRs) are one of the promising developments in contemporary medical robotics. In this study, we consider the essential technical and intellectual abilities needed by AMSMRs. Based on an expert analysis of the behavior exhibited by AMSMRs in clinics under basic scenarios, these robots can be classified as intelligent dynamic systems acting according to the situation in a multi-object, multi-agent environment. An AMSMR should identify the different objects that define its territory (rooms and paths), different objects between and inside rooms (doors, tables, and beds, among others), and other robots. It should also identify the means for interacting with these objects, people and their speech, different information for communication, and small objects for transportation. These are included in the minimum set required to form the internal world model of an AMSMR. Recognizing door handles and opening doors are among the most difficult problems for contemporary AMSMRs. The ability to recognize the meaning of human speech and actions and to assist people effectively is another problem that needs a solution. These unresolved issues indicate that AMSMRs will need to pass through learning and training programs before starting real work in hospitals.

Research Trends for the Deep Learning-based Metabolic Rate Calculation (재실자 활동량 산출을 위한 딥러닝 기반 선행연구 동향)

  • Park, Bo-Rang;Choi, Eun-Ji;Lee, Hyo Eun;Kim, Tae-Won;Moon, Jin Woo
    • KIEAE Journal
    • /
    • v.17 no.5
    • /
    • pp.95-100
    • /
    • 2017
  • Purpose: The purpose of this study is to investigate prior deep-learning-based research for objectively calculating the metabolic rate, which is the subjective factor in PMV optimum control, and to draw up a plan for future research on that basis. Methods: For this purpose, a theoretical and technical review and an applicability analysis were conducted using various domestic and foreign documents and data. Results: The review of prior work shows that machine-learning models based on artificial neural networks and deep learning have been used in various fields such as speech recognition, scene recognition, and image restoration. As a representative case, OpenCV Background Subtraction is a technique for separating the background from objects or people, and PASCAL VOC and ILSVRC were surveyed as representative technologies that can recognize people, objects, and backgrounds. From these prior studies, basic technologies applicable to the occupant metabolic rate calculation technology to be developed in future research were identified, and a follow-up study developing a highly accurate metabolic rate calculation model appears feasible.
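
Background subtraction of the kind surveyed above can be illustrated with a simple per-pixel median model. This is a toy sketch, not OpenCV's actual MOG2 algorithm, and the frame values are invented:

```python
import numpy as np

def background_subtract(frames, threshold=25):
    """Per-pixel median background model; far-from-median pixels are foreground."""
    background = np.median(frames, axis=0)
    return np.abs(frames - background) > threshold

frames = np.full((10, 4, 4), 100.0)    # static grey background
frames[7:, 1:3, 1:3] = 200.0           # an "occupant" appears in the last 3 frames
mask = background_subtract(frames)
print(mask[0].sum(), mask[9].sum())    # → 0 4
```

A foreground mask of this kind is the raw input from which an occupant's activity level could then be estimated.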

A Novel Model, Recurrent Fuzzy Associative Memory, for Recognizing Time-Series Patterns Contained Ambiguity and Its Application (모호성을 포함하고 있는 시계열 패턴인식을 위한 새로운 모델 RFAM과 그 응용)

  • Kim, Won;Lee, Joong-Jae;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartB
    • /
    • v.11B no.4
    • /
    • pp.449-456
    • /
    • 2004
  • This paper proposes a novel recognition model, recurrent fuzzy associative memory (RFAM), for recognizing time-series patterns that contain ambiguity. RFAM extends FAM (Fuzzy Associative Memory) by adding a recurrent layer that can handle sequential input patterns and characterize their temporal relations. RFAM provides a Hebbian-style learning method that establishes the degree of association between input and output. The error back-propagation algorithm is also adopted to train the weights of the recurrent layer of RFAM. To evaluate the performance of the proposed model, we applied it to a word-boundary detection problem in speech signals.
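
The Hebbian-style FAM association can be sketched with max-min composition. This is a generic FAM sketch under assumed fuzzy patterns, without the recurrent layer the paper adds:

```python
import numpy as np

def hebbian_fam(A, B):
    """Correlation-minimum (fuzzy Hebbian) encoding of pattern pairs into one matrix."""
    M = np.zeros((A.shape[1], B.shape[1]))
    for a, b in zip(A, B):
        M = np.maximum(M, np.minimum.outer(a, b))   # superimpose min-outer products
    return M

def recall(a, M):
    """Max-min composition recall: b_j = max_i min(a_i, M_ij)."""
    return np.max(np.minimum(a[:, None], M), axis=0)

A = np.array([[1.0, 0.2, 0.0], [0.0, 0.3, 1.0]])   # fuzzy input patterns
B = np.array([[0.9, 0.1], [0.1, 0.8]])             # associated outputs
M = hebbian_fam(A, B)
print(recall(A[0], M))  # close to B[0], up to the usual FAM crosstalk
```

Superimposing all pairs into one matrix is what makes recall approximate rather than exact, which is why RFAM supplements it with back-propagation on the recurrent layer.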