• Title/Summary/Keyword: Speech Understanding

Progress, challenges, and future perspectives in genetic researches of stuttering

  • Kang, Changsoo
    • Journal of Genetic Medicine
    • /
    • v.18 no.2
    • /
    • pp.75-82
    • /
    • 2021
  • Speech and language functions are highly cognitive and human-specific features. The underlying causes of normal speech and language function are believed to reside in the human brain. Persistent developmental stuttering, a speech and language disorder, has been regarded as the most challenging disorder for determining genetic causes because of the high percentage of spontaneous recovery among stutterers. This mysterious characteristic hinders speech pathologists from discriminating recovered stutterers from completely normal individuals. Over the last several decades, several genetic approaches have been used to identify the genetic causes of stuttering, and remarkable progress has been made in genome-wide linkage analysis followed by gene sequencing. So far, four genes, namely GNPTAB, GNPTG, NAGPA, and AP4E1, are known to cause stuttering. Furthermore, the generation of mouse models of stuttering and morphometry analysis have created new ways for researchers to identify brain regions that participate in human speech function and to understand the neuropathology of stuttering. In this review, we aimed to investigate previous progress, challenges, and future perspectives in understanding the genetics and neuropathology underlying persistent developmental stuttering.

Recurrent Neural Network with Backpropagation Through Time Learning Algorithm for Arabic Phoneme Recognition

  • Ismail, Saliza;Ahmad, Abdul Manan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1033-1036
    • /
    • 2004
  • The study of speech recognition and understanding has been carried out for many years. In this paper, we propose a new type of recurrent neural network architecture for speech recognition, in which each output unit is connected to itself and is also fully connected to the other output units and all hidden units [1]. Besides that, we also propose training this architecture with the Backpropagation Through Time (BPTT) learning algorithm, which is well suited to it. The aim of the study was to discriminate the letters of the Arabic alphabet from "alif" to "ya". The purpose of this research is to improve people's knowledge and understanding of Arabic letters and words by using a Recurrent Neural Network (RNN) with the BPTT learning algorithm. Recordings of 4 speakers (a mixture of male and female) made in a quiet environment were used for training. Neural networks are well known as a technique with the ability to classify nonlinear problems. Today, much research has been done on applying neural networks to speech recognition [2], including for Arabic. The Arabic language offers a number of challenges for speech recognition [3]. Even though positive results have been obtained from continuing study, research on minimizing the error rate is still attracting much attention. This research utilizes the Recurrent Neural Network, one neural network technique, to discriminate the letters "alif" to "ya".
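
The architecture described in this abstract is simple enough to sketch concretely. Below is a minimal PyTorch illustration, not the authors' code: the output units feed back to themselves and to each other and also receive the hidden state, and running autograd through the unrolled time loop is exactly Backpropagation Through Time. The MFCC input size, layer widths, and frame-level labels are assumptions for illustration.

```python
# Minimal sketch of an RNN whose output units are themselves recurrent,
# trained with BPTT (autograd through the unrolled loop). Not the authors'
# code; dimensions and the MFCC-style input are assumptions.
import torch
import torch.nn as nn

class OutputRecurrentRNN(nn.Module):
    def __init__(self, n_feat=13, n_hidden=64, n_letters=28):  # 28 letters, alif..ya
        super().__init__()
        self.hid = nn.Linear(n_feat + n_hidden, n_hidden)       # x_t, h_{t-1} -> h_t
        self.out = nn.Linear(n_hidden + n_letters, n_letters)   # h_t, y_{t-1} -> y_t
        self.n_hidden, self.n_out = n_hidden, n_letters

    def forward(self, x):                        # x: (T, n_feat), one utterance
        h = x.new_zeros(self.n_hidden)
        y = x.new_zeros(self.n_out)
        logits = []
        for x_t in x:                            # unroll over time
            h = torch.tanh(self.hid(torch.cat([x_t, h])))
            y = self.out(torch.cat([h, y]))      # output units are recurrent too
            logits.append(y)
        return torch.stack(logits)               # (T, n_letters)

model = OutputRecurrentRNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(50, 13)                          # stand-in for MFCC frames
target = torch.randint(0, 28, (50,))             # stand-in frame labels
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()                                  # BPTT through the unrolled loop
opt.step()
```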

Joint streaming model for backchannel prediction and automatic speech recognition

  • Yong-Seok Choi;Jeong-Uk Bang;Seung Hi Kim
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.118-126
    • /
    • 2024
  • In human conversations, listeners often utilize brief backchannels such as "uh-huh" or "yeah." Timely backchannels are crucial to understanding and increasing trust among conversational partners. In human-machine conversation systems, users can engage in natural conversations when a conversational agent generates backchannels like a human listener. We propose a method that simultaneously predicts backchannels and recognizes speech in real time. We use a streaming transformer and adopt multitask learning for concurrent backchannel prediction and speech recognition. The experimental results demonstrate the superior performance of our method compared with previous works while maintaining a similar single-task speech recognition performance. Owing to the extremely imbalanced training data distribution, the single-task backchannel prediction model fails to predict any of the backchannel categories, and the proposed multitask approach substantially enhances the backchannel prediction performance. Notably, in the streaming prediction scenario, the performance of backchannel prediction improves by up to 18.7% compared with existing methods.
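
The multitask setup described in this abstract can be sketched roughly as follows (assumptions throughout; the paper pairs a streaming transformer with ASR, which is simplified here to frame-level cross-entropy). A shared encoder with two heads lets a single backward pass train both tasks, which is how the rare backchannel labels can benefit from the ASR gradients.

```python
# A minimal sketch (not the ETRI implementation) of multitask learning over a
# shared encoder: one head emits ASR token logits, the other backchannel
# logits, and a weighted joint loss trains both at once. All sizes, the 0.5
# weight, and the frame-level targets are assumptions.
import torch
import torch.nn as nn

class JointASRBackchannel(nn.Module):
    def __init__(self, n_feat=80, d_model=256, n_tokens=1000, n_bc=3):
        super().__init__()
        self.proj = nn.Linear(n_feat, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.asr_head = nn.Linear(d_model, n_tokens)  # speech recognition task
        self.bc_head = nn.Linear(d_model, n_bc)       # backchannel prediction task

    def forward(self, feats, attn_mask=None):
        # attn_mask would restrict attention to past/current chunks for streaming
        h = self.encoder(self.proj(feats), mask=attn_mask)
        return self.asr_head(h), self.bc_head(h)

model = JointASRBackchannel()
feats = torch.randn(2, 100, 80)                   # stand-in log-mel frames
asr_tgt = torch.randint(0, 1000, (2, 100))        # stand-in frame-level tokens
bc_tgt = torch.randint(0, 3, (2, 100))            # stand-in backchannel labels
asr_logits, bc_logits = model(feats)
loss = (nn.functional.cross_entropy(asr_logits.transpose(1, 2), asr_tgt)
        + 0.5 * nn.functional.cross_entropy(bc_logits.transpose(1, 2), bc_tgt))
loss.backward()                                   # one backward pass trains both tasks
```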

A Study on the Korean Parts-of-Speech for Korean-English Machine Translation (기계번역용 한국어 품사에 관한 연구)

  • 송재관;박찬곤
    • Journal of the Korea Society of Computer and Information
    • /
    • v.5 no.4
    • /
    • pp.48-54
    • /
    • 2000
  • This paper classifies Korean parts of speech for Korean-English machine translation and investigates the morphological characteristics of each part of speech. Standard Korean grammar classifies parts of speech by semantic, functional, and formal criteria. The many resulting rules make the grammatical structure and the part-of-speech classification difficult to understand, so preprocessing is necessary for machine translation. This paper classifies Korean parts of speech by a single rule. The parts of speech suggested in this paper play the same syntactic role as the corresponding parts of speech in an English dictionary and express the structure of Korean sentences. They also make it possible to generate the target language by pattern matching in Korean-English translation.
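
As a toy illustration of the pattern-matching idea in this abstract (entirely hypothetical; the paper's actual part-of-speech inventory and rules are not reproduced here), a Korean SOV tag sequence can be mapped to an English SVO skeleton by a single reordering rule:

```python
# Toy sketch of target-language generation by part-of-speech pattern matching.
# The tags, glosses, and the single rule are illustrative assumptions.
tagged = [("나는", "PRONOUN"), ("사과를", "NOUN"), ("먹는다", "VERB")]  # "I eat an apple"
glosses = {"나는": "I", "사과를": "an apple", "먹는다": "eat"}

patterns = {
    ("PRONOUN", "NOUN", "VERB"): (0, 2, 1),   # Korean SOV -> English SVO reordering
}

pos_seq = tuple(pos for _, pos in tagged)
order = patterns[pos_seq]
print(" ".join(glosses[tagged[i][0]] for i in order))  # -> "I eat an apple"
```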

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술)

  • Park, Dongkeon;Kang, Kyeong-Min;Bae, Jin-Woo;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.22-30
    • /
    • 2019
  • For effective human-robot interaction, a robot not only needs to understand the current situational context well, but also needs to convey its understanding to the human participant in an efficient way. The most convenient way for a robot to deliver its understanding to the human participant is to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. Thus, this paper proposes a robot-vision-to-audio-description method using deep learning. The applied deep learning model is a pipeline of two models: one generates a natural language sentence from robot vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
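
Structurally, the proposed method is a two-stage pipeline, which can be sketched as follows; the stage functions are stand-ins, since the paper's concrete captioning and TTS networks are not specified in this abstract:

```python
# Structural sketch of the vision -> caption -> speech pipeline. The two
# stage functions are placeholders (assumptions); in practice each would
# wrap a trained captioning network and a trained TTS synthesizer.
from typing import List

def caption_model(frame) -> str:
    # stand-in for a deep captioning network over the robot camera frame
    return "a person is handing a cup to the robot"

def tts_model(sentence: str) -> List[float]:
    # stand-in for a text-to-speech synthesizer returning waveform samples
    return [0.0] * 16000   # one second of silence at 16 kHz as a placeholder

def describe_scene(frame) -> List[float]:
    sentence = caption_model(frame)   # stage 1: robot vision -> natural language
    return tts_model(sentence)        # stage 2: natural language -> voice

audio = describe_scene(frame=[[0] * 224] * 224)   # dummy 224x224 image
```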

Articulatory robotics (조음 로보틱스)

  • Nam, Hosung
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.1-7
    • /
    • 2021
  • Speech is a spatiotemporally coordinated structure of constriction actions at discrete articulators such as the lips, tongue tip, tongue body, velum, and glottis. Like other human movements (e.g., reaching), each action, as a linguistic task, is completed by a synergy of the basic elements involved (e.g., bones, muscles, the neural system). This paper discusses how speech tasks are dynamically related to joints, as one of those basic elements, in terms of the robotics of speech production. Further, this introduction of robotics to the speech sciences will hopefully deepen our understanding of how speech is produced and provide a solid foundation for developing a physical talking machine.
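
The task-dynamic view this abstract alludes to is commonly modeled with a critically damped mass-spring (point-attractor) system that drives a constriction variable toward its target. The following is a generic illustration of that standard model, with parameter values as assumptions, not this paper's equations:

```python
# Minimal sketch of a point-attractor (damped mass-spring) gesture, the
# standard dynamical model for a constriction task. With unit mass and
# b = 2*sqrt(k), the system is critically damped: it reaches the target
# without overshoot. Parameter values are illustrative assumptions.
import numpy as np

def gesture(x0=0.0, target=1.0, k=100.0, dt=0.001, steps=1000):
    b = 2.0 * np.sqrt(k)                 # critical damping for m = 1
    x, v = x0, 0.0
    traj = []
    for _ in range(steps):
        a = -b * v - k * (x - target)    # x'' = -b*x' - k*(x - target)
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

print(gesture()[-1])  # ~1.0: the constriction variable settles at its target
```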

Phonological processes of consonants from orthographic to pronounced words in the Buckeye Corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.55-62
    • /
    • 2019
  • This paper investigates the phonological processes of consonants in pronounced words in the Buckeye Corpus and compares the frequency distribution of these processes to provide a clearer understanding of conversational English for linguists and teachers. Both orthographic and pronounced words were extracted from the transcribed label scripts of the Buckeye Corpus. Next, the phonological processes of consonants in the orthographic and pronounced labels were tabulated separately by onsets and codas, and a frequency distribution by consonant process types was examined. The results showed that the majority of the onset clusters were pronounced as the same sounds in the Buckeye Corpus. The participants in the corpus were presumed to speak semiformally. In addition, the onsets have fewer deletions than the codas, which might be related to the information weight of the syllable components. Moreover, there is a significant association and strong positive correlation between the phonological processes of the onsets and codas in men and women. This paper concludes that an analysis of phonological processes in spontaneous speech corpora can contribute to a practical understanding of spoken English. Further studies comparing the current phonological process data with those of other languages would be desirable to establish universal patterns in phonological processes.
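
The tabulation step described in this abstract reduces to aligning dictionary and realized forms and counting process types. A minimal sketch with toy data (not the author's scripts):

```python
# Sketch of tabulating consonant process types from aligned orthographic
# (dictionary) and pronounced labels. The records are toy examples, not
# actual Buckeye Corpus data.
from collections import Counter

# toy (word, dictionary consonant, pronounced consonant) triples
records = [
    ("probably", "pr", "pr"),   # unchanged
    ("probably", "b", ""),      # deletion
    ("going", "g", "g"),        # unchanged
    ("them", "dh", "d"),        # substitution
]

def process_type(orth, pron):
    if orth == pron:
        return "same"
    if pron == "":
        return "deletion"
    if orth == "":
        return "insertion"
    return "substitution"

counts = Counter(process_type(o, p) for _, o, p in records)
print(counts)  # Counter({'same': 2, 'deletion': 1, 'substitution': 1})
```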

Phonological processes of consonants from orthographic to pronounced words in the Seoul Corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.1-7
    • /
    • 2020
  • This paper investigates the phonological processes of consonants in pronounced words in the Seoul Corpus and compares the frequency distribution of these processes to provide a clearer understanding of conversational Korean to linguists and teachers. To this end, both orthographic and pronounced words were extracted from the transcribed label scripts of the Seoul Corpus. Next, the phonological processes of consonants in the orthographic and pronounced forms were tabulated separately after syllabifying the onsets and codas, and the major consonantal processes were examined. First, the results showed that the majority of the orthographic consonants were realized unchanged in their pronounced forms. Second, more than three quarters of the onsets were pronounced in the same forms, while approximately half of the codas were pronounced as variants. Third, the majority of differing onset and coda symbols were primarily caused by deletions and insertions. Finally, the five phonological process types accounted for only 12.4% of the total possible procedures. Based on these results, this paper concludes that an analysis of phonological processes in spontaneous speech corpora can improve the practical understanding of spoken Korean. Future studies ought to compare the current phonological process data with those of other languages to establish universal patterns in phonological processes.
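
One Korean-specific step mentioned in this abstract is syllabifying onsets and codas. As a hedged illustration (an assumption about the preprocessing, not the paper's code), every precomposed Hangul syllable block decomposes arithmetically into onset, nucleus, and coda; the sketch below recovers the onset and coda:

```python
# Sketch of Hangul syllable decomposition. Precomposed syllables occupy
# U+AC00..U+D7A3 (19 onsets x 21 nuclei x 28 codas = 11172 blocks), so
# onset and coda fall out of integer arithmetic on the code point.
ONSETS = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ","ㅇ","ㅈ","ㅉ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]
CODAS = ["","ㄱ","ㄲ","ㄳ","ㄴ","ㄵ","ㄶ","ㄷ","ㄹ","ㄺ","ㄻ","ㄼ","ㄽ","ㄾ",
         "ㄿ","ㅀ","ㅁ","ㅂ","ㅄ","ㅅ","ㅆ","ㅇ","ㅈ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]

def onset_coda(syllable: str):
    idx = ord(syllable) - 0xAC00          # offset into the Hangul syllable block
    assert 0 <= idx < 11172, "not a precomposed Hangul syllable"
    return ONSETS[idx // 588], CODAS[idx % 28]   # 588 = 21 nuclei * 28 codas

print(onset_coda("값"))  # ('ㄱ', 'ㅄ'): onset and coda of one syllable
```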

Teaching English Restructuring and Post-lexical Phenomena (영어 발화의 재구조와 후-어휘 음운현상의 지도)

  • Lee Sunbeom
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.169-172
    • /
    • 2002
  • English is a stress-timed language: it has a much more dynamic rhythm and stress pattern and a tendency toward isochrony of stressed syllables. This goes along with various restructurings of English utterances, irrespective of the pauses at syntactic boundaries, and with post-lexical phonological phenomena. In real speech acts especially, the natural utterances of fluent speakers and broadcast speech exhibit even more varied restructuring and phonological phenomena. This has been an obstacle for students in speaking fluent English and understanding normal speech. Therefore, this study first focuses on the most problematic factors in the difficulty of English speaking and listening, namely English restructuring and the post-lexical phonological phenomena caused by stress-timed rhythm, and second points out the importance of teaching English rhythm with these in mind.

Acoustic Analysis of Normal and Pathologic Voice Synthesized with Voice Synthesis Program of Dr. Speech Science (Dr. Speech Science의 음성합성프로그램을 이용하여 합성한 정상음성과 병적음성(Pathologic Voice)의 음향학적 분석)

  • 최홍식;김성수
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.12 no.2
    • /
    • pp.115-120
    • /
    • 2001
  • In this paper, we synthesized the vowel /ae/ with the voice synthesis program of Dr. Speech Science, and we also synthesized pathologic versions of the vowel /ae/ by varying parameters such as high-frequency gain (HFG), low-frequency gain (LFG), pitch flutter (PF), which represents the jitter value, and flutter of amplitude (FA), which represents the shimmer value, each graded as mild, moderate, or severe. We then analyzed all the pathologic voices with the analysis program of Dr. Speech Science. We expect these synthesized pathologic voices to be useful for understanding parameters such as noise, jitter, and shimmer, and for giving feedback to patients with voice disorders.
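
The jitter and shimmer parameters mentioned in this abstract have a simple signal-level interpretation. The sketch below (a generic illustration, not Dr. Speech Science itself) perturbs each source cycle's period (jitter) and amplitude (shimmer) before rendering the waveform:

```python
# Sketch of a pathologic voice source: jitter = cycle-to-cycle F0
# perturbation, shimmer = cycle-to-cycle amplitude perturbation. The
# sinusoidal source and all parameter values are illustrative assumptions.
import numpy as np

def pathologic_source(f0=120.0, jitter=0.02, shimmer=0.05, n_cycles=100, sr=16000):
    rng = np.random.default_rng(0)
    samples = []
    for _ in range(n_cycles):
        period = (1.0 / f0) * (1.0 + rng.normal(0.0, jitter))  # perturbed period
        amp = 1.0 + rng.normal(0.0, shimmer)                   # perturbed amplitude
        n = max(1, int(sr * period))
        t = np.arange(n) / n
        samples.append(amp * np.sin(2 * np.pi * t))            # one source cycle
    return np.concatenate(samples)

mild = pathologic_source(jitter=0.01, shimmer=0.02)
severe = pathologic_source(jitter=0.05, shimmer=0.15)
```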
