• Title/Summary/Keyword: Speech Interface

Effects of Prosodic Strengthening on the Production of English High Front Vowels /i, ɪ/ by Native vs. Non-Native Speakers (원어민과 비원어민의 영어 전설 고모음 /i, ɪ/ 발화에 나타나는 운율 강화 현상)

  • Kim, Sahyang;Hur, Yuna;Cho, Taehong
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.129-136
    • /
    • 2013
  • This study investigated how acoustic characteristics (i.e., duration, F1, F2) of English high front vowels /i, ɪ/ are modulated by boundary- and prominence-induced strengthening in native vs. non-native (Korean) speech production. The study also examined how the durational difference in vowels due to the voicing of a following consonant (i.e., voiced vs. voiceless) is modified by prosodic strengthening in two different (native vs. non-native) speaker groups. Five native speakers of Canadian English and eight Korean learners of English (intermediate-advanced level) produced 8 minimal pairs with the CVC sequence (e.g., 'beat'-'bit') in varying prosodic contexts. Native speakers distinguished the two vowels in terms of duration, F1, and F2, whereas non-native speakers only showed durational differences. The two groups were similar in that they maximally distinguished the two vowels when the vowels were accented (F2, duration), while neither group showed boundary-induced strengthening in any of the three measurements. The durational differences due to the voicing of the following consonant were also maximized when accented. The results are discussed further in terms of phonetics-prosody interface in L2 production.
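
The measurements this study relies on (vowel duration and midpoint F1/F2) are straightforward to reproduce. Below is a minimal sketch assuming the praat-parselmouth library; the library choice, file name, and segment times are illustrative assumptions, not details from the paper.

```python
# A minimal sketch, assuming praat-parselmouth; the file name and the
# hand-labeled segment times below are illustrative, not from the paper.
import parselmouth

def vowel_measures(wav_path, t_start, t_end):
    """Return duration (s) and midpoint F1/F2 (Hz) for a vowel interval."""
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg()        # Burg-method formant tracking
    t_mid = (t_start + t_end) / 2.0
    f1 = formants.get_value_at_time(1, t_mid)   # first formant at midpoint
    f2 = formants.get_value_at_time(2, t_mid)   # second formant at midpoint
    return t_end - t_start, f1, f2

# e.g., a hand-labeled /i/ in "beat" spanning 0.12-0.25 s:
# dur, f1, f2 = vowel_measures("beat_accented.wav", 0.12, 0.25)
```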

Syllabic Speech Rate Control for Improving Elderly Speech Recognition of Smart Devices (음절 별 발화속도 조절을 통한 노인 음성인식 개선)

  • Kyeong, Ju Won;Son, Gui Young;Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1711-1714
    • /
    • 2015
  • Although smart devices have become tools for engaging with society, they are still difficult for the elderly to use. A voice interface based on speech recognition can raise the usability of smart devices for elderly users. However, because typical speech recognition systems are tuned to the speaking style of young and middle-aged adults, the recognition rate drops when aged voices are fed in unmodified. Based on our analysis showing that the per-syllable speech rate of elderly speakers often falls outside the range for which typical speech recognition systems guarantee their performance, we adjusted the per-syllable speech rate of elderly speech and obtained a 15.3% average increase in the recognition rate for elderly men and women. This lays a foundation for raising recognition accuracy by re-adjusting speech rate, one of the causes of recognition errors for elderly speech. By enabling the elderly to carry out tasks easily and accurately on smart devices, this work should facilitate their social participation and access to information, and further contribute to communication between generations.
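
A per-syllable rate adjustment of this kind can be sketched as time-stretching each syllable whose rate falls outside a tolerable range. librosa and the threshold values below are illustrative assumptions; the paper does not specify its resampling method or thresholds.

```python
# Sketch: per-syllable speech-rate normalization via time stretching.
# The librosa-based approach and the target-rate bounds are assumptions.
import numpy as np
import librosa

TARGET_SYL_PER_SEC = (3.0, 6.0)  # assumed range a recognizer tolerates

def normalize_rate(y, sr, syllable_bounds):
    """Stretch each syllable (start, end in seconds) into the target range."""
    lo, hi = TARGET_SYL_PER_SEC
    out = []
    for start, end in syllable_bounds:
        seg = y[int(start * sr):int(end * sr)]
        rate = 1.0 / (end - start)   # syllables per second for this segment
        # too slow -> stretch factor > 1 (speed up); too fast -> factor < 1
        factor = lo / rate if rate < lo else (hi / rate if rate > hi else 1.0)
        out.append(librosa.effects.time_stretch(seg, rate=factor))
    return np.concatenate(out)
```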

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.672-685
    • /
    • 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize these signals in each of seven EEG frequency bands in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s, which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce our findings, the above work has been compared with models based on the gated recurrent unit (GRU), convolutional neural networks (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% higher average accuracy in the alpha band and 74.54% lower average NPT than the CNN. The maximum accuracy of the GRU was 8.34% less than that of the LSTM network. Deep networks performed better than traditional classifiers.
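
A stacked ("deep") LSTM classifier over windowed multichannel EEG, in the spirit of the paper, can be sketched as follows. The layer sizes, window length, and the use of Keras are assumptions; this is not the paper's exact architecture.

```python
# Sketch of a deep LSTM classifier for 32-channel EEG windows.
# Layer sizes and window length are assumptions, not the paper's values.
import tensorflow as tf

N_CHANNELS, N_TIMESTEPS, N_CLASSES = 32, 256, 5  # 4 words + 1 phrase

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_TIMESTEPS, N_CHANNELS)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # stacked ("deep") LSTM
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x: (n_trials, 256, 32) band-pass-filtered EEG (e.g., alpha band), y: labels
# model.fit(x, y, epochs=50, validation_split=0.2)
```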

Design and Implementation of Context-aware Application on Smartphone Using Speech Recognizer

  • Kim, Kyuseok
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.49-59
    • /
    • 2020
  • As technology develops, our lives become easier. Today we are surrounded by new technologies such as AI and IoT, and the word "smart" has come to cover a broad range because we are turning our daily environments into smart ones with those technologies; traditional workplaces, for example, have become smart offices. Since the 3rd industrial revolution we have operated machines through touch interfaces, but in the 4th industrial revolution we are adding speech recognition modules so that machines can be operated by voice commands. Many devices now communicate with humans by voice; such AI-enabled devices carry out the tasks users request, and often more. Because we use smartphones all day, every day, privacy while using the phone is not always guaranteed: for example, the caller's voice can be heard through the phone speaker when a call is accepted. Privacy on the smartphone therefore needs protection, and the protection should work automatically according to the user's context. In this respect, this paper proposes a method that adjusts the call voice volume on a smartphone according to the user's context to protect privacy.
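
The core of such a system is the mapping from a recognized context to a call-volume level. The sketch below shows only that decision logic; the context labels, levels, and setter are hypothetical, and a real implementation would call the platform's audio API (e.g., Android's AudioManager) instead of this stand-in.

```python
# Sketch of the decision logic only: map a recognized user context to a
# call-volume level. All names and values below are hypothetical.
CONTEXT_VOLUME = {
    "private_room": 0.8,   # normal speaker volume is acceptable
    "public_place": 0.3,   # lower volume so the caller is not overheard
    "meeting":      0.1,   # near-mute to protect privacy
}

def call_volume_for(recognized_context: str, default: float = 0.5) -> float:
    """Return the call-audio volume (0.0-1.0) for the current context."""
    return CONTEXT_VOLUME.get(recognized_context, default)

# e.g., when the speech recognizer infers "public_place":
# set_stream_volume(call_volume_for("public_place"))  # hypothetical setter
```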

An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications (다음색 감정 음성합성 응용을 위한 감정 SSML 처리기)

  • Ryu, Se-Hui;Cho, Hee;Lee, Ju-Hyun;Hong, Ki-Hyung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.523-529
    • /
    • 2021
  • In this paper, we designed and developed an Emotional Speech Synthesis Markup Language (SSML) processor. Multi-speaker emotional speech synthesis technology that can express multiple voice colors and emotional expressions has been developed, and we designed Emotional SSML by extending SSML to cover multiple voice colors and emotional expressions. The Emotional SSML processor has a graphical user interface and consists of the following four components. First, a multi-speaker emotional text editor that can easily mark specific voice colors and emotions at desired positions. Second, an Emotional SSML document generator that automatically creates an Emotional SSML document from the result of the multi-speaker emotional text editor. Third, an Emotional SSML parser that parses the Emotional SSML document. Last, a sequencer that controls a multi-speaker, emotional Text-to-Speech (TTS) engine based on the result of the Emotional SSML parser. Because it is based on SSML, an open standard independent of programming language and platform, the Emotional SSML processor can easily be integrated with various speech synthesis engines and facilitates the development of multi-speaker emotional text-to-speech applications.
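
Since SSML is XML, the generator/parser pair can be sketched with a standard XML library. The emotion attribute and element layout below are illustrative guesses at what an Emotional SSML extension might look like; the paper's actual schema is not reproduced here.

```python
# Sketch: building and parsing an SSML document with a hypothetical
# emotion extension. The "emotion" attribute is an assumed extension.
import xml.etree.ElementTree as ET

def build_emotional_ssml(utterances):
    """utterances: list of (speaker, emotion, text) tuples."""
    speak = ET.Element("speak", version="1.1")
    for speaker, emotion, text in utterances:
        voice = ET.SubElement(speak, "voice", name=speaker)
        voice.set("emotion", emotion)  # assumed extension attribute
        voice.text = text
    return ET.tostring(speak, encoding="unicode")

doc = build_emotional_ssml([
    ("minji", "happy", "It is so nice to see you!"),
    ("junho", "sad", "I have to leave tomorrow."),
])
# A parser/sequencer would walk the tree and drive the TTS engine per element:
for voice in ET.fromstring(doc):
    print(voice.get("name"), voice.get("emotion"), voice.text)
```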

Implementation of the ACELP/MP-MLQ-Based Dual-Rate Voice Coder Using DSP (ACELP/MP-MLQ에 기초한 dual-rate 음성 코더의 DSP 구현)

  • Lee Jae-Sik;Son Yong-Ki;Jeon Il;Chang Tae-Gyu;Min Byoung-Ki
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.51-54
    • /
    • 2000
  • This paper describes the fixed-point DSP implementation of a CELP (code-excited linear prediction)-based speech coder. Effective realization methodologies that maximize the utilization of the DSP's architectural features, specifically parallel moves and pipelining, are also presented, together with implementation results targeting the ITU-T standard G.723.1 on a Motorola DSP56309. The operation of the implemented speech coder is verified using the test vectors provided by the standard as well as the peripheral interface circuits designed for the coder's real-time operation.

Perception Ability of Synthetic Vowels in Cochlear Implanted Children (모음의 포먼트 변형에 따른 인공와우 이식 아동의 청각적 인지변화)

  • Huh, Myung-Jin
    • MALSORI
    • /
    • no.64
    • /
    • pp.1-14
    • /
    • 2007
  • The purpose of this study was to examine differences in auditory perception caused by formant changes in profoundly hearing-impaired children with cochlear implants. The subjects were 10 children assessed after 15 months of experience with the implant; their mean chronological age was 8.4 years (SD = 2.9 years). Auditory perception ability was assessed using acoustic-synthetic vowels. Each acoustic-synthetic vowel combined F1, F2, and F3 into one vowel, and 42 synthetic sounds were produced using a Speech GUI (Graphic User Interface) program. The data were analyzed with clustering analysis and online analytical processing to evaluate perception of the acoustic-synthetic vowels. The results showed that the cochlear-implanted children's auditory perception scores were higher for F2 synthetic vowels than for F1, and that they perceived vowel differences in terms of the distance ratio between F1 and F2 in specific vowels.
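
Stimuli of this kind, a vowel rendered from chosen formant frequencies, can be sketched with a minimal cascade formant synthesizer: an impulse-train source filtered through one second-order resonator per formant. The bandwidths, pitch, and scipy-based design below are assumptions, not the study's Speech GUI program.

```python
# Sketch: a minimal cascade formant synthesizer rendering a vowel from
# F1/F2/F3. Bandwidth, pitch, and sample rate are illustrative choices.
import numpy as np
from scipy.signal import lfilter

def synth_vowel(f_formants, sr=16000, f0=120, dur=0.4, bw=80.0):
    n = int(sr * dur)
    src = np.zeros(n)
    src[::sr // f0] = 1.0                 # impulse train as a glottal source
    y = src
    for f in f_formants:                  # cascade of 2nd-order resonators
        r = np.exp(-np.pi * bw / sr)
        theta = 2 * np.pi * f / sr
        a = [1.0, -2 * r * np.cos(theta), r * r]
        y = lfilter([1.0 - r], a, y)
    return y / np.max(np.abs(y))

# e.g., an /i/-like vowel with F1=280 Hz, F2=2250 Hz, F3=2900 Hz:
# audio = synth_vowel([280, 2250, 2900])
```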

Sound Source Localization using Acoustically Shadowed Microphones (가려진 마이크로폰을 이용한 음원 위치 추적)

  • Lee, Hyeop-Woo;Yook, Dong-Suk
    • Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.17-28
    • /
    • 2008
  • In many practical applications of robots, finding the location of an incoming sound is an important issue for developing an efficient human-robot interface. Most sound source localization algorithms use only the microphones that are acoustically visible from the sound source, or do not take the effect of sound diffraction into account, thereby degrading localization performance. This paper proposes a new sound source localization method that can also utilize microphones that are acoustically shadowed from the sound source. The experimental results show that using acoustically shadowed microphones that receive higher signal-to-noise-ratio signals than the others and are closer to the sound source improves the performance of sound source localization.
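
The standard building block for localization methods of this kind is time-delay estimation between microphone pairs, commonly done with GCC-PHAT. The sketch below shows that generic step only; it is not the paper's shadowed-microphone algorithm.

```python
# Sketch: GCC-PHAT time-delay estimation between two microphones, a
# generic building block for sound source localization.
import numpy as np

def gcc_phat(sig, ref, sr, max_tau=None):
    """Estimate the delay (s) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n=n)  # PHAT weighting
    max_shift = n // 2 if max_tau is None else int(sr * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / sr

# With the array geometry known, the delay maps to an arrival angle:
# tau = gcc_phat(mic1, mic2, sr=16000, max_tau=0.001)
```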

A Study on home service robot interface in home environment (가정환경에서 홈 서비스로봇 인터페이스에 관한 연구)

  • Moon Yong-Seon;Kang Sung-ryul;Choi Hyeong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.9
    • /
    • pp.1710-1717
    • /
    • 2006
  • With scientific development, life expectancy is increasing as society gradually ages, and the number of people with congenital or acquired disabilities living in an increasingly mechanized life culture is also growing. In this research, we control a home service robot through speech recognition so that people with disabilities can operate the robot.

Development of Mobile Station in the CDMA Mobile System

  • Kim, Sun-Young;Uh, Yoon;Kweon, Hye-Yeoun;Lee, Hyuck-Jae
    • ETRI Journal
    • /
    • v.19 no.3
    • /
    • pp.202-227
    • /
    • 1997
  • This paper describes the development of a CDMA mobile station that supports non-speech mobile office services such as data, fax, and short message service in addition to voice. We developed some important functions of layer 2 and layer 3. To provide the non-speech services, we developed a terminal adapter and user interface software. A description of the development process, software architecture, and external interfaces required to provide these services is given, along with descriptions of a TTA-62 message analysis tool, mobile station monitoring software, and an automatic test system developed for integration tests and performance measurements.
