• Title/Summary/Keyword: Speech Interface


Language Specific Variations of Domain-initial Strengthening and its Implications on the Phonology-Phonetics Interface: with Particular Reference to English and Hamkyeong Korean

  • Kim, Sung-A
    • Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.7-21
    • /
    • 2004
  • The present study investigates the domain-initial strengthening phenomenon, which refers to the strengthening of articulatory gestures at the initial positions of prosodic domains. More specifically, this paper presents the results of an experimental study of initial syllables with onset consonants (henceforth initial-syllable vowels) across various prosodic domains in English and Hamkyeong Korean, a pitch-accent dialect spoken in the northern part of North Korea. The durations of initial-syllable vowels are compared to those of second vowels in real-word tokens for both languages, controlling for both stress and segmental environment. Hamkyeong Korean, like English, turned out to strengthen domain-initial consonants. With regard to vowel durations, no significant prosodic effect was found in English. On the other hand, Hamkyeong Korean showed significant differences between the durations of initial and non-initial vowels in the higher prosodic domains. The theoretical implications of the findings are as follows: the potentially universal phenomenon of initial strengthening is shown to be subject to language-specific variation in its implementation. More importantly, the distinct phonetics-phonology model (Pierrehumbert & Beckman, 1998; Keating, 1990; Cohn, 1993) is better equipped to account for the facts in the present study.


A Study on Embedded DSP Implementation of Keyword-Spotting System using Call-Command (호출 명령어 방식 핵심어 검출 시스템의 임베디드 DSP 구현에 관한 연구)

  • Song, Ki-Chang;Kang, Chul-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.9
    • /
    • pp.1322-1328
    • /
    • 2010
  • Recently, keyword spotting systems have come into the limelight as a UI (User Interface) technology for ubiquitous home network systems. Keyword spotting systems are vulnerable to non-stationary noises such as TV, radio, and dialogue. In particular, the speech recognition rate drops drastically in embedded DSP (Digital Signal Processor) environments, because their computational capability is relatively low for processing input speech in real time. In this paper, we propose a new keyword spotting system using the call-command method, which consists of a small number of recognition networks. We select call-commands such as 'narae' and 'home manager' and compose a small network as a token consisting of silence with noise and the call-commands, so that recognition can run continuously on input speech in real time.
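The small token network described in the abstract can be pictured as a loop of silence/noise states around the call-command words. The sketch below is a hypothetical illustration of that topology as a plain graph; the state names and the `build_token_network` helper are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a call-command token network: optional silence/noise
# (with a self-loop to absorb ambient noise) surrounds the command words, so
# the recognizer can keep listening continuously.

CALL_COMMANDS = ["narae", "home manager"]  # example commands from the paper

def build_token_network(commands):
    """Return a tiny state graph: start -> (silence/noise)* -> command -> end."""
    network = {
        "start": ["sil_noise", *commands],
        "sil_noise": ["sil_noise", *commands],  # self-loop absorbs noise
    }
    for cmd in commands:
        network[cmd] = ["end"]  # reaching a command word ends one token
    return network

net = build_token_network(CALL_COMMANDS)
```

A real system would attach acoustic models to each state; here the graph only shows why the network stays small: one noise state plus one state per command.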

A Multimodal Interface for Telematics based on Multimodal middleware (미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스)

  • Park, Sung-Chan;Ahn, Se-Yeol;Park, Seong-Soo;Koo, Myoung-Wan
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.41-44
    • /
    • 2007
  • In this paper, we introduce a system in which a car navigation scenario is plugged into a multimodal interface based on multimodal middleware. In a map-based system, the combination of speech and pen input/output modalities can offer users better expressive power. To achieve multimodal tasks in car environments, we have chosen SCXML (State Chart XML), a multimodal authoring language of the W3C standard, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the MultiModal Interaction Runtime Framework (MMI). Not only does the MMI handle GPS signals and a user's multimodal I/O, but it also combines them with device information, user preferences, and reasoned RDF to give the user intelligent or personalized services. A self-simulation test has shown that the middleware accomplishes a navigational multimodal task over multiple users in car environments.


Interactive Game Designed for Early Child using Multimedia Interface : Physical Activities (멀티미디어 인터페이스 기술을 이용한 유아 대상의 체감형 게임 설계 : 신체 놀이 활동 중심)

  • Won, Hye-Min;Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.3
    • /
    • pp.116-127
    • /
    • 2011
  • This paper proposes interactive game elements for young children: contents, design, sound, gesture recognition, and speech recognition. Interactive games for young children must use contents that reflect educational needs and design elements that are bright, friendly, and simple to use. The games should also use background music that is familiar to children and narration that makes the games easy to play. In gesture recognition and speech recognition, interactive games must use gesture and voice data appropriate to the age of the game user. This paper also introduces the development process for an interactive skipping game and applies child-oriented contents, gestures, and voices to the game.

An Automatic Post-processing Method for Speech Recognition using CRFs and TBL (CRFs와 TBL을 이용한 자동화된 음성인식 후처리 방법)

  • Seon, Choong-Nyoung;Jeong, Hyoung-Il;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.9
    • /
    • pp.706-711
    • /
    • 2010
  • In applications with a human speech interface, reducing the recognition error rate is one of the main research issues. Many previous studies attempted to correct errors using post-processing that depends on a manually constructed corpus and correction patterns. We propose an automatically learnable post-processing method that is independent of the characteristics of both the domain and the speech recognizer. We divide the entire post-processing task into two steps: error detection and error correction. We treat the error detection step as a classification problem, to which we apply a conditional random fields (CRFs) classifier, and we apply transformation-based learning (TBL) to the error correction step. Our experimental results indicate that the proposed method corrects a speech recognizer's insertion, deletion, and substitution errors by 25.85%, 3.57%, and 7.42%, respectively.
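The two-step detect-then-correct pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's method: `detect_errors` stands in for the trained CRF classifier and the rule table stands in for the learned TBL transformation list, both of which are hypothetical placeholders here.

```python
# Minimal sketch of two-step ASR post-processing:
# step 1 flags likely errors, step 2 rewrites only the flagged tokens.

def detect_errors(tokens, classifier):
    """Step 1: label each recognized token as erroneous (True) or not."""
    return [(tok, classifier(tok)) for tok in tokens]

def correct_errors(labeled, rules):
    """Step 2: apply transformation rules only to tokens flagged as errors."""
    out = []
    for tok, is_error in labeled:
        if is_error:
            tok = rules.get(tok, tok)  # learned rules as a token mapping
        out.append(tok)
    return out

# toy usage with a hand-made "classifier" and rule table
labeled = detect_errors(["spech", "interface"], lambda t: t == "spech")
print(correct_errors(labeled, {"spech": "speech"}))  # ['speech', 'interface']
```

Separating detection from correction, as in the paper, means the corrector never touches tokens the detector judged correct, which limits the damage a bad rule can do.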

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.494-500
    • /
    • 2008
  • In the field of human interface technology, the interactions between human and machine are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using PLP (perceptual linear prediction) analysis. The PLP analysis technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its efficiency for speaker recognition tasks. This paper therefore proposes an algorithm that can easily evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, whereas the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough to be used in real time.
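Matching an utterance against personalized emotion patterns, as the abstract describes, can be sketched as a nearest-pattern lookup. The feature vectors below stand in for the PLP coefficients of the paper; the pattern values and the distance measure are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of personalized emotion-pattern matching: each emotion keeps a
# reference feature vector (a stand-in for per-speaker PLP features), and the
# emotion whose pattern is nearest to the input features wins.

def recognize_emotion(features, patterns):
    """patterns: dict mapping emotion name -> reference feature vector."""
    features = np.asarray(features, dtype=float)
    return min(patterns,
               key=lambda emo: np.linalg.norm(features - np.asarray(patterns[emo])))

patterns = {"neutral": [0.0, 0.0], "angry": [1.0, 0.5]}
print(recognize_emotion([0.9, 0.6], patterns))  # 'angry'
```

A nearest-pattern rule like this is cheap enough for the real-time operation the paper targets, though the actual system may use a different distance or model.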

Estimating speech parameters for ultrasonic Doppler signal using LSTM recurrent neural networks (LSTM 순환 신경망을 이용한 초음파 도플러 신호의 음성 패러미터 추정)

  • Joo, Hyeong-Kil;Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.4
    • /
    • pp.433-441
    • /
    • 2019
  • In this paper, a method of estimating speech parameters from ultrasonic Doppler signals reflected from the articulatory muscles using an LSTM (Long Short-Term Memory) RNN (Recurrent Neural Network) is introduced and compared with a method using MLPs (Multi-Layer Perceptrons). The LSTM RNN was used to estimate the Fourier transform coefficients of speech signals from the ultrasonic Doppler signals. The log energy values of the Mel frequency bands and the Fourier transform coefficients, extracted from the ultrasonic Doppler signal and the speech signal respectively, were used as the input and reference for training the LSTM RNN. The performance of the LSTM RNN and the MLP was evaluated and compared in experiments on test data, using the RMSE (Root Mean Squared Error) as the measure. The RMSE of each experiment was 0.5810 and 0.7380, respectively; the difference of about 0.1570 confirms that the method using the LSTM RNN performs better.
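The RMSE measure used for the comparison above is straightforward to state in code. This is a minimal sketch of the metric only, assuming predictions and references are arrays of Fourier transform coefficients; it is not the paper's evaluation script.

```python
import numpy as np

# RMSE between coefficients predicted from the ultrasonic Doppler signal and
# those extracted from the reference speech signal; lower is better.

def rmse(predicted, reference):
    """Root Mean Squared Error over all coefficients and frames."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# toy usage: the estimate [0, 0] misses the reference [3, 4] by sqrt(12.5)
print(rmse([0.0, 0.0], [3.0, 4.0]))  # 3.5355...
```

Under this measure, the paper's reported values (0.5810 for the LSTM RNN vs. 0.7380 for the MLP) directly quantify how much closer the recurrent model's estimates are to the reference coefficients.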

Spontaneous Speech Translation System Development (대화체 음성언어 번역 시스템 개발)

  • Park, Jun;Lee, Young-jik;Yang, Jae-woo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.281-286
    • /
    • 1998
  • This paper describes the spontaneous speech translation system under development at ETRI. ETRI is participating as a core partner in C-STAR, the international joint research consortium on spoken language translation, developing Korean-Japanese and Korean-English speech translation systems, with an international joint trial planned for 1999. To summarize recent progress: in speech recognition, voiced/unvoiced/silence information is extracted in advance and used in the search, and a cross-entropy-based allophone clustering algorithm for setting the size of the acoustic model has been implemented. The concept of pseudo-morphemes has also been introduced to extend the target vocabulary. In language translation, concept-based translation is being pursued as before, and an interlingua specification is being defined jointly with the C-STAR member organizations. In speech synthesis, a trainable synthesizer has been developed, dramatically reducing the time needed to build the synthesis database.


Expected Matching Score Based Document Expansion for Fast Spoken Document Retrieval (고속 음성 문서 검색을 위한 Expected Matching Score 기반의 문서 확장 기법)

  • Seo, Min-Koo;Jung, Gue-Jun;Oh, Yung-Hwan
    • Proceedings of the KSPS conference
    • /
    • 2006.11a
    • /
    • pp.71-74
    • /
    • 2006
  • Much work has been done on retrieving audio segments that contain human speech without captions. To retrieve newly coined words and proper nouns, subwords are commonly used as indexing units in conjunction with query or document expansion. Among these approaches, document expansion with subwords has the serious drawback of a large computational overhead. Therefore, in this paper, we propose an Expected Matching Score based document expansion that effectively reduces the computational overhead without much loss in retrieval precision. Experiments have shown a 13.9-times speed-up at a loss of 0.2% in retrieval precision.
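The general shape of subword-based document expansion can be sketched as below. This is a hypothetical illustration only: the paper's Expected Matching Score is approximated here by a per-subword posterior score and a fixed threshold, which bounds how many extra index entries each document contributes and hence the computation.

```python
# Hedged sketch of document expansion over a subword lattice: alongside the
# 1-best subword sequence, alternatives whose score clears a threshold are
# also added to the index, so out-of-vocabulary query terms can still match.

def expand_document(subword_lattice, threshold=0.1):
    """subword_lattice: per position, a list of (subword, score) alternatives.
    Returns the (subword, score) pairs to be added to the inverted index."""
    index_terms = []
    for alternatives in subword_lattice:
        for subword, score in alternatives:
            if score >= threshold:
                index_terms.append((subword, score))
    return index_terms

lattice = [[("ka", 0.8), ("ga", 0.15), ("ha", 0.05)],
           [("m", 0.9), ("n", 0.1)]]
print(expand_document(lattice))
```

Pruning low-scoring alternatives before indexing is one plausible way to trade a small amount of retrieval precision for a large reduction in expansion cost, which is the trade-off the paper's results quantify.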


Hand-Gesture Dialing System for Safe Driving (안전성 확보를 위한 손동작 전화 다이얼링 시스템)

  • Jang, Won-Ang;Kim, Jun-Ho;Lee, Do Hoon;Kim, Min-Jung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.10
    • /
    • pp.4801-4806
    • /
    • 2012
  • Compared to the ever-improving convenience of advanced vehicles, there are still problems to solve for driving safety. Most traffic accidents are caused by careless driving, and the interface operations required to control complicated multimedia devices are a direct cause. With the growing interest in smart automobiles, various approaches to safe driving have been studied. Current in-vehicle multimedia interfaces lack safety because momentary glances away from the road make drivers lose their sense of the situation and their capacity to operate the vehicle. In this paper, we propose a dialing system for safe driving that controls dialing and dictionary search by hand gesture. The proposed system improves user convenience and safety in automobile operation using intuitive gestures and TTS (Text-to-Speech).