• Title/Summary/Keyword: Speech Interface

Search results: 251 items (processing time: 0.022 s)

자동차 텔레매틱스용 내장형 음성 HMI시스템 (The Human-Machine Interface System with the Embedded Speech recognition for the telematics of the automobiles)

  • 권오일
    • 전자공학회논문지CI / Vol. 41, No. 2 / pp.1-8 / 2004
  • Speech HMI (Human-Machine Interface) technology for automotive telematics covers the development of a speech-HMI-based telematics DSP system that integrates embedded speech technology robust to in-vehicle noise, so that voice information technology can be used inside the car. This paper implements that DSP system for integration testing on top of the embedded speech recognition engine developed for it; by integrating the component technologies of automotive speech HMI, it serves as a core study in the development of automotive speech HMI technology.

An Experimental Study on Barging-In Effects for Speech Recognition Using Three Telephone Interface Boards

  • Park, Sung-Joon;Kim, Ho-Kyoung;Koo, Myoung-Wan
    • 음성과학 / Vol. 8, No. 1 / pp.159-165 / 2001
  • In this paper, we conduct an experiment on speech recognition systems with barging-in and non-barging-in utterances. Barging-in capability, which lets a user speak a voice command while the voice announcement is still playing, is one of the important elements of a practical speech recognition system. It can be realized with echo cancellation techniques based on the LMS (least-mean-square) algorithm. We use three kinds of telephone interface boards with barging-in capability, made by Dialogic, Natural MicroSystems, and Korea Telecom respectively. A speech database was collected with these three boards, and we conduct a comparative recognition experiment on it.
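The LMS-based echo cancellation behind barging-in can be sketched as follows. The filter length, step size, and toy echo path are illustrative assumptions, not the actual implementations on the boards above:

```python
import numpy as np

def lms_echo_cancel(far_end, mic, n_taps=32, mu=0.01):
    """Adaptive LMS echo canceller: estimate the echo path from the
    far-end (announcement) signal and subtract the echo estimate from
    the microphone signal, leaving any barged-in speech as residual."""
    w = np.zeros(n_taps)                 # adaptive filter weights
    out = np.zeros(len(mic))             # residual (echo-cancelled) signal
    for n in range(n_taps, len(mic)):
        x = far_end[n - n_taps:n][::-1]  # most recent far-end samples
        e = mic[n] - w @ x               # mic minus estimated echo
        w += mu * e * x                  # LMS weight update
        out[n] = e
    return out

# Toy check: the microphone hears only a delayed, attenuated echo of
# the announcement, so the residual power should fall as w adapts.
rng = np.random.default_rng(0)
far = rng.standard_normal(8000)
echo = 0.5 * np.concatenate([np.zeros(5), far[:-5]])
residual = lms_echo_cancel(far, echo)
```

In a real deployment the residual, not the raw microphone signal, is fed to the recognizer, so the user's barge-in utterance is no longer masked by the prompt.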


성대특성 보간에 의한 합성음의 음질향상 - 음성코퍼스 내 개구간 비 보간을 위한 기초연구 - (Synthetic Speech Quality Improvement By Glottal parameter Interpolation - Preliminary study on open quotient interpolation in the speech corpus -)

  • 배재현;오영환
    • 대한음성학회:학술대회논문집 / 대한음성학회 2005년도 추계 학술대회 발표논문집 / pp.63-66 / 2005
  • For large-corpus-based TTS, the consistency of the speech corpus is very important, because inconsistent speech quality within the corpus can cause distortion at concatenation points, and this inconsistency forces the large corpus to be tuned repeatedly. One source of the inconsistency is the differing glottal characteristics of the sentences in the corpus. In this paper, we adjust the glottal characteristics of the speech in the corpus to prevent this distortion, and present experimental results.


Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • 대한임베디드공학회논문지 / Vol. 16, No. 3 / pp.89-97 / 2021
  • Voice interfaces can not only make daily life more convenient through artificial-intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and the command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
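The paper does not detail its post-processing step; for a fixed command set, one common approach is to snap the raw transcript to the closest known command by edit distance. A minimal sketch, with made-up command strings:

```python
def edit_distance(a, b):
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def match_command(transcript, commands):
    """Map a raw speech-to-text transcript to the nearest known command."""
    return min(commands, key=lambda c: edit_distance(transcript, c))

# Hypothetical command vocabulary; a misrecognized transcript still
# resolves to the intended command.
commands = ["start machine", "stop machine", "pause machine"]
matched = match_command("stat machine", commands)
```

This kind of snapping is cheap because the vocabulary is only 26 commands; a confidence threshold on the distance would let out-of-vocabulary utterances be rejected instead of force-matched.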

모의 지능로봇에서 음성신호에 의한 감정인식 (Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot)

  • 장광동;권오욱
    • 대한음성학회:학술대회논문집 / 대한음성학회 2005년도 추계 학술대회 발표논문집 / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from the microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.
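A minimal sketch of the utterance-level feature statistics this kind of method computes, assuming per-frame F0 and energy tracks are already available (the paper's exact measures and normalizations may differ):

```python
import numpy as np

def utterance_features(f0_track, energy_track):
    """Utterance-level statistics of prosodic/phonetic measures:
    pitch mean/std and jitter from the F0 track, log-energy mean/std
    and shimmer from frame energies. Unvoiced frames carry F0 = 0."""
    f0 = f0_track[f0_track > 0]  # voiced frames only
    # jitter: mean relative frame-to-frame pitch perturbation
    jitter = np.mean(np.abs(np.diff(f0))) / np.mean(f0)
    # shimmer: mean relative frame-to-frame amplitude perturbation
    shimmer = np.mean(np.abs(np.diff(energy_track))) / np.mean(energy_track)
    log_e = np.log(energy_track + 1e-10)
    return np.array([f0.mean(), f0.std(), jitter,
                     log_e.mean(), log_e.std(), shimmer])

# Example: a perfectly flat 120 Hz pitch track with constant energy
# has zero jitter and zero shimmer.
feats = utterance_features(np.full(100, 120.0), np.full(100, 1.0))
```

The resulting fixed-length vector is what a Gaussian-kernel SVM can then classify into the six emotion classes.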


휴대폰용 멀티모달 인터페이스 개발 - 키패드, 모션, 음성인식을 결합한 멀티모달 인터페이스 (Development of a multimodal interface for mobile phones)

  • 김원우
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2008년도 학술대회 1부 / pp.559-563 / 2008
  • Mobile phones have become indispensable personal devices in modern life, and diverse devices, content, and services are converging on them. Research is also actively under way on means of effectively searching and using such varied, complex functions and large volumes of content and information. This study develops a new interface for entering Korean words on a mobile phone using voice, keypad, and motion, and verifies its usability and effectiveness through a dialing application built on it. The developed multimodal interface retains the advantage of a voice interface, reaching deep and complex menu trees in a single step, while improving recognition rate and recognition time.


모의 지능로봇에서의 음성 감정인식 (Speech Emotion Recognition on a Simulated Intelligent Robot)

  • 장광동;김남;권오욱
    • 대한음성학회지:말소리 / No. 56 / pp.173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from the microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy while human classifiers give 71% accuracy.


A New Pruning Method for Synthesis Database Reduction Using Weighted Vector Quantization

  • Kim, Sanghun;Lee, Youngjik;Keikichi Hirose
    • The Journal of the Acoustical Society of Korea / Vol. 20, No. 4E / pp.31-38 / 2001
  • A large-scale synthesis database for unit-selection-based synthesis usually retains redundant synthesis unit instances that contribute nothing to synthetic speech quality. In this paper, to eliminate those instances from the synthesis database, we propose a new pruning method called weighted vector quantization (WVQ). WVQ reflects the relative importance of each synthesis unit instance when clustering similar instances with the vector quantization (VQ) technique. The proposed method was compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and one based on ordinary VQ-based clustering. The proposed method showed the best performance at reduction rates below 50%. Above 50%, the synthetic speech quality is perceptibly, though not seriously, degraded. Using the proposed method, the synthesis database can be efficiently reduced without serious degradation of synthetic speech quality.
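The pruning idea can be sketched as weighted k-means over unit instances, keeping one representative per cluster. The weighting and the representative-selection rule below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def wvq_prune(instances, weights, n_codewords, n_iter=20, seed=0):
    """Cluster unit instances with a weighted VQ codebook and keep
    only the highest-weight instance in each cluster, pruning the rest."""
    rng = np.random.default_rng(seed)
    centroids = instances[rng.choice(len(instances), n_codewords,
                                     replace=False)].astype(float)
    for _ in range(n_iter):
        # nearest-centroid assignment
        dists = np.linalg.norm(instances[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        # weighted centroid update: important instances pull harder
        for k in range(n_codewords):
            members = assign == k
            if members.any():
                centroids[k] = np.average(instances[members], axis=0,
                                          weights=weights[members])
    kept = []
    for k in range(n_codewords):
        members = np.flatnonzero(assign == k)
        if len(members):
            kept.append(members[weights[members].argmax()])
    return np.array(kept)

# Toy database: 3 well-separated clusters of 2-D unit features,
# pruned down to (at most) 3 representatives.
rng = np.random.default_rng(1)
units = np.concatenate([rng.normal(c, 0.1, (50, 2)) for c in (0.0, 5.0, 10.0)])
w = rng.random(len(units))
kept = wvq_prune(units, w, 3)
```

Because the centroid update is weight-aware, high-importance instances shift the cluster boundaries toward themselves, so they are more likely to survive the pruning than under plain VQ clustering.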


생성적 적대 신경망을 이용한 음향 도플러 기반 무 음성 대화기술 (An acoustic Doppler-based silent speech interface technology using generative adversarial networks)

  • 이기승
    • 한국음향학회지 / Vol. 40, No. 2 / pp.161-168 / 2021
  • This paper proposes a silent speech interface that radiates an ultrasonic signal at 40 kHz around the speaker's mouth, detects the Doppler shift of the returning signal, and synthesizes the spoken sound from it. A silent speech interface builds mapping rules between feature variables extracted from the non-speech signal and the parameters of the corresponding speech signal, and synthesizes speech using those rules. Conventional silent speech interfaces build the mapping so as to minimize the error between the estimated and actual speech parameters. In this work, a generative adversarial network is introduced so that the distribution of the estimated speech parameters instead resembles the distribution of the actual speech parameters. In experiments on 60 Korean utterances, the proposed technique outperformed the conventional neural-network-based technique on both objective and subjective measures.
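For scale, the two-way Doppler shift of a 40 kHz carrier reflected off articulators moving at speed v is f_d = 2 v f_0 / c. A quick sketch, taking the speed of sound in air as 343 m/s:

```python
def doppler_shift_hz(v_mps, f_carrier_hz=40_000.0, c_mps=343.0):
    """Two-way Doppler shift of a carrier reflected off a surface
    moving toward the transducer at v_mps metres per second."""
    return 2.0 * v_mps * f_carrier_hz / c_mps

# Lip/jaw motion on the order of 0.1 m/s shifts the 40 kHz carrier
# by roughly 23 Hz; these small shifts are what the interface must
# resolve and map to speech parameters.
shift = doppler_shift_hz(0.1)
```

The factor of 2 arises because the motion compresses the wave once on the way to the reflector and again on the way back.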