• Title/Summary/Keyword: Speech Interface


Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual human emotion recognition is a promising approach for multimodal human-computer interaction. However, optimal fusion of the multimodal information remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach that fuses information at the model level, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.
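The model-level fusion idea above can be sketched as a weighted combination of per-class scores from the two modalities. The class list, weights, and probabilities below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of model-level fusion of two modality scores.
# The emotion classes and the fusion weight are invented for illustration.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(speech_probs, face_probs, w_speech=0.5):
    """Combine per-class probabilities from the speech and face models."""
    w_face = 1.0 - w_speech
    fused = [w_speech * s + w_face * f for s, f in zip(speech_probs, face_probs)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalize to a distribution

def predict(speech_probs, face_probs):
    """Return the emotion label with the highest fused score."""
    fused = fuse(speech_probs, face_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

In practice the paper couples the modalities more tightly (using their temporal relationship); this fixed-weight combination only illustrates the general shape of model-level fusion.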

Voice Portal based on SMS Authentication at CTI Module Implementation by Speech Recognition (SMS 인증 기반의 보이스포탈에서의 음성인식을 위한 CTI 모듈 구현)

  • Oh, Se-Il;Kim, Bong-Hyun;Koh, Jin-Hwan;Park, Won-Tea
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.04b
    • /
    • pp.1177-1180
    • /
    • 2001
  • Voice Portal services, which let users listen to Internet information over the telephone, are gaining popularity. In a Voice Portal service, the user speaks a request to a Speech Recognition System and hears the desired information back over the phone. The service requires an SMS (Short Message Service) server module that performs the authentication procedure, a CTI (Computer Telephony Integration) module that provides the interface between the PSTN and the database server, a VoiceXML module between the CTI server and the WWW (World Wide Web), and a searching module for retrieving information. This paper presents the design and implementation of the CTI module based on speech recognition technology. In addition, by adopting SMS authentication based on a random one-time password, it aims to provide a more secure and stable service.
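The random one-time password idea behind the SMS authentication can be illustrated with a minimal sketch. The code length, in-memory store, and API below are assumptions for illustration, not the paper's implementation:

```python
# Illustrative one-time password (OTP) sketch: generate a short numeric
# code to be delivered by the SMS server module, then verify it once.
import secrets

def generate_otp(n_digits=6):
    """Return a random numeric code, e.g. '493027'."""
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

class OtpStore:
    """Keeps one pending code per phone number (hypothetical helper)."""
    def __init__(self):
        self._codes = {}

    def issue(self, phone):
        code = generate_otp()
        self._codes[phone] = code
        return code  # in the paper's setup, sent to the user via SMS

    def verify(self, phone, code):
        ok = self._codes.get(phone) == code
        if ok:
            del self._codes[phone]  # one-time: invalidate after first use
        return ok
```

A production system would also expire codes after a timeout and rate-limit attempts.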


Development of an Autonomous Mobile Robot with Functions of Speech Recognition and Collision Avoidance

  • Park, Min-Gyu;Lee, Min-Cheol
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2000.10a
    • /
    • pp.475-475
    • /
    • 2000
  • This paper describes the construction of an autonomous mobile robot with collision avoidance and speech recognition functions, the latter used for teaching the robot's path. The human voice as a teaching method provides a more convenient user interface to the mobile robot. For safe navigation, the autonomous mobile robot needs the ability to recognize its surrounding environment and avoid collisions. We use ultrasonic sensors to obtain the distance from the mobile robot to various obstacles. Using a navigation algorithm, the robot forecasts the possibility of collision with obstacles and modifies its path if it detects dangerous obstacles. For these functions, the robot system is composed of four separate control modules: a speech recognition module, a servo motor control module, an ultrasonic sensor module, and a main control module. These modules are integrated by CAN (controller area network) to provide real-time communication.
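The obstacle-avoidance behavior described above (forecasting a possible collision from ultrasonic distance readings and modifying the path) can be sketched as a simple rule. The safety threshold and sensor layout below are illustrative assumptions, not the paper's navigation algorithm:

```python
# Hypothetical sketch: pick a steering command from ultrasonic readings.
# Sensors are assumed to be ordered left-to-right across the robot's front.
SAFE_DISTANCE_CM = 40.0  # illustrative threshold, not from the paper

def avoidance_command(distances_cm):
    """Return 'forward', 'turn_left', or 'turn_right' for one sensor scan."""
    nearest = min(distances_cm)
    if nearest >= SAFE_DISTANCE_CM:
        return "forward"          # no obstacle inside the safety margin
    # Obstacle detected: turn away from the side it was sensed on.
    idx = distances_cm.index(nearest)
    mid = len(distances_cm) / 2
    return "turn_right" if idx < mid else "turn_left"
```

In the actual system this decision would run in the main control module, with the sensor readings arriving over CAN from the ultrasonic sensor module.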


Implementation of Packet Voice Protocol (패킷음성 프로토콜의 구현)

  • 이상길;신병철;김윤관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.12
    • /
    • pp.1841-1854
    • /
    • 1993
  • In this paper, a packet voice protocol for the transmission of voice signals over Ethernet is implemented on a personal computer (PC). The protocol used is a modified version of the CCITT G.764 packetized voice protocol. The hardware system that enables voice communication over Ethernet is divided into a telephone interface, speech processing, a PC interface, and controllers. The software structure of the protocol is designed according to the OSI seven-layer architecture and is divided into three routines: the Ethernet device driver, the telephone interface, and the packet voice protocol processing routine. Experiments over Ethernet with the telephone interface show that this packet voice communication achieves satisfactory quality when the network traffic is light.
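Voice packetization of the kind the abstract describes can be sketched as framing speech samples behind a small header. The header layout below (sequence number, timestamp, payload length) is an assumption for illustration and is not the modified G.764 format used in the paper:

```python
# Illustrative voice packet framing with an invented header layout:
# 16-bit sequence number, 32-bit timestamp, 8-bit payload length,
# all in network byte order, followed by the raw speech samples.
import struct

HEADER_FMT = "!HIB"  # '!' = network byte order

def pack_voice(seq, timestamp, samples: bytes) -> bytes:
    """Build one voice packet from a frame of encoded speech samples."""
    return struct.pack(HEADER_FMT, seq, timestamp, len(samples)) + samples

def unpack_voice(packet: bytes):
    """Parse a packet back into (sequence, timestamp, samples)."""
    hdr_size = struct.calcsize(HEADER_FMT)
    seq, ts, length = struct.unpack(HEADER_FMT, packet[:hdr_size])
    return seq, ts, packet[hdr_size:hdr_size + length]
```

The sequence number and timestamp are what let the receiver reorder frames and smooth out network jitter, which matters for the speech quality reported in the experiments.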


Home Network Control System using SMS Dialog Interface (SMS를 통한 홈네트워크 제어 시스템)

  • Chang, Du-Seong;Kim, Hyun-Jeong;Eun, Ji-Hyun;Kang, Seung-Shik;Koo, Myoung-Wan
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.330-333
    • /
    • 2007
  • This paper presents a dialogue interface that uses a dialogue management system as a method for controlling home appliances in Home Network Services. To realize this type of dialogue interface, we annotated a dialogue set of 96,000 utterance pairs and developed an example-based dialogue system. This paper introduces an automatic error correction module for SMS-styled sentences; with this module we increase the accuracy of the NLU (Natural Language Understanding) module. Our NLU module shows an accuracy of 86.2%, an improvement of 5.25% over the baseline. The task completion rate of the proposed SMS dialogue interface was 82%.
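The idea behind SMS-style error correction (mapping colloquial shortenings to canonical forms before NLU) can be sketched with a lookup table. The table entries below are invented English examples; the paper's actual module corrects Korean SMS text:

```python
# Hypothetical sketch of SMS-style normalization before NLU.
# The replacement table is invented for illustration only.
SMS_NORMALIZATIONS = {
    "plz": "please",
    "u": "you",
    "lite": "light",
    "ac": "air conditioner",
}

def normalize_sms(sentence: str) -> str:
    """Replace known SMS shortenings with their canonical forms."""
    tokens = sentence.lower().split()
    return " ".join(SMS_NORMALIZATIONS.get(t, t) for t in tokens)
```

Feeding the normalized sentence, rather than the raw SMS text, into the NLU module is what yields the accuracy gain the abstract reports.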


A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction (인간과 로봇의 다양한 상호작용을 위한 휴대 매개인터페이스 ‘핸디밧’)

  • Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.735-742
    • /
    • 2007
  • The importance of a robot's interaction capability increases as robot applications extend into humans' daily lives. In this paper, a portable mediate interface, Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot has a task-oriented channel based on an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes a user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. It is expected that the Handybot will reduce spatial problems in human-robot interaction, propose a new interaction method, and help create rich and continuous interactions between human users and robots.

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.59-64
    • /
    • 2015
  • In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in the BCI field has led to the development of a number of useful applications, such as robot control, game interfaces, exoskeleton limbs, and so on. However, while imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, there are some problems in implementing such a system. In a previous paper, we already handled some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although the multi-class classification problem required further work. With this in mind, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN algorithm, a deep learning algorithm, for multi-class vowel classification, and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, and /u/. For the experiment, we obtained the required 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. Eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. In the analysis, we provide the classification results of a back-propagation artificial neural network (BP-ANN) for comparison with the DBN. The classification accuracy of the BP-ANN was 52.04%, while that of the DBN was 87.96%; that is, the DBN performed 35.92 percentage points better on multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
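The feature extraction step described above, taking the eigenvalues of the channel covariance matrix of an EEG segment, can be sketched as follows. The channel count matches the abstract's 32-channel setup, but the segment length and random data are purely illustrative:

```python
# Sketch of the covariance-eigenvalue features described in the abstract.
import numpy as np

def eeg_features(segment: np.ndarray) -> np.ndarray:
    """segment: (n_channels, n_samples) raw EEG.
    Returns the eigenvalues of the channel covariance matrix,
    sorted in descending order, as the feature vector."""
    cov = np.cov(segment)              # (n_channels, n_channels), symmetric
    eigvals = np.linalg.eigvalsh(cov)  # real eigenvalues, ascending order
    return eigvals[::-1]               # largest variance direction first

# Illustrative stand-in for one recorded vowel segment (32 ch x 512 samples).
rng = np.random.default_rng(0)
features = eeg_features(rng.standard_normal((32, 512)))
```

Each imagined-vowel segment thus becomes a fixed-length 32-dimensional vector, which is what the DBN and BP-ANN classifiers are trained on.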

Development of Text-to-Speech System for PC (PC용 Text-to-Speech 시스템 개발)

  • Choi Muyeol;Hwang Cholgyu;Kim Soontae;Kim Junggon;Yi Sopae;Jang Seokbok;Pyo Kyungnan;Ahn Hyesun;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.41-44
    • /
    • 1999
  • In this paper, we developed a high-quality Korean text-to-speech (TTS) synthesis system for PC applications. For synthesis, the system adopts a sinusoidal-model-based method, which has advantages over the conventional PSOLA approach in pitch control, smoothing of transitions between adjacent sounds, and voice-quality control; for natural prosody modeling, it uses the Classification and Regression Tree (CART) method, a statistical technique. To reduce discontinuities at phoneme boundaries, initial-medial and final units are used as synthesis units, and a voice-quality control function supports a variety of timbres. Furthermore, the system is implemented as a TTS engine conforming to the standard Speech Application Program Interface (SAPI), making it easier to develop application programs on the PC. Listening tests of the synthesized speech confirmed its high quality and the effectiveness of the voice-quality control function.
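The initial-medial / final (onset-nucleus / coda) synthesis units used in this system rely on decomposing Hangul syllables into their component jamo. The standard Unicode syllable arithmetic for that decomposition can be sketched as follows (this is general Unicode knowledge, not code from the paper):

```python
# Decompose a precomposed Hangul syllable (U+AC00..U+D7A3) into
# (initial, medial, final) jamo indices using the Unicode formula:
#   code = 0xAC00 + (initial * 21 + medial) * 28 + final
def split_syllable(ch: str):
    """Return (initial_index, medial_index, final_index) for one syllable.
    final_index == 0 means the syllable has no final consonant."""
    code = ord(ch) - 0xAC00
    if not (0 <= code < 11172):
        raise ValueError("not a precomposed Hangul syllable")
    final = code % 28
    medial = (code // 28) % 21
    initial = code // (28 * 21)
    return initial, medial, final
```

A TTS front end can then select an initial-medial unit from (initial, medial) and a separate final unit from the coda index, which is the split this system uses to reduce boundary discontinuities.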
