• Title/Summary/Keyword: Speech recognition robot

Development of robotic hands of signbot, advanced Malaysian sign-language performing robot

  • Al-Khulaidi, Rami Ali;Akmeliawati, Rini;Azlan, Norsinnira Zainul;Bakr, Nuril Hana Abu;Fauzi, Norfatehah M.
    • Advances in robotics research
    • /
    • v.2 no.3
    • /
    • pp.183-199
    • /
    • 2018
  • This paper presents the development of the 3D-printed humanoid robotic hands of SignBot, which can perform Malaysian Sign Language (MSL). The study is, to the authors' knowledge, the first attempt to ease communication between the general community and hearing-impaired individuals in Malaysia. The signs performed by the developed robot use both hands. Unlike previous work, the designed system includes a speech recognition system that integrates directly with the robot's control platform. Furthermore, the design takes into account the grammar of MSL, which differs from that of spoken Malay; this reduces redundancy and makes the design more efficient and effective. The robot hands are built with detailed finger joints. Micro servo motors, controlled by an Arduino Mega, actuate the relevant joints for selected alphabetical and numerical signs as well as phrases for emergency contexts in MSL. A database of the selected signs stores the sequential movements of the servo motor arrays. The results showed that the system performed well, as the selected signs could be understood by hearing-impaired individuals.
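
The paper's sign database stores each sign as a timed sequence of servo movements. A minimal sketch of that idea follows; the joint names, angles, and `set_servo` driver call are hypothetical stand-ins, not the authors' actual database or hardware interface:

```python
import time

# Hypothetical sign database: each sign is a list of keyframes, and each
# keyframe maps servo channels to target angles (degrees) plus a hold time.
# The real system stores comparable sequences for selected MSL signs.
SIGN_DB = {
    "one": [
        ({"thumb": 0, "index": 90, "middle": 0, "ring": 0, "pinky": 0}, 0.8),
    ],
    "help": [
        ({"thumb": 90, "index": 0, "middle": 0, "ring": 0, "pinky": 0}, 0.4),
        ({"thumb": 90, "index": 90, "middle": 90, "ring": 90, "pinky": 90}, 0.6),
    ],
}

def set_servo(channel: str, angle: float) -> None:
    """Stand-in for the actual servo driver (e.g. a serial command to the MCU)."""
    print(f"servo {channel} -> {angle:5.1f} deg")

def play_sign(sign: str) -> None:
    """Replay the stored keyframe sequence for one sign."""
    for angles, hold in SIGN_DB[sign]:
        for channel, angle in angles.items():
            set_servo(channel, angle)
        time.sleep(hold)  # hold the posture before the next keyframe

play_sign("help")
```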

Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot (감정 상호작용 로봇을 위한 신뢰도 평가를 이용한 화자독립 감정인식)

  • Kim, Eun-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.755-759
    • /
    • 2009
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction. Speaker-independent emotion recognition in particular is a challenging issue for the commercial use of speech emotion recognition systems. In general, speaker-independent systems show a lower accuracy rate than speaker-dependent systems, as emotional feature values depend on the speaker and his or her gender. Hence, this paper describes the realization of speaker-independent emotion recognition by rejecting low-confidence results using a confidence measure, making the emotion recognition system more consistent and accurate. A comparison of the proposed methods with the conventional method clearly confirmed their improvement and effectiveness.
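
The core idea, rejecting classifications whose confidence falls below a threshold rather than forcing a decision, can be sketched as follows. The softmax-based confidence measure and the threshold value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def classify_with_rejection(scores: np.ndarray, labels: list[str],
                            threshold: float = 0.6) -> str:
    """Return the top emotion label, or reject the utterance.

    scores: raw classifier scores, one per emotion class. A softmax turns
    them into a confidence; utterances whose best class is not confident
    enough are rejected instead of being mislabeled.
    """
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # softmax confidence (assumed measure)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "REJECT"                  # e.g. ask the user to repeat
    return labels[best]

labels = ["angry", "happy", "neutral", "sad"]
# Two classes score similarly, so the confidence is low and the result
# is rejected rather than guessed.
print(classify_with_rejection(np.array([2.1, 0.3, 1.9, 0.2]), labels))
```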

Korean Continuous Speech Recognition Using Discrete Duration Control Continuous HMM (이산 지속시간제어 연속분포 HMM을 이용한 연속 음성 인식)

  • Lee, Jong-Jin;Kim, Soo-Hoon;Hur, Kang-In
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.1
    • /
    • pp.81-89
    • /
    • 1995
  • In this paper, we report a continuous speech recognition system using a continuous HMM with discrete duration control and regression coefficients. We also perform recognition experiments on 25 sentences of robot control commands using the One-Pass DP method with finite-state-automaton context control. In the experiment on 4 connected spoken digits, the recognition rate is 93.8% when the discrete duration control and the regression coefficients are included, and 80.7% when they are not. In the experiment on the 25 robot control command sentences, the recognition rate is 90.9% without the FSN and 98.4% with it.
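
Discrete duration control replaces the implicit geometric state-duration of a standard HMM with an explicit per-state duration distribution. A toy explicit-duration Viterbi sketch of that mechanism is shown below; the model sizes, tables, and observation alphabet are invented for illustration and are not the paper's system:

```python
import numpy as np

# Toy explicit-duration HMM: 2 states, binary observations, and an explicit
# duration table logDur[j, d-1] = log P(state j lasts d frames), replacing
# the geometric self-loop duration of a standard HMM.
logA   = np.log(np.array([[1e-12, 1.0], [1.0, 1e-12]]))        # transitions
logB   = np.log(np.array([[0.7, 0.3], [0.2, 0.8]]))            # emissions
logDur = np.log(np.array([[0.1, 0.6, 0.3], [0.5, 0.4, 0.1]]))  # d = 1..3
logPi  = np.log(np.array([0.6, 0.4]))

def duration_viterbi(obs):
    """Best log-score of obs when each state's duration is modeled explicitly."""
    T, N, D = len(obs), logB.shape[0], logDur.shape[1]
    score = np.full((T + 1, N), -np.inf)  # score[t, j]: best path whose
    for t in range(1, T + 1):             # state-j segment ends at frame t
        for j in range(N):
            for d in range(1, min(D, t) + 1):
                emit = logB[j, obs[t - d:t]].sum() + logDur[j, d - 1]
                if t == d:   # the segment starts the utterance
                    cand = logPi[j] + emit
                else:        # best predecessor state ending at frame t - d
                    cand = (score[t - d] + logA[:, j]).max() + emit
                score[t, j] = max(score[t, j], cand)
    return score[T].max()

print(duration_viterbi([0, 0, 1, 1]))  # toy 4-frame observation sequence
```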

Therapeutic Robot Action Design for ASD Children Using Speech Data (음성 정보를 이용한 자폐아 치료용 로봇의 동작 설계)

  • Lee, Jin-Gyu;Lee, Bo-Hee
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.1123-1130
    • /
    • 2018
  • A cat robot for treating children with Autism Spectrum Disorder (ASD) was designed and field-tested. The designed robot expressed emotions through touch-based interaction and performed reasonable emotional expressions based on an Artificial Neural Network (ANN). However, these behaviors were difficult to use across varied therapy activities. In this paper, we describe a motion design that can be used in a variety of contexts and react flexibly to different situations. As a necessary element, a speech recognition system using a speech data collection method and an ANN is suggested, and the classification results are analyzed after experiments. In future work, this ANN will be improved by collecting more diverse voice data to raise its accuracy, and its effectiveness will be checked through field tests.
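
The pipeline the abstract describes, an ANN that classifies collected voice data so the robot can select a therapeutic action, can be sketched with scikit-learn. The feature shapes, labels, and action table below are assumptions for illustration; the authors' actual network and feature set are not specified here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Assumed setup: each utterance is summarized as a fixed-length feature
# vector (e.g. averaged MFCCs) labeled with the child's vocal state.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))        # stand-in speech feature vectors
y = rng.integers(0, 3, size=200)      # 0=calm, 1=excited, 2=distressed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)

# Map each recognized vocal state to a robot action for the session.
ACTIONS = {0: "purr softly", 1: "playful gesture", 2: "calming slow blink"}
state = int(net.predict(X_te[:1])[0])
print("robot action:", ACTIONS[state])
```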

A Development of CDMA-based Robot Remote Controller (CDMA 기반 로봇 원격제어기 개발)

  • Kim, Woo-Sik;Kim, Eung-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.345-347
    • /
    • 2006
  • In this paper, we study a robot controller design that uses voice and data communication over a CDMA (Code Division Multiple Access) mobile network. We design the robot remote controller using three methods over the CDMA network: telephone-call speech recognition, DTMF (Dual-Tone Multi-Frequency) signaling, and SMS (Short Message Service) transmission/reception. We investigate the validity and effectiveness of the proposed remote controller by applying it to a mobile robot.
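
DTMF command decoding, one of the three channels above, is commonly done with the Goertzel algorithm, which measures signal power at the eight standard DTMF frequencies. The sketch below is a generic illustration of that technique, not the paper's implementation:

```python
import numpy as np

LOW  = [697, 770, 852, 941]           # DTMF row frequencies (Hz)
HIGH = [1209, 1336, 1477, 1633]       # DTMF column frequencies (Hz)
KEYS = [["1", "2", "3", "A"], ["4", "5", "6", "B"],
        ["7", "8", "9", "C"], ["*", "0", "#", "D"]]

def goertzel_power(x: np.ndarray, freq: float, fs: float) -> float:
    """Signal power at one frequency via the Goertzel recurrence."""
    coeff = 2.0 * np.cos(2.0 * np.pi * freq / fs)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode_key(x: np.ndarray, fs: float) -> str:
    """Pick the strongest row and column tone; their crossing is the key."""
    row = max(range(4), key=lambda i: goertzel_power(x, LOW[i], fs))
    col = max(range(4), key=lambda j: goertzel_power(x, HIGH[j], fs))
    return KEYS[row][col]

# Synthesize the tone for '5' (770 Hz + 1336 Hz) and decode it.
fs = 8000.0
t = np.arange(int(0.05 * fs)) / fs
tone = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t)
print(decode_key(tone, fs))  # -> '5'
```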

Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data (음향 데이터로부터 얻은 확장된 음소 단위를 이용한 한국어 자유발화 음성인식기의 성능)

  • Bang, Jeong-Uk;Kim, Sang-Hun;Kwon, Oh-Wook
    • Phonetics and Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.39-47
    • /
    • 2019
  • We propose a method to improve the performance of spontaneous speech recognizers by extending their phone set using speech data. In the proposed method, we first extract variable-length phoneme-level segments from broadcast speech signals and convert them to fixed-length latent vectors using a long short-term memory (LSTM) classifier. We then cluster acoustically similar latent vectors and build a new phone set by choosing the number of clusters with the lowest Davies-Bouldin index. We also update the lexicon of the speech recognizer by choosing the pronunciation sequence of each word with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, we visualize its spectral patterns and segment durations. Through speech recognition experiments using a larger training data set than in our previous work, we confirm that the new phone set yields better performance than the conventional phoneme-based and grapheme-based units in both spontaneous speech recognition and read speech recognition.
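
The cluster-count selection step, choosing the number of clusters that minimizes the Davies-Bouldin index, can be sketched with scikit-learn. The random stand-in vectors and the candidate range are assumptions; the paper's LSTM latent vectors and search range are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 32))   # stand-in for LSTM latent vectors

best_k, best_dbi = None, np.inf
for k in range(2, 15):                 # candidate extended-phone-set sizes
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(latents)
    dbi = davies_bouldin_score(latents, labels)  # lower = better-separated clusters
    if dbi < best_dbi:
        best_k, best_dbi = k, dbi

print(f"extended phone set size: {best_k} (DBI = {best_dbi:.3f})")
```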

Reliable Sound Source Localization for Human Robot Interaction

  • Kim, Hyun-Don;Choi, Jong-Suk;Lee, Chang-Hoon;Kim, Mun-Sang
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1820-1825
    • /
    • 2004
  • In this paper, we propose a humanoid active audition system that detects the direction of sound and performs speech recognition using just three microphones. Compared with previous research, this system uses a simpler algorithm and a better amplifier system, which increases the detectable distance of the sound signal despite the simple circuit. To verify the system's performance, we installed the proposed active audition system on the home service robot Hombot II, developed at KIST (Korea Institute of Science and Technology), and confirmed excellent performance through experimental results.
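
Sound-direction detection with a small microphone array is commonly based on the time difference of arrival (TDOA) between microphone pairs. The generic two-microphone sketch below estimates the delay by cross-correlation and converts it to an angle; it illustrates the technique in general, not the authors' three-microphone algorithm:

```python
import numpy as np

def direction_of_arrival(x1, x2, fs, mic_dist, c=343.0):
    """Estimate arrival angle (degrees) from two mic signals.

    mic_dist: microphone spacing in meters; c: speed of sound in m/s.
    """
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)   # sample delay between the mics
    tdoa = lag / fs
    sin_theta = np.clip(tdoa * c / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulated source: the same noise burst reaches mic 2 three samples later,
# so mic 1 leads and the angle comes out negative under this convention.
fs, d = 16000, 0.2
rng = np.random.default_rng(0)
s = rng.normal(size=1024)
x1, x2 = s, np.roll(s, 3)
print(f"estimated angle: {direction_of_arrival(x1, x2, fs, d):.1f} deg")
```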

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4E
    • /
    • pp.144-151
    • /
    • 2005
  • We evaluate the performance of emotion recognition from speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared with the performance of foreign human listeners.
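
The utterance-level feature construction described above, per-feature statistics over frame-based features fed to an SVM, can be sketched as follows. The random stand-in frames, the specific statistics, and the feature count are illustrative; the paper's exact feature list is not reproduced:

```python
import numpy as np
from sklearn.svm import SVC

def utterance_vector(frames: np.ndarray) -> np.ndarray:
    """Collapse a (num_frames, num_features) matrix of frame-based features
    into one fixed-length vector of per-feature statistics."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0),
                           frames.min(axis=0), frames.max(axis=0)])

rng = np.random.default_rng(0)
# Stand-in corpus: 100 utterances of varying length, 13 features per frame,
# each labeled with one of 5 emotions (angry/bored/happy/neutral/sad).
X = np.array([utterance_vector(rng.normal(size=(rng.integers(50, 200), 13)))
              for _ in range(100)])
y = rng.integers(0, 5, size=100)

clf = SVC(kernel="rbf").fit(X, y)
print("predicted emotion class:", clf.predict(X[:1])[0])
```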

Sound's Direction Detection and Speech Recognition System for Humanoid Active Audition

  • Kim, Hyun-Don;Choi, Jong-Suk;Lee, Chang-Hoon;Park, Gwi-Tea;Kim, Mun-Sang
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.633-638
    • /
    • 2003
  • In this paper, we propose a humanoid active audition system that detects the direction of sound and performs speech recognition using just three microphones. Compared with previous research, this system shows better performance with a simpler algorithm, fewer microphones, and a better amplifier. To verify the system's performance, we installed the proposed active audition system on the home service robot Hombot II, developed at KIST (Korea Institute of Science and Technology), and confirmed excellent performance through experimental results.

Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference
    • /
    • 2007.07a
    • /
    • pp.1777-1778
    • /
    • 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for a humanoid robot, D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to catch the emotional status of the user and respond appropriately; as a result, the robot can engage in natural dialogue with a human. To support interaction through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware, including vision and speech capabilities and various control boards such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we have presented successful demonstrations consisting of manipulation tasks with two arms, object tracking using the vision system, and communication with humans through the emotional interface, synthesized speech, and the recognition of speech commands.
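
The affective loop described here, sensing the user's emotional state and then picking an appropriate multimodal response, reduces at its simplest to a perception-to-behavior mapping. The sketch below uses hypothetical emotion labels and behaviors and is not D2's actual framework:

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    face: str     # facial expression to display
    speech: str   # synthesized utterance
    gesture: str  # arm/posture motion

# Hypothetical mapping from the user's detected emotion to a D2-style
# multimodal response (expression + speech + gesture).
RESPONSES = {
    "happy":   Behavior("smile",   "That's great to hear!",  "open arms"),
    "sad":     Behavior("concern", "Is everything okay?",    "lean forward"),
    "angry":   Behavior("calm",    "Let's take a breath.",   "lower arms"),
    "neutral": Behavior("neutral", "What shall we do next?", "idle sway"),
}

def respond(detected_emotion: str) -> Behavior:
    """Select the robot's multimodal response to the user's emotional state."""
    return RESPONSES.get(detected_emotion, RESPONSES["neutral"])

b = respond("sad")
print(b.face, "|", b.speech, "|", b.gesture)
```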
