• Title/Summary/Keyword: Command Sound

Search Results: 25 (0.032 seconds)

Voice Command-based Prediction and Follow of Human Path of Mobile Robots in AI Space

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_1 / pp.225-230 / 2023
  • This research addresses the sound-command-based human tracking problem for an autonomous cleaning mobile robot in a networked AI space. To solve the problem, the differences among the traveling times of the sound command to each of three microphones are used to calculate the distance and orientation of the sound source from the cleaning mobile robot, which carries the microphone array. The cross-correlation between two signals is applied to detect the time difference between them, which provides a more reliable and precise value of the time difference than conventional methods. To generate the tracking direction toward the sound command, fuzzy rules are applied and the results are used to control the cleaning mobile robot in real time. Finally, the experimental results show that the proposed algorithm works well, even though the mobile robot knows little about the environment.
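
The time-difference-of-arrival step described above can be sketched as a cross-correlation peak search. The sampling rate, signal length, and 5-sample delay below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate how many seconds sig_a lags behind sig_b from the peak
    of their cross-correlation (positive -> sig_a arrived later)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # lag in samples
    return lag / fs

# Synthetic check: the second microphone hears the same noise 5 samples later.
fs = 8000
rng = np.random.default_rng(0)
mic1 = rng.standard_normal(256)
mic2 = np.roll(mic1, 5)                # stand-in for a 5-sample propagation delay
delay = estimate_delay(mic2, mic1, fs) # 5 / 8000 s
```

With three microphones, the pairwise delays obtained this way constrain both the distance and the bearing of the source.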

Development of Stereo Sound Authoring Tool to Modify and Edit 2Channel Stereo Sound Source Using HRTF (HRTF를 이용한 2채널 스테레오 음원을 수정 및 편집 할 수 있는 입체음향 저작도구 개발)

  • Kim, Young-Sik;Kim, Yong-Il;Bae, Myeong-Soo;Jeon, Su-Min;Lee, Dae-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.909-912 / 2017
  • In implementing a computerized virtual training system, the auditory element is the most important contributor to human cognition after the visual elements. In particular, improved auditory realism is closely related to training performance and contributes to the training effect. In this paper, we propose the sound system needed to construct such a virtual training system, implemented as a test system that can modify and edit a sound source using a head-related transfer function (HRTF). Functional and auditory tests were performed to evaluate system performance.
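
HRTF-based rendering of the kind described reduces to convolving a mono source with a left-ear and a right-ear impulse response. The short FIR filters below are made-up stand-ins for measured HRIRs, used only to show the mechanics:

```python
import numpy as np

# Hypothetical head-related impulse responses for one direction:
# the nearer (left) ear gets a stronger, earlier response.
hrir_left = np.array([0.0, 0.9, 0.3, 0.1])
hrir_right = np.array([0.0, 0.0, 0.5, 0.2])

def binauralize(mono, hrir_l, hrir_r):
    """Render a mono source at one virtual direction by convolving it
    with the left- and right-ear impulse responses."""
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

rng = np.random.default_rng(0)
mono = rng.standard_normal(1000)
left, right = binauralize(mono, hrir_left, hrir_right)
```

Real systems interpolate between many measured HRIR pairs as the virtual source moves; this sketch only renders one fixed direction.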

The Study of Sound Effect Improved Simulation though Wavelet analysis and Fourier transform (Wavelet 분석을 통한 시뮬레이션 음향 효과 개선에 관한 연구)

  • Kim, Young-Sik;Kim, Yong-Il;Bae, Myeong-Soo
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.960-962 / 2017
  • This thesis proposes a method for dividing the sound sources used in military training and education simulations into frequency bands and filtering each band. Wavelet analysis is applied for the frequency division and denoising, and we implement an authoring tool for designing the wavelet-based filtering.
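
The wavelet-based band splitting and denoising described here can be illustrated with a single-level Haar transform and soft thresholding. The paper does not state which wavelet or tool it uses, so this is only a minimal sketch:

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar wavelet transform: split a signal into a
    low-frequency approximation band and a high-frequency detail band."""
    x = x[: len(x) // 2 * 2]                   # force even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar level exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Soft-threshold the detail band, a common wavelet denoising step."""
    a, d = haar_decompose(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_reconstruct(a, d)
```

Deeper decompositions repeat `haar_decompose` on the approximation band, giving the per-band filtering the paper describes.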

A HMM-based Method of Reducing the Time for Processing Sound Commands in Computer Games (컴퓨터 게임에서 HMM 기반의 명령어 신호 처리 시간 단축을 위한 방법)

  • Park, Dosaeng;Kim, Sangchul
    • Journal of Korea Game Society / v.16 no.2 / pp.119-128 / 2016
  • In computer games, most user interfaces are keyboards, mice, and touch screens. The total time for processing a sound command in a game is the sum of the input time and the recognition time. In this paper, we propose a method that takes only the prefixes of the input signals for sound commands instead of the whole signals, thereby reducing the total processing time. In our method, command sounds are recognized using Hidden Markov Models (HMMs), where separate HMMs are built for the whole input signals and for their prefix signals. We evaluated the proposed method on representative commands of platform games. The experiments show that the total processing time of input command signals is reduced without a significant drop in the recognition rate. This study will contribute to enhancing the versatility of user interfaces for computer games.
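
Prefix-based recognition of this kind scores an observation prefix under per-command HMMs with the forward algorithm and picks the best-scoring model. The 2-state discrete HMM parameters below are hypothetical:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm). pi: initial probs, A: transitions, B: emissions."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# Hypothetical 2-state HMM over a 2-symbol alphabet.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Scoring a short observation prefix; a recognizer would compare this
# value across the prefix HMMs of all candidate commands.
score = forward_log_likelihood([0, 1, 0], pi, A, B)
```

Because only a prefix is scored, the decision is available before the whole command has been spoken, which is the source of the time saving.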

Speech Recognition Interface in the Communication Environment (통신환경에서 음성인식 인터페이스)

  • Han, Tai-Kun;Kim, Jong-Keun;Lee, Dong-Wook
    • Proceedings of the KIEE Conference / 2001.07d / pp.2610-2612 / 2001
  • This study examines the recognition of the user's sound commands based on speech recognition and natural language processing, and develops a natural-language interface agent that can analyze the recognized commands. The agent consists of a speech recognizer and a semantic interpreter. The speech recognizer understands a spoken command and transforms it into character strings; the semantic interpreter analyzes the strings and creates the commands and questions to be transferred to the application program. We also consider the related problems, such as the ambiguity of natural language and the ambiguity and errors introduced by the speech recognizer. This kind of natural-language interface agent can be applied to the telephony environment involving all kinds of communication media, such as telephone, fax, and e-mail.
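
A semantic interpreter of the kind described maps recognized character strings to application commands. The rule patterns and command tuples below are invented for illustration and are not the paper's grammar:

```python
import re

# Hypothetical interpretation rules: (pattern over the recognized
# string, (medium, action) command passed to the application).
RULES = [
    (re.compile(r"\b(read|check)\b.*\bmail\b"), ("EMAIL", "READ")),
    (re.compile(r"\bsend\b.*\bfax\b"),          ("FAX", "SEND")),
    (re.compile(r"\bcall\b\s+\w+"),             ("PHONE", "CALL")),
]

def interpret(text):
    """Return the first (medium, action) command matching the recognized
    string, or None when the utterance is unknown or ambiguous."""
    for pattern, command in RULES:
        if pattern.search(text.lower()):
            return command
    return None
```

Returning `None` for unmatched strings is one simple way to surface the recognizer-error and ambiguity cases the abstract mentions, so the dialog layer can ask the user to repeat.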

Sound Improvement of Violin Playing Robot Applying Auditory Feedback

  • Jo, Wonse;Yura, Jargalbaatar;Kim, Donghan
    • Journal of Electrical Engineering and Technology / v.12 no.6 / pp.2378-2387 / 2017
  • Violinists learn to make better sounds by hearing and evaluating their own playing through extensive practice. This study proposes a new auditory feedback method that mimics this step and verifies its efficiency through experiments. Producing the desired violin sound quality is difficult without auditory feedback, even for an expert player. An algorithm for controlling the robot arm of a violin-playing robot is determined from the correlations among bowing speed, bowing force, and sound point, which determine the sound quality of a violin. The bowing speed is estimated from the control command of the robot arm, while the bowing force and the sound point are measured with a two-axis load cell and a photo interrupter, respectively. To improve the sound quality of the violin-playing robot, sound information is obtained by an auditory feedback system that applies the Short-Time Fourier Transform (STFT) to the violin's sounds. This study also proposes a Gaussian-Harmonic-Quality (GHQ) measure that uses the sound's clarity, accuracy, and harmonic structure to evaluate sound quality objectively. In the experiments, the auditory feedback system improved the performance quality of the robot by adjusting the bowing speed, bowing force, and sound point according to the GHQ evaluation of the robot's sounds.
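
The STFT step of such an auditory feedback loop can be sketched in a few lines. The frame length, hop size, and test tone below are assumptions, not the paper's settings:

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Short-Time Fourier Transform: Hann-windowed, half-overlapping
    frames -> complex spectrogram of shape (n_frames, frame_len//2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

# A 1 kHz tone sampled at 8 kHz concentrates energy in one frequency bin
# (bin = 1000 / (8000/256) = 32), which a quality measure could inspect.
fs = 8000
t = np.arange(2048) / fs
spec = stft(np.sin(2 * np.pi * 1000 * t))
peak_bin = int(np.argmax(np.abs(spec[0])))
```

A harmonic-structure measure like the described GHQ would examine how energy at such peaks relates to the energy between them, frame by frame.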

Implementation of Stereophonic Sound System Using Multiple Smartphones (여러 대의 스마트폰을 이용한 입체 음향 시스템 구현)

  • Kim, Ki-Jun;Myeong, Chang-Ho;Park, Hochong
    • Journal of Broadcast Engineering / v.19 no.6 / pp.810-818 / 2014
  • In this paper, we propose a stereophonic sound system using multiple smartphones. In conventional sound systems using smartphones, all devices play the same signal, so it is difficult to provide a true stereophonic effect. To solve this problem, we propose a novel sound system that can generate a virtual sound source at any location by having smartphones at different positions play different signals with amplitude panning. The proposed system can generate a more realistic stereophonic effect than the conventional one, and the sound effect can be controlled by the user's commands. We implemented the system on commercial smartphones and verified that it effectively provides the desired stereophonic effect.
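
Amplitude panning in this setting typically means constant-power gains applied to the copies played by the differently placed devices. A minimal two-channel sketch (the exact panning law used in the paper is not stated):

```python
import numpy as np

def pan_gains(position):
    """Constant-power amplitude panning: position 0.0 = hard left,
    1.0 = hard right; g_l**2 + g_r**2 == 1 keeps loudness constant."""
    g_left = np.cos(position * np.pi / 2)
    g_right = np.sin(position * np.pi / 2)
    return g_left, g_right

# Each smartphone would play its own gain-scaled copy of the source;
# position 0.25 places the virtual source left of center.
gl, gr = pan_gains(0.25)
```

Generalizing to more than two devices means distributing gains over the pair (or triple) of devices nearest the desired virtual location.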

A Study on Stable Motion Control of Humanoid Robot with 24 Joints Based on Voice Command

  • Lee, Woo-Song;Kim, Min-Seong;Bae, Ho-Young;Jung, Yang-Keun;Jung, Young-Hwa;Shin, Gi-Soo;Park, In-Man;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence / v.21 no.1 / pp.17-27 / 2018
  • We propose a new approach to controlling a biped robot's motion based on iterative learning of voice commands for the implementation of a smart factory. Real-time processing of the speech signal is very important for high-speed, precise automatic voice recognition. Recently, voice recognition has been used for intelligent robot control, artificial life, wireless communication, and IoT applications. To extract valuable information from the speech signal, make decisions, and obtain results, the data must be manipulated and analyzed. A basic method for extracting the features of the voice signal is to find the Mel-frequency cepstral coefficients (MFCCs): coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The reliability of voice commands for controlling the biped robot's motion is illustrated by computer simulation and by experiments with a biped walking robot with 24 joints.
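
The mel scale underlying MFCCs has a standard closed form. A small sketch of the Hz-mel conversion and mel-spaced band edges follows; the band count is arbitrary:

```python
import numpy as np

def hz_to_mel(f):
    """Standard mel-scale mapping: roughly linear below 1 kHz,
    logarithmic above, matching perceived pitch distance."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Band edges equally spaced in mel, the first step toward the mel
# filterbank whose log energies are cosine-transformed into MFCCs.
edges_hz = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 12))
```

The resulting edges cluster densely at low frequencies and spread out at high ones, which is what makes the cepstral features perceptually weighted.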

A study on the voice command recognition at the motion control in the industrial robot (산업용 로보트의 동작제어 명령어의 인식에 관한 연구)

  • 이순요;권규식;김홍태
    • Journal of the Ergonomics Society of Korea / v.10 no.1 / pp.3-10 / 1991
  • The teach pendant and keyboard have been used as input devices for control commands in human-robot systems, but many problems occur when the user is a novice. A speech recognition system is therefore required for communication between a human and the robot. In this study, Korean voice commands, eight robot commands, and ten digits are described based on broad phonetic analysis. Applying broad phonetic analysis, the phonemes of the voice commands are divided into phoneme groups with similar features, such as plosive, fricative, affricate, nasal, and glide sounds. The feature parameters and their ranges for detecting the phoneme groups are then found by the minimax method. Classification rules consist of combinations of the feature parameters, such as zero-crossing rate (ZCR), log energy (LE), up-and-down (UD), and formant frequency, together with their ranges. Voice commands were recognized by these classification rules, with a recognition rate of over 90 percent in this experiment. The experiment also showed that the recognition rate for digits was better than that for robot commands.
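
Two of the feature parameters named above, zero-crossing rate and log energy, are easy to compute per frame. The frame length and test signals below are illustrative, not the paper's data:

```python
import numpy as np

def frame_features(frame, eps=1e-10):
    """Zero-crossing rate and log energy of one speech frame, two of the
    parameters used to separate broad phoneme classes."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    log_energy = np.log(np.sum(frame ** 2) + eps)
    return zcr, log_energy

# A fricative-like noise frame has a far higher ZCR than a vowel-like
# low-frequency tone, which is what the classification rules exploit.
rng = np.random.default_rng(1)
noise = rng.standard_normal(400)
tone = np.sin(2 * np.pi * 120 * np.arange(400) / 8000)
```

Thresholding such features per frame is what the minimax-derived ranges in the paper amount to.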


Speaker Tracking System for Autonomous Mobile Robot (자율형 이동로봇을 위한 전방위 화자 추종 시스템)

  • Lee, Chang-Hoon;Kim, Yong-Hoh
    • Proceedings of the KIEE Conference / 2002.11c / pp.142-145 / 2002
  • This paper describes an omni-directional speaker tracking system for a mobile robot interface in a real environment. Its purpose is to robustly detect a sound source over 360 degrees and to recognize voice commands at a long distance (60-300 cm). We consider spatial features, namely the relation between position and interaural time differences, and realize the speaker tracking system using a fuzzy inference process based on inference rules generated from these spatial features.
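
Under a far-field assumption, the interaural time difference between two microphones maps to a source azimuth in closed form. The microphone spacing below is an assumption, since the paper's geometry is not given:

```python
import numpy as np

def itd_to_azimuth(itd, mic_distance=0.2, c=343.0):
    """Convert an interaural time difference (seconds) between two
    microphones into a source azimuth (degrees), assuming a far-field
    source: sin(theta) = itd * c / d, clipped to the valid range."""
    s = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

A single pair only resolves azimuth up to a front-back ambiguity; the fuzzy rules over multiple microphone pairs are what give the system its full 360-degree coverage.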
