• Title/Summary/Keyword: human audio voice

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don; Choi, Jong-Suk; Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems, v.5 no.1, pp.61-69, 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems in human-robot interaction so as to compensate for speaker localization errors and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.
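
The abstract does not spell out the paper's VAD or localization algorithms, so the following is only a minimal Python sketch of the two audio-side building blocks it combines: an energy-based voice activity gate and direction-of-arrival estimation from a microphone pair via cross-correlation. The threshold, microphone spacing, and the energy-based VAD itself are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def is_voice_active(frame, threshold=0.5):
    """Crude energy gate: flag frames whose mean power exceeds a threshold."""
    return np.mean(np.asarray(frame, dtype=np.float64) ** 2) > threshold

def estimate_doa(mic_a, mic_b, fs, mic_distance=0.1):
    """Estimate direction of arrival (radians) from the inter-mic delay."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)   # delay in samples
    tdoa = lag / fs                            # delay in seconds
    # Clip to the physically possible range before taking arcsin.
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Example: a synthetic source arriving 3 samples earlier at the second mic.
fs = 16000
src = np.random.randn(1024)
mic_a, mic_b = src, np.roll(src, -3)
if is_voice_active(mic_a):
    print(f"speaker bearing ~ {np.degrees(estimate_doa(mic_a, mic_b, fs)):.1f} deg")
```

In the paper itself this audio estimate is fused with camera-based face tracking; a real system would also need framing, smoothing, and a calibrated three-microphone geometry.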

A Study on Audio/Voice Color Processing Technique (오디오/음성 컬러 처리 기술 연구)

  • Kim Kwangki; Kim Sang-Jin; Son BeakKwon; Hahn Minsoo
    • Proceedings of the KSPS conference, 2003.05a, pp.153-156, 2003
  • In this paper, we studied advanced audio/voice information processing techniques with the aim of producing more human-friendly audio and voice; the work is still in its beginning stage. First, we applied well-known time-domain methods such as moving average, differentiation, interpolation, and decimation, together with some variations of them and envelope contour modification. We also adopted the MOS test to evaluate subjective listening quality. From a long-term viewpoint, the user's preference, mood, and environmental conditions will be considered, and we hope our future technique can adapt speech and audio signals to them automatically.
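
Since the abstract names its time-domain operations explicitly, a short NumPy sketch of each is easy to give; the window length, rates, and test signal below are arbitrary choices rather than values from the paper, and envelope contour modification is omitted because the paper does not define it.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)                 # 440 Hz test tone

smoothed = np.convolve(x, np.ones(5) / 5, mode="same")  # moving average
diff = np.diff(x, prepend=x[0])                         # differentiation (first difference)
decimated = x[::2]                                      # decimation by 2 (no anti-alias filter)
# Linear interpolation back to the original sample positions:
upsampled = np.interp(np.arange(len(x)), np.arange(0, len(x), 2), decimated)

print(smoothed.shape, diff.shape, decimated.shape, upsampled.shape)
```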

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술)

  • Park, Dongkeon; Kang, Kyeong-Min; Bae, Jin-Woo; Han, Ji-Hyeong
    • The Journal of Korea Robotics Society, v.14 no.1, pp.22-30, 2019
  • For effective human-robot interaction, a robot not only needs to understand the current situational context well, but also needs to convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it in voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. Thus, this paper proposes a deep-learning-based method for generating audio descriptions from robot vision. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from robot vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
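
The abstract describes the architecture only as a two-stage pipeline (vision to sentence, then sentence to voice) without naming the trained models, so the sketch below reproduces the pipeline shape with off-the-shelf substitutes: a public Hugging Face image-captioning model and a local TTS engine. The model name and input file are illustrative stand-ins, not the paper's.

```python
from transformers import pipeline  # pip install transformers
import pyttsx3                     # pip install pyttsx3

# Stage 1: robot vision -> natural language sentence (substitute model).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def describe_and_speak(image_path):
    caption = captioner(image_path)[0]["generated_text"]
    # Stage 2: natural language sentence -> voice.
    engine = pyttsx3.init()
    engine.say(caption)
    engine.runAndWait()
    return caption

print(describe_and_speak("robot_view.jpg"))  # hypothetical camera frame
```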

Audio Transformation Filter for Multimedia I/O Server (멀티미디어 입출력 서버를 위한 오디오 변환 필터)

  • Cho, Byoung-Ho; Jang, Yu-Tak; Kim, Woo-Jin; Kim, Ki-Jong; Yoo, Ki-Young
    • Journal of KIISE: Computing Practices and Letters, v.6 no.6, pp.580-587, 2000
  • In this paper, we present the design of a digital filter that converts a hummed melody into MIDI data, and a method for adapting it to a distributed multimedia I/O server. MuX uses device-independent DLMs (Dynamic Linking Modules) to interface with various I/O devices, and has a wave-form audio DLM and a MIDI DLM for audio interfaces. To expand the audio device interfacing ability of the MuX system, we designed and implemented a filter that transforms the human voice into MIDI messages. Since MIDI data can now be entered by human voice in addition to MIDI files and MIDI instruments, even someone who is not good at playing instruments can generate MIDI data, which enables our media interfaces to be used in various applications.
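
The filter design itself is not given in the abstract, but the core humming-to-MIDI step can be sketched: estimate the fundamental frequency of a voiced frame (autocorrelation is used here as a stand-in for the paper's filter) and map it to a MIDI note number with the standard A4 = 440 Hz = note 69 convention.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=80.0, fmax=1000.0):
    """Fundamental frequency of a frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # search plausible pitch lags
    lag = lo + np.argmax(corr[lo:hi])
    return fs / lag

def freq_to_midi(freq):
    """Standard mapping: A4 = 440 Hz = MIDI note 69."""
    return int(round(69 + 12 * np.log2(freq / 440.0)))

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
hum = np.sin(2 * np.pi * 262 * t)             # roughly a hummed C4
print(freq_to_midi(estimate_pitch(hum, fs)))  # -> 60 (middle C)
```

A full MIDI DLM would additionally segment notes in time and emit note-on/note-off messages with velocities.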

Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface (차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점)

  • Kim, Kihyun; Kwon, Seong-Geun
    • Journal of Korea Multimedia Society, v.25 no.2, pp.166-175, 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is in a similar situation. In some cases, element technologies from areas referred to as the metaverse are being adopted, such as the 3D-animated digital cockpit, around view, and voice AI. Through the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-mouse interaction to the touch screen. The core area has been the graphical user interface (GUI), and recently the audio user interface (AUI) has partially replaced the GUI. Because it is easy to access and intuitive for the user, the AUI is quickly becoming a common part of the in-vehicle experience (IVE) in particular. The benefits of an AUI include freeing the driver's eyes and hands, using fewer screens, lowering interaction costs, being more emotional and personal, and being effective for people with low vision. Nevertheless, when and where to apply a GUI or an AUI call for different approaches, because some information is easier to process visually, while in other cases an AUI is more suitable. This study proposes actively applying the AUI in the near future, based on the context of various in-vehicle scenes, in order to improve the IVE.

Diagnosis of Parkinson's disease based on audio voice using wav2vec (Wav2vec을 이용한 오디오 음성 기반의 파킨슨병 진단)

  • Yoon, Hee-Jin
    • Journal of Digital Convergence, v.19 no.12, pp.353-358, 2021
  • Parkinson's disease is the second most common degenerative brain disease in old age after Alzheimer's. Its symptoms, such as hand tremor and slowed movement and cognitive function, reduce the quality of daily life. Early diagnosis of Parkinson's disease can slow its progression. To diagnose Parkinson's disease early, we implemented an algorithm that extracts features using wav2vec and determines the presence or absence of Parkinson's disease with a deep learning model (ANN). In our experiments the accuracy was 97.47%, better than results reported for diagnosing Parkinson's disease with existing neural networks. Working directly from the audio voice file simplified the experimental process while yielding improved results.
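
The abstract names wav2vec features and an ANN but not the exact encoder variant, pooling, or network shape; those below are assumptions. The sketch uses a pretrained Wav2Vec2 encoder with mean pooling over time, followed by a small untrained classifier head, just to show the pipeline's shape.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model  # pip install transformers torch

encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

classifier = nn.Sequential(          # simple ANN head; layer sizes assumed
    nn.Linear(768, 128), nn.ReLU(),
    nn.Linear(128, 2),               # Parkinson's vs. healthy
)

def diagnose(waveform_16k: torch.Tensor) -> torch.Tensor:
    """waveform_16k: (batch, samples) mono audio sampled at 16 kHz."""
    with torch.no_grad():
        hidden = encoder(waveform_16k).last_hidden_state  # (batch, frames, 768)
    features = hidden.mean(dim=1)    # mean-pool the frame features
    return classifier(features).softmax(dim=-1)

print(diagnose(torch.randn(1, 16000)))  # one second of dummy audio
```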

Voice Activity Detection with Run-Ratio Parameter Derived from Runs Test Statistic

  • Oh, Kwang-Cheol
    • Speech Sciences, v.10 no.1, pp.95-105, 2003
  • This paper describes a new parameter for voice activity detection, which serves as a front-end component for automatic speech recognition systems. The new parameter, called the run-ratio, is derived from the runs test statistic used in the statistical test for the randomness of a given sequence; for a random sequence, the value of the parameter is about 1. To apply the run-ratio to voice activity detection, the samples of an input audio signal are converted into binary sequences of positive and negative values. The silence region of the audio signal can then be regarded as a random sequence, so its run-ratio is about 1, whereas the run-ratio is far lower than 1 for voiced regions and higher than 1 for fricative sounds. The parameter can therefore discriminate speech signals from background sounds. The proposed voice activity detector outperformed a conventional energy-based detector in terms of error mean and variance, small deviation from true speech boundaries, and a low chance of missing real utterances.
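
The abstract is explicit enough to sketch the parameter directly: binarize each frame by sign, count runs, and normalize by the expected run count. The runs-test expectation E[R] = 2·n1·n2/n + 1 is the standard formula; whether the paper normalizes in exactly this way is an assumption.

```python
import numpy as np

def run_ratio(frame):
    """Observed runs divided by the runs-test expectation (~1 if random)."""
    signs = np.sign(frame) >= 0                      # binary sequence
    observed = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    n1 = np.count_nonzero(signs)                     # positives
    n2 = len(signs) - n1                             # negatives
    expected = 2.0 * n1 * n2 / len(signs) + 1.0
    return observed / expected

fs = 16000
t = np.arange(0, 0.02, 1 / fs)                       # 20 ms frame
print(run_ratio(np.random.randn(len(t))))            # noise/silence -> about 1
print(run_ratio(np.sin(2 * np.pi * 150 * t)))        # voiced-like tone -> far below 1
```

A fricative-like high-frequency signal alternates sign more often than chance, pushing the ratio above 1, which matches the behavior described above.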

Research on Machine Learning Rules for Extracting Audio Sources in Noise

  • Kyoung-ah Kwon
    • International Journal of Advanced Culture Technology, v.12 no.3, pp.206-212, 2024
  • This study presents five selection rules for training algorithms to extract audio sources from noise: Dynamics, Roots, Tonal Balance, Tonal-Noisy Balance, and Stereo Width. The suitability of each rule for sound extraction was determined by spectrogram analysis using various types of sample sources, such as environmental sounds, musical instruments, and the human voice, as well as white, brown, and pink noise with sine waves. The training area of the algorithm includes both melody and beat, and with these rules the algorithm can analyze which specific audio sources are contained in the given noise and extract them. The results of this study are expected to improve the accuracy of audio source extraction and to enable automated sound clip selection, providing a new methodology for sound processing and audio source generation using noise.
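
The five rules are not formally defined in the abstract. As one plausible proxy for the Tonal-Noisy Balance rule, spectral flatness (the geometric-to-arithmetic mean ratio of the power spectrum) separates noise-like from tonal content; this choice of measure is an assumption, not the paper's definition.

```python
import numpy as np

def spectral_flatness(x, eps=1e-12):
    """Geometric mean / arithmetic mean of the power spectrum."""
    power = np.abs(np.fft.rfft(x)) ** 2 + eps
    return np.exp(np.mean(np.log(power))) / np.mean(power)

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
print(spectral_flatness(np.random.randn(len(t))))      # white noise -> relatively high
print(spectral_flatness(np.sin(2 * np.pi * 440 * t)))  # pure tone -> near 0
```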

Expected Matching Score Based Document Expansion for Fast Spoken Document Retrieval (고속 음성 문서 검색을 위한 Expected Matching Score 기반의 문서 확장 기법)

  • Seo, Min-Koo; Jung, Gue-Jun; Oh, Yung-Hwan
    • Proceedings of the KSPS conference, 2006.11a, pp.71-74, 2006
  • Much work has been done on retrieving audio segments that contain human speech without captions. To retrieve newly coined words and proper nouns, subwords are commonly used as indexing units in conjunction with query or document expansion. Among these approaches, document expansion with subwords has the serious drawback of a large computational overhead. Therefore, in this paper we propose Expected Matching Score based document expansion, which effectively reduces the computational overhead without much loss in retrieval precision. Experiments showed a 13.9-fold speed-up at a loss of only 0.2% in retrieval precision.
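
The abstract does not reproduce the Expected Matching Score formula, so the sketch below only illustrates the general shape of subword document expansion: augment a spoken document's 1-best subword index with alternative hypotheses ranked by a recognizer-derived score, keeping the top-k to bound the overhead. The subwords and scores are hypothetical stand-ins.

```python
from collections import Counter

def expand_document(best_subwords, scored_hypotheses, top_k=10):
    """best_subwords: 1-best subword sequence from the recognizer.
    scored_hypotheses: {subword: score}, a stand-in for expected
    matching scores of alternative hypotheses."""
    index = Counter(best_subwords)
    extras = sorted(scored_hypotheses.items(), key=lambda kv: -kv[1])[:top_k]
    for subword, score in extras:
        index[subword] += score        # fractional counts for hypotheses
    return index

doc = ["seo", "ul", "yeok"]                        # hypothetical subword units
alts = {"su": 0.7, "wool": 0.4, "nyeok": 0.2}
print(expand_document(doc, alts, top_k=2))
```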

Data augmentation in voice spoofing problem (데이터 증강기법을 이용한 음성 위조 공격 탐지모형의 성능 향상에 대한 연구)

  • Choi, Hyo-Jung; Kwak, Il-Youp
    • The Korean Journal of Applied Statistics, v.34 no.3, pp.449-460, 2021
  • ASVspoof 2017 deals with the detection of replay attacks and aims to distinguish real human voices from fake ones. A spoofed voice here is one that reproduces the original voice through different types of microphones and speakers. Data augmentation has been actively studied for image data, and several studies have attempted data augmentation for voice. However, few attempts have been made to augment data for voice replay attacks, so this paper explores how audio modification through data augmentation techniques affects the detection of replay attacks. A total of seven data augmentation techniques were applied, and among them, dynamic value change (DVC) and pitch shifting helped improve performance. DVC and pitch shifting improved the base model's EER by about 8%, and DVC in particular showed a noticeable accuracy improvement in some of the 57 replay configurations. The greatest increase was achieved in RC53, where DVC improved the base model's accuracy by approximately 45%. High-end recording and playback devices that were previously difficult to detect were identified well. Based on this study, we find that the DVC and pitch data augmentation techniques help improve performance in the voice spoofing detection problem.
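
The abstract does not define DVC precisely, so the sketch below interprets it as a slowly varying random gain envelope, which is an assumption; the pitch augmentation uses librosa's standard pitch shifter. The demo clip is librosa's bundled example audio, not the ASVspoof data.

```python
import numpy as np
import librosa  # pip install librosa

def dynamic_value_change(y, n_anchors=8, min_gain=0.5, max_gain=1.5):
    """Assumed DVC variant: multiply by a slowly varying random gain envelope."""
    anchors = np.random.uniform(min_gain, max_gain, size=n_anchors)
    envelope = np.interp(np.arange(len(y)),
                         np.linspace(0, len(y) - 1, num=n_anchors), anchors)
    return y * envelope

y, fs = librosa.load(librosa.ex("trumpet"))             # demo clip; any utterance works
augmented = [
    dynamic_value_change(y),
    librosa.effects.pitch_shift(y, sr=fs, n_steps=2),   # shift up two semitones
]
print([a.shape for a in augmented])
```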