• Title/Summary/Keyword: 음성기반 (voice-based)

Search Results: 2,238, Processing Time: 0.03 seconds

Building of an Intelligent Ship's Steering Control System Based on Voice Instruction Gear Using Fuzzy Inference (퍼지추론에 의한 지능형 음성지시 조타기 제어 시스템의 구축)

  • 서기열;박계각
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.8
    • /
    • pp.1809-1815
    • /
    • 2003
  • This paper presents a human-friendly system using fuzzy inference as part of a study to realize an intelligent ship. We also build an intelligent ship's steering system that takes advantage of speech recognition, a component of the human-friendly interface, which can reduce the labor required aboard ship. To design the voice-instruction-based ship's steering gear control system, we first build a voice-instruction-based learning (VIBL) system founded on speech recognition and an intelligent learning method. Next, we design a quartermaster's operation model by fuzzy inference and construct a PC-based remote control system. Finally, we apply the proposed control system to a miniature ship and verify its effectiveness.
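The fuzzy-inference step above can be sketched in miniature. The following is a hypothetical illustration, not the paper's actual rule base: triangular membership functions over the heading error and a weighted-average defuzzification produce a rudder angle, the kind of mapping a quartermaster's operation model encodes.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def rudder_angle(heading_error_deg):
    """Map a heading error (deg) to a rudder angle with three illustrative rules."""
    rules = [
        (tri(heading_error_deg, -30, -15, 0), -20.0),  # error to port -> port rudder
        (tri(heading_error_deg, -15, 0, 15), 0.0),     # on course -> hold rudder
        (tri(heading_error_deg, 0, 15, 30), 20.0),     # error to starboard -> starboard rudder
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification

print(rudder_angle(0.0))    # 0.0 -> hold course
print(rudder_angle(10.0))   # positive -> starboard rudder
```

The rule breakpoints and output angles here are arbitrary illustrative values; the paper learns the operation model from quartermaster behavior rather than hand-coding it.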

Multi-level Skip Connection for Nested U-Net-based Speech Enhancement (중첩 U-Net 기반 음성 향상을 위한 다중 레벨 Skip Connection)

  • Seorim, Hwang;Joon, Byun;Junyeong, Heo;Jaebin, Cha;Youngcheol, Park
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.840-847
    • /
    • 2022
  • In deep neural network (DNN)-based speech enhancement, the use of global and local information from the input speech is closely related to model performance. Recently, a nested U-Net structure that exploits global and local input information through multiple scales has been proposed. This nested U-Net has also been applied to speech enhancement and showed outstanding performance. However, the single skip connection used in nested U-Nets must be modified for the nested structure. In this paper, we propose a multi-level skip connection (MLS) to optimize the performance of the nested U-Net-based speech enhancement algorithm. The proposed MLS showed excellent performance improvements over the standard skip connection in various objective evaluation metrics, which means that the MLS can optimize the performance of the nested U-Net-based speech enhancement algorithm. In addition, the final proposed model showed superior performance compared to other DNN-based speech enhancement models.

Implementation of Speaker Independent Speech Recognizer in Noise Environment based on DSP (DSP기반의 잡음환경에 강인한 화자 독립 음성 인식기 구현)

  • 박진영;권호민;박정원;김창근;허강인
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.69-72
    • /
    • 2003
  • In this paper, we implemented a noise-robust speech recognition system using a general-purpose DSP. The implemented system uses TI's general-purpose DSP, the TMS320C32, together with a speech codec for real-time speech input and an extended external interface to output recognition results. In addition, we review the parameters used in conventional speech recognition systems, propose noise-robust speech feature parameters based on ICA, and perform comparative performance experiments. A speech recognition system was implemented with the proposed ICA parameters. Finally, as an application example of the standalone speech recognition system, it was applied to a radio-controlled car and tested.


Design of VoiceXML authoring tool for Voice Information Service (음성정보 서비스를 위한 VoiceXML 저작도구 설계)

  • 김성범;홍현술;한성국
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10d
    • /
    • pp.172-174
    • /
    • 2002
  • With the advent of VoiceXML 1.0, a voice markup language, existing voice information technology has gained far greater opportunities for development. However, while the technology for voice information services has advanced considerably, research on applying voice markup languages to these services remains insufficient. This paper therefore examines the necessity and underlying technologies of VoiceXML as a field of voice research, establishes design requirements for voice information services based on it, designs the components of an authoring tool by function, and verifies the design with a prototype.


Development of medical/electrical convergence software for classification between normal and pathological voices (장애 음성 판별을 위한 의료/전자 융복합 소프트웨어 개발)

  • Moon, Ji-Hye;Lee, JiYeoun
    • Journal of Digital Convergence
    • /
    • v.13 no.12
    • /
    • pp.187-192
    • /
    • 2015
  • Software that can analyze disordered speech would be highly applicable across many converged fields. This paper implements a user-friendly program based on CART (Classification and Regression Trees) analysis to distinguish between normal and pathological voices using a combination of acoustical and HOS (higher-order statistics) parameters, a convergence of medical information and signal processing. The acoustical parameters are Jitter (%) and Shimmer (%). The proposed HOS parameters are the means and variances of skewness (MOS and VOS) and kurtosis (MOK and VOK). The database consists of 53 normal and 173 pathological voices distributed by Kay Elemetrics. When the acoustical and proposed parameters are used together to generate the decision tree, the average accuracy is 83.11%. Finally, we developed a program with a more user-friendly interface and framework.
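The HOS parameters described above (MOS, VOS, MOK, VOK) are the means and variances of frame-wise skewness and kurtosis. A minimal sketch with numpy, with the frame length chosen arbitrarily for illustration:

```python
import numpy as np

def hos_features(signal, frame_len=160):
    """Means and variances of frame-wise skewness and kurtosis
    (MOS, VOS, MOK, VOK), computed from standardized moments."""
    n_frames = len(signal) // frame_len
    skews, kurts = [], []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        mu, sigma = frame.mean(), frame.std()
        if sigma == 0:
            continue                        # skip silent/constant frames
        z = (frame - mu) / sigma
        skews.append(np.mean(z ** 3))       # third standardized moment
        kurts.append(np.mean(z ** 4))       # fourth standardized moment
    skews, kurts = np.array(skews), np.array(kurts)
    return {"MOS": skews.mean(), "VOS": skews.var(),
            "MOK": kurts.mean(), "VOK": kurts.var()}

# Sanity check on synthetic data: a Gaussian signal has skewness near 0
# and (non-excess) kurtosis near 3.
rng = np.random.default_rng(0)
feats = hos_features(rng.standard_normal(16000))
print(feats)
```

This sketch only computes the features; in the paper they feed a CART classifier together with Jitter (%) and Shimmer (%).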

Speech Recognition based on Environment Adaptation using SNR Mapping (SNR 매핑을 이용한 환경적응 기반 음성인식)

  • Chung, Yong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.5
    • /
    • pp.543-548
    • /
    • 2014
  • The multiple-model based speech recognition framework (MMSR) has been known to be very successful in speech recognition. Since it uses multiple hidden Markov model (HMM) sets that correspond to various noise types and signal-to-noise ratio (SNR) values, the selected acoustic model can closely match the test noisy speech. However, since the number of HMM sets is limited in practical use, an acoustic mismatch still remains. In this study, we experimentally determined the optimal SNR mapping between the test noisy speech and the HMM set to mitigate this mismatch. Improved performance was obtained by employing the SNR mapping instead of directly using the SNR estimated from the test noisy speech. When we applied the proposed method to the MMSR, experimental results on the Aurora 2 database showed relative word error rate reductions of 6.3% and 9.4% compared to the conventional MMSR and multi-condition training (MTR), respectively.
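The baseline that the paper's experimentally determined mapping replaces, namely selecting the HMM set whose training SNR is nearest to the SNR estimated from the test speech, can be sketched as follows. The model SNR values and the mapping table are illustrative assumptions, not values from the paper:

```python
import numpy as np

MODEL_SNRS = [0, 5, 10, 15, 20]   # hypothetical SNRs (dB) the HMM sets were trained on

def estimate_snr_db(speech_power, noise_power):
    """Estimate SNR in dB from speech and noise power."""
    return 10.0 * np.log10(speech_power / noise_power)

def select_hmm_set(est_snr_db, mapping=None):
    """Choose an HMM set for decoding. `mapping` can override the naive
    nearest-SNR rule with an experimentally determined table, which is
    the paper's proposal."""
    if mapping is not None and round(est_snr_db) in mapping:
        return mapping[round(est_snr_db)]
    return min(MODEL_SNRS, key=lambda s: abs(s - est_snr_db))

print(select_hmm_set(estimate_snr_db(10.0, 1.0)))  # 10 dB estimate -> 10 dB HMM set
```

The paper's contribution is that the best-performing mapping from estimated SNR to HMM set is not necessarily the nearest one, so a learned table (the `mapping` argument here) is used instead.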

PCMM-Based Feature Compensation Method Using Multiple Model to Cope with Time-Varying Noise (시변 잡음에 대처하기 위한 다중 모델을 이용한 PCMM 기반 특징 보상 기법)

  • 김우일;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.473-480
    • /
    • 2004
  • In this paper, we propose an effective speech-model-based feature compensation scheme for robust speech recognition. The proposed method is based on the parallel combined mixture model (PCMM). Previous PCMM work requires a highly sophisticated procedure to estimate the combined mixture model so as to reflect time-varying noisy conditions at every utterance. The proposed schemes cope with time-varying background noise by interpolating multiple mixture models. We apply the 'data-driven' method to the PCMM for more reliable model combination and introduce a frame-synched version for a posteriori estimation of the environment. To reduce the computational complexity due to the multiple models, we propose a mixture-sharing technique: statistically similar Gaussian components are selected, and smoothed versions are generated for sharing. Performance is examined on Aurora 2.0 and on a speech corpus recorded while driving a car. The experimental results indicate that the proposed schemes are effective in realizing robust speech recognition and in reducing computational complexity under both simulated environments and real-life conditions.
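The multiple-model interpolation idea can be illustrated with a toy example. This is not the authors' code; it simply shows a weighted interpolation of per-environment mixture parameters, with fixed weights standing in for the a posteriori environment estimates:

```python
import numpy as np

# Hypothetical mixture means, pre-combined for two noise environments
# (2 Gaussian components x 2 feature dimensions each).
means_env = np.array([
    [[0.0, 0.0], [1.0, 1.0]],   # mixture means combined for noise env A
    [[2.0, 2.0], [3.0, 3.0]],   # mixture means combined for noise env B
])

def interpolate_models(means, weights):
    """Weighted interpolation of per-environment mixture means.
    In the paper the weights would track the time-varying noise."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize the posteriors
    return np.tensordot(weights, means, axes=1)  # weighted sum over environments

blended = interpolate_models(means_env, [0.75, 0.25])
print(blended)   # 75% env A + 25% env B
```

A full PCMM would interpolate covariances and mixture weights as well and re-estimate the environment posteriors frame by frame; this sketch shows only the mean interpolation step.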

A Study on the Realization of Wireless Home Network System Using High-performance Speech Recognition in Variable Position (가변위치 고음성인식 기술을 이용한 무선 홈 네트워크 시스템 구현에 관한 연구)

  • Yoon, Jun-Chul;Choi, Sang-Bang;Park, Chan-Sub;Kim, Se-Yong;Kim, Ki-Man;Kang, Suk-Youb
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.4
    • /
    • pp.991-998
    • /
    • 2010
  • In realizing a wireless home network system that uses speech recognition indoors, background noise and reverberation are the two main causes of performance degradation in the voice recognition system. In this study, we realize a home network system resistant to reverberation and background noise by using a voice-section detection method based on spectral entropy in the indoor recognition environment. Spectral subtraction can reduce the effect of reverberation and remove noise independent of the voice signal by eliminating reverberation-distorted components from the spectrum. Effective spectral subtraction requires accurate separation of voice and silent sections, and for this the entropy-based voice-section detection method is applied to improve performance. Experiments in an indoor recognition environment were carried out to measure the command recognition rate. The results show that the command recognition rate improved in both static and reverberant room conditions when the spectral-entropy-based voice-section detection method was used.
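Spectral entropy, the quantity the voice-section detector above is based on, can be computed per frame as the entropy of the normalized magnitude spectrum. A minimal sketch, assuming a simple frame-wise comparison; voiced-like frames with concentrated spectra score lower than flat noise:

```python
import numpy as np

def spectral_entropy(frame):
    """Normalized spectral entropy of one frame: treat the power spectrum
    as a probability distribution and compute its entropy, scaled to [0, 1]."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    p = spec / spec.sum()
    p = p[p > 0]                                      # avoid log(0)
    return -np.sum(p * np.log(p)) / np.log(len(spec)) # 0 = tonal, 1 = flat

rng = np.random.default_rng(1)
t = np.arange(512) / 8000.0
tone = np.sin(2 * np.pi * 440 * t)        # voiced-like: energy concentrated in few bins
noise = rng.standard_normal(512)          # white noise: roughly flat spectrum
print(spectral_entropy(tone), spectral_entropy(noise))
```

A detector would threshold this value per frame (low entropy suggesting speech) and feed the resulting voice/silence segmentation to the spectral subtraction stage.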

Combining deep learning-based online beamforming with spectral subtraction for speech recognition in noisy environments (잡음 환경에서의 음성인식을 위한 온라인 빔포밍과 스펙트럼 감산의 결합)

  • Yoon, Sung-Wook;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.439-451
    • /
    • 2021
  • We propose a deep learning-based beamformer combined with spectral subtraction for continuous speech recognition in noisy environments. Conventional beamforming systems have mostly been evaluated on pre-segmented audio signals, typically generated by continuously mixing speech and noise on a computer. However, since speech utterances occur sparsely along the time axis in real environments, conventional beamforming systems degrade when noise-only signals without speech are input. To alleviate this drawback, we combine an online beamforming algorithm with spectral subtraction. We construct a Continuous Speech Enhancement (CSE) evaluation set to evaluate the online beamforming algorithm in noisy environments; it is built by mixing sparsely occurring speech utterances from the CHiME3 evaluation set with continuously played CHiME3 background noise and background music from MUSDB. Using a Kaldi-based toolkit and the Google web speech recognizer as speech recognition back-ends, we confirm that the proposed online beamforming algorithm with spectral subtraction outperforms the baseline online algorithm.
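Spectral subtraction, the component combined with the beamformer above, can be sketched per frame: subtract an estimated noise magnitude spectrum from the noisy magnitude and floor the result to avoid negative magnitudes. The over-subtraction factor and spectral floor below are illustrative assumptions:

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, alpha=1.0, floor=0.01):
    """Single-frame magnitude spectral subtraction with a spectral floor."""
    Y = np.fft.rfft(noisy)
    mag, phase = np.abs(Y), np.angle(Y)
    noise_mag = np.abs(np.fft.rfft(noise_est))
    cleaned = np.maximum(mag - alpha * noise_mag, floor * mag)  # floor avoids negatives
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(noisy))

rng = np.random.default_rng(2)
t = np.arange(512) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * rng.standard_normal(512)
noisy = clean + noise

# Oracle noise estimate for illustration; in practice the estimate comes
# from noise-only sections, which is why sparse real speech matters here.
enhanced = spectral_subtraction(noisy, noise)
print(np.mean((enhanced - clean) ** 2), np.mean((noisy - clean) ** 2))
```

In the paper's setting, spectral subtraction helps precisely because real recordings contain long noise-only stretches from which the noise spectrum can be tracked between utterances.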

Speech enhancement method based on feature compensation gain for effective speech recognition in noisy environments (잡음 환경에 효과적인 음성인식을 위한 특징 보상 이득 기반의 음성 향상 기법)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.1
    • /
    • pp.51-55
    • /
    • 2019
  • This paper proposes a speech enhancement method that utilizes a feature compensation gain for robust speech recognition in noisy environments. The gain is obtained from the PCGMM (Parallel Combined Gaussian Mixture Model)-based feature compensation method employing variational model composition. The experimental results show that the proposed method significantly outperforms conventional front-end algorithms and our previous work over various background noise types and SNR (Signal-to-Noise Ratio) conditions in a mismatched ASR (Automatic Speech Recognition) setting. The computational complexity is significantly reduced by employing a noise model selection technique while maintaining the speech recognition performance at a similar level.