• Title/Summary/Keyword: Speech Processing


FPGA Implementation of Speech Processor for Cochlear Implant (청각보철장치를 위한 어음 발췌기의 FPGA 구현)

  • Park, S.J.; Hong, M.S.; Shin, J.I.; Park, S.H.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.163-164 / 1998
  • In this paper, the digital speech processing part of a cochlear implant for patients with sensorineural hearing loss is implemented and simulated. We implement the speech processor by dividing it into three blocks: a filterbank, a pitch detector, and a bandmapping unit. From the results, we conclude that the digital speech processing algorithm can be implemented entirely in an FPGA, which means that the cochlear implant can be made very small.

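The three-block decomposition above maps onto a classic channel-vocoder pipeline. Below is a minimal NumPy/SciPy sketch of what the filterbank and bandmapping stages compute; the channel count, band edges, envelope cutoff, and quantization depth are illustrative assumptions, not values from the paper, and the pitch-detect block is omitted.

```python
# Sketch of a filterbank + bandmapping front end for a cochlear implant
# speech processor (channel count and band edges are assumptions).
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000                                                   # sampling rate (Hz), assumed
BAND_EDGES = np.logspace(np.log10(200), np.log10(7000), 9)   # 8 channels

def filterbank_energies(x):
    """Split x into bands and return the smoothed envelope per channel."""
    envelopes = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        band = lfilter(b, a, x)
        # Envelope: full-wave rectification + low-pass smoothing
        b_lp, a_lp = butter(2, 400 / (FS / 2))
        envelopes.append(lfilter(b_lp, a_lp, np.abs(band)))
    return np.stack(envelopes)           # shape: (channels, samples)

def bandmap(envelopes, n_levels=256):
    """Map channel envelopes to discrete electrode stimulation levels."""
    env = envelopes / (envelopes.max() + 1e-12)
    return np.round(env * (n_levels - 1)).astype(int)

t = np.arange(FS) / FS
x = np.sin(2 * np.pi * 440 * t)          # 1 s test tone
levels = bandmap(filterbank_energies(x))
print(levels.shape, levels.max())
```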

A Research on Speech Processing and Coding Strategy for Cochlear Implants (청각 장애인을 위한 음성 신호의 자극패턴 추출에 관한 연구)

  • Chae, D.; Byun, J.; Choi, D.; Baeck, S.; Park, S.
    • Proceedings of the KOSOMBE Conference / v.1993 no.11 / pp.175-179 / 1993
  • A speech processing and coding strategy for cochlear implants has been developed to create a speech signal processing system that extracts stimulus parameters, including formant, pitch, and amplitude information. In this study, we present a method that extracts the characteristic information of the speech signal and adapts it to patients with hearing impairments.

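As a rough illustration of the stimulus parameters the abstract names, the sketch below estimates per-frame amplitude (RMS), pitch (autocorrelation peak), and formants (LPC root angles). The sampling rate, pitch search range, and LPC order are assumptions, and librosa.lpc is used for convenience; the paper's actual extraction method is not reproduced.

```python
# Sketch of stimulus-parameter extraction (amplitude, pitch, formants)
# from a single speech frame; all parameter values are assumptions.
import numpy as np
import librosa

FS = 16000

def frame_parameters(frame):
    amplitude = np.sqrt(np.mean(frame ** 2))            # RMS energy

    # Pitch via the autocorrelation peak in the 60-400 Hz range
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = FS // 400, FS // 60
    pitch = FS / (lo + np.argmax(ac[lo:hi]))

    # Formant candidates via the angles of LPC polynomial roots
    a = librosa.lpc(frame, order=12)
    roots = [r for r in np.roots(a) if r.imag > 0]
    formants = sorted(np.angle(roots) * FS / (2 * np.pi))
    return amplitude, pitch, formants[:3]

t = np.arange(512) / FS
vowel = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 720 * t)
print(frame_parameters(vowel))
```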

PASS: A Parallel Speech Understanding System

  • Chung, Sang-Hwa
    • Journal of Electrical Engineering and Information Science / v.1 no.1 / pp.1-9 / 1996
  • A key issue in spoken language processing has become the integration of speech understanding and natural language processing (NLP). This paper presents a parallel computational model for the integration of speech and NLP. The model adopts a hierarchically structured knowledge base and memory-based parsing techniques. Processing is carried out by passing multiple markers in parallel through the knowledge base. Speech-specific problems such as insertion, deletion, and substitution have been analyzed and their parallel solutions are provided. The complete system has been implemented on the Semantic Network Array Processor (SNAP) and is operational. Results show an 80% sentence recognition rate for the Air Traffic Control domain. Moreover, a 15-fold speed-up can be obtained over an identical sequential implementation, with an increasing speed advantage as the size of the knowledge base grows.

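The marker-passing scheme the abstract describes can be illustrated with a toy graph traversal: markers launched from recognized words spread through a semantic network, and nodes where markers collide suggest a candidate interpretation. The miniature knowledge base below is invented for illustration; on SNAP each hop would fire across processing elements in parallel, while this sketch expands the frontier sequentially.

```python
# Toy sketch of memory-based parsing by marker passing over a semantic
# network; the tiny knowledge base and relations are invented, not
# taken from the SNAP system.
from collections import deque

KB = {  # node -> list of (relation, neighbor)
    "flight": [("isa", "aircraft-event"), ("has-slot", "destination")],
    "destination": [("filled-by", "city")],
    "Denver": [("isa", "city")],
}

def pass_markers(start, max_hops=3):
    """Propagate a marker from `start`; return every node it reaches."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for _rel, nxt in KB.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen

# Markers launched from words of an utterance meet at shared concepts,
# signalling a candidate interpretation.
print(pass_markers("flight") & pass_markers("Denver"))  # {'city'}
```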

An Automatic Post-processing Method for Speech Recognition using CRFs and TBL (CRFs와 TBL을 이용한 자동화된 음성인식 후처리 방법)

  • Seon, Choong-Nyoung; Jeong, Hyoung-Il; Seo, Jung-Yun
    • Journal of KIISE: Software and Applications / v.37 no.9 / pp.706-711 / 2010
  • In applications with a human speech interface, reducing the recognition error rate is one of the main research issues. Many previous studies attempted to correct errors using post-processing that depends on a manually constructed corpus and correction patterns. We propose an automatically learnable post-processing method that is independent of the characteristics of both the domain and the speech recognizer. We divide the entire post-processing task into two steps: error detection and error correction. We treat the error detection step as a classification problem, to which we apply a conditional random fields (CRFs) classifier, and we apply transformation-based learning (TBL) to the error correction step. Our experimental results indicate that the proposed method corrects a speech recognizer's insertion, deletion, and substitution errors by 25.85%, 3.57%, and 7.42%, respectively.
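
A minimal sketch of the error-detection step as sequence labeling, using the third-party sklearn-crfsuite package; the feature template and the tiny toy corpus below are assumptions, not the paper's setup.

```python
# Error detection over recognizer output as CRF sequence labeling:
# "O" = token judged correct, "ERR" = error to hand to TBL correction.
import sklearn_crfsuite

def token_features(tokens, i):
    return {
        "word": tokens[i],
        "prev": tokens[i - 1] if i > 0 else "<s>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
        "len": len(tokens[i]),
    }

def featurize(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# Hypothetical recognizer outputs with per-token error labels.
train_sents = [["turn", "on", "the", "right"], ["call", "mom", "now"]]
train_labels = [["O", "O", "O", "ERR"], ["O", "O", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([featurize(s) for s in train_sents], train_labels)

test = ["turn", "on", "the", "right"]
print(list(zip(test, crf.predict([featurize(test)])[0])))
```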

Interactive System using Multiple Signal Processing (다중신호처리를 이용한 인터렉티브 시스템)

  • Kim, Sung-Ill; Yang, Hyo-Sik; Shin, Wee-Jae; Park, Nam-Chun; Oh, Se-Jin
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.282-285 / 2005
  • This paper discusses an interactive system for smart home environments. The main emphasis of the paper lies on multiple signal processing built on technologies such as fingerprint recognition, video signal processing, and speech recognition and synthesis. As essential modules of the interactive system, we adopted a motion detector based on changes in pixel brightness, as well as fingerprint identification for adapting the home environment to its inhabitants. In addition, a real-time speech recognizer based on the HM-Net (Hidden Markov Network) and a speech synthesizer were incorporated into the overall system for interaction between user and system. Experimental evaluation showed that the proposed system was easy to use because it could provide specialized services for specific users in smart home environments, although the speech recognizer performed below the simulation results owing to the noisy environment.

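The motion detector based on pixel brightness changes can be sketched as simple frame differencing; the threshold value below is an arbitrary assumption, not a value from the paper.

```python
# Minimal sketch of the brightness-change motion detector: flag motion
# when the mean absolute difference between consecutive grayscale
# frames exceeds a threshold.
import numpy as np

def motion_detected(prev_frame, cur_frame, threshold=8.0):
    """Frames are 2-D uint8 grayscale arrays of equal shape."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.mean() > threshold

rng = np.random.default_rng(0)
still = rng.integers(0, 255, (120, 160), dtype=np.uint8)
moved = np.roll(still, 20, axis=1)          # simulate an object moving
print(motion_detected(still, still))        # False
print(motion_detected(still, moved))        # True
```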

A Decision-Tree-Based Reduction of Speech DB in a Large Corpus-Based Korean TTS (대용량 한국어 TTS의 결정트리기반 음성 DB 감축 방안)

  • Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information / v.15 no.7 / pp.91-98 / 2010
  • Large corpus-based concatenative text-to-speech (TTS) systems can generate natural synthetic speech without additional signal processing. Because improving the naturalness, personality, speaking style, and emotion of synthetic speech requires a larger speech DB, it is necessary to prune redundant speech segments from a large segment DB. In this paper, we propose a new method for constructing the segmental speech DB of a Korean TTS system, based on a clustering algorithm that downsizes the DB. For the performance test, synthetic speech was generated with a Korean TTS system consisting of a language processing module, a prosody processing module, a segment selection module, a speech concatenation module, and the segmental speech DB. An MOS test was then run on synthetic speech generated with four different segmental speech DBs, constructed by combining the CM1 (or CM2) tree clustering method with the full (or reduced) DB. Experimental results show that the proposed method can reduce the size of the speech DB by 23% while keeping a high MOS in the perception test, so it can be applied to build a compact TTS system.
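
As a hedged illustration of clustering-driven DB pruning, the sketch below groups segment feature vectors and keeps one representative (medoid) per cluster. scikit-learn's KMeans stands in for the paper's CM1/CM2 tree clustering, and the feature dimensionality and keep ratio are assumptions (the ratio is chosen to mimic the reported 23% reduction).

```python
# Prune a segmental speech DB by clustering: group segments with
# similar acoustic features, keep the member nearest each centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
segments = rng.normal(size=(1000, 13))      # e.g. 13-dim MFCC means

def prune_db(features, keep_ratio=0.77):    # ~23% reduction, as reported
    k = int(len(features) * keep_ratio)
    km = KMeans(n_clusters=k, n_init=3, random_state=0).fit(features)
    kept = []
    for c in range(k):                      # medoid = member nearest centroid
        members = np.flatnonzero(km.labels_ == c)
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        kept.append(members[np.argmin(d)])
    return np.array(sorted(kept))

kept = prune_db(segments)
print(f"kept {len(kept)} of {len(segments)} segments")
```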

Speech signal processing in the auditory system (청각 계통에서의 음성신호처리)

  • 이재혁; 심재성; 백승화; 박상희
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1987.10b / pp.680-683 / 1987
  • Speech signal processing in the auditory system can be analyzed based on two representations: the average discharge rate and the temporal discharge pattern. However, the average discharge rate representation is restricted to a narrow dynamic range because of rate saturation and two-tone suppression phenomena, while the temporal discharge pattern representation requires sophisticated frequency analysis and a synchrony measure. In this paper, a simple representation is proposed: using a model of the interaction between cochlear fluid-basilar membrane movement and a hair cell model, the features of speech signals (formant frequencies and the pitch of vowels) are easily estimated from the Average Synchronized Rate.

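A rough sketch of an average-synchronized-rate style analysis: band-pass a cochlear channel, apply a half-wave-rectifying hair-cell nonlinearity, and measure how strongly the output locks to the channel's center frequency. The Butterworth filters and bandwidths below are crude stand-ins for the paper's cochlear fluid-basilar membrane and hair cell models.

```python
# Synchrony-to-CF measure per cochlear channel (all parameters assumed).
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000

def synchronized_rate(x, cf, bw=0.3):
    lo, hi = cf * (1 - bw / 2), cf * (1 + bw / 2)
    b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    rect = np.maximum(lfilter(b, a, x), 0.0)      # hair-cell rectification
    n = np.arange(len(rect))
    # Magnitude of the rectified response's component locked to cf
    return np.abs(np.sum(rect * np.exp(-2j * np.pi * cf * n / FS))) / len(rect)

t = np.arange(FS // 4) / FS
vowel = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
for cf in (500, 1000, 1500, 2000):
    print(cf, round(synchronized_rate(vowel, cf), 4))  # peaks at 500, 1500
```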

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin; Youhyun Shin
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.174-180 / 2024
  • In this paper, we explore an emotion classification method based on multimodal learning with the wav2vec 2.0 and KcELECTRA models. Multimodal learning, which leverages both speech and text data, is known to significantly enhance emotion classification performance compared to methods that rely on speech data alone. To select the text processing model, our study conducts a comparative analysis of BERT and its derivative models, known for their strong performance in natural language processing, for effective feature extraction from text data. The results confirm that the KcELECTRA model performs best on the emotion classification task. Furthermore, experiments on datasets made available by AI-Hub demonstrate that including text data achieves better performance with less data than using speech data alone; the KcELECTRA model achieved the highest accuracy of 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
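
A minimal late-fusion sketch of the speech-plus-text setup: mean-pool a wav2vec 2.0 embedding, take a KcELECTRA sentence embedding, and classify their concatenation. The Hugging Face checkpoint names, the 7-class head (untrained here), and the pooling choices are assumptions; the paper's training procedure is not reproduced.

```python
# Late-fusion multimodal emotion classifier skeleton (assumed setup).
import numpy as np
import torch
from transformers import (AutoModel, AutoTokenizer,
                          Wav2Vec2FeatureExtractor, Wav2Vec2Model)

speech_enc = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
speech_fe = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
text_enc = AutoModel.from_pretrained("beomi/KcELECTRA-base")
tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")

classifier = torch.nn.Linear(768 + 768, 7)   # e.g. 7 emotion classes, untrained

@torch.no_grad()
def classify(waveform_16k, text):
    a = speech_fe(waveform_16k, sampling_rate=16000, return_tensors="pt")
    h_speech = speech_enc(**a).last_hidden_state.mean(dim=1)   # (1, 768)
    t = tokenizer(text, return_tensors="pt")
    h_text = text_enc(**t).last_hidden_state[:, 0]             # [CLS]-style token
    return classifier(torch.cat([h_speech, h_text], dim=-1)).argmax(-1)

print(classify(np.zeros(16000, dtype=np.float32), "오늘 기분 최고야!"))
```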

Speech Enhancement Using Microphone Array with MMSE-STSA Estimator Based Post-Processing (MMSE-STSA 추정치에 기반한 후처리를 갖는 마이크로폰 배열을 이용한 음성 개선)

  • Kwon Hong Seok; Son Jong Mok; Bae Keun Sung
    • Proceedings of the KSPS Conference / 2002.11a / pp.187-190 / 2002
  • In this paper, a speech enhancement system using a microphone array with MMSE-STSA (Minimum Mean Square Error Short-Time Spectral Amplitude) estimator-based post-processing is proposed. Speech enhancement is first carried out by conventional delay-and-sum beamforming (DSB). A new MMSE-STSA estimate is then obtained by refining the MMSE-STSA estimates from each microphone, and it is applied to the output of the conventional DSB to obtain additional enhancement. Computer simulations with white and pink noise show that the proposed system is superior to other approaches.

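The first stage of the proposed system, conventional delay-and-sum beamforming, can be sketched in a few lines; the MMSE-STSA post-filter refinement is omitted, and the array geometry, steering angle, and noise levels below are assumptions.

```python
# Time-domain delay-and-sum beamforming (DSB) for a uniform linear array.
import numpy as np

FS, C = 16000, 343.0                  # sample rate (Hz), speed of sound (m/s)

def delay_and_sum(mics, spacing=0.05, angle_deg=0.0):
    """mics: (n_mics, n_samples) array. Steer to angle_deg (broadside = 0)."""
    n_mics, n = mics.shape
    out = np.zeros(n)
    for m in range(n_mics):
        # Per-mic delay for a far-field source at the steering angle
        tau = m * spacing * np.sin(np.radians(angle_deg)) / C
        out += np.roll(mics[m], -int(round(tau * FS)))
    return out / n_mics               # coherent average suppresses noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 300 * np.arange(FS) / FS)
mics = np.stack([clean + 0.5 * rng.normal(size=FS) for _ in range(4)])
enhanced = delay_and_sum(mics)
snr = lambda y: 10 * np.log10(clean.var() / (y - clean).var())
print(f"single mic SNR ~ {snr(mics[0]):.1f} dB, DSB SNR ~ {snr(enhanced):.1f} dB")
```

With four microphones and uncorrelated noise, coherent averaging yields roughly a 6 dB SNR gain, which is what the printed comparison illustrates.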

A Voice-Activated Dialing System with Distributed Speech Recognition in WiFi Environments (무선랜 환경에서의 분산 음성 인식을 이용한 음성 다이얼링 시스템)

  • Park Sung-Joon; Koo Myoung-Wan
    • MALSORI / no.56 / pp.135-145 / 2005
  • In this paper, a WiFi phone system with distributed speech recognition is implemented. The WiFi phone, its voice-activated dialing, and its functions are explained. Features of the input speech are extracted and sent to an interactive voice response (IVR) server over the real-time transport protocol (RTP). Feature extraction is based on the European Telecommunications Standards Institute (ETSI) standard front-end, but is modified to reduce processing time. The front-end processing time on the WiFi phone is compared with that on a PC.

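The division of labor the abstract describes, features on the handset and recognition on the server, can be sketched as follows. librosa MFCCs stand in for the ETSI standard front-end, and the plain UDP framing is a placeholder for the RTP payload format; both are assumptions.

```python
# Distributed-speech-recognition client sketch: compute compact cepstral
# features on the device and ship only those to the recognition server.
import socket
import numpy as np
import librosa

FS = 8000

def extract_features(wave):
    # 13 MFCCs per 25 ms frame with a 10 ms hop, a typical DSR setup
    return librosa.feature.mfcc(y=wave, sr=FS, n_mfcc=13,
                                n_fft=200, hop_length=80).T.astype(np.float32)

def send_features(feats, host="127.0.0.1", port=5004):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i, frame in enumerate(feats):       # one datagram per feature frame
        sock.sendto(i.to_bytes(4, "big") + frame.tobytes(), (host, port))
    sock.close()

wave = np.sin(2 * np.pi * 440 * np.arange(FS) / FS).astype(np.float32)
feats = extract_features(wave)
print(feats.shape)                          # (n_frames, 13)
send_features(feats)
```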