• Title/Summary/Keyword: Speech Recognition Technology


Abrupt Noise Cancellation and Speech Restoration for Speech Enhancement (음질 개선을 위한 돌발잡음 제거와 음성복원)

  • Son BeakKwon;Hahn Minsoo
    • Proceedings of the KSPS conference / 2003.10a / pp.101-104 / 2003
  • In this paper, speech quality is improved by removing abrupt noise intervals and then filling the gaps with estimates derived from the preceding speech waveform. An abrupt noise detection signal is proposed in the form of a prediction error signal computed with the LP coefficients of the previous frame, and the abrupt noise intervals are estimated from the spectral energy. After the estimated noise intervals are removed, several waveform substitution techniques are applied: zero substitution, previous-frame repetition, pattern matching, and pitch waveform replication. To validate the algorithm, an LPC spectral distortion test and a recognition test were carried out, and the results confirm a fair improvement in speech quality.
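
The detection step can be illustrated with a short sketch. The snippet below is a minimal, illustrative reconstruction rather than the authors' implementation: it fits LP coefficients on the previous frame by least squares, computes the prediction error of the current frame under that model, and flags the frame as containing abrupt noise when the residual energy jumps; the frame size and threshold are arbitrary assumptions.

```python
import numpy as np

def lpc_coeffs(frame, order=10):
    """Least-squares AR fit: predict x[n] from the previous `order` samples."""
    X = np.stack([frame[i:len(frame) - order + i] for i in range(order)], axis=1)
    y = frame[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def prediction_error(frame, coeffs):
    """Residual of a frame under the given LP model."""
    order = len(coeffs)
    X = np.stack([frame[i:len(frame) - order + i] for i in range(order)], axis=1)
    return frame[order:] - X @ coeffs

def detect_abrupt_noise(prev_frame, cur_frame, threshold=4.0):
    """Flag the current frame when its LP residual energy (under the previous
    frame's model) jumps far above the previous frame's residual energy."""
    a = lpc_coeffs(prev_frame)
    ref = np.mean(prediction_error(prev_frame, a) ** 2) + 1e-12
    cur = np.mean(prediction_error(cur_frame, a) ** 2)
    return cur / ref > threshold

# toy usage: clean sine frame vs. the same frame with an impulsive click
t = np.arange(320) / 8000.0
prev = np.sin(2 * np.pi * 200 * t)
noisy = prev.copy()
noisy[100:110] += 3.0                     # simulated abrupt noise burst
print(detect_abrupt_noise(prev, prev))    # expected: False
print(detect_abrupt_noise(prev, noisy))   # expected: True
```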


Development of a Foreign Language Speaking Training System Based on Speech Recognition Technology (음성 인식 테크놀로지 기반의 외국어 말하기 훈련 시스템 개발)

  • Koo, Dukhoi
    • Journal of The Korean Association of Information Education / v.23 no.5 / pp.491-497 / 2019
  • As the world becomes more globalized, more and more people want to speak foreign languages fluently. Speaking fluently requires sufficient speaking practice, which in turn requires a dialogue partner. Recent advances in speech recognition technology are expected to make it possible to build systems that provide foreign language speaking training without a human conversation partner. In this study, a test-bed system for foreign language speaking training was developed and applied in elementary school classes: students were given English conversation situations and carried out speaking training with the system, and their satisfaction with the system and their willingness to keep using it were then surveyed. The results indicate that the system developed in this study is helpful for foreign language speaking training.
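
For a rough idea of the kind of training loop such a test-bed system runs, here is a minimal sketch that scores a recognized utterance against a target sentence. `recognize_speech` is a placeholder for whatever engine the system actually uses, and the scoring and feedback thresholds are illustrative assumptions, not the paper's design.

```python
from difflib import SequenceMatcher

def recognize_speech(audio) -> str:
    """Placeholder for any speech-to-text engine; returns the recognized text."""
    raise NotImplementedError("plug in a real recognizer here")

def speaking_score(target: str, recognized: str) -> float:
    """Word-level similarity between the target sentence and what the learner said."""
    return SequenceMatcher(None, target.lower().split(), recognized.lower().split()).ratio()

def feedback(target: str, recognized: str) -> str:
    score = speaking_score(target, recognized)
    if score > 0.9:
        return f"Great! ({score:.0%} match)"
    if score > 0.6:
        return f"Almost there, try again. ({score:.0%} match)"
    return f"Let's practice this sentence once more. ({score:.0%} match)"

# toy usage with a hand-typed "recognition result"
print(feedback("How much is this bag", "how much is the bag"))
```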

Design And Implementation of a Speech Recognition Interview Model based-on Opinion Mining Algorithm (오피니언 마이닝 알고리즘 기반 음성인식 인터뷰 모델의 설계 및 구현)

  • Kim, Kyu-Ho;Kim, Hee-Min;Lee, Ki-Young;Lim, Myung-Jae;Kim, Jeong-Lae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.1 / pp.225-230 / 2012
  • Opinion mining applies existing data mining techniques to text uploaded to the web, such as blog posts and product reviews: rather than identifying the topic of a text, it extracts the author's opinion and judges the sentiment expressed toward that topic. In this paper, we propose using a speech recognition API to convert spoken answers into text so that published opinion mining algorithms can judge the sentiment of data that is not originally in text form. The system links the Google Voice Recognition API with a subject ranking (sunwihwa) algorithm and an improved polarity-decision algorithm, and on this basis a speech recognition interview model is designed and implemented.
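
The sentiment-judgment step can be sketched with a tiny lexicon-based polarity scorer over the recognized text. This is only an illustration under stated assumptions: the word lists and thresholds are made up, and the paper's actual ranking ("sunwihwa") and polarity algorithms are not reproduced here.

```python
# Minimal lexicon-based polarity scoring over recognized interview answers.
POSITIVE = {"good", "great", "excellent", "enjoy", "confident", "strong"}
NEGATIVE = {"bad", "poor", "weak", "hate", "difficult", "nervous"}

def polarity(recognized_text: str) -> str:
    words = recognized_text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# toy usage: the text would normally come from the speech recognition API
print(polarity("I feel confident and I enjoy working in a team"))   # positive
print(polarity("The schedule was difficult and I was nervous"))     # negative
```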

A Study on Cockpit Voice Command System for Fighter Aircraft (전투기용 음성명령 시스템에 대한 연구)

  • Kim, Seongwoo;Seo, Mingi;Oh, Yunghwan;Kim, Bonggyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.12 / pp.1011-1017 / 2013
  • The human voice is the most natural means of communication, and the need for speech recognition technology is gradually increasing as a way to make the human-machine interface easier to use. As digital technology has advanced, the functions of avionics equipment have become more varied and complicated, so the workload of fighter pilots has increased: they cannot concentrate only on attack functions, but must also operate the complicated avionics equipment. Accordingly, if speech recognition technology is applied in the cockpit for operating the avionics equipment, pilots can devote more of their time and effort to the fighter aircraft's mission. In this paper, a cockpit voice command system applicable to fighter aircraft is developed, and its function and performance are verified.
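
A voice command system of this kind ultimately maps recognized phrases to avionics actions. The sketch below shows that mapping step in its simplest possible form; the command vocabulary and handlers are hypothetical examples, not the system's actual command set.

```python
# A minimal sketch of dispatching recognized cockpit phrases to avionics actions.
COMMANDS = {
    "radio channel one": lambda: print("radio set to channel 1"),
    "radio channel two": lambda: print("radio set to channel 2"),
    "display map":       lambda: print("map page shown on MFD"),
    "display fuel":      lambda: print("fuel page shown on MFD"),
}

def dispatch(recognized: str) -> bool:
    """Run the handler for an exactly matching command; return whether it matched."""
    action = COMMANDS.get(recognized.strip().lower())
    if action is None:
        return False
    action()
    return True

# toy usage: the string would normally come from the cockpit speech recognizer
dispatch("Display Map")          # matched, prints the action
print(dispatch("open canopy"))   # False: unknown commands are ignored, not guessed
```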

Categorization and Analysis of Error Types in the Korean Speech Recognition System (한국어 음성 인식 시스템의 오류 유형 분류 및 분석)

  • Son, Junyoung;Park, Chanjun;Seo, Jaehyung;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.144-151 / 2021
  • With the advent of deep learning, automatic speech recognition (ASR) technology has become a key element of human-computer interaction. However, many difficult problems remain, such as similar-pronunciation errors, word-spacing errors, and symbol-attachment errors, and clear criteria for categorizing error types have not yet been established. This paper therefore designs error-type classification criteria specialized for Korean and, on that basis, carries out qualitative analysis and error classification of several commercial speech recognition systems. The experiments analyze the results separately by domain and by speaking style, which makes it possible to identify where each commercial system is robust and where it is weak.
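
A minimal sketch of the kind of error tagging such a taxonomy implies is shown below. It only separates spacing-only and punctuation-only mismatches from everything else; the paper's full, Korean-specific criteria (for example, detecting similar-pronunciation errors) require linguistic resources this illustration does not include.

```python
import re

def error_type(reference: str, hypothesis: str) -> str:
    """Very rough tag for one ASR hypothesis against its reference transcript."""
    if reference == hypothesis:
        return "correct"
    strip_space = lambda s: re.sub(r"\s+", "", s)
    strip_punct = lambda s: re.sub(r"[^\w\s]", "", s)
    if strip_space(reference) == strip_space(hypothesis):
        return "spacing error"
    if strip_space(strip_punct(reference)) == strip_space(strip_punct(hypothesis)):
        return "symbol/punctuation error"
    return "other (e.g. similar-pronunciation) error"

print(error_type("음성 인식 시스템", "음성인식 시스템"))   # spacing error
print(error_type("열 시 삼십 분", "열 시, 삼십 분"))        # symbol/punctuation error
print(error_type("의사", "이사"))                           # other error
```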


Crossword Game Using Speech Technology (음성기술을 이용한 십자말 게임)

  • Yu, Il-Soo;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.10B no.2 / pp.213-218 / 2003
  • In this paper, we implement a crossword game that can be operated by speech. The CAA (Cross Array Algorithm) produces the crossword array randomly and automatically using a domain dictionary, and seven domain dictionaries were constructed for this purpose. The crossword game can be operated with a mouse and keyboard and also by speech: for the speech interface we use a speech recognizer and a speech synthesizer, which provides a more comfortable interface for the user. The efficiency of the CAA was evaluated by measuring the processing time needed to produce a crossword array and the generation ratio of valid arrays; the processing time is about 10 ms and the generation ratio is about 50%. The recognition rates were 95.5%, 97.6%, and 96.2% for window sizes of 7×7, 9×9, and 11×11, respectively.
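
The array-generation idea can be sketched as follows. This toy version places the first dictionary word horizontally and tries to cross later words vertically on a shared letter, rejecting placements that clash; the real CAA and its domain dictionaries involve more placement rules than this, so the sketch is illustrative only.

```python
import random

def build_array(words, size=9):
    """Toy crossword array builder: first word across, later words crossing it."""
    grid = [[" "] * size for _ in range(size)]
    w0, row = words[0], size // 2
    for c, ch in enumerate(w0):                              # first word across the middle
        grid[row][c] = ch
    for w in words[1:]:
        placed = False
        for c in random.sample(range(len(w0)), len(w0)):     # random crossing column
            i = w.find(w0[c])
            if i < 0:
                continue
            top = row - i
            if top < 0 or top + len(w) > size:
                continue
            cells = [(top + k, c) for k in range(len(w))]
            if any(grid[r][col] not in (" ", w[k]) for k, (r, col) in enumerate(cells)):
                continue                                     # would clash with an earlier word
            for k, (r, col) in enumerate(cells):
                grid[r][col] = w[k]
            placed = True
            break
        if not placed:
            return None                                      # generation failed for this word set
    return grid

random.seed(0)
grid = build_array(["speech", "echo", "cheer"])
if grid:
    print("\n".join("".join(r) for r in grid))
```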

Electroencephalography-based imagined speech recognition using deep long short-term memory network

  • Agarwal, Prabhakar;Kumar, Sandeep
    • ETRI Journal / v.44 no.4 / pp.672-685 / 2022
  • This article proposes a subject-independent application of brain-computer interfacing (BCI). A 32-channel Electroencephalography (EEG) device is used to measure imagined speech (SI) of four words (sos, stop, medicine, washroom) and one phrase (come-here) across 13 subjects. A deep long short-term memory (LSTM) network has been adopted to recognize the above signals in seven EEG frequency bands individually in nine major regions of the brain. The results show a maximum accuracy of 73.56% and a network prediction time (NPT) of 0.14 s which are superior to other state-of-the-art techniques in the literature. Our analysis reveals that the alpha band can recognize SI better than other EEG frequencies. To reinforce our findings, the above work has been compared by models based on the gated recurrent unit (GRU), convolutional neural network (CNN), and six conventional classifiers. The results show that the LSTM model has 46.86% more average accuracy in the alpha band and 74.54% less average NPT than CNN. The maximum accuracy of GRU was 8.34% less than the LSTM network. Deep networks performed better than traditional classifiers.
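
For readers unfamiliar with the model class, the following PyTorch sketch shows an LSTM classifier over EEG sequences of the same general shape (time steps × channels, five output classes). The layer sizes and the single-band input are assumptions for illustration; the paper's exact architecture and preprocessing are not reproduced.

```python
import torch
import torch.nn as nn

class ImaginedSpeechLSTM(nn.Module):
    """LSTM classifier over EEG sequences: (batch, time, channels) -> class logits."""
    def __init__(self, n_channels=32, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = ImaginedSpeechLSTM()
eeg = torch.randn(8, 256, 32)          # 8 trials, 256 time samples, 32 EEG channels
logits = model(eeg)
print(logits.shape)                     # torch.Size([8, 5])
```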

Design and Implementation of Context-aware Application on Smartphone Using Speech Recognizer

  • Kim, Kyuseok
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.49-59 / 2020
  • As technology develops, our lives become easier. Today we are surrounded by new technologies such as AI and IoT, and the word "smart" has become very broad because we are trying to turn our everyday environments into smart ones with these technologies; traditional workplaces, for example, have become smart offices. Since the 3rd industrial revolution we have operated machines through touch interfaces, but in the 4th industrial revolution speech recognition modules are being added so that machines can be operated by voice commands. Many devices now communicate with people by voice; these so-called AI things carry out the tasks users request, and sometimes more than that. We also use smartphones all day, every day, so privacy while using the phone is not always guaranteed: for example, the caller's voice can be heard through the phone speaker when a call is accepted. Privacy on the smartphone therefore needs to be protected, and the protection should work automatically according to the user's context. This paper proposes a method that adjusts the in-call voice volume according to the user context in order to protect privacy on the smartphone.
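
The core of such a method is a policy that maps the sensed user context to an in-call volume level. The sketch below shows one such policy in plain Python; the context fields, thresholds, and volume levels are illustrative assumptions, and the smartphone APIs the paper would rely on are not shown.

```python
from dataclasses import dataclass

@dataclass
class CallContext:
    in_public_place: bool      # e.g. inferred from location or calendar
    headset_connected: bool
    ambient_noise_db: float    # estimated from the microphone before answering

def call_volume(ctx: CallContext, max_level: int = 10) -> int:
    """Return an in-call speaker volume level (0..max_level) for this context."""
    if ctx.headset_connected:
        return max_level // 2              # privacy already protected by the headset
    if ctx.in_public_place:
        return max(1, max_level // 4)      # keep the caller's voice quiet in public
    if ctx.ambient_noise_db > 70:
        return max_level                   # noisy private space: raise the volume
    return max_level // 2

print(call_volume(CallContext(True, False, 55.0)))   # public, no headset -> 2
print(call_volume(CallContext(False, False, 75.0)))  # noisy private room -> 10
```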

The Korean Word Length Effect on Auditory Word Recognition (청각 단어 재인에서 나타난 한국어 단어길이 효과)

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference / 2002.11a / pp.137-140 / 2002
  • This study examined the effect of Korean word length on auditory word recognition. Linguistically, word length can be defined in terms of several sublexical units, such as letters, phonemes, and syllables. To investigate which of these units are used in auditory word recognition, a lexical decision task was employed. Experiments 1 and 2 showed that syllable length affected response time and that syllable length interacted with word frequency. Thus, syllable length is an important variable in recognizing spoken words.
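
The different length measures mentioned above can be computed directly for Korean words, since each precomposed Hangul syllable decomposes arithmetically into its jamo. The sketch below counts syllables and jamo (letters) for a word; it is a helper for illustration only, not part of the study's materials.

```python
def korean_lengths(word: str):
    """Return (syllable count, jamo count) for the Hangul characters in `word`."""
    syllables = 0
    jamo = 0
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                      # precomposed Hangul syllable block
            syllables += 1
            jamo += 2 + (1 if code % 28 else 0)     # initial + vowel (+ optional final)
    return syllables, jamo

print(korean_lengths("음성"))   # (2, 6): 음 = ㅇㅡㅁ, 성 = ㅅㅓㅇ
print(korean_lengths("인식"))   # (2, 6)
```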


Speaker Recognition using PCA in Driving Car Environments (PCA를 이용한 자동차 주행 환경에서의 화자인식)

  • Yu, Ha-Jin
    • Proceedings of the KSPS conference / 2005.04a / pp.103-106 / 2005
  • The goal of our research is to build a text-independent speaker recognition system that can be used in any condition without an additional adaptation process. The performance of speaker recognition systems can be severely degraded under unknown, mismatched microphone and noise conditions. In this paper, we show that PCA (principal component analysis) without dimension reduction can greatly increase performance, to a level close to that of the matched condition. The error rate is reduced further by the proposed augmented PCA, which augments an axis to the feature vectors of the most confusable pairs of speakers before PCA is applied.
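
The "PCA without dimension reduction" idea amounts to rotating the feature vectors onto all principal axes rather than truncating them. The numpy sketch below shows that rotation on random stand-in features; the augmented-PCA step for confusable speaker pairs is not reproduced here.

```python
import numpy as np

def pca_rotate(features):
    """Rotate (n_frames, n_dims) feature vectors onto all principal axes,
    keeping every dimension (no reduction)."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)        # columns are the principal axes
    return centered @ eigvecs                # same dimensionality as the input

# toy usage with random "feature vectors" standing in for acoustic features
frames = np.random.default_rng(0).normal(size=(200, 13))
rotated = pca_rotate(frames)
print(rotated.shape)                         # (200, 13): no dimensions dropped
```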
