• Title/Summary/Keyword: Speech Recognition Technology

SVM-based Utterance Verification Using Various Confidence Measures (다양한 신뢰도 척도를 이용한 SVM 기반 발화검증 연구)

  • Kwon, Suk-Bong;Kim, Hoi-Rin;Kang, Jeom-Ja;Koo, Myong-Wan;Ryu, Chang-Sun
    • MALSORI
    • /
    • no.60
    • /
    • pp.165-180
    • /
    • 2006
  • In this paper, we present several confidence measures (CMs) for speech recognition systems to evaluate the reliability of recognition results. We propose heuristic CMs, such as the mean log-likelihood score, N-best word log-likelihood ratio, and likelihood sequence fluctuation, as well as likelihood ratio testing (LRT)-based CMs using several types of anti-models. Furthermore, we propose new algorithms that add weighting terms to phone-level log-likelihood ratios when merging them into word-level log-likelihood ratios. These weighting terms are computed from the distance between acoustic models and from knowledge-based phoneme classifications. LRT-based CMs significantly outperform heuristic CMs, and LRT-based CMs using phonetic information achieve a relative reduction in equal error rate of 8~13% compared to the baseline LRT-based CMs. We use a support vector machine to fuse several CMs and improve the performance of utterance verification. Our experiments show that selecting CMs with low mutual correlation is more effective than selecting highly correlated ones.

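
The fusion step described above can be sketched with a support vector machine over per-utterance vectors of CM scores. This is a minimal illustration, not the paper's system: the three CM features and their class distributions below are synthetic stand-ins for the heuristic and LRT-based scores.

```python
# Minimal sketch: fusing several confidence measures (CMs) with an SVM
# to accept or reject recognition hypotheses. Feature values are synthetic;
# in the paper they would be heuristic and LRT-based CM scores.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each row: [mean log-likelihood, N-best word LLR, likelihood fluctuation].
# Label 1 = correctly recognized utterance, 0 = misrecognition.
correct = rng.normal(loc=[0.5, 1.0, 0.2], scale=0.3, size=(100, 3))
wrong = rng.normal(loc=[-0.5, -1.0, 0.8], scale=0.3, size=(100, 3))
X = np.vstack([correct, wrong])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf").fit(X, y)        # fuse the CMs in one classifier
accept = clf.predict([[0.4, 0.9, 0.25]]) # verify one new utterance
print("accept" if accept[0] == 1 else "reject")
```

The SVM learns a single accept/reject boundary over all CMs at once, which is why combining weakly correlated CMs (each carrying different evidence) helps more than combining redundant ones.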

A Study on the Performance of Companding Algorithms for Digital Hearing Aid Users (디지털 보청기 사용자를 위한 압신 알고리즘의 성능 연구)

  • Hwang, Y.S.;Han, J.H.;Ji, Y.S.;Hong, S.H.;Lee, S.M.;Kim, D.W.;Kim, In-Young;Kim, Sun-I.
    • Journal of Biomedical Engineering Research
    • /
    • v.32 no.3
    • /
    • pp.218-229
    • /
    • 2011
  • Companding algorithms have been used to enhance speech recognition in noise for cochlear implant users, but their efficiency for digital hearing aid users has not yet been validated. The purpose of this study is to evaluate the performance of companding for digital hearing aid users across various hearing loss cases. Using HeLPS, a hearing loss simulator, two sensorineural hearing loss conditions were simulated: mild gently sloping hearing loss (HL1) and moderate to steeply sloping hearing loss (HL2). In addition, non-linear compression was simulated to compensate for hearing loss using national acoustic laboratories-non-linear version 1 (NAL-NL1) in HeLPS. Four companding strategies were used, varying the Q values (q1, q2) of the pre-filter (F filter) and post-filter (G filter). First, five IEEE sentences presented with speech-shaped noise at different SNRs (0, 5, 10, 15 dB) were processed by the companding. Second, the processed signals were applied to HeLPS; for comparison, signals not processed by companding were also applied. The log-likelihood ratio (LLR) and cepstral distance (CEP) of the processed signals were measured to evaluate speech quality, and fourteen normal-hearing listeners performed a speech reception threshold (SRT) test to evaluate speech intelligibility. The signals processed with both companding and NAL-NL1 performed better than those processed with NAL-NL1 alone in the sensorineural hearing loss conditions. Moreover, a higher ratio of Q values yielded better LLR and CEP scores. In the SRT test, the signals processed with companding (SRT = -13.33 dB SPL) showed significantly better speech perception in noise than those processed with NAL-NL1 alone (SRT = -11.56 dB SPL).
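
The cepstral distance (CEP) measure used above compares cepstra of reference and processed speech. A minimal sketch follows, with two caveats: it uses the real cepstrum from an FFT magnitude spectrum rather than the LPC-derived cepstra typically used in such evaluations, and the test signals are synthetic sinusoids rather than IEEE sentences.

```python
# Minimal sketch of a cepstral distance (CEP) objective measure between a
# reference and a processed speech frame. A real evaluation would compute
# LPC-based cepstra frame by frame and average the per-frame distances.
import numpy as np

def real_cepstrum(frame, n_coeffs=12):
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12   # avoid log(0)
    cep = np.fft.irfft(np.log(spectrum))
    return cep[1:n_coeffs + 1]                      # drop c0 (energy term)

def cepstral_distance(ref, proc, n_coeffs=12):
    c_ref = real_cepstrum(ref, n_coeffs)
    c_proc = real_cepstrum(proc, n_coeffs)
    # Common dB-scaled form: (10 / ln 10) * sqrt(2 * sum of squared diffs)
    return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum((c_ref - c_proc) ** 2))

t = np.arange(256) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.default_rng(1).normal(size=t.size)

print("CEP(clean, clean):", cepstral_distance(clean, clean))
print("CEP(clean, noisy):", cepstral_distance(clean, noisy))
```

A lower CEP indicates that the processed signal's spectral envelope stays closer to the reference, which is why it serves as a speech-quality score.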

Deep recurrent neural networks with word embeddings for Urdu named entity recognition

  • Khan, Wahab;Daud, Ali;Alotaibi, Fahd;Aljohani, Naif;Arafat, Sachi
    • ETRI Journal
    • /
    • v.42 no.1
    • /
    • pp.90-100
    • /
    • 2020
  • Named entity recognition (NER) continues to be an important task in natural language processing because it is featured as a subtask and/or subproblem in information extraction and machine translation. In Urdu language processing, it is a very difficult task. This paper proposes various deep recurrent neural network (DRNN) learning models with word embeddings. Experimental results demonstrate that they improve upon current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of the long short-term memory and backpropagation through time approaches. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. The effectiveness of the DRNN models with word embeddings for NER in Urdu is demonstrated using three datasets. The results reveal that the proposed approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measure values achieved on the three benchmark datasets using the proposed deep learning approaches are 81.1%, 79.94%, and 63.21%, respectively.
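
The "context window" feature mentioned above can be sketched as follows: each token is represented by its own embedding concatenated with the embeddings of its neighbors. The vocabulary, embedding dimension, and (English, transliterated) tokens below are illustrative stand-ins for trained Urdu word vectors, not the paper's setup.

```python
# Minimal sketch of context-window feature construction: a language-
# independent input representation for recurrent NER models. Embeddings
# here are random stand-ins for trained word vectors.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<pad>", "islamabad", "is", "the", "capital", "of", "pakistan"]
emb_dim = 4
embeddings = {w: rng.normal(size=emb_dim) for w in vocab}

def context_window_features(tokens, window=1):
    """Concatenate embeddings of each token and its +/- `window` neighbours."""
    padded = ["<pad>"] * window + tokens + ["<pad>"] * window
    feats = []
    for i in range(window, window + len(tokens)):
        ctx = padded[i - window : i + window + 1]
        feats.append(np.concatenate([embeddings[w] for w in ctx]))
    return np.array(feats)

X = context_window_features(["islamabad", "is", "the", "capital"])
print(X.shape)  # one (2*window + 1) * emb_dim vector per token
```

Each row of `X` would then be fed per time step into the forward or bidirectional LSTM, so the recurrent model sees local context explicitly in addition to what it carries in its hidden state.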

Intelligent Retrieval System with Interactive Voice Support (대화형 음성 지원을 통한 지능형 검색 시스템)

  • Moon, K.J.;Yoo, Y.S.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology
    • /
    • v.9 no.1
    • /
    • pp.29-35
    • /
    • 2015
  • In this paper, we propose an intelligent retrieval system with interactive voice support. The developed system helps find misrecognized words by using the relationships between lexical items in the recognized sentence and presents the correct vocabulary. We implement a simulation system to assess the usefulness of the proposed product-search assistance application. Experimental results confirmed that the system corrects misrecognized vocabulary through a simple user interface to help with product search.


Neuroanatomical analysis for onomatopoeia : fMRI study

  • Han, Jong-Hye;Choi, Won-Il;Chang, Yong-Min;Jeong, Ok-Ran;Nam, Ki-Chun
    • Annual Conference on Human and Language Technology
    • /
    • 2004.10d
    • /
    • pp.315-318
    • /
    • 2004
  • The purpose of this study is to examine the neuroanatomical areas associated with onomatopoeia (sound-imitating words). Using block-designed fMRI, whole-brain images (N=11) were acquired during lexical decision tasks. We examined how lexical information initiates brain activation during visual word recognition. Onomatopoeic word recognition activated the bilateral occipital lobes and the superior mid-temporal gyrus.


AI Advisor for Response of Disaster Safety in Risk Society (위험사회 재난 안전 분야 대응을 위한 AI 조력자)

  • Lee, Yong-Hak;Kang, Yunhee;Lee, Min-Ho;Park, Seong-Ho;Kang, Myung-Ju
    • Journal of Platform Technology
    • /
    • v.8 no.3
    • /
    • pp.22-29
    • /
    • 2020
  • The 4th industrial revolution is progressing in each country as a megatrend, leading various directions of technological convergence in the social and economic fields beyond the initial, simple manufacturing innovation. Epidemics of infectious diseases such as COVID-19 are shifting economic activity toward digital-centered, non-face-to-face business, and the use of AI and big data technology for personalized online services has become essential. In this paper, we analyze the major technological characteristics of the 4th industrial revolution and cases focusing on the application of artificial intelligence, a key technology for the effective implementation of the digital new deal promoted by the government, and describe use cases in the field of disaster response. As a disaster response use case, the AI advisor suggests appropriate countermeasures according to the status of the reporter in an emergency call. To this end, the AI advisor provides speech-recognition-based analysis and disaster classification of the converted text for adaptive response.

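
The disaster-classification step described above, operating on text already converted by speech recognition, can be sketched as a simple text classifier. This is only an illustration of the idea: the training transcripts and disaster categories below are invented English examples, not data from the paper.

```python
# Minimal sketch: classify a transcribed emergency call into a disaster type
# so an AI advisor could suggest a matching countermeasure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

transcripts = [
    "the building is on fire and there is a lot of smoke",
    "smoke is coming from the kitchen fire",
    "the river is overflowing and the street is flooded",
    "flood water is entering the house",
    "a person collapsed and is not breathing",
    "someone is unconscious and needs an ambulance",
]
labels = ["fire", "fire", "flood", "flood", "medical", "medical"]

# TF-IDF features plus a naive Bayes classifier, trained on the transcripts.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(transcripts, labels)
print(clf.predict(["there is heavy smoke and flames in the apartment"])[0])
```

In a deployed system, the predicted category would select which countermeasure script the advisor reads back to the caller.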

Design and Implementation of Voice-based Interactive Service KIOSK (음성기반 대화형 서비스 키오스크 설계 및 구현)

  • Kim, Sang-woo;Choi, Dae-june;Song, Yun-Mi;Moon, Il-Young
    • Journal of Practical Engineering Education
    • /
    • v.14 no.1
    • /
    • pp.99-108
    • /
    • 2022
  • As the demand for kiosks increases, more users complain of inconvenience. Accordingly, we produce a kiosk that enables easy menu selection and ordering through a voice-based interactive service, provided in the form of a web application. It implements voice functions based on the Annyang API and the SpeechSynthesis API, and understands the user's intention through Dialogflow. We also discuss how to implement this process based on a REST API. In addition, a recommendation system based on collaborative filtering is applied to improve the low consumer accessibility of existing kiosks, and to prevent droplet-borne infection during use of the voice recognition service, the kiosk checks that a mask is worn before the service is used.
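
The collaborative-filtering recommendation mentioned above can be sketched with user-based similarity over an order history matrix. The users, menu items, and order data below are made up for illustration; the paper does not specify its exact CF variant.

```python
# Minimal sketch of collaborative filtering for menu recommendation:
# recommend items that users with similar order histories have chosen.
import numpy as np

# Rows = users, columns = menu items; 1 = previously ordered.
orders = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])
items = ["americano", "latte", "sandwich", "salad"]

def recommend(user_idx, orders, k=1):
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(orders, axis=1)
    sims = orders @ orders[user_idx] / (norms * norms[user_idx] + 1e-12)
    sims[user_idx] = 0.0
    # Score items by similarity-weighted popularity among other users.
    scores = sims @ orders
    scores[orders[user_idx] == 1] = -np.inf   # don't re-recommend
    return [items[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(0, orders))
```

User 0 shares two orders with user 1, so user 1's remaining choice (the sandwich) scores highest; item-based variants transpose the same computation.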

A Driving Information Centric Information Processing Technology Development Based on Image Processing (영상처리 기반의 운전자 중심 정보처리 기술 개발)

  • Yang, Seung-Hoon;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Convergence Security Journal
    • /
    • v.12 no.6
    • /
    • pp.31-37
    • /
    • 2012
  • Today, the core technology of the automobile is evolving into IT-based convergence system technology. To cope with many kinds of situations and provide convenience for drivers, various IT technologies are being integrated into automobile systems. In this paper, we propose a convergence system, called the Augmented Driving System (ADS), to provide high safety and convenience for drivers based on image information processing. Image data acquired from an imaging sensor is processed by the proposed methods to give the distance from the front car, the lane, and traffic sign panels. A converged interface with a camera for gesture recognition and a microphone for speech recognition is also provided. With this kind of system, car accidents can be reduced even when drivers fail to recognize dangerous situations, since the system can recognize the situation or user context and draw attention to the front view. In experiments, the proposed methods achieved over 90% recognition in terms of traffic sign detection, lane detection, and distance measurement from the front car.
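
One piece of such a pipeline, estimating distance to the front car from its apparent size in the image, can be sketched with the pinhole camera model. The focal length and vehicle width below are assumed values for illustration; the paper does not state which distance-estimation method it uses.

```python
# Minimal sketch: pinhole-camera distance estimate from the apparent pixel
# width of the front car. Parameters are assumptions, not from the paper.
def distance_to_front_car(pixel_width, focal_length_px=800.0, car_width_m=1.8):
    """distance = focal_length * real_width / apparent_width."""
    return focal_length_px * car_width_m / pixel_width

# A detected car spanning 120 px at 800 px focal length and 1.8 m width:
print(f"{distance_to_front_car(120):.1f} m")
```

The same relation runs in reverse for calibration: measuring a known vehicle at a known distance yields the effective focal length in pixels.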

Elderly Speech Signal Processing: A Systematic Review for Analysis of Gender Innovation (노인음성신호처리: 젠더혁신 분석에 대한 체계적 문헌고찰)

  • Lee, JiYeoun
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.8
    • /
    • pp.148-154
    • /
    • 2019
  • The purpose of this study is to systematically review the literature on elderly speech signal processing from the perspective of domestic gender innovation and to introduce the utility and innovativeness of gender analysis methods. Of the 37 research papers published in Korean journals from 2000 to the present, 25 were selected according to the inclusion and exclusion criteria, and gender analysis methods were applied to the research subjects and designs. The results show that diversity of research fields and high gender awareness in R&D teams are needed for engineering research and development from a gender innovation perspective. In addition, government-level regulation and research funding should be systematically applied to gender innovation research processes in elderly voice signal processing and to various gender innovation projects. In the future, gender innovation in elderly speech signal processing can contribute to the creation of new markets by developing voice recognition systems and services that reflect the needs of both men and women.

Speaker Adaptation using ICA-based Feature Transformation (ICA 기반의 특징변환을 이용한 화자적응)

  • Park ManSoo;Kim Hoi-Rin
    • MALSORI
    • /
    • no.43
    • /
    • pp.127-136
    • /
    • 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features suited to linear regression-based speaker adaptation. These are obtained by feature transformation based on independent component analysis (ICA), with the transformation matrix learned from speaker-independent training data. When the amount of adaptation data is small, however, it is necessary to adjust the ICA-based transformation matrix estimated from a new speaker's utterances. To cope with this problem, we propose a smoothing method based on linear interpolation between the speaker-independent (SI) feature transformation matrix and the speaker-dependent (SD) feature transformation matrix. We observed that the proposed technique effectively improves adaptation performance.

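
The interpolation-based smoothing described above can be sketched directly: blend the SI and SD transformation matrices with a single weight. The synthetic feature data, dimensionality, and use of scikit-learn's FastICA below are illustrative assumptions, not the paper's actual training setup.

```python
# Minimal sketch of the proposed smoothing: linearly interpolate between a
# speaker-independent (SI) ICA transformation matrix and a speaker-dependent
# (SD) one estimated from limited adaptation data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
si_features = rng.normal(size=(1000, 13))   # large SI training set (synthetic)
sd_features = rng.normal(size=(50, 13))     # small adaptation set (synthetic)

W_si = FastICA(n_components=13, random_state=0).fit(si_features).components_
W_sd = FastICA(n_components=13, random_state=0).fit(sd_features).components_

def smoothed_transform(W_si, W_sd, alpha):
    """alpha = 0 -> pure SI matrix; alpha = 1 -> pure SD matrix."""
    return (1.0 - alpha) * W_si + alpha * W_sd

W = smoothed_transform(W_si, W_sd, alpha=0.3)
adapted = sd_features @ W.T    # transform the adaptation features
print(adapted.shape)
```

The weight `alpha` would in practice be tuned to the amount of adaptation data: with very little data the poorly estimated SD matrix gets a small weight, and the interpolation shrinks it toward the reliable SI estimate.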