• Title/Summary/Keyword: End-to-end speech recognition

90 search results

Bi-directional LSTM-CNN-CRF for Korean Named Entity Recognition System with Feature Augmentation (자질 보강과 양방향 LSTM-CNN-CRF 기반의 한국어 개체명 인식 모델)

  • Lee, DongYub;Yu, Wonhee;Lim, HeuiSeok
    • Journal of the Korea Convergence Society / v.8 no.12 / pp.55-62 / 2017
  • A named entity recognition (NER) system recognizes words or phrases in a document that denote entities such as person names (PS), location names (LC), and organization names (OG) and labels them with the corresponding entity types. Traditional approaches to named entity recognition include statistical models trained on hand-crafted features. More recently, deep-learning models such as recurrent neural networks (RNN) and long short-term memory (LSTM) networks have been proposed to build sentence representations for the sequence-labeling problem. In this research, to improve the performance of a Korean named entity recognition system, we augment the sentence representation with hand-crafted features, part-of-speech tagging information, and pre-built lexicon information. Experimental results show that the proposed method improves the performance of the Korean named entity recognition system. The results of this study are released through GitHub for future collaborative research with researchers studying Korean natural language processing (NLP) and named entity recognition.
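
The feature-augmentation idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch model (not the authors' released code) that concatenates word, character-CNN, part-of-speech, and lexicon features before a bi-directional LSTM; a CRF layer (e.g., the pytorch-crf package) would normally be stacked on the emission scores, and all dimensions and names here are illustrative assumptions.

```python
# Minimal sketch of a BiLSTM-CNN tagger with feature augmentation:
# word, character-CNN, POS-tag, and lexicon features are concatenated
# before the BiLSTM; the linear layer produces emission scores for a CRF.
import torch
import torch.nn as nn

class BiLSTMCNNTagger(nn.Module):
    def __init__(self, n_words, n_chars, n_pos, n_tags,
                 w_dim=100, c_dim=30, p_dim=20, lex_dim=10, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, w_dim, padding_idx=0)
        self.char_emb = nn.Embedding(n_chars, c_dim, padding_idx=0)
        self.pos_emb = nn.Embedding(n_pos, p_dim, padding_idx=0)
        # character-level CNN: one convolution + max-pool per word
        self.char_cnn = nn.Conv1d(c_dim, c_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(w_dim + c_dim + p_dim + lex_dim, hidden // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden, n_tags)   # emission scores for a CRF layer

    def forward(self, words, chars, pos, lex):
        # words: (B, T), chars: (B, T, L), pos: (B, T), lex: (B, T, lex_dim)
        B, T, L = chars.shape
        c = self.char_emb(chars).view(B * T, L, -1).transpose(1, 2)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values.view(B, T, -1)
        x = torch.cat([self.word_emb(words), c, self.pos_emb(pos), lex], dim=-1)
        h, _ = self.lstm(x)
        return self.emit(h)                      # (B, T, n_tags)

# toy usage with random indices (hypothetical vocabulary sizes)
model = BiLSTMCNNTagger(n_words=5000, n_chars=100, n_pos=45, n_tags=9)
scores = model(torch.randint(1, 5000, (2, 12)),
               torch.randint(1, 100, (2, 12, 8)),
               torch.randint(1, 45, (2, 12)),
               torch.rand(2, 12, 10))
print(scores.shape)  # torch.Size([2, 12, 9])
```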

Lip-Synch System Optimization Using Class Dependent SCHMM (클래스 종속 반연속 HMM을 이용한 립싱크 시스템 최적화)

  • Lee, Sung-Hee;Park, Jun-Ho;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.25 no.7 / pp.312-318 / 2006
  • The conventional lip-synch system is a two-step process of speech segmentation and recognition. However, the difficulty of the speech segmentation procedure and the inaccuracy of the training data caused by that segmentation lead to significant performance degradation. To cope with this, a connected-vowel recognition method using the Head-Body-Tail (HBT) model is proposed. The HBT model, which is appropriate for relatively small-vocabulary tasks, reflects the co-articulation effect efficiently. The seven vowels are merged into three classes with similar lip shapes, and the system is optimized by employing a class-dependent semi-continuous HMM (SCHMM) structure. Additionally, at both ends of each word, where variation is large, an 8-component Gaussian mixture model is used directly to improve representational power. Although the proposed method shows performance similar to that of the continuous HMM (CHMM) based on the HBT structure, the number of parameters is reduced by 33.92%, making the method computationally efficient enough for real-time operation.
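
The vowel-to-lip-shape class merging can be sketched with off-the-shelf GMM-HMMs. The snippet below is an illustrative approximation only: hmmlearn does not implement the paper's class-dependent SCHMM or the Head-Body-Tail structure, and the vowel grouping, state count, and 8-mixture setting are assumptions taken loosely from the abstract.

```python
# Sketch: merge vowels into 3 lip-shape classes, train one GMM-HMM per class
# on MFCC segments, and classify a new segment by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GMMHMM

# hypothetical grouping of 7 vowels into 3 lip-shape classes
LIP_CLASS = {"a": 0, "ya": 0,          # open
             "o": 1, "u": 1, "yo": 1,  # rounded
             "i": 2, "e": 2}           # spread

def train_class_models(features_by_vowel, n_states=3, n_mix=8):
    """features_by_vowel: {vowel: list of (T, D) MFCC arrays}."""
    models = {}
    for cls in set(LIP_CLASS.values()):
        segs = [f for v, feats in features_by_vowel.items()
                if LIP_CLASS[v] == cls for f in feats]
        X = np.vstack(segs)
        lengths = [len(s) for s in segs]
        m = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type="diag")
        m.fit(X, lengths)
        models[cls] = m
    return models

def classify(models, mfcc_seq):
    """Return the lip-shape class whose model gives the highest log-likelihood."""
    return max(models, key=lambda c: models[c].score(mfcc_seq))
```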

Authentication Performance Optimization for Smart-phone based Multimodal Biometrics (스마트폰 환경의 인증 성능 최적화를 위한 다중 생체인식 융합 기법 연구)

  • Moon, Hyeon-Joon;Lee, Min-Hyung;Jeong, Kang-Hun
    • Journal of Digital Convergence / v.13 no.6 / pp.151-156 / 2015
  • In this paper, we propose a personal multimodal biometric authentication system based on face detection, face recognition, and speaker verification for the smart-phone environment. The proposed system detects the face with the Modified Census Transform (MCT) algorithm and then locates the eye positions in the face using a Gabor filter and the k-means algorithm. After preprocessing the detected face and eye positions, face recognition is performed with the Linear Discriminant Analysis (LDA) algorithm. In the speaker verification stage, Mel-Frequency Cepstral Coefficient (MFCC) features are extracted from the endpoint-detected speech data, and the speaker is verified with the Dynamic Time Warping (DTW) algorithm because the speech features change in real time. The proposed multimodal biometric system fuses the face and speech features (optimizing the internal operations with an integer representation) for smart-phone-based real-time face detection, face recognition, and speaker verification. By fusing the two modalities, the system can achieve reasonable and reliable performance.
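
The MFCC-plus-DTW speaker-verification step can be sketched with librosa. The snippet below is a generic illustration, not the paper's implementation: the trimming-based endpoint detection, the distance threshold, and the file paths are assumptions, and the integer-optimized fusion with the face score is not shown.

```python
# Sketch: extract MFCCs from endpoint-trimmed speech and compare two
# utterances with Dynamic Time Warping; accept if the normalised cost is low.
import librosa
import numpy as np

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    y, _ = librosa.effects.trim(y)                 # rough endpoint detection
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def dtw_distance(feat_a, feat_b):
    # accumulated cost of the optimal alignment path, length-normalised
    D, wp = librosa.sequence.dtw(X=feat_a, Y=feat_b, metric="euclidean")
    return D[-1, -1] / len(wp)

def verify(enrolled_wav, probe_wav, threshold=50.0):   # threshold is a guess
    d = dtw_distance(mfcc_features(enrolled_wav), mfcc_features(probe_wav))
    return d < threshold
```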

Noise Reduction using Spectral Subtraction in the Discrete Wavelet Transform Domain (이산 웨이브렛 변환영역에서의 스펙트럼 차감법을 이용한 잡음제거)

  • 김현기;이상운;홍재근
    • Journal of Korea Multimedia Society / v.4 no.4 / pp.306-315 / 2001
  • Among noise reduction methods applied to noisy speech for recognition in noisy environments, the conventional spectral subtraction method has the disadvantage that distinguishing noise from speech is difficult and the noise characteristics cannot be estimated accurately, while noise reduction in the wavelet transform domain suffers from signal loss in the high-frequency bands. To compensate for these disadvantages, this paper proposes a spectral subtraction method in the discrete wavelet transform domain in which speech and non-speech intervals are distinguished by the standard deviation of the wavelet coefficients and the signal is divided into three scales. The proposed method accurately extracts the noise characteristics through endpoint detection and band division before applying spectral subtraction. Experiments show that the proposed method outperforms noise reduction using conventional spectral subtraction and the wavelet transform in terms of signal-to-noise ratio and Itakura-Saito distance.
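
The band-wise processing can be sketched as follows: a three-level DWT splits the signal into sub-bands, frames whose wavelet-coefficient standard deviation is low are treated as non-speech, and the noise spectrum estimated from them is subtracted in each band. The frame size, wavelet, and thresholding rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: 3-scale DWT, per-band noise estimation from low-variance frames,
# magnitude spectral subtraction, and reconstruction with the inverse DWT.
import numpy as np
import pywt

def subtract_band(coeffs, frame=256, k=1.5):
    n_full = (len(coeffs) // frame) * frame
    frames = coeffs[:n_full].reshape(-1, frame)
    if len(frames) == 0:
        return coeffs
    stds = frames.std(axis=1)
    noise = frames[stds < k * stds.min()]          # assumed non-speech frames
    if len(noise) == 0:
        return coeffs
    noise_mag = np.abs(np.fft.rfft(noise, axis=1)).mean(axis=0)
    spec = np.fft.rfft(frames, axis=1)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # spectral subtraction
    clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame, axis=1)
    return np.concatenate([clean.ravel(), coeffs[n_full:]])

def denoise(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # 3-scale decomposition
    coeffs = [subtract_band(c) for c in coeffs]
    return pywt.waverec(coeffs, wavelet)
```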

A Study on the Educational Uses of Smart Speaker (스마트 스피커의 교육적 활용에 관한 연구)

  • Chang, Jiyeun
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.33-39 / 2019
  • Edutech, which combines education and information technology, is in the spotlight, and core technologies of the 4th Industrial Revolution are actively used in education. Students use AI-based learning platforms to self-diagnose their needs and receive personalized training online through cloud learning platforms. Recently, a new educational medium, the smart speaker, which combines artificial intelligence and voice recognition technology, has emerged and provides various educational services. The purpose of this study is to suggest ways to use smart speakers educationally in order to overcome the limitations of existing education. To this end, the concept and characteristics of smart speakers were analyzed, implications were derived by analyzing the content that smart speakers provide, and the problems of using smart speakers were considered.

A Study on Verification of Back TranScription(BTS)-based Data Construction (Back TranScription(BTS)기반 데이터 구축 검증 연구)

  • Park, Chanjun;Seo, Jaehyung;Lee, Seolhwa;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.109-117 / 2021
  • Recently, the use of speech-based interfaces as a means of human-computer interaction (HCI) has been increasing, and interest in post-processors that correct errors in speech recognition results is growing accordingly. However, building a sequence-to-sequence (S2S) speech recognition post-processor requires a large amount of human labor for data construction. To alleviate the limitations of the existing construction methodology, a new data construction method called Back TranScription (BTS) was proposed. BTS combines TTS and STT technology to create a pseudo-parallel corpus; it eliminates the role of the phonetic transcriptor and can automatically generate vast amounts of training data, saving cost. Extending the existing BTS research, this paper verifies through experiments that data should be constructed with text style and domain taken into account rather than without any criteria.
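
The BTS pipeline itself is simple to sketch: each source sentence is synthesized with a TTS engine, the audio is re-recognized with an STT engine, and the (noisy recognition result, original text) pairs form the pseudo-parallel corpus for training a post-processor. In the sketch below, `synthesize` and `transcribe` are hypothetical stand-ins for whatever TTS/STT engines are available; the paper's finding is that the source texts should be grouped by style and domain before this step.

```python
# Sketch of Back TranScription (BTS): TTS then STT over gold text produces
# (hypothesis, reference) pairs without a human phonetic transcriptor.
from typing import Callable, Iterable, List, Tuple

def build_bts_corpus(texts: Iterable[str],
                     synthesize: Callable[[str], str],   # text -> wav path (hypothetical TTS)
                     transcribe: Callable[[str], str],   # wav path -> text (hypothetical STT)
                     ) -> List[Tuple[str, str]]:
    corpus = []
    for ref in texts:
        wav = synthesize(ref)          # TTS: gold text -> speech
        hyp = transcribe(wav)          # STT: speech -> noisy recognition result
        corpus.append((hyp, ref))      # post-processor learns hyp -> ref
    return corpus
```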

AI Advisor for Response of Disaster Safety in Risk Society (위험사회 재난 안전 분야 대응을 위한 AI 조력자)

  • Lee, Yong-Hak;Kang, Yunhee;Lee, Min-Ho;Park, Seong-Ho;Kang, Myung-Ju
    • Journal of Platform Technology / v.8 no.3 / pp.22-29 / 2020
  • The 4th Industrial Revolution is progressing in each country as a megatrend that drives various directions of technological convergence in the social and economic fields, beyond the initial manufacturing innovation. Epidemics of infectious diseases such as COVID-19 are shifting economic activity toward digital, non-face-to-face business, and the use of AI and big-data technology for personalized services is essential for this move online. In this paper, we analyze cases focusing on the application of artificial intelligence, a key technology for the effective implementation of the Digital New Deal promoted by the government, together with the major technological characteristics of the 4th Industrial Revolution, and we describe use cases in the field of disaster response. In the disaster response use case, an AI assistant suggests appropriate countermeasures according to the status of the person reporting an emergency call. To this end, the AI assistant analyzes the speech recognition data and classifies the disaster type from the converted text to support an adaptive response.
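
The final step, classifying the disaster type from the converted text, could be prototyped with a generic text classifier such as the TF-IDF plus logistic-regression sketch below. The example utterances and labels are invented for illustration; the paper does not specify this particular model.

```python
# Sketch: classify the disaster type of an STT transcript with TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["the building is on fire and smoke is everywhere",
               "a car crashed into the guardrail on the highway",
               "the river is overflowing into the street"]
train_labels = ["fire", "traffic", "flood"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

stt_output = "there is a lot of smoke coming from the apartment"
print(clf.predict([stt_output])[0])   # expected: "fire"
```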

Crack Detection of Rotating Blade using Hidden Markov Model (회전 블레이드의 크랙 발생 예측을 위한 은닉 마르코프모델을 이용한 해석)

  • Lee, Seung-Kyu;Yoo, Hong-Hee
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2009.10a / pp.99-105 / 2009
  • A crack detection method for a rotating blade is suggested in this paper. The rotating blade is modeled as a cantilever beam connected to a rotating hub. The existence and location of a crack can be recognized from the vertical response of the end tip of the rotating cantilever beam by employing a Discrete Hidden Markov Model (DHMM) and Empirical Mode Decomposition (EMD). The DHMM is a well-known stochastic method in the field of speech recognition, and recent research has shown that it can also be used for machine health monitoring. EMD, the method proposed by Huang et al. that decomposes a signal into several mono-component signals, is used here to extract the feature vectors needed to develop the DHMM. The developed DHMMs showed good crack detection ability for the rotating blade.
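
The feature-extraction chain can be sketched as follows: the tip response is decomposed by EMD into intrinsic mode functions, simple statistics of the leading IMFs are vector-quantized into a discrete symbol sequence, and one discrete HMM per condition (healthy or cracked) is trained and compared by log-likelihood. The window size, codebook size, and statistics used below are illustrative assumptions, not the paper's settings.

```python
# Sketch: EMD -> IMF statistics -> vector quantisation -> discrete HMM.
import numpy as np
from PyEMD import EMD
from sklearn.cluster import KMeans
from hmmlearn.hmm import CategoricalHMM   # called MultinomialHMM in older hmmlearn

def symbol_sequence(signal, codebook, win=256, n_imf=3):
    """signal: 1-D numpy array; codebook: fitted KMeans over the same feature layout."""
    imfs = EMD()(signal)[:n_imf]                       # leading intrinsic mode functions
    feats = []
    for start in range(0, signal.size - win + 1, win):
        seg = imfs[:, start:start + win]
        feats.append(np.concatenate([seg.std(axis=1), np.abs(seg).mean(axis=1)]))
    return codebook.predict(np.array(feats))           # discrete observation symbols

def train_dhmm(symbol_seqs, n_states=4):
    X = np.concatenate(symbol_seqs).reshape(-1, 1)
    lengths = [len(s) for s in symbol_seqs]
    model = CategoricalHMM(n_components=n_states, n_iter=50)
    model.fit(X, lengths)
    return model

# usage: codebook = KMeans(n_clusters=16).fit(all_training_feature_vectors);
# train one DHMM on healthy-blade sequences and one on cracked-blade sequences,
# then label a new response by whichever model gives the higher .score().
```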

Research on Emotional Factors and Voice Trend by Country to be considered in Designing AI's Voice - An analysis of interview with experts in Finland and Norway (AI의 음성 디자인에서 고려해야 할 감성적 요소 및 국가별 음성 트랜드에 관한 연구 - 핀란드와 노르웨이의 전문가 인뎁스 인터뷰를 중심으로)

  • Namkung, Kiechan
    • Journal of the Korea Convergence Society / v.11 no.9 / pp.91-97 / 2020
  • The use of voice-based interfaces that can interact with users is increasing as AI technology develops. To date, however, most research on voice-based interfaces has been technical in nature, focused on areas such as improving the accuracy of speech recognition. As a result, the voices of most voice-based interfaces are uniform and do not provide users with differentiated sensibilities. The purpose of this study is to add emotional factors suitable for an AI interface. To this end, we derived the emotional factors that should be considered in designing a voice interface. In addition, we examined voice trends that differ from country to country. For this study, we conducted interviews with voice industry experts from Finland and Norway, countries that use their own independent languages.

Development of a Web-based Presentation Attitude Correction Program Centered on Analyzing Facial Features of Videos through Coordinate Calculation (좌표계산을 통해 동영상의 안면 특징점 분석을 중심으로 한 웹 기반 발표 태도 교정 프로그램 개발)

  • Kwon, Kihyeon;An, Suho;Park, Chan Jung
    • The Journal of the Korea Contents Association / v.22 no.2 / pp.10-21 / 2022
  • There are few automated methods, other than observation by colleagues or professors, for improving formal presentation attitudes in settings such as job interviews or project presentations at a company. Previous studies reported that a speaker's stable speech and gaze handling affect delivery in a presentation, and that proper feedback on one's presentation increases the presenter's ability to present. In this paper, considering these positive aspects of correction, we developed a program that intelligently corrects the poor presentation habits and attitudes of college students through facial analysis of videos, and we analyzed the proposed program's performance. The program was developed as a web-based system that checks for the use of filler words and performs face recognition and textualization of the presentation contents. To this end, an artificial-intelligence classification model was developed; after extracting objects from the video, facial feature points were recognized based on their coordinates. Then, using 4,000 facial data samples, the performance of the proposed algorithm was compared with facial recognition using a Teachable Machine. The program can help presenters by correcting their presentation attitude.
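
The coordinate-based landmark extraction can be sketched with OpenCV and MediaPipe FaceMesh, as below. This is not the authors' implementation: the sampling rate and landmark set are illustrative assumptions, and the per-frame coordinate vectors are simply the kind of input a downstream attitude classifier could use.

```python
# Sketch: read video frames, detect facial feature points, and collect their
# normalised (x, y) coordinates as a sequence for later classification.
import cv2
import mediapipe as mp
import numpy as np

def landmark_sequence(video_path, every_n_frames=5):
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    seq, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                pts = result.multi_face_landmarks[0].landmark
                seq.append([(p.x, p.y) for p in pts])   # normalised coordinates
        idx += 1
    cap.release()
    mesh.close()
    return np.array(seq)   # shape: (frames_used, n_landmarks, 2)
```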