• Title/Summary/Keyword: Text/Speech Conversion (텍스트/음성변환)

75 search results

Automatic Dubbing System for Remote Personalized Basketball Feedback Video (원격 개인 농구 기술 피드백 영상 자동 더빙 시스템)

  • Jong-Uk Lim;Ray Kim;Young Yoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2024.05a
    • /
    • pp.466-467
    • /
    • 2024
  • This paper proposes a system that automatically applies dubbing to personal basketball skill analysis and feedback videos produced by professional skill trainers. The system enables rapid automatic translation and dubbing into multiple languages through glossary-based translation of basketball terminology, comparative analysis of speech-to-text models, and an algorithm that synchronizes the video with the dubbing track, thereby supporting communication between players and coaches without language barriers. Through this automatic dubbing technology, the study aims to improve the efficiency and quality of remote basketball instruction and to broaden its reach.
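
The glossary-based translation step mentioned in the abstract can be sketched in a minimal way: protect glossary terms with placeholders before sending text through a generic machine-translation engine, then restore the fixed translations afterward. The glossary entries, placeholder scheme, and function names below are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical basketball glossary (entries are illustrative, not the paper's).
GLOSSARY = {"crossover": "크로스오버", "box out": "박스 아웃"}

def protect_terms(text, glossary):
    """Swap glossary terms for placeholders so a generic MT engine cannot mistranslate them."""
    mapping = {}
    # Longer terms first, so "box out" wins over any shorter overlapping entry.
    for i, term in enumerate(sorted(glossary, key=len, reverse=True)):
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            token = f"__TERM{i}__"
            text = pattern.sub(token, text)
            mapping[token] = glossary[term]
    return text, mapping

def restore_terms(text, mapping):
    """After machine translation, replace each placeholder with the glossary translation."""
    for token, translation in mapping.items():
        text = text.replace(token, translation)
    return text
```

In a full pipeline, the output of `protect_terms` would be passed through a general-purpose MT engine before `restore_terms` is applied to the translated result.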

Deep learning-based speech recognition for Korean elderly speech data including dementia patients (치매 환자를 포함한 한국 노인 음성 데이터 딥러닝 기반 음성인식)

  • Jeonghyeon Mun;Joonseo Kang;Kiwoong Kim;Jongbin Bae;Hyeonjun Lee;Changwon Lim
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.1
    • /
    • pp.33-48
    • /
    • 2023
  • In this paper, we consider automatic speech recognition (ASR) for Korean speech data in which elderly persons randomly speak a sequence of words, such as animals and vegetables, for one minute. Most of the speakers are over 60 years old, and some are dementia patients. The goal is to compare deep-learning-based ASR models on such data and find models with good performance. ASR is a technology by which computers recognize spoken words and convert them into written text. Recently, many deep-learning models with good performance have been developed for ASR, but their training data mostly consist of sentences, spoken by people who in most cases pronounce them accurately. In our data, by contrast, most speakers are over the age of 60, often have incorrect pronunciation, and say random series of words rather than sentences. Therefore, pre-trained models built on typical training data may not be suitable for our data, and we instead train deep-learning-based ASR models from scratch. We also apply data augmentation methods because of the small data size.
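
The abstract does not specify which augmentation methods were used; two common waveform-level choices for small speech corpora are random time shifts and low-amplitude additive noise. The following is a minimal pure-Python sketch under that assumption, not the authors' pipeline.

```python
import random

def time_shift(samples, max_shift):
    """Randomly shift the waveform in time, padding with silence (zeros)."""
    shift = random.randint(-max_shift, max_shift)
    if shift >= 0:
        return [0.0] * shift + samples[:len(samples) - shift]
    return samples[-shift:] + [0.0] * (-shift)

def add_noise(samples, noise_level):
    """Mix in low-amplitude Gaussian noise to simulate recording variability."""
    return [s + random.gauss(0.0, noise_level) for s in samples]

def augment(samples, copies=3, max_shift=160, noise_level=0.005):
    """Produce several perturbed copies of one utterance to enlarge a small corpus."""
    out = []
    for _ in range(copies):
        shifted = time_shift(samples, max_shift)
        out.append(add_noise(shifted, noise_level))
    return out
```

Each call returns perturbed copies of the same length as the input utterance, so the augmented data can be fed to the same model input as the originals.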

A Tree-based Reduction of Speech DB in a Large Corpus-based Korean TTS (대용량 한국어 TTS의 결정트리기반 음성 DB 감축 방안)

  • Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.7
    • /
    • pp.91-98
    • /
    • 2010
  • Large corpus-based concatenative Text-to-Speech (TTS) systems can generate natural synthetic speech without additional signal processing. Because improving the naturalness, personality, speaking style, and emotional range of synthetic speech requires a larger speech DB, it is necessary to prune redundant speech segments from a large segment DB. In this paper, we propose a new method for constructing a segmental speech DB for a Korean TTS system, based on a clustering algorithm that downsizes the DB. For the performance test, synthetic speech was generated with a Korean TTS system consisting of a language processing module, prosody processing module, segment selection module, speech concatenation module, and segmental speech DB. An MOS test was then conducted on sets of synthetic speech generated from four different segmental speech DBs, constructed by combining the CM1 (or CM2) tree clustering method with the full (or reduced) DB. Experimental results show that the proposed method reduces the size of the speech DB by 23% while achieving a high MOS in the perception test, so it can be applied to build a compact TTS system.
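
The CM1/CM2 tree clustering details are not given in the abstract, but the pruning idea — keep one representative segment per cluster and drop the redundant members — can be sketched generically. The function names and the medoid criterion below are assumptions for illustration.

```python
def medoid(cluster, distance):
    """Segment with the smallest total distance to the rest of its cluster."""
    return min(cluster, key=lambda seg: sum(distance(seg, other) for other in cluster))

def prune_segment_db(clusters, distance):
    """Keep one representative segment per cluster; redundant members are dropped."""
    return [medoid(cluster, distance) for cluster in clusters]
```

Here `clusters` would come from whatever tree-based clustering groups acoustically similar segments; the DB size after pruning equals the number of clusters.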

A Design of the Emergency-notification and Driver-response Confirmation System (EDCS) for an autonomous vehicle safety (자율차량 안전을 위한 긴급상황 알림 및 운전자 반응 확인 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.2
    • /
    • pp.134-139
    • /
    • 2021
  • Currently, the autonomous vehicle market is commercializing level 3 autonomous vehicles, which still require the driver's attention. Beyond level 3, the most notable requirement for level 4 autonomous vehicles is vehicle safety, because unlike level 3, vehicles at level 4 and above must drive autonomously even when the driver is inattentive. Therefore, in this paper we propose the Emergency-notification and Driver-response Confirmation System (EDCS) for autonomous vehicle safety, which notifies the driver of an emergency and recognizes the driver's reaction even when the driver is inattentive. The EDCS uses an emergency-situation delivery module to convert the emergency into text and deliver it to the driver as speech, while a driver-response confirmation module recognizes the driver's reaction to the emergency and decides whether to hand control over to the driver. In our experiments, the HMM in the emergency delivery module learned speech 25% faster than an RNN and 42.86% faster than an LSTM. The Tacotron2 model in the driver-response confirmation module converted text to speech about 20 ms faster than Deep Voice and 50 ms faster than the DeepMind model. Therefore, the proposed system can train its neural network models efficiently and check the driver's response in real time.

Ship's Maneuvering and Winch Control System with Voice Instruction Based Learning (음성지시에 의한 선박 조종 및 윈치 제어 시스템)

  • Seo, Ki-Yeol;Park, Gyei-Kark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.12 no.6
    • /
    • pp.517-523
    • /
    • 2002
  • In this paper, we propose a system that applies the Voice Instruction Based Learning (VIBL) method, which adds speech recognition to the Linguistic Instruction Based Learning (LIBL) method modeled on the human way of learning through natural language, to a ship's steering system, MERCS, and winch appliances. VIBL replaces the process in which a linguistic instruction, such as an officer's steering order, is carried out via an able seaman who operates the steering gear, MERCS, and winches. As a concrete learning method, we implement an intelligent steering control system based on an able seaman's steering model, presenting suitable meaning elements and evaluation rules for the ship's steering system and responding more effectively to the commander's voice instructions using fuzzy inference rules. We also implement a system that recognizes the commander's voice orders and controls the MERCS and winch appliances. We built a steering model based on an able seaman's experience; presented the rudder angle, compass-bearing arrival time, and evaluation rules as meaning elements of the steady state for the intelligent steering system; and corrected the steering model rules using techniques that recognize the commander's voice instruction, convert it to text, and apply fuzzy inference. Finally, we applied the VIBL method to a speech-recognition ship-control simulator and confirmed its effectiveness.

Multifaceted Evaluation Methodology for AI Interview Candidates - Integration of Facial Recognition, Voice Analysis, and Natural Language Processing (AI면접 대상자에 대한 다면적 평가방법론 -얼굴인식, 음성분석, 자연어처리 영역의 융합)

  • Hyunwook Ji;Sangjin Lee;Seongmin Mun;Jaeyeol Lee;Dongeun Lee;Kyusang Lim
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.55-58
    • /
    • 2024
  • Recently, adoption of AI interview systems by companies has been increasing, and there is also much controversy over their effectiveness. In this paper, we evaluate the appropriateness of a multifaceted methodology for analyzing interview candidates by implementing the evaluation in three areas: vision, voice, and natural language processing. First, in the visual domain, we used a convolutional neural network (CNN) to recognize six emotions from the candidate's face and derived, as a time series, whether the candidate was gazing at the camera, focusing on the candidate's attitude toward the interview and especially the emotions revealed in the face. Second, because visual cues alone cannot fully capture the candidate's attitude, we converted the candidate's voice into frequency-domain features and trained a bidirectional LSTM to extract six emotions from the voice. Third, to understand the candidate's state from the contextual meaning of their statements, we converted speech to text using STT (Speech-to-Text) and analyzed word frequencies to identify the candidate's language habits. In addition, we applied the KoBERT model for sentiment analysis of the candidate's statements and constructed objective evaluation metrics to assess the candidate's personality, attitude, and understanding of the job. Our analysis suggests that the accuracy of the visual component of the multifaceted evaluation system was largely verified objectively. For emotion analysis from voice, because candidates do not reveal every type of emotion within the limited time and tend to speak in a similar tone, the frequencies indicating particular emotions were somewhat concentrated. Finally, for the NLP domain, we concluded that there is a growing need for an NLP analysis model that can understand the overall context and tone of the candidate's statements, beyond speaking style and the frequency of particular words.


A Study on Quantitative Evaluation Method for STT Engine Accuracy based on Korean Characteristics (한국어 특성 기반의 STT 엔진 정확도를 위한 정량적 평가방법 연구)

  • Min, So-Yeon;Lee, Kwang-Hyong;Lee, Dong-Seon;Ryu, Dong-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.699-707
    • /
    • 2020
  • With the development of deep learning technology, speech-processing technologies such as STT (Speech-to-Text), TTS (Text-to-Speech), chatbots, and intelligent personal assistants are being applied in various areas. In particular, STT is a voice-based service that converts human speech to text, so it can be applied to many IT services, and many organizations, from private enterprises to public institutions, are attempting to introduce it. On the other hand, in contrast to general IT solutions that can be evaluated quantitatively, the standards and methods for evaluating STT engine accuracy are ambiguous and do not consider the characteristics of the Korean language, which makes quantitative evaluation standards difficult to apply. This study provides a guide for evaluating STT conversion performance based on the characteristics of Korean, so that engine manufacturers can perform STT conversion suited to Korean while the market carries out more accurate evaluations. In our experiment, evaluation was 35% more accurate than with existing methods.
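
The paper's own metric is not detailed in the abstract; a common quantitative baseline for STT accuracy is the character error rate (CER) computed by edit distance. For Korean, each written character is a syllable block, so character-level distance is effectively syllable-level. This sketch is a generic baseline under that assumption, not the paper's proposed method.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    """Character error rate; for Korean text, each character is one syllable block."""
    return edit_distance(ref, hyp) / len(ref)
```

A Korean-specific evaluation such as the paper's would refine this baseline, for example by handling spacing variation or decomposing syllables into jamo.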

Prototype Design and Development of Online Recruitment System Based on Social Media and Video Interview Analysis (소셜미디어 및 면접 영상 분석 기반 온라인 채용지원시스템 프로토타입 설계 및 구현)

  • Cho, Jinhyung;Kang, Hwansoo;Yoo, Woochang;Park, Kyutae
    • Journal of Digital Convergence
    • /
    • v.19 no.3
    • /
    • pp.203-209
    • /
    • 2021
  • In this study, a prototype design model was proposed for developing an online recruitment system that performs multi-dimensional data crawling and social media analysis and validates text information and video interviews in the job application process. The study includes a comparative analysis process using text mining to verify the authenticity of job application paperwork and to hire and allocate workers effectively based on potential job capability. With the prototype system, we conducted performance tests and analyzed key performance indicators such as text-mining accuracy and the recognition rate of the interview STT (speech-to-text) function. If commercialized based on the design specifications and prototype development results derived from this study, the system may be expected to serve as the intelligent online recruitment technology required in future public and private recruitment markets.

Object Detection Algorithm for Explaining Products to the Visually Impaired (시각장애인에게 상품을 안내하기 위한 객체 식별 알고리즘)

  • Park, Dong-Yeon;Lim, Soon-Bum
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.10
    • /
    • pp.1-10
    • /
    • 2022
  • Visually impaired people have great difficulty using retail stores because products carry no braille information and no other support system exists. In this paper, we propose a basic algorithm for a system that recognizes products in retail stores and describes them by voice. First, a deep learning model detects hand objects and product objects in the input image. Then, by comparing the coordinate information of the detected objects, it finds the product object that most overlaps a hand object. We treat this as the product selected by the user, and the system reads out the product's nutritional information via Text-to-Speech. The evaluation confirmed the high performance of the learning model, and the proposed algorithm can be used to build systems that support retail-store use by the visually impaired.
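
The "most overlapping" step above reduces to comparing intersection areas of bounding boxes. A minimal sketch of that selection logic, with assumed box format `(x1, y1, x2, y2)` and assumed function names, could look like:

```python
def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def select_product(hand_box, product_boxes):
    """Index of the product box overlapping the hand box the most, or None if no overlap."""
    areas = [intersection_area(hand_box, p) for p in product_boxes]
    best = max(range(len(areas)), key=areas.__getitem__)
    return best if areas[best] > 0 else None
```

The detector's outputs would feed `hand_box` and `product_boxes`; the returned index identifies which product's nutritional information to read out.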

A Design and Implementation of Speech Recognition and Synthetic Application for Hearing-Impairment

  • Kim, Woo-Lin;Ham, Hye-Won;Yun, Sang-Un;Lee, Won Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.105-110
    • /
    • 2021
  • In this paper, we design and implement an Android mobile application that helps hearing-impaired people communicate, based on STT (Speech-to-Text) and TTS (Text-to-Speech) APIs and the smartphone's accelerometer sensor. The application records what the hearing-impaired person's interlocutor is saying through the microphone, converts it to text using the STT API, and displays it to the hearing-impaired person. Conversely, when the hearing-impaired person enters text, the TTS API converts it into speech for the interlocutor. When the hearing-impaired person shakes the smartphone, an accelerometer-based background service launches the application. The application implemented in this paper lets hearing-impaired people communicate easily with others without using sign language, including in video calls.
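
The shake gesture that launches the app is typically detected by thresholding the accelerometer magnitude over a short window. The threshold, window, and peak count below are assumptions for illustration, not values from the paper.

```python
import math

def is_shake(accel_samples, threshold_g=2.5, min_peaks=3):
    """Detect a shake: count readings (ax, ay, az) whose magnitude exceeds a threshold."""
    peaks = sum(1 for ax, ay, az in accel_samples
                if math.sqrt(ax * ax + ay * ay + az * az) > threshold_g)
    return peaks >= min_peaks
```

On Android, the samples would come from a `SensorEventListener` on the accelerometer; a still phone reads roughly 1 g (gravity only), so ordinary handling stays well under the threshold.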