• Title/Summary/Keyword: STT (Speech-to-Text)


Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.81-88 / 2021
  • Recently, many videos, such as the online lecture videos prompted by COVID-19, have been produced. However, because of limited working hours and costs, only a fraction of these videos carry subtitles, which is emerging as a barrier to information access for deaf viewers. In this paper, we develop a system that automatically generates subtitles using speech recognition, separating sentences by sentence endings and timing, to reduce the time and labor required for subtitle generation.
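
As a concrete illustration of the sentence-separation idea, the sketch below splits word-level STT output into subtitle cues at Korean sentence endings or long pauses. The Word structure, the ending list, and the pause threshold are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch: cut a new subtitle cue at a sentence-ending token or a
# long pause, given word-level STT output with timestamps (format assumed).
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

SENTENCE_ENDINGS = ("다.", "요.", "까?", "죠.")  # example Korean endings
MAX_GAP = 1.0  # start a new cue after a pause longer than this (seconds)

def to_cues(words: list[Word]) -> list[tuple[float, float, str]]:
    cues, buf, start = [], [], None
    for i, w in enumerate(words):
        if start is None:
            start = w.start
        buf.append(w.text)
        gap = words[i + 1].start - w.end if i + 1 < len(words) else 0.0
        if w.text.endswith(SENTENCE_ENDINGS) or gap > MAX_GAP:
            cues.append((start, w.end, " ".join(buf)))
            buf, start = [], None
    if buf:
        cues.append((start, words[-1].end, " ".join(buf)))
    return cues
```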

Grammatical Quality Estimation for Error Correction in Automatic Speech Recognition (문법성 품질 예측에 기반한 음성 인식 오류 교정)

  • Mintaek Seo;Seung-Hoon Na;Minsoo Na;Maengsik Choi;Chunghee Lee
    • Annual Conference on Human and Language Technology / 2022.10a / pp.608-612 / 2022
  • Since the rise of deep learning, many fields have used it to solve previously difficult tasks and provide convenience to users, but it is still hard to deliver ideal services with deep learning alone. In particular, Speech-to-Text (STT), which converts speech into text and broadens the ways the speech modality can be used, produces errors because its output sentences fall short of the ideal. In this paper, we recast the correction of STT output as grammatical error correction and apply, to Korean, a model that estimates the quality of each token so that the correct tokens can be combined at the final stage, confirming an improvement in performance.
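
To make the per-token idea concrete, here is an illustrative sketch: each token in an STT hypothesis receives a grammaticality score, and low-scoring tokens are flagged for correction. The scoring function is a placeholder; the paper trains a dedicated quality-estimation model, which is not reproduced here.

```python
# Flag STT tokens whose estimated quality falls below a threshold.
from typing import Callable

def flag_tokens(tokens: list[str],
                score: Callable[[list[str], int], float],
                threshold: float = 0.5) -> list[tuple[str, bool]]:
    """Return (token, needs_correction) pairs based on per-token quality."""
    return [(tok, score(tokens, i) < threshold)
            for i, tok in enumerate(tokens)]

# Example with a trivial stand-in scorer that distrusts one-character tokens:
# flag_tokens("나 는 학교 에 갔다".split(),
#             lambda ts, i: 0.2 if len(ts[i]) == 1 else 0.9)
```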


Voice Recognition Speech Correction Application Using Big Data Analysis (빅데이터 분석을 활용한 음성 인식 스피치 교정 애플리케이션)

  • Kim, Han-Kyeol;Kim, Do-Woo;Lim, Sae-Myung;Hong, Du-Pyo
    • Annual Conference of KIPS / 2019.10a / pp.533-535 / 2019
  • Recently, competition for jobs has intensified as youth unemployment rises, and a growing number of companies are weighting interviews more heavily in their hiring processes. Large companies have also introduced AI interviews to make interviewing more objective. These changes have increased the cost burden on job seekers preparing for interviews. Meanwhile, speech recognition and natural language processing are being actively developed in the AI field. This paper uses STT (Speech To Text) and TTS (Text To Speech) to convert recorded interview speech into text and interview questions into speech. It also analyzes the morphemes of interview sentences using natural language processing and the KNU sentiment dictionary, and visualizes information on positive and negative words.
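
A minimal sketch of the dictionary-based sentiment step follows: tokenize interview text into morphemes and tally positive and negative words. KoNLPy's Okt tagger is a real library; the KNU dictionary is represented here as a plain {word: polarity} dict, a simplification of the actual KnuSentiLex files.

```python
# Count positive/negative morphemes in a transcript using a sentiment lexicon.
from konlpy.tag import Okt

KNU = {"성실": 2, "열정": 2, "불안": -2, "실패": -1}  # toy subset of the lexicon

def sentiment_counts(text: str) -> dict:
    okt = Okt()
    pos = neg = 0
    for morph in okt.morphs(text):
        polarity = KNU.get(morph, 0)
        if polarity > 0:
            pos += 1
        elif polarity < 0:
            neg += 1
    return {"positive": pos, "negative": neg}
```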

A Design and Implementation of Online Exhibition Application for Disabled Artists

  • Seung Gyeom Kim;Ha Ram Kang;Tae Hun Kim;Jun Hyeok Lee;Won Joo Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.77-84 / 2024
  • In this paper, we design and implement an Android-based online exhibition application that can showcase the works of disabled artists. The application is designed for the convenience of disabled users, providing STT (Speech-to-Text) and TTS (Text-to-Speech) features for visually and hearing impaired individuals. To ensure that only authenticated disabled artists can exhibit their works, registration requires disability certification using disability certificates and registration numbers. The database storing the artists' personal information and information about art pieces is implemented in MySQL, and the server module transmits data in JSON format through a REST API. Because art piece data is large, it is stored in Firebase Storage, removing data size limitations on the server. This application can alleviate the lack of exhibition space for disabled artists and their lack of communication with the general public.
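
A hedged sketch of the server-side pattern described above: metadata lives in a relational table and is served as JSON over a REST API, while the large image file lives in Firebase Storage and is referenced by URL. The endpoint, field names, and Flask framework choice are illustrative assumptions, not details from the paper.

```python
# Serve artwork metadata as JSON; the image itself is fetched from Firebase
# Storage by URL, so large files never pass through this server.
from flask import Flask, jsonify

app = Flask(__name__)

ARTWORKS = {  # stand-in for a MySQL table
    1: {"title": "Blue Morning", "artist": "Hong Gil-dong",
        "image_url": "https://firebasestorage.googleapis.com/.../artwork1.jpg"},
}

@app.route("/artworks/<int:artwork_id>")
def get_artwork(artwork_id: int):
    art = ARTWORKS.get(artwork_id)
    if art is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(art)  # the client downloads the image from the URL
```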

A Method of Automated Quality Evaluation for Voice-Based Consultation (음성 기반 상담의 품질 평가를 위한 자동화 기법)

  • Lee, Keonsoo;Kim, Jung-Yeon
    • Journal of Internet Computing and Services / v.22 no.2 / pp.69-75 / 2021
  • In a contact-free society, online services are becoming more important than traditional offline services, and the role of the contact center, which handles customer relationship management (CRM), is increasingly essential. To support CRM tasks and improve their effectiveness, process automation techniques need to be applied. Quality assurance (QA) is a typical time- and resource-consuming process that is well suited to automation. In this paper, a method for the automatic quality evaluation of voice-based consultations is proposed. First, the speech in a consultation is transformed into text by speech recognition. Then a quantitative evaluation is performed against QA metrics, including checking the required elements of the opening and closing mentions, whether mandatory information was requested, and the listening and speaking attitude of the agent. 92.7% of the automated evaluations matched the results produced by human experts, and the mismatches were found to be caused mainly by misrecognized Speech-to-Text (STT) results. Given reliable STT results, the proposed method can be employed to improve the efficiency of the QA process in contact centers.
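
The rule-based portion of such an evaluation can be sketched as follows: given an STT transcript, check whether required opening/closing phrases and mandatory information requests appear. The phrase lists are illustrative assumptions, not the paper's actual metric definitions.

```python
# Pass/fail each QA item if any of its required phrases occurs in the transcript.
REQUIRED = {
    "opening_greeting": ["안녕하세요", "무엇을 도와드릴까요"],
    "identity_check": ["성함", "생년월일"],
    "closing_mention": ["감사합니다", "좋은 하루"],
}

def evaluate(transcript: str) -> dict:
    """Return a pass/fail result per QA checklist item."""
    return {item: any(p in transcript for p in phrases)
            for item, phrases in REQUIRED.items()}

# evaluate("안녕하세요, 고객님 성함 확인하겠습니다. 감사합니다.")
# -> {'opening_greeting': True, 'identity_check': True, 'closing_mention': True}
```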

A Child Emotion Analysis System using Text Mining and Method for Constructing a Children's Emotion Dictionary (텍스트마이닝 기반 아동 감정 분석 시스템 및 아동용 감정 사전 구축 방안)

  • Young-Jun Park;Sun-Young Kim;Yo-Han Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.545-550 / 2024
  • In a society undergoing rapid change, modern individuals are facing various stresses, and there's a noticeable increase in mental health treatments for children as well. For the psychological well-being of children, it's crucial to swiftly discern their emotional states. However, this proves challenging as young children often articulate their emotions using limited vocabulary. This paper aims to categorize children's psychological states into four emotions: depression, anxiety, loneliness, and aggression. We propose a method for constructing an emotion dictionary tailored for children based on assessments from child psychology experts.
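
As a toy sketch of the dictionary-based idea: map words that children actually use onto the four target emotions and score a text by counting matches. The dictionary entries below are invented examples; the paper builds the real mapping from child-psychology expert assessments.

```python
# Classify a child's utterance by tallying emotion-dictionary hits.
CHILD_EMOTION_DICT = {
    "혼자": "loneliness", "무서워": "anxiety",
    "싫어": "aggression", "슬퍼": "depression",
}

def classify(text: str) -> str:
    scores = {"depression": 0, "anxiety": 0, "loneliness": 0, "aggression": 0}
    for word, emotion in CHILD_EMOTION_DICT.items():
        if word in text:
            scores[emotion] += 1
    return max(scores, key=scores.get)  # emotion with the most hits
```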

Designing Voice Interface for The Disabled (장애인을 위한 음성 인터페이스 설계)

  • Choi, Dong-Wook;Lee, Ji-Hoon;Moon, Nammee
    • Annual Conference of KIPS / 2019.05a / pp.697-699 / 2019
  • Although the use of electronic devices has increased with the development of IT, visually impaired and physically disabled people still have difficulty using them. This paper therefore proposes a voice interface that controls programs by voice using the Google Cloud API. Using the STT (Speech To Text) and TTS (Text To Speech) APIs provided by Google Cloud, the system is designed so that a user's speech, once recognized and converted to text, can control applications. This system is expected to make electronic devices much more convenient for disabled people to use, and we expect it to be useful for non-disabled people as well.
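
A minimal sketch of the Google Cloud STT call such an interface would build on follows, using the google-cloud-speech client library (credential setup omitted). The mapping from recognized text to program commands is an illustrative assumption.

```python
# Transcribe Korean speech with Google Cloud Speech-to-Text, then match the
# text against a small command table.
from google.cloud import speech

def transcribe(wav_bytes: bytes) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="ko-KR",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

COMMANDS = {"브라우저 열어": "open_browser", "음악 재생": "play_music"}  # examples

def dispatch(text: str) -> str | None:
    return next((cmd for phrase, cmd in COMMANDS.items() if phrase in text), None)
```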

Primary Study for dialogue based on Ordering Chatbot

  • Kim, Ji-Ho;Park, JongWon;Moon, Ji-Bum;Lee, Yulim;Yoon, Andy Kyung-yong
    • Journal of Multimedia Information System / v.5 no.3 / pp.209-214 / 2018
  • Today is the era of artificial intelligence. With its development, machines have begun to emulate various human characteristics, and the chatbot, a computer program that conducts natural conversations with people, is one instance of this interactive artificial intelligence. Chatbots have traditionally conversed in text, but the chatbot in this study evolves to execute commands based on speech recognition. For a chatbot to emulate human dialogue properly, it must analyze a sentence correctly and extract an appropriate response. To accomplish this, sentence elements are classified into three types: objects, actions, and preferences. This study shows how objects are analyzed and processed, and demonstrates the possibility of evolving from an elementary model into an advanced intelligent system. It also evaluates whether a speech-recognition-based chatbot improves order-processing time efficiency compared to a text-based chatbot. Ultimately, speech-recognition-based chatbots have the potential to automate customer service and reduce human effort.
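
An illustrative sketch of the object/action/preference split for an ordering chatbot is shown below. The keyword lists are invented for the example; the paper's actual analysis method is not reproduced here.

```python
# Split an ordering utterance into objects, actions, and preferences by
# keyword lookup.
OBJECTS = {"americano", "latte", "cake"}
ACTIONS = {"order", "cancel", "change"}
PREFERENCES = {"iced", "hot", "large", "decaf"}

def parse_utterance(text: str) -> dict:
    tokens = text.lower().split()
    return {
        "objects": [t for t in tokens if t in OBJECTS],
        "actions": [t for t in tokens if t in ACTIONS],
        "preferences": [t for t in tokens if t in PREFERENCES],
    }

# parse_utterance("order one iced americano")
# -> {'objects': ['americano'], 'actions': ['order'], 'preferences': ['iced']}
```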

A Design and Implementation of The Deep Learning-Based Senior Care Service Application Using AI Speaker

  • Mun Seop Yun;Sang Hyuk Yoon;Ki Won Lee;Se Hoon Kim;Min Woo Lee;Ho-Young Kwak;Won Joo Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.23-30 / 2024
  • In this paper, we propose a deep learning-based personalized senior care service application. For user convenience, the proposed application uses Speech-to-Text technology to convert the user's speech into text and passes it as input to AutoGen, Microsoft's interactive multi-agent framework for large language models. AutoGen uses data from previous conversations between the senior and the chatbot to understand the user's intent and generate responses, and a back-end agent then creates a wish list, a shared calendar, and a greeting message in the other user's voice through a deep learning model for voice cloning. Additionally, the application can perform home IoT services with SKT's AI speaker (NUGU). The proposed application is expected to contribute to future AI-based senior care technology.
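
A hedged sketch of the conversational core using the pyautogen package (AutoGen 0.2-style API) follows; the agent names, system message, and llm_config are illustrative assumptions, not the paper's configuration.

```python
# Set up an assistant agent and a user proxy; the STT result becomes the
# opening chat message.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

care_agent = AssistantAgent(
    name="senior_care_assistant",
    system_message="You are a friendly assistant for an elderly user. "
                   "Remember their wishes and upcoming events.",
    llm_config=llm_config,
)
user = UserProxyAgent(name="senior", human_input_mode="ALWAYS",
                      code_execution_config=False)

# user.initiate_chat(care_agent, message="<transcribed speech here>")
```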

AI-based stuttering automatic classification method: Using a convolutional neural network (인공지능 기반의 말더듬 자동분류 방법: 합성곱신경망(CNN) 활용)

  • Jin Park;Chang Gyun Lee
    • Phonetics and Speech Sciences / v.15 no.4 / pp.71-80 / 2023
  • This study aimed to develop an automated stuttering identification and classification method using artificial intelligence technology, specifically a deep learning-based identification model using a convolutional neural network (CNN) for Korean speakers who stutter. Speech data were collected from 9 adults who stutter and 9 normally fluent speakers. The data were automatically segmented at the phrasal level using Google Cloud Speech-to-Text (STT), and the labels 'fluent', 'blockage', 'prolongation', and 'repetition' were assigned. Mel-frequency cepstral coefficients (MFCCs) and a CNN-based classifier were used for detecting and classifying each type of stuttered disfluency; however, only five instances of prolongation were found, so this type was excluded from the classifier model. The accuracy of the CNN classifier was 0.96, and the F1-scores for classification were 1.00 for 'fluent', 0.67 for 'blockage', and 0.74 for 'repetition'. Although the effectiveness of the CNN-based automatic classifier for detecting stuttered disfluencies was validated, its performance was inadequate, especially for the blockage and prolongation types. Consequently, building a large speech database organized by disfluency type was identified as a necessary foundation for improving classification performance.
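
A minimal sketch of the MFCC-plus-CNN pipeline follows, using librosa and PyTorch. The three-class output ('fluent', 'blockage', 'repetition') follows the abstract; the layer sizes, number of MFCCs, and sample rate are illustrative assumptions.

```python
# Extract MFCC features from a phrase-level clip and classify it with a small CNN.
import librosa
import torch
import torch.nn as nn

def extract_mfcc(path: str, n_mfcc: int = 13) -> torch.Tensor:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)  # (1, n_mfcc, frames)

class StutterCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # handles variable-length clips
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mfcc, frames)
        return self.classifier(self.features(x).flatten(1))

# model = StutterCNN()
# logits = model(extract_mfcc("sample.wav").unsqueeze(0))
```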