• Title/Summary/Keyword: 모델 발화 (model utterance)


Korean Lip Reading System Using MobileNet (MobileNet을 이용한 한국어 입모양 인식 시스템)

  • Won-Jong Lee;Joo-Ah Kim;Seo-Won Son;Dong Ho Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.211-213 / 2022
  • Lip reading (독순술(讀脣術)) is a technique for determining what a speaker is saying by watching the movement of their lips. This paper presents the results of a sentence recognition study based on the speaker's lip shapes, using ten sentences from MBC and SBS news closing segments as data and, as the model, MobileNet, a CNN (Convolutional Neural Network) architecture designed to run on mobile devices. The aim of this work is to recognize Korean lip shapes using MobileNet and LSTM. The news closing videos are split into frames to collect the ten test sentences and build a dataset; the lips are detected and recognized in the input utterance videos, followed by preprocessing. MobileNet and LSTM are then trained on the lip shapes of the spoken news closing sentences, and experiments are conducted to measure accuracy.

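A minimal sketch of the kind of pipeline this abstract describes follows, pairing a per-frame MobileNet feature extractor with an LSTM over the frame sequence; the frame count, input size, and layer widths are assumptions, not values from the paper.

```python
# A minimal sketch of a MobileNet + LSTM sentence classifier over lip-region
# frame sequences; frame count, input size, and layer widths are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 30       # frames sampled per utterance (assumed)
NUM_SENTENCES = 10    # the 10 news-closing sentences

# Per-frame feature extractor: MobileNet, the mobile-oriented CNN named above.
base = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet",
    input_shape=(128, 128, 3), pooling="avg")

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, 128, 128, 3)),  # cropped lip frames
    layers.TimeDistributed(base),                   # MobileNet features per frame
    layers.LSTM(128),                               # temporal modeling
    layers.Dense(NUM_SENTENCES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```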

Example-based Dialog System for English Conversation Tutoring (영어 회화 교육을 위한 예제 기반 대화 시스템)

  • Lee, Sung-Jin;Lee, Cheong-Jae;Lee, Geun-Bae
    • Journal of KIISE: Software and Applications / v.37 no.2 / pp.129-136 / 2010
  • In this paper, we present an example-based dialogue system for English conversation tutoring. It aims to provide intelligent one-to-one English conversation tutoring in place of old-fashioned language education built on static multimedia materials. The system can understand students' poor expressions, enabling beginners to carry on a dialogue despite their limited linguistic ability, which motivates students to learn a foreign language. The system also offers educational functionalities for improving linguistic ability. To achieve these goals, we developed a statistical natural language understanding module that handles poor expressions, and an example-based dialogue manager with high domain scalability and several effective tutoring methods.
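
As a rough illustration of the example-based idea (not the authors' implementation), a dialogue manager of this kind retrieves the stored example most similar to the student's utterance and replies with its paired tutor response; the example pairs and the string-similarity choice below are assumptions.

```python
# A toy example-based dialogue manager: retrieve the most similar stored
# student utterance and return its paired tutor response. Example pairs and
# the similarity measure are illustrative assumptions.
from difflib import SequenceMatcher

EXAMPLE_BASE = [
    ("i want buy ticket", "Sure! Where would you like to go?"),
    ("how much is it", "A one-way ticket is ten dollars."),
]

def respond(student_utterance: str) -> str:
    # Surface-string matching tolerates ungrammatical ("poor") expressions,
    # since no full parse of the student's input is required.
    _, response = max(
        EXAMPLE_BASE,
        key=lambda ex: SequenceMatcher(None, student_utterance.lower(), ex[0]).ratio())
    return response

print(respond("i want to buy a ticket"))  # -> "Sure! Where would you like to go?"
```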

Summarization of Korean Dialogues through Dialogue Restructuring (대화문 재구조화를 통한 한국어 대화문 요약)

  • Eun Hee Kim;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal / v.12 no.11 / pp.77-85 / 2023
  • After COVID-19, communication through online platforms has increased, leading to the accumulation of massive amounts of conversational text data. With the growing importance of summarizing this data to extract meaningful information, deep learning-based abstractive summarization has been actively studied. However, compared to structured texts such as news articles, conversational data often contains missing or transformed information and must be considered from multiple perspectives due to its unique characteristics. In particular, vocabulary omissions and unrelated expressions in a conversation can hinder effective summarization. In this study, we therefore restructured the dialogues to reflect the characteristics of Korean conversational data, fine-tuned a pre-trained text summarization model based on KoBART, and improved summarization performance through a refining step that removes redundant elements from the summary. We combined two restructuring methods: reordering sentences by utterance order, and extracting a central speaker and rebuilding the conversation around them. As a result, the Rouge-1 score improved by about 4 points. This study demonstrates that our restructuring approach, which reflects the characteristics of dialogue, enhances Korean conversation summarization performance.
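
A hedged sketch of the restructuring idea: pick the most frequent speaker as the central speaker, rebuild the input around them, and summarize with a KoBART-based summarizer. The checkpoint name and the most-frequent-speaker heuristic are assumptions, not the authors' setup.

```python
# Sketch: restructure a Korean dialogue around a central speaker, then
# summarize with a KoBART-based summarizer. The checkpoint name and the
# most-frequent-speaker heuristic are assumptions.
from collections import Counter
from transformers import pipeline

def restructure(turns):
    """turns: list of (speaker, utterance) pairs in spoken order."""
    central = Counter(speaker for speaker, _ in turns).most_common(1)[0][0]
    # Put the central speaker's utterances first so the summary centers on them.
    ordered = ([u for s, u in turns if s == central] +
               [u for s, u in turns if s != central])
    return " ".join(ordered)

summarizer = pipeline("summarization", model="gogamza/kobart-summarization")
dialogue = [("A", "내일 회의 몇 시에 시작해?"), ("B", "오후 3시에 시작해."),
            ("A", "알겠어, 회의실에서 보자.")]
print(summarizer(restructure(dialogue), max_length=32)[0]["summary_text"])
```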

A Domain Selection for Multi-Domain Dialog System (멀티 도메인 대화시스템을 위한 도메인 결정 기술)

  • Lee, Injae;Kim, Kyungduk;Kim, Seokhwan;Lee, Donghyeon;Choi, Junwhi;Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology / 2011.10a / pp.133-135 / 2011
  • This paper discusses how to determine the domain best suited to a user's utterance in a multi-domain dialog system. When building such a system, to improve domain scalability and apply each domain's characteristics effectively, a single-domain dialog expert can be built for each domain and multiple experts integrated into one dialog system that handles various domains. Natural dialog processing then requires a technique for selecting the domain most appropriate to a given user utterance. We propose a domain selection method based on a classification model that uses the intent analysis result of the current user utterance together with the domain of the previous user utterance, and we verify its effectiveness through experimental results.

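The classification-based selection this abstract describes can be sketched as follows, with the current utterance's intent label and the previous utterance's domain as features; the feature set, toy data, and classifier choice are assumptions.

```python
# Sketch: domain selection as classification over the current intent and the
# previous utterance's domain. Features, toy data, and classifier are assumed.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ({"intent": "play_music",  "prev_domain": "none"},    "music"),
    ({"intent": "ask_weather", "prev_domain": "music"},   "weather"),
    ({"intent": "ask_weather", "prev_domain": "weather"}, "weather"),
    ({"intent": "set_alarm",   "prev_domain": "weather"}, "clock"),
]
X, y = zip(*train)
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=200))
clf.fit(list(X), list(y))

# Route the next utterance to the most likely domain expert.
print(clf.predict([{"intent": "ask_weather", "prev_domain": "music"}]))
```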

A Theoretical Analysis on Volatile Matter Release from Different Coals Using CPD Model During a Coal Gasification (CPD 모델을 활용한 석탄 가스화 과정 중 탄종에 따른 휘발분 배출에 관한 이론해석연구)

  • Kim, Ryang-Gyoon;Lee, Byoung-Hwa;Jeon, Chung-Hwan;Chang, Young-June;Song, Ju-Hun
    • Transactions of the Korean Society of Mechanical Engineers B / v.33 no.12 / pp.1000-1006 / 2009
  • Integrated Coal Gasification Combined Cycle (IGCC) power plants have been developed to reduce carbon dioxide emissions and to increase the efficiency of electricity generation. The devolatilization process of entrained-flow coal gasification is predicted with the CPD model, which describes the devolatilization behavior of rapidly heated coal based on the coal's chemical structure. This paper compares the mass release behavior of char, tar, and gas (CO, $CO_2$, $H_2O$, $CH_4$) for three different coals. The influence of coal structure on gas evolution is examined over the pressure range of 10~30 atm.

Real-Time Lip Reading System Implementation Based on Deep Learning (딥러닝 기반의 실시간 입모양 인식 시스템 구현)

  • Cho, Dong-Hun;Kim, Won-Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.267-269 / 2020
  • Lip reading is a technique for analyzing speech from lip movements. In this paper, we study the real-time classification of ten everyday phrases by analyzing the speaker's facial movements. Given that video data consists of temporally ordered frames, we first used a 3D Convolutional Neural Network, but the computational load had to be reduced to implement a real-time system. To address this, we designed a model combining a 2D convolutional neural network operating on difference images with an LSTM (Long Short-Term Memory) recurrent neural network, and successfully implemented a real-time system with it.

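A sketch of the computation-saving design this abstract outlines: frame differences feed a per-frame 2D CNN, and an LSTM handles time, replacing the heavier 3D CNN. Input size and layer widths are assumptions.

```python
# Sketch: difference images -> per-frame 2D CNN -> LSTM, as a lighter
# alternative to a 3D CNN. Input size and layer widths are assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def difference_frames(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 1) grayscale clip -> (T-1, H, W, 1) difference images."""
    frames = frames.astype(np.float32)
    return np.abs(frames[1:] - frames[:-1])

per_frame_cnn = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.Input(shape=(None, 64, 64, 1)),   # variable-length difference clips
    layers.TimeDistributed(per_frame_cnn),   # 2D CNN on each difference image
    layers.LSTM(64),
    layers.Dense(10, activation="softmax"),  # the 10 everyday phrases
])
```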

Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태;문광식;김회린;이영직;정재호
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.27-34 / 2001
  • Utterance verification is used in variable vocabulary word recognition to reject words that are not in the vocabulary or that are not correctly recognized, and it is an important technology for designing user-friendly speech recognition systems. We propose a new algorithm for a training-free utterance verification system based on the minimum verification error. First, using the PBW (Phonetically Balanced Words) DB (445 words), we create training-free anti-phoneme models that include many PLUs (Phoneme-Like Units), so the anti-phoneme models achieve the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, a phoneme-based confidence measure using the likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis) is normalized by the null hypothesis, making it more robust for OOV rejection. The word-based confidence measure built from the phoneme-based measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary words and OOVs. Using the proposed anti-models and confidence measure, we achieve a significant performance improvement: CA (Correct Accept for in-vocabulary words) is about 89% and CR (Correct Reject for OOV) is about 90%, improving the ERR (Error Reduction Rate) by about 15-21%.

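The confidence measure can be rendered schematically as below: a log-likelihood ratio between the phoneme model (null hypothesis) and the anti-phoneme model (alternative hypothesis), normalized by the null-hypothesis score and averaged over a word's phonemes. The exact normalization form and the threshold are our assumptions, not the paper's formula.

```python
# Schematic confidence measure for utterance verification: a likelihood ratio
# between the phoneme model (null hypothesis) and the anti-phoneme model
# (alternative hypothesis), normalized by the null-hypothesis score.
# Normalization form and threshold are assumptions, not the paper's exact one.

def phoneme_confidence(ll_phoneme: float, ll_anti: float) -> float:
    """Inputs are frame-averaged log-likelihoods for one phoneme segment."""
    return (ll_phoneme - ll_anti) / abs(ll_phoneme)

def word_confidence(segments) -> float:
    """segments: list of (ll_phoneme, ll_anti) pairs, one per phoneme."""
    scores = [phoneme_confidence(p, a) for p, a in segments]
    return sum(scores) / len(scores)

def accept(segments, threshold: float = 0.0) -> bool:
    """Accept as in-vocabulary if the word confidence clears the threshold."""
    return word_confidence(segments) >= threshold

# Example: two phonemes whose phoneme models fit better than the anti-models.
print(accept([(-42.0, -55.3), (-38.5, -47.1)]))  # -> True
```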

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) analyzes a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results mostly by using acted speech from skilled actors recorded in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech contains more explicit emotional expression; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance on spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting the 1-dimensional audio signal into a 2-dimensional spectrogram image. Experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, which covers 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings show that the suggested framework outperforms current state-of-the-art techniques on spontaneous speech and performs promisingly despite the difficulty of quantifying spontaneous emotional expression.
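
A hedged sketch of that pipeline: load audio, build a mel spectrogram image, and classify it with a VGG16 backbone over the 7 emotions. The sample rate, mel settings, and dense head are assumptions, and the paper describes VGG generally rather than this exact configuration.

```python
# Sketch: 1-D audio -> 2-D mel spectrogram image -> VGG16 classifier over
# 7 emotions. Sample rate, mel settings, and the dense head are assumptions.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["joy", "love", "anger", "fear", "sadness", "surprise", "neutral"]

def to_spectrogram_image(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    db = librosa.power_to_db(mel, ref=np.max)        # (128, T) log-mel in dB
    img = np.stack([db, db, db], axis=-1)            # replicate to 3 channels
    return tf.image.resize(img, (224, 224)).numpy()  # VGG16 input size

backbone = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = models.Sequential([
    backbone,
    layers.Dense(256, activation="relu"),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```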

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Of these, misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each method, we computed the similarity of consecutive utterance pairs using word embedding and document embedding techniques, which convert words and documents into vectors; this goes beyond simple word-based similarity calculation and explores a new way to detect misinterpretation errors. We used real user utterance records and patterns of misinterpretation causes to train and develop a detection model. The results show that extracting initial consonants (초성) produced the most significant gains for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are logged as successful recognitions, it proposed diverse text separation methods and found a novel one that improves performance remarkably. Second, applied to conversational agents or voice recognition services that need neologism detection, it can specify patterns of errors arising from the voice recognition stage. The study also proposed and verified that, even for utterances not categorized as errors, services can be provided in line with the results users intended.
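
Initial-consonant extraction is well-defined Unicode arithmetic over the precomposed Hangul syllable block; the sketch below shows it, with an illustrative set-overlap similarity standing in for the paper's embedding-based similarity.

```python
# Sketch: Hangul initial-consonant (choseong) extraction via Unicode
# arithmetic. The Jaccard similarity is an illustrative stand-in for the
# paper's word/document embedding similarity.
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"

def initial_consonants(text: str) -> str:
    out = []
    for ch in text:
        idx = ord(ch) - 0xAC00                 # offset into the syllable block
        if 0 <= idx < 11172:                   # precomposed Hangul syllables
            out.append(CHOSEONG[idx // 588])   # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)                     # pass non-syllable chars through
    return "".join(out)

def choseong_similarity(a: str, b: str) -> float:
    """Illustrative Jaccard overlap of initial-consonant sets."""
    sa, sb = set(initial_consonants(a)), set(initial_consonants(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

print(initial_consonants("새로운 신조어"))  # -> "ㅅㄹㅇ ㅅㅈㅇ"
```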

A Study on Methods of Speaker Adaptation for Speech Recognition (음성인식을 위한 화자적응화 기법에 관한 연구)

  • 이종연
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.309.2-314 / 1998
  • This study investigates speaker adaptation techniques for speech recognition. First, we studied an interpolation-based adaptation method that extends the adaptation effect to syllable categories not included in the adaptation data: the shift in the mean vectors of models adapted with MAPE (maximum a posteriori estimation), which requires only the reference models and a small amount of speech data, is interpolated onto the models not covered by the adaptation utterances. Second, after building syllable-unit models, we experimented with a method that automatically extracts syllable units from the target speaker's data using concatenated training and the Viterbi algorithm, and then adapts the models with MAPE.

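As a schematic of the two ideas, here is a standard MAP mean update plus an assumed form of the interpolation step; this is not the paper's exact formulation.

```python
# Schematic MAP (maximum a posteriori) mean adaptation, plus an assumed form
# of interpolating observed mean shifts onto models that had no adaptation
# data. Not the paper's exact equations.
import numpy as np

def map_adapt_mean(prior_mean: np.ndarray, frames: np.ndarray,
                   tau: float = 10.0) -> np.ndarray:
    """MAP estimate of a Gaussian mean: blend the prior mean with the data,
    weighted by the prior strength tau and the frame count."""
    n = frames.shape[0]
    return (tau * prior_mean + frames.sum(axis=0)) / (tau + n)

def interpolate_unseen(prior_mean_unseen: np.ndarray,
                       shifts: np.ndarray,
                       weights: np.ndarray) -> np.ndarray:
    """Move an unadapted model's mean by a weighted average of the mean-vector
    shifts observed in acoustically similar, adapted models."""
    avg_shift = np.average(shifts, axis=0, weights=weights)
    return prior_mean_unseen + avg_shift

# Example: adapt one mean with a few frames, then reuse its shift elsewhere.
prior = np.zeros(3)
frames = np.array([[0.5, 0.2, -0.1], [0.4, 0.1, 0.0]])
adapted = map_adapt_mean(prior, frames)
unseen = interpolate_unseen(np.ones(3), np.array([adapted - prior]), np.array([1.0]))
```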