• Title/Abstract/Keyword: Voice and Text Analysis


유튜브 댓글을 통해 살펴본 버추얼 인플루언서에 대한 인식 연구 -캐릭터 디자인에 대한 긍부정 감성 반응을 중심으로- (A Study on Perceptions of Virtual Influencers through YouTube Comments -Focusing on Positive and Negative Emotional Responses Toward Character Design-)

  • 안효선;김지영
    • 한국의류학회지 / Vol. 47, No. 5 / pp. 873-890 / 2023
  • This study analyzed users' emotional responses to VI character design through YouTube comments. The researchers applied text-mining to analyze 116,375 comments, focusing on terms related to character design and characteristics of VI. Using the BERT model in sentiment analysis, we classified comments into extremely negative, negative, neutral, positive, or extremely positive sentiments. Next, we conducted a co-occurrence frequency analysis on comments with extremely negative and extremely positive responses to examine the semantic relationships between character design and emotional characteristic terms. We also performed a content analysis of comments about Miquela and Shudu to analyze the perception differences regarding the two character designs. The results indicate that form elements (e.g., voice, face, and skin) and behavioral elements (e.g., speaking, interviewing, and reacting) are vital in eliciting users' emotional responses. Notably, in the negative responses, users focused on the humanization aspect of voice and the authenticity aspect of behavior in speaking, interviewing, and reacting. Furthermore, we found differences in the character design elements and characteristics that users expect based on the VI's field of activity. As a result, this study suggests applications to character design to accommodate these variations.
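The five-level sentiment classification described above can be sketched with an off-the-shelf multilingual BERT checkpoint; the paper does not name its exact model or data files, so the checkpoint, label mapping, and comment file below are illustrative assumptions.

```python
# Minimal sketch: five-level sentiment scoring of YouTube comments with a BERT model.
# The checkpoint and comment file are assumptions; the paper does not specify them.
from transformers import pipeline

# "nlptown/bert-base-multilingual-uncased-sentiment" predicts 1-5 stars,
# which we map onto the five sentiment levels used in the study.
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

LABELS = {
    "1 star": "extremely negative",
    "2 stars": "negative",
    "3 stars": "neutral",
    "4 stars": "positive",
    "5 stars": "extremely positive",
}

with open("comments.txt", encoding="utf-8") as f:  # assumed: one comment per line
    comments = [line.strip() for line in f if line.strip()]

for comment, result in zip(comments, classifier(comments, truncation=True)):
    print(LABELS[result["label"]], "|", comment[:60])
```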

자연음 TTS(Text-To-Speech) 엔진 구현 (Implementation of TTS Engine for Natural Voice)

  • 조정호;김태은;임재환
    • 디지털콘텐츠학회 논문지 / Vol. 4, No. 2 / pp. 233-242 / 2003
  • A text-to-speech (TTS) system converts text sentences into natural-sounding speech. Producing natural speech requires specialist knowledge of the language as well as considerable time and effort. Moreover, English phonological conversion varies with the phoneme, the morpheme, and the meaning, which makes uniform processing very difficult. To solve these problems, we implement a system that applies vowel and consonant transformation rules. The system classifies sentences through sentence analysis and produces natural speech from phoneme-rule data, after a preprocessing stage in which special characters, numbers, and the like are normalized. The processed text data are then rendered as the final output through prosody rules. As a result, the 40 phoneme-rule entries allowed more accurate speech to be produced and also improved the efficiency of the system. The system presented in this paper can be applied to communication equipment and automated devices and thus used in a wide range of fields.
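The preprocessing stage described above (normalizing special characters and numbers before the phoneme rules are applied) can be sketched as follows; the small rule table and word lists are illustrative assumptions, not the 40 phoneme rules used in the paper.

```python
# Minimal sketch of a TTS front end: digits and special characters are expanded
# to words before rule-based conversion. The rule table here is illustrative.
import re

NUMBER_WORDS = ["zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"]
SYMBOL_WORDS = {"%": " percent", "&": " and", "$": " dollars", "+": " plus"}

def normalize(text: str) -> str:
    """Expand digits and symbols so the phoneme rules see only plain words."""
    text = re.sub(r"\d", lambda m: " " + NUMBER_WORDS[int(m.group())] + " ", text)
    for symbol, word in SYMBOL_WORDS.items():
        text = text.replace(symbol, word)
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Call 911 & pay $5 now!"))
# -> "Call nine one one and pay dollars five now!"
```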


대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용 (Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity)

  • 이정원;임일
    • 지능정보연구 / Vol. 29, No. 3 / pp. 267-286 / 2023
  • Conversational agents, typified by AI speakers, often produce errors because the dialogue takes place between a human and a computer. In an agent user's utterance logs, recognition errors fall into two types: non-recognition errors, in which the user's utterance is not recognized at all, and misrecognition errors, in which the utterance is recognized and a service is provided, but not the one the user intended. Because misrecognition errors are logged as successfully served requests, they require separate detection. In this study, we explored a new method for detecting misrecognition errors and neologisms based on the similarity of consecutive utterance pairs, using word embedding and document embedding (text-mining techniques that convert words and documents into vectors) and applying various word-decomposition schemes rather than computing similarity from the surface words alone. As the research method, we built a detection model from actual user utterance logs by incorporating misrecognition-error patterns into model training and generation. The results confirmed that, among the various word-decomposition schemes, extracting the initial consonants (choseong) performed best as a pattern for detecting unregistered neologisms, the largest cause of misrecognition errors. This study has two main implications. First, for misrecognition errors, which are difficult to detect because they are not logged as recognition errors, we identified the best-performing approach by comparing multiple methods. Second, applying the approach to conversational agents or speech-recognition services that need neologism detection would make it possible to characterize the patterns of errors arising from the speech-recognition stage and to provide services matching what the user wants even when the interaction is not classified as an error.
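The initial-consonant (choseong) decomposition that performed best here relies on the regular layout of precomposed Hangul syllables in Unicode. A minimal sketch, assuming a simple string-matching similarity in place of the paper's embedding-based scores:

```python
# Minimal sketch: initial-consonant (choseong) extraction for Hangul syllables,
# plus a simple similarity between consecutive utterances. The similarity
# measure and the example strings are illustrative, not the paper's method.
from difflib import SequenceMatcher

CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def to_choseong(text: str) -> str:
    """Replace each precomposed Hangul syllable with its initial consonant."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                  # Hangul syllable block
            out.append(CHOSEONG[code // 588])  # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def pair_similarity(utt1: str, utt2: str) -> float:
    """Similarity of two consecutive utterances on their choseong sequences."""
    return SequenceMatcher(None, to_choseong(utt1), to_choseong(utt2)).ratio()

print(to_choseong("불소치약"))                    # -> "ㅂㅅㅊㅇ"
print(pair_similarity("불소치약", "불소 치약 주문"))
```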

성대마비의 음성장애 측정을 위한 청지각적 및 음향학적 평가 (Auditory-Perceptual and Acoustic Evaluation in Measuring Dysphonia Severity of Vocal Cord Paralysis)

  • 김근효;이연우;박희준;배인호;이병주;권순복
    • 대한후두음성언어의학회지 / Vol. 28, No. 2 / pp. 106-111 / 2017
  • Background and Objectives: The purpose of this study was to investigate the criterion-related concurrent validity of two standardized auditory-perceptual assessments and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in patients with vocal cord paralysis (VCP). Materials and Methods: A total of 210 patients with VCP and 236 normal-voice subjects were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk". A 2-second mid-vowel portion of the sustained vowel and two sentences (26 syllables) were recorded; the voice samples were then edited, concatenated, and analyzed with a Praat script. Two standardized auditory-perceptual assessments (GRBAS and CAPE-V) were performed by three raters. Results: The VCP group showed higher AVQI, Grade (G), and Overall Severity (OS) values than the normal-voice group, and the correlations among AVQI, G, and OS ranged from 0.904 to 0.926. In ROC curve analysis, the cutoff values of AVQI, G, and OS were <3.79, <0.00, and <30.00, respectively, and the AUC of each analysis was over 0.89. Conclusion: AVQI and auditory-perceptual evaluation can improve early screening of VCP voice and help establish an effective diagnosis and treatment plan for VCP-related dysphonia.
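The ROC-based cutoff selection can be reproduced with scikit-learn; the scores below are placeholders, and the use of Youden's J as the cutoff criterion is an assumption, since the abstract reports only the resulting cutoffs and AUC.

```python
# Minimal sketch: ROC analysis to pick an AVQI cutoff separating dysphonic
# (VCP) from normal voices. The scores are made-up placeholders, and
# Youden's J is an assumed criterion; the paper reports cutoffs and AUC only.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])        # 1 = VCP, 0 = normal voice
avqi   = np.array([5.2, 4.1, 3.9, 2.8, 3.1, 2.0, 1.4, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, avqi)
best = np.argmax(tpr - fpr)                         # Youden's J statistic
print(f"AUC = {auc(fpr, tpr):.3f}, cutoff = {thresholds[best]:.2f}")
```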


HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석 (Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis)

  • 민소연;나덕수
    • 한국산학기술학회논문지 / Vol. 15, No. 9 / pp. 5763-5768 / 2014
  • When a single synthesizer produces both neutral speech, in which no emotion is expressed, and several kinds of emotional speech, maintaining a consistent voice color becomes important. If the synthesizer is built from recordings in which emotion is expressed excessively, the voice color is not preserved and the synthesized utterances can sound as if they were spoken by different speakers. In this paper, we analyze the voice-color variation of the speech data constructed to build an HMM-based speech synthesizer with adjustable emotion levels. Building a speech synthesizer requires recording speech to construct a database, and for an emotional synthesizer the recording process is especially important: because defining emotions and keeping their level consistent is very difficult, careful monitoring is essential. The speech database consists of neutral speech and emotional speech for happiness, sadness, and anger, with each emotion recorded at two levels (high/low). To measure the voice-color similarity between the neutral and emotional speech, we accumulated the spectra of representative vowels to obtain an average spectrum and measured F1 (the first formant) from it. The voice-color similarity to neutral speech was higher for the low-level emotional data than for the high-level data, confirming that the proposed method can serve as a way of monitoring voice-color changes in emotional speech.
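Measuring F1 from recorded vowels can be sketched with the parselmouth interface to Praat. This only approximates the paper's procedure, which measures F1 on an accumulated average spectrum; here frame-wise F1 values are averaged over each vowel, and the file names are assumptions.

```python
# Minimal sketch: estimating F1 for a recorded vowel with Praat via parselmouth.
# Frame-wise F1 values are averaged over the vowel; the wav files are assumed.
import numpy as np
import parselmouth

def mean_f1(wav_path: str) -> float:
    sound = parselmouth.Sound(wav_path)
    formant = sound.to_formant_burg()                      # Burg formant analysis
    times = np.arange(sound.xmin + 0.05, sound.xmax - 0.05, 0.01)  # skip edges
    values = [formant.get_value_at_time(1, t) for t in times]      # formant 1 = F1
    return float(np.nanmean(values))                       # ignore undefined frames

for wav in ["neutral_a.wav", "happy_high_a.wav", "sad_low_a.wav"]:
    print(wav, f"{mean_f1(wav):.0f} Hz")
```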

한국어 text-to-speech(TTS) 시스템을 위한 엔드투엔드 합성 방식 연구 (An end-to-end synthesis method for Korean text-to-speech systems)

  • 최연주;정영문;김영관;서영주;김회린
    • 말소리와 음성과학 / Vol. 10, No. 1 / pp. 39-48 / 2018
  • A typical statistical parametric speech synthesis (text-to-speech, TTS) system consists of separate modules, such as a text analysis module, an acoustic modeling module, and a speech synthesis module. This causes two problems: 1) expert knowledge of each module is required, and 2) errors generated in each module accumulate as they pass through the pipeline. An end-to-end TTS system can avoid such problems by synthesizing voice signals directly from an input string. In this study, we implemented an end-to-end Korean TTS system using Google's Tacotron, an end-to-end TTS system based on a sequence-to-sequence model with an attention mechanism. We used 4392 utterances spoken by a Korean female speaker, an amount corresponding to 37% of the dataset Google used for training Tacotron. Our system obtained a mean opinion score (MOS) of 2.98 and a degradation mean opinion score (DMOS) of 3.25. We discuss the factors that affected training of the system. Experiments demonstrate that the post-processing network needs to be designed with the output language and input characters in mind, and that, depending on the amount of training data, the maximum value of n for the n-grams modeled by the encoder should be kept small.
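MOS and DMOS are averages of listener ratings on a 1-5 scale; a minimal sketch of that evaluation step, with made-up ratings rather than the listening-test data behind the reported scores:

```python
# Minimal sketch: MOS as the mean of listener ratings on a 1-5 scale, with a
# 95% confidence interval. The ratings here are hypothetical placeholders.
import numpy as np

def mos(ratings):
    r = np.asarray(ratings, dtype=float)
    mean = r.mean()
    ci95 = 1.96 * r.std(ddof=1) / np.sqrt(len(r))   # normal-approximation CI
    return mean, ci95

ratings = [3, 3, 2, 4, 3, 3, 2, 4, 3, 3]            # hypothetical scores
mean, ci95 = mos(ratings)
print(f"MOS = {mean:.2f} +/- {ci95:.2f}")
```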

Research on Developing a Conversational AI Callbot Solution for Medical Counselling

  • Won Ro LEE;Jeong Hyon CHOI;Min Soo KANG
    • 한국인공지능학회지 / Vol. 11, No. 4 / pp. 9-13 / 2023
  • In this study, we explored the potential of integrating conversational AI callbot technology into the medical consultation domain as part of a broader service development initiative. Aimed at enhancing patient satisfaction, the AI callbot was designed to efficiently address queries from hospitals' primary users, especially the elderly and those using phone services. By incorporating an AI-driven callbot into the hospital's customer service center, routine tasks such as appointment modifications and cancellations were handled efficiently by the AI callbot agent, while tasks requiring more detailed attention or specialization were addressed by human agents, ensuring a balanced and collaborative approach. The deep learning model for speech recognition was based on the Transformer model and fine-tuned for the medical field from a pre-trained model; existing recording files were converted into training data, and an SSL (self-supervised learning) model was implemented. An ANN (artificial neural network) model was used to analyze voice signals and interpret them as text, and after actual deployment the intent handling was enriched through reinforcement learning to continuously improve accuracy. For TTS (text-to-speech), the Transformer model was applied to text analysis, the acoustic model, and the vocoder, and Google's Natural Language API was applied to recognize intent. As the research progresses, challenges remain, such as interconnection issues between various EMR providers, problems with doctors' time slots, appointments at two or more hospitals, and problems with patient use; straightforward reservation tasks, however, are easy to handle, and the callbot service appears to be immediately applicable in hospitals.
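The speech-recognition component described above (a Transformer model pre-trained with self-supervised learning and fine-tuned for the domain) can be sketched at inference time with a public wav2vec 2.0 checkpoint; the checkpoint and audio file below are assumptions, not the system actually deployed in the study.

```python
# Minimal sketch: transcribing a recorded call with a self-supervised,
# pre-trained Transformer ASR model of the kind the study fine-tunes for the
# medical domain. The checkpoint and file name are assumptions.
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

checkpoint = "kresnik/wav2vec2-large-xlsr-korean"      # assumed Korean ASR model
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

speech, sample_rate = sf.read("call_recording.wav")    # 16 kHz mono expected
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```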

한국도로공사 VOC 데이터를 이용한 토픽 모형 적용 방안 (Application of a Topic Model on the Korea Expressway Corporation's VOC Data)

  • 김지원;박상민;박성호;정하림;윤일수
    • 한국IT서비스학회지 / Vol. 19, No. 6 / pp. 1-13 / 2020
  • Recently, 80% of big data has consisted of unstructured text data. In particular, various types of documents are stored as large-scale unstructured documents through social network services (SNS), blogs, news, and so on, which highlights the importance of unstructured data. As the potential uses of unstructured data grow, various analysis techniques such as text mining have appeared. In this study, therefore, topic modeling was applied to the Korea Expressway Corporation's voice of customer (VOC) data, which contains customer opinions and complaints. Currently, VOC data are divided according to the Korea Expressway Corporation's business areas; however, the assigned categories are often inaccurate, and ambiguous items are classified as "other". To use VOC data for efficient service improvement, a more systematic and efficient classification method is therefore required. To this end, this study proposed two approaches: a method using only latent Dirichlet allocation (LDA), the most representative topic modeling technique, and a new method combining LDA with the word embedding technique Word2vec. As a result, it was confirmed that the categories of VOC data are classified relatively well when the new method is used. These results suggest that implications can be derived for the Korea Expressway Corporation and applied to service improvement.
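The two building blocks of the proposed approach, LDA topic modeling and Word2vec embeddings, can be sketched with gensim; the sample documents, tokenization, and parameters are illustrative assumptions rather than the VOC data or settings used in the paper.

```python
# Minimal sketch: LDA topic modeling of tokenized VOC complaints with gensim,
# plus a Word2Vec model on the same tokens, as in the combined approach.
# The sample documents and parameters are assumptions.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

docs = [
    ["highway", "toll", "payment", "error"],
    ["rest", "area", "toilet", "cleanliness"],
    ["toll", "hipass", "card", "refund"],
    ["road", "noise", "barrier", "request"],
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(f"topic {topic_id}: {words}")

# Word vectors that can be combined with the LDA output, as in the paper.
w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, seed=0)
print(w2v.wv.most_similar("toll", topn=2))
```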

구개인두부전증 환자의 한국어 음성 코퍼스 구축 방안 연구 (Research on Construction of the Korean Speech Corpus in Patient with Velopharyngeal Insufficiency)

  • 이지은;김욱은;김광현;성명훈;권택균
    • Korean Journal of Otorhinolaryngology-Head and Neck Surgery / Vol. 55, No. 8 / pp. 498-507 / 2012
  • Background and Objectives: We aimed to develop a Korean version of the velopharyngeal insufficiency (VPI) speech corpus system. Subjects and Method: After developing a three-channel simultaneous recording device capable of separately recording nasal, oral, and compound speech, voice data were collected from VPI patients older than 10 years, with or without a history of surgery or prior speech therapy. This was compared with a control group in which VPI was simulated by inserting a 3-French Nelaton tube through both nostrils into the nasopharynx and pulling the soft palate anteriorly to varying degrees. Three transcribers processed the data: a speech therapist transcribed the voice files into text, a second transcriber graded speech intelligibility and severity, and a third tagged the types and onset times of misarticulations. The database was composed of three main tables covering (1) speaker demographics, (2) the condition of the recording system, and (3) transcripts. All of these were interfaced with the Praat voice analysis program, which enables the user to extract the exact transcribed phrases for analysis. Results: In the simulated VPI group, the higher the severity of VPI, the higher the nasalance score. In addition, we could verify the vocal energy that characterizes hypernasality and compensation in the nasal, oral, and compound sounds spoken by VPI patients, in contrast to the normal control group. Conclusion: With the Korean version of the VPI speech corpus system, patients' common difficulties and speech tendencies in articulation can be objectively evaluated. By comparing these data with those of normal voices, the mispronunciation and dysarticulation of patients with VPI can be corrected.
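The three main tables named in the abstract can be sketched as a small relational schema; the column names beyond the three table roles (speaker demographics, recording condition, transcripts) are assumptions.

```python
# Minimal sketch of the corpus database: three tables for speaker demographics,
# recording conditions, and transcripts, as described in the abstract. Column
# names beyond those roles are assumptions.
import sqlite3

conn = sqlite3.connect("vpi_corpus.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS speaker (
    speaker_id   INTEGER PRIMARY KEY,
    age          INTEGER,
    sex          TEXT,
    vpi_status   TEXT,          -- patient / simulated / control
    surgery_hx   TEXT,
    therapy_hx   TEXT
);
CREATE TABLE IF NOT EXISTS recording (
    recording_id INTEGER PRIMARY KEY,
    speaker_id   INTEGER REFERENCES speaker(speaker_id),
    channel      TEXT,          -- nasal / oral / compound
    wav_path     TEXT
);
CREATE TABLE IF NOT EXISTS transcript (
    transcript_id   INTEGER PRIMARY KEY,
    recording_id    INTEGER REFERENCES recording(recording_id),
    text            TEXT,
    intelligibility INTEGER,
    severity        INTEGER,
    misarticulation TEXT        -- type and onset-time tags
);
""")
conn.commit()
```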

TF-IDF를 활용한 한글 자연어 처리 연구 (A study on Korean language processing using TF-IDF)

  • 이종화;이문봉;김종원
    • 한국정보시스템학회지:정보시스템연구 / Vol. 28, No. 3 / pp. 105-121 / 2019
  • Purpose One of the reasons for the expansion of information systems in the enterprise is the increased efficiency of data analysis. In particular, the rapidly increasing data types which are complex and unstructured such as video, voice, images, and conversations in and out of social networks. The purpose of this study is the customer needs analysis from customer voices, ie, text data, in the web environment.. Design/methodology/approach As previous study results, the word frequency of the sentence is extracted as a word that interprets the sentence has better affects than frequency analysis. In this study, we applied the TF-IDF method, which extracts important keywords in real sentences, not the TF method, which is a word extraction technique that expresses sentences with simple frequency only, in Korean language research. We visualized the two techniques by cluster analysis and describe the difference. Findings TF technique and TF-IDF technique are applied for Korean natural language processing, the research showed the value from frequency analysis technique to semantic analysis and it is expected to change the technique by Korean language processing researcher.