• Title/Summary/Keyword: speech act classification


Effective Korean Speech-act Classification Using the Classification Priority Application and a Post-correction Rules (분류 우선순위 적용과 후보정 규칙을 이용한 효과적인 한국어 화행 분류)

  • Song, Namhoon;Bae, Kyoungman;Ko, Youngjoong
    • Journal of KIISE
    • /
    • v.43 no.1
    • /
    • pp.80-86
    • /
    • 2016
  • A speech-act is the behavior a speaker intends with an utterance, and speech-act classification is an important component of dialogue systems. Machine learning and rule-based methods have mainly been used for this task. In this paper, we propose a speech-act classification method that combines a support vector machine (SVM) with transformation-based learning (TBL). A user's utterance is first classified by SVMs applied preferentially to the categories with low utterance rates in the training data. When an utterance receives negative scores for all categories, it is passed to a rule-based post-correction phase. Our method achieved higher performance than the baseline system, along with error reduction.
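
The pipeline described above is concrete enough to sketch: per-category SVM scores checked in priority order (rare categories first), with a rule-based post-correction fallback when every score is negative. Below is a minimal, illustrative sketch using scikit-learn's LinearSVC; the category priorities, correction rules, and features are placeholder assumptions, not the authors' actual settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical category priorities: rarer speech acts are checked first.
PRIORITY = ["accept", "reject", "request", "ask", "inform"]

# Hypothetical post-correction rules: surface cue -> speech act.
RULES = [("please", "request"), ("?", "ask")]

def train(utterances, labels):
    vec = TfidfVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(utterances), labels)
    return vec, clf

def classify(utterance, vec, clf):
    scores = dict(zip(clf.classes_,
                      clf.decision_function(vec.transform([utterance]))[0]))
    # Stage 1: accept the first priority category with a positive SVM score.
    for label in PRIORITY:
        if scores.get(label, float("-inf")) > 0:
            return label
    # Stage 2: all scores negative -> fall back to the post-correction rules.
    for cue, label in RULES:
        if cue in utterance:
            return label
    return max(scores, key=scores.get)  # last resort: best-scoring category
```

A convenient property of this arrangement is that the priority list and the rule table can be adjusted without retraining the SVMs.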

A Domain Action Classification Model Using Conditional Random Fields (Conditional Random Fields를 이용한 영역 행위 분류 모델)

  • Kim, Hark-Soo
    • Korean Journal of Cognitive Science
    • /
    • v.18 no.1
    • /
    • pp.1-14
    • /
    • 2007
  • In a goal-oriented dialogue, speakers' intentions can be represented by domain actions, each a pair of a speech act and a concept sequence. An intelligent dialogue system therefore needs to infer domain actions correctly from surface utterances. In this paper, we propose a statistical model that determines speech acts and concept sequences simultaneously using conditional random fields. To avoid biased learning, the model uses low-level linguistic features such as lexical items and parts of speech, and then filters out uninformative features with the chi-square statistic (see the feature-filtering sketch after this entry). In experiments in a schedule arrangement domain, the proposed system showed good performance: 93.0% precision on speech act classification and 90.2% precision on concept sequence classification.

  • PDF
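
Kim's model filters uninformative low-level features with the chi-square statistic before training the conditional random fields. Here is a minimal sketch of that filtering step using scikit-learn; the toy utterances, labels, and the choice of k are assumptions for illustration, and the CRF itself is not shown.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Toy utterances labeled with speech acts (placeholders, not the paper's corpus).
utterances = ["when are you free", "book the meeting room",
              "ok that works", "cancel it please"]
speech_acts = ["ask-ref", "request", "accept", "request"]

# Low-level lexical features: word unigrams (the paper also uses parts of speech).
vec = CountVectorizer()
X = vec.fit_transform(utterances)

# Keep only the k features with the highest chi-square score against the labels.
selector = SelectKBest(chi2, k=5)
X_filtered = selector.fit_transform(X, speech_acts)

kept = [f for f, keep in zip(vec.get_feature_names_out(), selector.get_support()) if keep]
print(kept)  # the surviving, informative features would feed the CRF
```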

Review of Korean Speech Act Classification: Machine Learning Methods

  • Kim, Hark-Soo;Seon, Choong-Nyoung;Seo, Jung-Yun
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.4
    • /
    • pp.288-293
    • /
    • 2011
  • To resolve ambiguities in speech act classification, various machine learning models have been proposed over the past 10 years. In this paper, we review these machine learning models and present the results of an experimental comparison of three representative models: the decision tree, the support vector machine (SVM), and the maximum entropy model (MEM). In experiments with a goal-oriented dialogue corpus in the schedule management domain, we found that the MEM has lighter hardware requirements, whereas the SVM has better performance characteristics.
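
For readers who want to reproduce this kind of comparison on their own corpus, the sketch below sets up the three model families with scikit-learn, using LogisticRegression as a stand-in for the maximum entropy model; the toy data is assumed for illustration and is not the schedule-management corpus used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression   # stands in for the MEM
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder corpus: utterance -> speech act.
train_X = ["when shall we meet", "let's meet at noon", "sounds good", "I can't make it"]
train_y = ["ask-ref", "suggest", "accept", "reject"]
test_X = ["shall we meet tomorrow"]

models = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": LinearSVC(),
    "MEM": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pipeline = make_pipeline(TfidfVectorizer(), model)
    pipeline.fit(train_X, train_y)
    print(name, pipeline.predict(test_X))
```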

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal
    • /
    • v.28 no.6
    • /
    • pp.807-810
    • /
    • 2006
  • This letter presents a prediction model of sentence-final intonation for Korean conversational-style text-to-speech systems, in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model with a tree-structured classification algorithm (a simplified sketch follows this entry). To show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality contributes more to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the modality feature is introduced.

  • PDF
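
The prediction model above maps categorical linguistic features, including the new modality feature, to a sentence-final tone type with a tree-structured classifier. Below is a minimal sketch of that setup with a scikit-learn decision tree; the feature values and tone labels are invented placeholders rather than the tone inventory used in the paper.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Each utterance described by categorical features (values are placeholders).
features = [
    {"modality": "request",   "sentence_type": "imperative",    "speech_act": "request"},
    {"modality": "question",  "sentence_type": "interrogative", "speech_act": "ask"},
    {"modality": "statement", "sentence_type": "declarative",   "speech_act": "inform"},
]
tones = ["HL%", "LH%", "L%"]  # sentence-final tone types (placeholder labels)

# DictVectorizer one-hot encodes the categorical features for the tree.
model = make_pipeline(DictVectorizer(), DecisionTreeClassifier())
model.fit(features, tones)
print(model.predict([{"modality": "question", "sentence_type": "interrogative",
                      "speech_act": "ask"}]))
```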

CNN Based Speech-act Classification Using Sentence Types and Modalities (문장 유형과 양태 정보를 이용한 합성곱 신경망 기반의 대화체 발화 화행 분석)

  • Park, Yongsin;Ko, Youngjoong
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.642-644
    • /
    • 2018
  • A speech-act is the action a speaker performs through an utterance in order to achieve some goal, and speech-act analysis is the task of determining the speech-act of a given utterance. Sentence type and modality are kinds of speech-acts: sentence type divides utterances into five categories (declarative, imperative, propositive, interrogative, and exclamatory) according to the speaker's basic intention, while modality refers to the speaker's opinion of or attitude toward the proposition a sentence expresses or the situation it describes. In this paper, we show how to improve speech-act analysis of conversational utterances by using sentence-type and modality information, which can be extracted relatively simply from sentence-final endings and auxiliary predicates. The proposed model improved performance by 0.52%p over a baseline model using a convolutional neural network (CNN) (a simplified sketch of such a model follows this entry).

  • PDF
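
The model above feeds a CNN both the utterance and sentence-type/modality information extracted from sentence-final endings and auxiliary predicates. Here is a minimal PyTorch sketch of one way to combine the two inputs, concatenating pooled convolution features with a one-hot sentence-type/modality vector; the dimensions and the feature encoding are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SpeechActCNN(nn.Module):
    def __init__(self, vocab_size, num_acts, emb_dim=128, num_filters=64,
                 num_extra=8):  # num_extra: size of the sentence-type + modality vector
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, kernel_size=k) for k in (3, 4, 5)])
        self.out = nn.Linear(3 * num_filters + num_extra, num_acts)

    def forward(self, token_ids, extra):          # extra: one-hot sentence type + modality
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        features = torch.cat(pooled + [extra], dim=1)
        return self.out(features)

# Toy forward pass: batch of 2 utterances, 20 tokens each, 8-dim extra features.
model = SpeechActCNN(vocab_size=5000, num_acts=11)
logits = model(torch.randint(0, 5000, (2, 20)), torch.zeros(2, 8))
print(logits.shape)  # torch.Size([2, 11])
```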

Meaning and Intonation of Endings with Polysemous Modality: Through the Analysis of the Spontaneous Speech (인식·행위 양태 다의성 어미의 의미와 억양 -구어 자유발화 분석을 통하여-)

  • Jo, Min-ha
    • Korean Linguistics
    • /
    • v.77
    • /
    • pp.331-357
    • /
    • 2017
  • The purpose of this paper is to identify how intonation works in sentence-final endings in spoken language. To this end, we analyzed 300 minutes of spontaneous speech by women from Seoul and discussed the meanings of modality and their relationship with intonation. Intonation plays a significant role in endings that are polysemous between epistemic and act modality. Epistemic modality is usually expressed through indirect, soft intonations such as L:, M:, and LH, whereas act modality is expressed through direct, strong intonations such as H, HL, and LHL. Intonation appears to be related to the degree of certainty of information rather than to the classification of modality: lengthening relates to indirectness, H to uncertainty, L to statements or affirmation, and HL and LHL to an assertive attitude. This paper is significant in that it overcomes the abstractness of existing modality studies and offers an objective, comprehensive analysis of actual spontaneous speech data.

Effective Text Question Analysis for Goal-oriented Dialogue (목적 지향 대화를 위한 효율적 질의 의도 분석에 관한 연구)

  • Kim, Hakdong;Go, Myunghyun;Lim, Heonyeong;Lee, Yurim;Jee, Minkyu;Kim, Wonil
    • Journal of Broadcast Engineering
    • /
    • v.24 no.1
    • /
    • pp.48-57
    • /
    • 2019
  • The purpose of this study is to understand the inquirer's intention from a single text question in goal-oriented dialogue. A goal-oriented dialogue system is a dialogue system that satisfies the user's specific needs via text or voice. Intention analysis is the step of analyzing the user's intention prior to answer generation, and it greatly influences the performance of the entire goal-oriented dialogue system. The proposed model targets a daily chemical products domain, using Korean text data related to that domain. The analysis is divided into the speech-act, which is independent of the specific domain, and the concept-sequence, which depends on it. We propose a classification method that uses a word embedding model and a CNN to analyze speech-act and concept-sequence: the semantic information of each word is abstracted through the word embedding model, and concept-sequence and speech-act classification are performed by CNNs based on this abstracted word-level semantic information.
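
The distinctive step here is abstracting word semantics through a pretrained embedding model before classification. Below is a minimal PyTorch sketch of that sharing arrangement, with a mocked pretrained matrix and plain linear heads standing in for the paper's CNN classifiers; all sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB, EMB_DIM = 10000, 200
pretrained = torch.randn(VOCAB, EMB_DIM)  # stand-in for trained word2vec/GloVe vectors

class IntentClassifier(nn.Module):
    """Shared pretrained embeddings, separate heads for speech-act and concept-sequence."""
    def __init__(self, num_speech_acts, num_concepts):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.speech_act_head = nn.Linear(EMB_DIM, num_speech_acts)  # paper: a CNN
        self.concept_head = nn.Linear(EMB_DIM, num_concepts)        # paper: a CNN

    def forward(self, token_ids):
        sent = self.emb(token_ids).mean(dim=1)   # crude sentence vector for the sketch
        return self.speech_act_head(sent), self.concept_head(sent)

model = IntentClassifier(num_speech_acts=9, num_concepts=15)
act_logits, concept_logits = model(torch.randint(0, VOCAB, (4, 12)))
print(act_logits.shape, concept_logits.shape)  # (4, 9) (4, 15)
```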

Speakers' Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network (Convolutional Neural Network에서 공유 계층의 부분 학습에 기반 한 화자 의도 분석)

  • Kim, Minkyoung;Kim, Harksoo
    • Journal of KIISE
    • /
    • v.44 no.12
    • /
    • pp.1252-1257
    • /
    • 2017
  • In dialogue, a speaker's intention can be represented by a set consisting of an emotion, a speech act, and a predicator, so dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated their determination as independent classification problems, but others have shown them to be associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of a particular abstraction layer, in which mutually independent information about each characteristic is abstracted, and a shared abstraction layer, in which combinations of that independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In experiments, the integrated model outperformed independent determination models by 2%p in emotion determination, 11%p in speech act determination, and 3%p in predicator determination.
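
Here is a minimal PyTorch sketch of the shared-layer arrangement, in which each training step back-propagates only one task's error through its own head and the shared layer, approximating the partial-learning idea; the architecture sizes, head names, and task-sampling scheme are assumptions rather than the authors' exact setup.

```python
import random
import torch
import torch.nn as nn

class SharedIntentModel(nn.Module):
    def __init__(self, vocab=8000, emb=128, hidden=128,
                 n_emotions=7, n_speech_acts=11, n_predicators=40):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # Shared abstraction layer over the utterance.
        self.shared = nn.Sequential(nn.Conv1d(emb, hidden, 3), nn.ReLU(),
                                    nn.AdaptiveMaxPool1d(1), nn.Flatten())
        # One head per characteristic (particular layers simplified to linear heads).
        self.heads = nn.ModuleDict({
            "emotion": nn.Linear(hidden, n_emotions),
            "speech_act": nn.Linear(hidden, n_speech_acts),
            "predicator": nn.Linear(hidden, n_predicators),
        })

    def forward(self, token_ids, task):
        h = self.shared(self.emb(token_ids).transpose(1, 2))
        return self.heads[task](h)

model = SharedIntentModel()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# One training step: pick a task and back-propagate only that task's error
# through its head and the shared layer ("partial" learning of the shared layer).
tokens = torch.randint(0, 8000, (16, 25))
task = random.choice(["emotion", "speech_act", "predicator"])
labels = torch.randint(0, model.heads[task].out_features, (16,))
optimizer.zero_grad()
loss_fn(model(tokens, task), labels).backward()
optimizer.step()
```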

Dialogue Act Classification for Non-Task-Oriented Korean Dialogues (도메인에 비종속적인 대화에서의 화행 분류)

  • Kim, Min-Jeong;Han, Kyoung-Soo;Park, Jae-Hyun;Song, Young-In;Rim, Hae-Chang
    • Annual Conference on Human and Language Technology
    • /
    • 2006.10e
    • /
    • pp.246-253
    • /
    • 2006
  • Most previous research on dialogue agents has restricted the target domain and focused on agents that converse with users to accomplish a specific goal. In this study, we introduce a corpus built by manually annotating speech act information on open-domain dialogues whose domain is not restricted, and we present features that are useful for automatically classifying speech acts on the basis of this corpus. We also compare the results of automatic speech act classification experiments on a domain-restricted corpus and on the unrestricted corpus.

  • PDF

Emotion and Speech Act classification in Dialogue using Multitask Learning (대화에서 멀티태스크 학습을 이용한 감정 및 화행 분류)

  • Shin, Chang-Uk;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.532-536
    • /
    • 2018
  • Research on dialogue modeling with deep neural networks is being actively pursued. In this paper, we propose an end-to-end system that uses multitask learning to classify the emotion and the speech act of utterances in dialogue. We apply multitask learning to build a single system that classifies emotion and speech act simultaneously, and we use a cascaded classification structure to handle imbalanced categories (a simplified sketch of the cascade follows this entry). In experiments on a daily-conversation dataset, measured by macro-average precision, the system achieved 60.43% on emotion classification and 74.29% on speech act classification, improvements of 29.00% and 1.54%, respectively, over the baseline model. These results show that emotion and speech act classification can be modeled end-to-end with the proposed structure, and we present both a method for properly training the two classification problems in one structure and a classification scheme for dealing with class imbalance.

  • PDF
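
Below is a minimal sketch of the cascaded part of such a system: a first classifier separates the frequent speech acts from a catch-all "rare" bucket, and a second classifier, trained only on the rare classes, resolves that bucket. The class split, toy data, and the use of scikit-learn classifiers are illustrative assumptions; the paper's end-to-end multitask network is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

FREQUENT = {"inform", "ask"}           # assumed frequent speech acts
utterances = ["I watched a movie", "what did you do",
              "wow really", "sorry to hear that"]
labels = ["inform", "ask", "surprise", "sympathize"]

# Stage 1: frequent classes vs. a catch-all "rare" bucket.
stage1_y = [y if y in FREQUENT else "rare" for y in labels]
stage1 = make_pipeline(TfidfVectorizer(),
                       LogisticRegression(max_iter=1000)).fit(utterances, stage1_y)

# Stage 2: trained only on the rare-class utterances.
rare_X = [x for x, y in zip(utterances, labels) if y not in FREQUENT]
rare_y = [y for y in labels if y not in FREQUENT]
stage2 = make_pipeline(TfidfVectorizer(),
                       LogisticRegression(max_iter=1000)).fit(rare_X, rare_y)

def cascaded_predict(utterance):
    first = stage1.predict([utterance])[0]
    return stage2.predict([utterance])[0] if first == "rare" else first

print(cascaded_predict("oh no that's terrible"))
```

The benefit of the cascade is that the second-stage classifier never sees the dominant classes, so the rare categories compete only with each other.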