• Title/Summary/Keyword: Dialogue Act Classification


A Comparative Study on Optimal Feature Identification and Combination for Korean Dialogue Act Classification (한국어 화행 분류를 위한 최적의 자질 인식 및 조합의 비교 연구)

  • Kim, Min-Jeong;Park, Jae-Hyun;Kim, Sang-Bum;Rim, Hae-Chang;Lee, Do-Gil
    • Journal of KIISE:Software and Applications / v.35 no.11 / pp.681-691 / 2008
  • In this paper, we evaluate and compare the individual features and feature combinations needed for statistical Korean dialogue act classification. We implemented a Korean dialogue act classification system using the Support Vector Machine (SVM) method. The experimental results show that the POS bigram does not work well, and that the morpheme-POS pair and the other features are complementary to each other. In addition, a small number of features, selected by a feature selection technique such as the chi-square statistic, is enough to give stable dialogue act classification performance. We also found that the last eojeol plays an important role in classifying an entire sentence, and that Korean characteristics such as free word order and frequent subject ellipsis can affect the performance of dialogue act classification.
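
A minimal sketch of the feature-selection-plus-SVM pipeline described in this abstract, not the authors' implementation: scikit-learn's chi-square selector and a linear SVM are used, and the morpheme/POS-pair tokens, dialogue act labels, and the value of k are invented for illustration.

```python
# Chi-square feature selection + linear SVM for dialogue act classification.
# Utterances are assumed to be pre-tokenized into morpheme/POS-pair tokens;
# the toy data and the "k=10" setting below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

utterances = [
    "내일/NNG 시간/NNG 있/VA 어/EF ?/SF",
    "네/IC 가능/NNG 하/XSV ㅂ니다/EF ./SF",
    "회의/NNG 를/JKO 취소/NNG 하/XSV 자/EF ./SF",
]
dialogue_acts = ["ask-ref", "response", "request"]

pipeline = Pipeline([
    # Bag of morpheme/POS-pair features (unigrams only, for brevity).
    ("features", CountVectorizer(tokenizer=str.split, token_pattern=None)),
    # Keep only the k features with the highest chi-square scores.
    ("select", SelectKBest(chi2, k=10)),
    # Linear SVM classifier over the selected features.
    ("svm", LinearSVC()),
])

pipeline.fit(utterances, dialogue_acts)
print(pipeline.predict(["내일/NNG 회의/NNG 가능/NNG 하/XSV ㅂ니까/EF ?/SF"]))
```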

A Domain Action Classification Model Using Conditional Random Fields (Conditional Random Fields를 이용한 영역 행위 분류 모델)

  • Kim, Hark-Soo
    • Korean Journal of Cognitive Science / v.18 no.1 / pp.1-14 / 2007
  • In a goal-oriented dialogue, speakers' intentions can be represented by domain actions that consist of pairs of a speech act and a concept sequence. Therefore, to implement an intelligent dialogue system, it is very important to correctly infer the domain actions from surface utterances. In this paper, we propose a statistical model that determines speech acts and concept sequences simultaneously using conditional random fields (CRFs). To avoid biased learning problems, the proposed model uses low-level linguistic features such as lexical items and parts of speech, and it filters out uninformative features using the chi-square statistic. In experiments in a schedule arrangement domain, the proposed system showed good performance (93.0% precision for speech act classification and 90.2% precision for concept sequence classification).
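
A rough sketch of a linear-chain CRF over the utterances of one dialogue, assuming the third-party sklearn-crfsuite package; the features, the "speech act|concept sequence" tag format, and the schedule-domain examples are hypothetical, and the chi-square feature filtering mentioned above is omitted.

```python
# Linear-chain CRF that labels each utterance in a dialogue with a
# "speech_act|concept_sequence" domain-action tag (illustrative only).
import sklearn_crfsuite

def utterance_features(dialogue, i):
    """Low-level lexical/POS features for the i-th utterance (toy version)."""
    words, pos_tags = dialogue[i]
    feats = {"bias": 1.0, "first_word": words[0], "last_word": words[-1],
             "last_pos": pos_tags[-1]}
    if i > 0:                       # context from the previous utterance
        feats["prev_last_pos"] = dialogue[i - 1][1][-1]
    return feats

# One toy dialogue: each utterance = (word list, POS list); the labels are
# hypothetical "speech-act|concept-sequence" pairs from a schedule domain.
dialogue = [
    (["내일", "회의", "있어", "?"], ["NNG", "NNG", "VA+EF", "SF"]),
    (["네", "오후", "두", "시에", "있습니다"], ["IC", "NNG", "MM", "NNG+JKB", "VV+EF"]),
]
labels = ["ask-ref|timetable", "response|timetable"]

X_train = [[utterance_features(dialogue, i) for i in range(len(dialogue))]]
y_train = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```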


Effective Text Question Analysis for Goal-oriented Dialogue (목적 지향 대화를 위한 효율적 질의 의도 분석에 관한 연구)

  • Kim, Hakdong;Go, Myunghyun;Lim, Heonyeong;Lee, Yurim;Jee, Minkyu;Kim, Wonil
    • Journal of Broadcast Engineering / v.24 no.1 / pp.48-57 / 2019
  • The purpose of this study is to understand the inquirer's intention from a single text question in goal-oriented dialogue. A goal-oriented dialogue system is a dialogue system that satisfies the user's specific needs via text or voice. Intention analysis is the step of analyzing the user's intention of inquiry prior to answer generation, and it has a great influence on the performance of the entire goal-oriented dialogue system. The proposed model was applied to a daily chemical products domain, using Korean text data related to that domain. The analysis is divided into the speech-act, which is independent of a specific domain, and the concept-sequence, which depends on a specific domain. We propose a classification method that uses a word embedding model and a CNN to analyze the speech-act and the concept-sequence: the semantic information of each word is abstracted through the word embedding model, and speech-act and concept-sequence classification are performed by the CNN on the basis of this abstracted word-level semantic information.
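
An illustrative PyTorch sketch of a word-embedding-plus-CNN utterance classifier of the kind described above (not the authors' model); the vocabulary size, embedding dimension, filter sizes, and number of classes are made-up values.

```python
# A Kim-style CNN text classifier: embed tokens, convolve with several
# filter widths, max-pool over time, and project to class scores.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, num_classes=10,
                 kernel_sizes=(2, 3, 4), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per kernel size, applied over the word sequence.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed, seq)
        # Convolve, apply ReLU, then max-pool over time for each filter size.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))  # class scores

# Toy usage: a batch of two padded utterances, 12 tokens each.
model = TextCNN()
batch = torch.randint(0, 5000, (2, 12))
print(model(batch).shape)   # torch.Size([2, 10])
```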

Review of Korean Speech Act Classification: Machine Learning Methods

  • Kim, Hark-Soo;Seon, Choong-Nyoung;Seo, Jung-Yun
    • Journal of Computing Science and Engineering / v.5 no.4 / pp.288-293 / 2011
  • To resolve ambiguities in speech act classification, various machine learning models have been proposed over the past 10 years. In this paper, we review these machine learning models and present the results of an experimental comparison of three representative models, namely the decision tree, the support vector machine (SVM), and the maximum entropy model (MEM). In experiments with a goal-oriented dialogue corpus in the schedule management domain, we found that the MEM has lighter hardware requirements, whereas the SVM has better performance characteristics.
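
A minimal sketch of this kind of comparison, with scikit-learn's DecisionTreeClassifier, LinearSVC, and LogisticRegression (a maximum entropy model) standing in for the three model families; the utterances and speech-act labels are toy placeholders, not the paper's corpus.

```python
# Compare a decision tree, an SVM, and a maximum entropy model on a tiny
# illustrative speech-act corpus using identical bag-of-words features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

utterances = ["내일 일정 알려줘", "네 알겠습니다", "회의 취소해 줘", "고마워"]
speech_acts = ["request", "accept", "request", "expressive"]

models = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": LinearSVC(),
    "MEM (logistic regression)": LogisticRegression(max_iter=1000),
}

for name, clf in models.items():
    pipe = make_pipeline(CountVectorizer(), clf)
    pipe.fit(utterances, speech_acts)
    # Training accuracy only, since this is an illustrative toy corpus.
    print(name, pipe.score(utterances, speech_acts))
```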

Effective Korean Speech-act Classification Using the Classification Priority Application and a Post-correction Rules (분류 우선순위 적용과 후보정 규칙을 이용한 효과적인 한국어 화행 분류)

  • Song, Namhoon;Bae, Kyoungman;Ko, Youngjoong
    • Journal of KIISE / v.43 no.1 / pp.80-86 / 2016
  • A speech-act is the behavior a user intends with an utterance, and speech-act classification is important in a dialogue system. Machine learning and rule-based methods have mainly been used for speech-act classification. In this paper, we propose a speech-act classification method based on a combination of the support vector machine (SVM) and transformation-based learning (TBL). A user's utterance is first classified by SVMs that are preferentially applied to the categories with a low utterance rate in the training data. Then, when an utterance receives negative scores from all categories, it is passed to a rule-based post-correction phase. The proposed method achieved higher performance than the baseline system, along with error reduction.
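
A simplified sketch of the decision flow described in this abstract, not the authors' code: one-vs-rest SVM scores are checked in ascending order of category frequency, and utterances with only negative scores fall through to a hand-written correction rule. The features, categories, and the rule are hypothetical, and TBL rule learning itself is not shown.

```python
# Priority-ordered SVM classification with a rule-based fallback for
# utterances that no category claims with a positive score.
from collections import Counter
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

utterances = ["내일 일정 알려줘", "회의 잡아 줘", "네 알겠습니다", "응 좋아", "고마워"]
speech_acts = ["request", "request", "accept", "accept", "expressive"]

model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(utterances, speech_acts)

# Categories sorted by ascending frequency in the training data.
priority = [c for c, _ in sorted(Counter(speech_acts).items(), key=lambda x: x[1])]
classes = list(model.classes_)

def classify(utterance):
    scores = model.decision_function([utterance])[0]
    # Check classes in priority order; accept the first positive SVM score.
    for cat in priority:
        if scores[classes.index(cat)] > 0:
            return cat
    # All scores negative: fall back to a simple post-correction rule.
    if utterance.endswith("줘"):
        return "request"
    return classes[int(np.argmax(scores))]

print(classify("자료 좀 보내 줘"))
```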

Dialogue Act Classification for Non-Task-Oriented Korean Dialogues (도메인에 비종속적인 대화에서의 화행 분류)

  • Kim, Min-Jeong;Han, Kyoung-Soo;Park, Jae-Hyun;Song, Young-In;Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2006.10e / pp.246-253 / 2006
  • Most previous research on dialogue agents has restricted the target domain, focusing on agents that converse with users to achieve a specific goal. In this study, we introduce a corpus built by manually annotating speech act information on open-domain dialogues that are not restricted to a particular domain, and we present features that are useful for automatically classifying speech acts on the basis of this corpus. We also compare experimental results of automatic speech act classification on a domain-restricted corpus and on the domain-unrestricted corpus.


Emotion and Speech Act classification in Dialogue using Multitask Learning (대화에서 멀티태스크 학습을 이용한 감정 및 화행 분류)

  • Shin, Chang-Uk;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.532-536 / 2018
  • Research on dialogue modeling with deep neural networks is being actively conducted. In this paper, we propose an end-to-end system that uses multitask learning to classify the emotion and the speech act of utterances in a dialogue. We perform multitask learning to build a system that classifies emotions and speech acts simultaneously, and we use a cascaded classification structure to handle imbalanced category classification. In experiments on an everyday-conversation dataset, measuring performance with macro average precision, the system achieved 60.43% for emotion classification and 74.29% for speech act classification, improvements of 29.00%p and 1.54%p over the baseline model, respectively. With the proposed structure, we show that emotion and speech act classification of utterances can be modeled in an end-to-end fashion, and we present a method for properly training the two classification problems in a single structure as well as a classification scheme for resolving the category imbalance problem.
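
A rough PyTorch sketch of a multitask utterance classifier with a shared encoder and separate emotion and speech-act heads, illustrating the general multitask setup only; the paper's end-to-end architecture and its cascaded classification for imbalanced categories are not reproduced, and all sizes are invented.

```python
# Shared GRU encoder with two classification heads; the two cross-entropy
# losses are summed so both tasks update the shared parameters.
import torch
import torch.nn as nn

class MultitaskClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden=128,
                 num_emotions=7, num_speech_acts=12):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.emotion_head = nn.Linear(hidden, num_emotions)
        self.speech_act_head = nn.Linear(hidden, num_speech_acts)

    def forward(self, token_ids):
        _, h = self.encoder(self.embedding(token_ids))   # final hidden state
        h = h.squeeze(0)                                  # (batch, hidden)
        return self.emotion_head(h), self.speech_act_head(h)

model = MultitaskClassifier()
tokens = torch.randint(0, 5000, (4, 15))          # 4 utterances, 15 tokens
emotion_gold = torch.randint(0, 7, (4,))
speech_act_gold = torch.randint(0, 12, (4,))

emotion_logits, act_logits = model(tokens)
# Multitask loss: both tasks back-propagate through the shared encoder.
loss = nn.functional.cross_entropy(emotion_logits, emotion_gold) \
     + nn.functional.cross_entropy(act_logits, speech_act_gold)
loss.backward()
print(float(loss))
```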


Speakers' Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network (Convolutional Neural Network에서 공유 계층의 부분 학습에 기반 한 화자 의도 분석)

  • Kim, Minkyoung;Kim, Harksoo
    • Journal of KIISE / v.44 no.12 / pp.1252-1257 / 2017
  • In dialogues, speakers' intentions can be represented by sets of an emotion, a speech act, and a predicator. Therefore, dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated their determination as independent classification problems, but others have shown that they are associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of particular abstraction layers, in which mutually independent information about each characteristic is abstracted, and a shared abstraction layer, in which combinations of that independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In the experiments, the proposed integrated model showed better performance (by 2%p in emotion determination, 11%p in speech act determination, and 3%p in predicator determination) than independent determination models.
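
An illustrative PyTorch sketch of the layered structure described above: task-specific (particular) convolutional abstraction layers feed one shared abstraction layer with emotion, speech-act, and predicator heads. The paper's partial back-propagation of each task's error through the shared layer is only approximated here by summing the three losses, and all sizes are invented.

```python
# Particular abstraction layers per task, a shared abstraction layer over
# their concatenation, and one output head per task.
import torch
import torch.nn as nn

class SharedLayerCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, filters=64,
                 shared_dim=128, n_emotion=7, n_speech_act=12, n_predicator=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One particular abstraction layer (a 1-D convolution) per task.
        self.particular = nn.ModuleDict({
            task: nn.Conv1d(embed_dim, filters, kernel_size=3)
            for task in ("emotion", "speech_act", "predicator")})
        # Shared abstraction layer over the concatenated task abstractions.
        self.shared = nn.Linear(filters * 3, shared_dim)
        self.heads = nn.ModuleDict({
            "emotion": nn.Linear(shared_dim, n_emotion),
            "speech_act": nn.Linear(shared_dim, n_speech_act),
            "predicator": nn.Linear(shared_dim, n_predicator)})

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed, seq)
        abstractions = [torch.relu(conv(x)).max(dim=2).values
                        for conv in self.particular.values()]
        shared = torch.relu(self.shared(torch.cat(abstractions, dim=1)))
        return {task: head(shared) for task, head in self.heads.items()}

model = SharedLayerCNN()
outputs = model(torch.randint(0, 5000, (2, 20)))         # 2 utterances
print({task: logits.shape for task, logits in outputs.items()})
```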