• Title/Summary/Keyword: Conversational Agents

Trends in the AI-based Banking Conversational Agents Literature: A Bibliometric Review

  • Eden Samuel Parthiban;Mohd. Adil
• Asia Pacific Journal of Information Systems
    • /
    • v.33 no.3
    • /
    • pp.702-736
    • /
    • 2023
  • Artificial Intelligence (AI) and the technologies it powers fuel the fourth industrial revolution. As a primary adopter of such innovations, banking has recently started using the most common AI-based technology, i.e., conversational agents. Although research extensively focuses on this niche area and provides bibliometric understanding of such agents in other industries, a similar review offering scientometric insights into the banking literature on AI conversational agents has been absent to date. Furthermore, in the era following the pandemic, banks face the imperative to provide solutions that align with the changing landscape of remote consumer behavior. As a result, banks are proactively integrating technology-driven solutions, such as automated agents, to address the growing demand for remote customer support, and more research is needed to perfect such agents. To bridge these gaps, the present study undertook a comprehensive examination of two decades' worth of banking literature. A meticulous review was conducted, analyzing approximately 116 papers published from 2003 to 2023, with the aim of providing a scientometric overview of the topic for both academic and industry professionals. Holistically, the study presents a macro-view of existing trends in the AI-based banking conversational agents literature, focusing on the quantitative, qualitative, and structural indicators necessary to offer new directions for AI-based banking solutions. Our study therefore presents insights into the literature using selected techniques from performance analysis and science mapping.

Learning Conversation in Conversational Agent Using Knowledge Acquisition based on Speech-act Templates and Sentence Generation with Genetic Programming (화행별 템플릿 기반의 지식획득 기법과 유전자 프로그래밍을 이용한 문장 생성 기법을 통한 대화형 에이전트의 대화 학습)

  • Lim Sungsoo;Hong Jin-Hyuk;Cho Sung-Bae
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.351-368
    • /
    • 2005
  • Manual construction of a knowledge base takes much time and effort, and it is hard to adapt intelligent systems to dynamic and flexible environments. Mental development in such systems has therefore been investigated in recent years. Autonomous mental development is a new paradigm for developing autonomous machines that are adaptive and flexible to the environment. Learning conversation, a kind of mental development, is an important aspect of conversational agents. In this paper, we propose a conversation-learning method for conversational agents that uses several promising techniques: speech-act templates and genetic programming. Knowledge acquisition for conversational agents is implemented with finite state machines and templates, and dynamic sentence generation is implemented with genetic programming. Several illustrations and usability tests show the usefulness of the proposed method.

Effects of self-disclosure in conversational agents - Comparison of task- and social-oriented dialogues -

  • Lee, Kahyun;Choi, Kee-eun;Choi, Junho
    • Design Convergence Study
    • /
    • v.18 no.3
    • /
    • pp.71-87
    • /
    • 2019
  • Previous research has shown that the use of self-disclosure, the process of revealing personal thoughts and feelings, in conversational agents (CAs) increases overall user evaluations. However, research exploring the effects of self-disclosure in different situations or dialogue types is limited. This study investigated the effects of self-disclosure and dialogue type (task- vs. social-oriented) on trust, usefulness, and usage intention. Results showed significant interaction effects between self-disclosure and dialogue type. For CAs that did not use self-disclosure, trust, usefulness, and usage intention were higher in task-oriented dialogues. In contrast, CAs that did use self-disclosure had higher trust, usefulness, and usage intention in social-oriented dialogues. These results suggest that researchers and designers should consider the specific dialogue types and corresponding user goals when adding human qualities, such as self-disclosure, to CAs.

Error Analysis of Recent Conversational Agent-based Commercialization Education Platform (최신 대화형 에이전트 기반 상용화 교육 플랫폼 오류 분석)

  • Lee, Seungjun;Park, Chanjun;Seo, Jaehyung;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.11-22
    • /
    • 2022
  • Recently, research and development using various Artificial Intelligence (AI) technologies have been conducted in the field of education. Among AI in Education (AIEd) applications, conversational agents are not limited by time and space, and learning can be made more effective by combining them with other AI technologies such as speech recognition and translation. This paper conducted a trend analysis of commercialized applications that have a large number of users and use conversational agents for English learning. The trend analysis showed that currently commercialized educational platforms using conversational agents have several limitations and problems. To analyze these problems and limitations in detail, a comparative experiment was conducted with the latest large-scale pre-trained dialogue model, and a Sensibleness and Specificity Average (SSA) human evaluation was conducted to assess conversational human-likeness. Based on the experiment, this paper proposes that dialogue models trained with large-capacity parameters, educational data, and information retrieval functions are needed for effective English conversation learning.
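The SSA score used in the evaluation above is straightforward to compute: it is the mean of two human-rated per-response rates, sensibleness and specificity. A minimal sketch follows; binary per-response judgments are an assumption here, since the abstract does not specify the rating scale.

```python
def ssa(sensibleness_scores, specificity_scores):
    """Sensibleness and Specificity Average: the mean of the two
    per-response rates from a human evaluation."""
    sens = sum(sensibleness_scores) / len(sensibleness_scores)
    spec = sum(specificity_scores) / len(specificity_scores)
    return (sens + spec) / 2

# Four responses: 3/4 judged sensible, 2/4 judged specific.
print(ssa([1, 1, 0, 1], [1, 0, 0, 1]))  # 0.625
```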

Applying Social Strategies for Breakdown Situations of Conversational Agents: A Case Study using Forewarning and Apology (대화형 에이전트의 오류 상황에서 사회적 전략 적용: 사전 양해와 사과를 이용한 사례 연구)

  • Lee, Yoomi;Park, Sunjeong;Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility
    • /
    • v.21 no.1
    • /
    • pp.59-70
    • /
    • 2018
  • With breakthroughs in speech recognition technology, conversational agents have become pervasive through smartphones and smart speakers. The accuracy of speech recognition has reached human level, but it still shows limitations in understanding the underlying meaning or intention of words, or in following long conversations. Accordingly, users experience various errors when interacting with conversational agents, which may negatively affect the user experience. In addition, for smart speakers with voice as the main interface, a lack of system feedback and transparency has been reported as a main issue during use. Therefore, there is a strong need for research on how users can better understand the capability of conversational agents and mitigate negative emotions in error situations. In this study, we applied two social strategies, "forewarning" and "apology", to a conversational agent and investigated how these strategies affect users' perceptions of the agent in breakdown situations. For the study, we created a series of demo videos of a user interacting with a conversational agent. After watching the demo videos, participants were asked through an online survey to evaluate how much they liked and trusted the agent. A total of 104 responses were analyzed, and the results were contrary to our expectations based on the literature: forewarning gave a negative impression to users, especially regarding the reliability of the agent, and apology in a breakdown situation did not affect users' perceptions. In follow-up in-depth interviews, participants explained that they perceived the smart speaker as a machine rather than a human-like object, and for this reason the social strategies did not work. These results show that social strategies should be applied according to the perceptions users have of agents.

An Artificial Intelligence Approach for Word Semantic Similarity Measure of Hindi Language

  • Younas, Farah;Nadir, Jumana;Usman, Muhammad;Khan, Muhammad Attique;Khan, Sajid Ali;Kadry, Seifedine;Nam, Yunyoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.6
    • /
    • pp.2049-2068
    • /
    • 2021
  • AI combined with NLP techniques has promoted the use of virtual assistants and made people rely on them for many diverse uses. Conversational agents are the most promising technique for assisting computer users in their work. An important challenge in developing conversational agents globally is transferring the groundbreaking expertise obtained in English to other languages, and AI is making it possible to transfer this learning. There is a dire need to develop systems that understand vernacular languages. One such difficult language is Hindi, the fourth most spoken language in the world. Semantic similarity is an important part of Natural Language Processing, with applications such as ontology learning and information extraction, and is needed for developing conversational agents. Most existing research concentrates on English and other European languages. This paper presents a corpus-based word semantic similarity measure for Hindi. An experiment involving translation of an English benchmark dataset to Hindi is performed, investigating the incorporation of the corpus with human and machine similarity ratings. A significant correlation between human ratings and the algorithm's ratings was calculated to analyze the accuracy of the proposed similarity measures. The method can be adapted to various applications of word semantic similarity, or as a module, for any other language.
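Corpus-based word similarity measures of the kind described above typically derive a vector for each word from its corpus contexts and compare vectors with cosine similarity. The sketch below is a toy illustration of that general idea on a tiny English corpus, not the paper's actual measure for Hindi; all function names are ours.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build a sparse co-occurrence count vector for each word,
    counting neighbors within +/- `window` tokens."""
    vectors = {}
    for tokens in sentences:
        for i, w in enumerate(tokens):
            ctx = vectors.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat chased a dog".split(),
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts (sat, on, a, ...), so similarity is high.
print(cosine(vecs["cat"], vecs["dog"]))
```

In a realistic system, the raw counts would be weighted (e.g., with PMI) and built from a large corpus, but the vector-plus-cosine skeleton is the same.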

Effects of Conversational Agent's Self-Repair Strategy On User Experience - Focused on Task Criticality and Conversational Error (대화형 에이전트의 자기발화수정 전략이 사용자 경험에 미치는 영향 - 과업 중요도와 대화 오류 여부를 중심으로)

  • Kim, Hwanju;Kim, Jung-Yong;Kang, Hyunmin
    • Journal of Digital Convergence
    • /
    • v.20 no.2
    • /
    • pp.251-260
    • /
    • 2022
  • Despite the development of technology and the increasing spread of smart speakers, user satisfaction keeps decreasing due to conversational errors. This study examines the effect of the self-repair strategy on user experience in the context of smart-speaker conversational agents. Scenarios were designed based on error situations, and participants were divided into two groups by task criticality. The results revealed that the agent's self-repair strategy has a negative effect on trust and perceived ease of use compared with error-free performance. It also influenced adoption intention through an interaction with task criticality. This study is significant in that it empirically investigated the effects of the self-repair strategy and identified the user experience factors related to its actual acceptance.

A Study on Conversational AI Agent based on Continual Learning

  • Park, Chae-Lim;Yoo, So-Yeop;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.27-38
    • /
    • 2023
  • In this paper, we propose a conversational AI agent based on continual learning that can continuously learn and grow with new data over time. The agent consists of three main components: a task manager, user attribute extraction, and an auto-growing knowledge graph. When the task manager finds new data during a conversation with a user, it creates a new task using previously learned knowledge. The user attribute extraction model extracts the user's characteristics from the new task, and the auto-growing knowledge graph continuously learns new external knowledge. Unlike existing conversational AI agents trained on a limited dataset, our proposed method enables conversations based on continuous learning of user attributes and knowledge. A conversational AI agent with continual learning technology can respond more personally as conversations with users accumulate, and it can respond to new knowledge continuously. This paper validates the feasibility of the proposed method through experiments on performance changes in dialogue generation models over time.
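The three-component architecture described in the abstract can be caricatured as a simple control loop. The sketch below is purely illustrative of how the pieces might hand data to each other; none of the class or method names come from the paper, and real implementations of each component (a trained attribute extractor, a graph store) are stubbed with trivial stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A unit of new data plus the knowledge available when it was created."""
    utterance: str
    prior_knowledge: set

@dataclass
class ContinualAgent:
    knowledge: set = field(default_factory=set)     # stand-in for the auto-growing knowledge graph
    user_attrs: dict = field(default_factory=dict)  # accumulated user attributes

    def task_manager(self, utterance):
        # New data seen mid-conversation -> create a task carrying prior knowledge.
        return Task(utterance, set(self.knowledge))

    def extract_user_attributes(self, task):
        # Stand-in for the attribute-extraction model: record tokens as "attributes".
        for tok in task.utterance.split():
            self.user_attrs[tok] = self.user_attrs.get(tok, 0) + 1

    def grow_knowledge(self, facts):
        # Auto-growing store: accumulate new external knowledge over time.
        self.knowledge |= set(facts)

agent = ContinualAgent()
task = agent.task_manager("I like jazz")
agent.extract_user_attributes(task)
agent.grow_knowledge({"jazz is a music genre"})
```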

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition errors, where the agent fails to recognize the user's speech entirely. The second is misinterpretation errors, where the user's speech is recognized and services are provided, but the interpretation differs from the user's intention. Among these, misinterpretation errors require separate error detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each text separation method, the similarity of consecutive speech pairs was calculated using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model by applying patterns of misinterpretation error causes. The results revealed that the most significant gains came from initial consonant extraction for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect, it proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if applied to conversational agents or voice recognition services requiring neologism detection, patterns of errors occurring from the voice recognition stage can be specified. The study also proposed and verified that, even for utterances not categorized as errors, services can be provided according to user-desired results.
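The initial consonant (choseong) extraction the abstract highlights relies on the standard Unicode arithmetic for precomposed Hangul syllables: each syllable at U+AC00 and above encodes (choseong × 21 + jungseong) × 28 + jongseong. A minimal sketch of that decomposition follows; the Jaccard comparison is an illustrative stand-in, not the paper's actual similarity measure, and the function names are ours.

```python
# The 19 Hangul initial consonants (choseong), in Unicode order.
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")

def initial_consonants(word):
    """Extract the initial consonant of each precomposed Hangul syllable."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:              # precomposed Hangul syllable block
            out.append(CHOSEONG[code // (21 * 28)])
        else:
            out.append(ch)                 # keep non-Hangul characters as-is
    return "".join(out)

def jaccard(a, b):
    """Set-based similarity between two initial-consonant strings."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

print(initial_consonants("한글"))  # ㅎㄱ
```

Comparing utterance pairs at the choseong level, rather than on whole words, is what lets a detector match a misrecognized neologism against the syllable skeleton the user actually produced.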

Study of deduction flow map on conversation toward the Embodied conversational agents in the Mobile Environment (모바일 상황에서 대화형 에이전트와 사용자의 대화 흐름도 도출 연구)

  • Choi, Yoo-Jung;Jo, Yoon-Ju;Park, Su-E
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.178-183
    • /
    • 2008
  • The goal of this study is to derive a flow map of the conversation between users and embodied conversational agents by analyzing such conversations. Specifically, this study not only identifies the elements of conversation but also draws out the conversation patterns that can exist in dialogue between a user and an embodied conversational agent. To do this, we collected data through in-depth one-to-one interviews and then analyzed the collected data to identify the elements of user-agent conversation, using a qualitative approach informed by conversation-analysis theory and conversation types. As a result, six flow maps were deduced. Notably, irregular conversation, which is hard to find in human-human conversation, was the most frequent pattern in the data. In addition, when interruptions occurred, users became hostile toward the partner or corrected the conversation. This study can benefit embodied conversational agent developers, users, and service providers, because it identifies conversation types through analysis of interactions between embodied conversational agents and users.
