• Title/Summary/Keyword: conversational agent

57 results

Exploration of User Experience Research Method with Big Data Analysis : Focusing on the Online Review Analysis of Echo (빅데이터 분석을 활용한 사용자 경험 평가 방법론 탐색 : 아마존 에코에 대한 온라인 리뷰 분석을 중심으로)

  • Hwang, Hae Jeong;Shim, Hye Rin;Choi, Junho
    • The Journal of the Korea Contents Association, v.16 no.8, pp.517-528, 2016
  • This study explores a new user experience (UX) research method for IoT products, which are becoming widely used yet lack practical user research. While UX research has traditionally relied on survey or observation methods, this paper applies big data analysis to online user reviews of an intelligent-agent IoT product, Amazon's Echo. Topic modeling extracted user experience elements such as features, conversational interaction, and updates, and regression analysis showed that the updates topic was the most influential determinant of user satisfaction. The main contribution of this study is the introduction of big data analysis into user experience research for intelligent-agent IoT products.
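The pipeline the abstract describes — topic proportions per review fed into a regression against the review's rating — can be sketched as follows. This is a minimal illustration, not the paper's analysis: the topic proportions are assumed to come from a topic model such as LDA, and all numbers are invented.

```python
# Sketch: regress satisfaction ratings on the proportion of one topic
# ("updates") in each review. Data are illustrative, not from the paper.

def ols(x, y):
    """One-variable ordinary least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    slope = cov / var
    return slope, my - slope * mx

# Toy data: proportion of the "updates" topic in each review vs. its rating.
updates_topic = [0.1, 0.2, 0.4, 0.5, 0.7, 0.8]
ratings = [2.0, 2.5, 3.5, 4.0, 4.5, 5.0]

slope, intercept = ols(updates_topic, ratings)
print(f"slope={slope:.2f}")  # a positive slope → more "updates" talk, higher rating
```

A positive, large slope for a topic is what the study reads as that topic being an influential determinant of satisfaction.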

A mixed-initiative conversational agent for ubiquitous home environments (유비쿼터스 가정환경을 위한 상호주도형 대화 에이전트)

  • Song, In-Jee;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.7, pp.834-839, 2005
  • As a great variety of services becomes available to users through the broadband convergence network in the ubiquitous home environment, an intelligent agent is required to manage the complexity of those services and infer the user's intention. Unlike conventional command-based interfaces for selecting services, conversation enables flexible and rich interaction between humans and agents, but the diversity of users' expressions, backgrounds, and contexts makes conversation hard to implement with purely user-initiative or system-initiative methods. To handle this ambiguity, we apply hierarchical Bayesian networks for mixed-initiative conversation: information missing from the user's query is identified by the networks to infer the user's intention, and is then collected through the agent's follow-up queries. We have demonstrated this approach in a simulated ubiquitous home environment.
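The core idea — inferring a missing slot from context and letting the agent take the initiative when the posterior is uncertain — can be sketched with a toy Bayesian update. This is not the paper's network; the devices, priors, and likelihoods are invented for illustration.

```python
# Sketch: the user says "turn it on" without naming a device. The agent
# combines a usage prior with a likelihood for the observed context
# (time of day) to get a posterior over devices, then asks a follow-up
# question if the best candidate is still too uncertain.

def posterior(prior, likelihood, evidence):
    """P(device | evidence) proportional to P(device) * P(evidence | device)."""
    unnorm = {d: prior[d] * likelihood[d].get(evidence, 0.0) for d in prior}
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()}

prior = {"lamp": 0.5, "tv": 0.3, "heater": 0.2}   # assumed usage prior
likelihood = {                                     # P(context | device), assumed
    "lamp":   {"evening": 0.7, "morning": 0.2},
    "tv":     {"evening": 0.5, "morning": 0.3},
    "heater": {"evening": 0.2, "morning": 0.6},
}

post = posterior(prior, likelihood, "evening")
best = max(post, key=post.get)
if post[best] < 0.8:  # still ambiguous -> agent takes the initiative
    print(f"Did you mean the {best}? (p={post[best]:.2f})")
```

The confidence threshold is what makes the dialogue mixed-initiative: above it the agent acts, below it the agent queries the user for the missing information.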

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association, v.18 no.8, pp.92-101, 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered both auditorily and through an on-screen visual AI character, optimizes user experience more effectively than the auditory mode alone. Participants performed music selection and adjustment tasks through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention; rather, the auditory-only mode was more effective on the information quality factor. In the semi-autonomous driving stage, which demands the driver's cognitive effort, multimodal interaction is therefore not effective for optimizing user experience compared with single-mode interaction.

Implementation of Chatbot Models for Coding Education (코딩 교육을 위한 챗봇 모델 구현)

  • Ahn, Chae-eun;Jeon, Hyun-in;Hahn, Hee-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.1, pp.29-35, 2023
  • In this paper, we propose SW-EDU bot, a chatbot learning model for coding education. Identical scenario-based models were first built with Dialogflow and Kakao i Open Builder, two representative chatbot builders, and their usability was compared on five selected indicators; SW-EDU bot was then designed and implemented on the builder with the comparative advantage. The implemented chatbot aims to support effective learning while encouraging learners' self-direction by providing learning-type selection, concept learning, and problem solving by difficulty level. Through a usability evaluation, we analyze the feasibility of SW-EDU bot as a learning support tool and confirm its potential as a new coding education tool.
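A scenario-based chatbot of the kind the abstract describes is essentially a state machine over dialogue nodes. The sketch below shows the shape of such a flow (learning-type selection, then concept learning or problem solving by difficulty); the node names and prompts are invented, not the authors' actual builder blocks.

```python
# Sketch of a scenario-based chatbot flow: each node has a prompt and a
# table of user inputs that lead to the next node. Unknown input keeps
# the user at the current node, as scenario builders typically do.

SCENARIO = {
    "start": {"prompt": "Choose: concept or problems?",
              "next": {"concept": "concept", "problems": "difficulty"}},
    "concept": {"prompt": "Which topic? (variables/loops)",
                "next": {"variables": "end", "loops": "end"}},
    "difficulty": {"prompt": "Pick a difficulty: easy or hard",
                   "next": {"easy": "end", "hard": "end"}},
    "end": {"prompt": "Good luck with your coding practice!", "next": {}},
}

def step(state, user_input):
    """Advance the scenario; unrecognized input stays in the same state."""
    return SCENARIO[state]["next"].get(user_input, state)

state = "start"
for utterance in ["problems", "easy"]:
    state = step(state, utterance)
print(SCENARIO[state]["prompt"])
```

Builders such as Dialogflow and Kakao i Open Builder express the same graph as intents/blocks with transitions, which is what makes the two implementations directly comparable.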

Research on Developing a Conversational AI Callbot Solution for Medical Counselling

  • Lee, Won Ro;Choi, Jeong Hyon;Kang, Min Soo
    • Korean Journal of Artificial Intelligence, v.11 no.4, pp.9-13, 2023
  • In this study, we explored the potential of integrating conversational AI callbot technology into the medical consultation domain as part of a broader service development initiative. Aimed at enhancing patient satisfaction, the AI callbot was designed to efficiently handle queries from hospitals' primary users, especially the elderly and those using phone services. With an AI-driven callbot in the hospital's customer service center, routine tasks such as appointment modifications and cancellations were managed by the AI callbot agent, while tasks requiring more detailed attention or specialization were handled by human agents, ensuring a balanced and collaborative approach. The voice recognition model was based on the Transformer architecture and fine-tuned to the medical field from a pre-trained model; existing recording files were converted into training data for self-supervised learning (SSL). An artificial neural network (ANN) model was used to analyze voice signals and transcribe them to text, and after deployment the intent model was refined through reinforcement learning to continuously improve accuracy. For text-to-speech (TTS), the Transformer model was applied to text analysis, the acoustic model, and the vocoder, and Google's Natural Language API was used for intent recognition. Challenges remain, such as interconnection between EMR providers, doctors' time slots, appointments at two or more hospitals, and patient usability; for straightforward reservation tasks, however, the callbot service appears ready for immediate deployment in hospitals.
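The routing policy the abstract describes — routine intents stay with the callbot, everything else escalates to a human agent — can be sketched as below. This is illustrative only: a real system would classify intents with the trained model, whereas a keyword table stands in for it here, and all phrases are invented.

```python
# Sketch of the bot/human routing policy: map the recognized transcript
# to an intent, keep routine intents (reschedule/cancel) with the AI
# callbot, and escalate everything else to a human agent.

ROUTINE_INTENTS = {"reschedule", "cancel"}

KEYWORDS = {  # stand-in for the trained intent classifier
    "change my appointment": "reschedule",
    "cancel my appointment": "cancel",
    "chest pain": "medical_question",
}

def route(transcript):
    """Return ('bot' | 'human', intent) for a recognized utterance."""
    text = transcript.lower()
    for phrase, intent in KEYWORDS.items():
        if phrase in text:
            return ("bot" if intent in ROUTINE_INTENTS else "human", intent)
    return ("human", "unknown")  # unrecognized -> safe default: human agent

print(route("I need to change my appointment to Friday"))
```

Defaulting unrecognized utterances to the human agent is the conservative choice in a medical setting, matching the collaborative bot/human division the study describes.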

A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin;Lee, Dongmyeong;Quero, Luis Cavazos;Bartolome, Jorge Iranzo;Cho, Jundong;Lee, Sangwon
    • Science of Emotion and Sensibility, v.23 no.1, pp.29-40, 2020
  • Visually impaired people use tactile maps to obtain spatial information about their surroundings, find their way, and improve their independent mobility. However, classical tactile maps, which rely on braille to describe locations, have several limitations, such as a lack of information due to space constraints and limited feedback possibilities. This study describes the development of a new multimodal interactive tactile map interface that addresses these challenges to improve the usability and independence of visually impaired users. The interface adds touch gesture recognition to the surface of the tactile map and lets users verbally interact with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was built, and usability tests with blind participants evaluated the interface through a survey and interviews after use. The results show that the interactive tactile map prototype offers better usability than traditional braille-only tactile maps: participants reported that it was easier to find the starting point and the points of interest they wished to navigate to, and self-reported independence and confidence improved. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.
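The interaction loop — a touch gesture on a map region triggering spoken feedback — can be sketched as a simple dispatch from (gesture, region) to an utterance. The gesture vocabulary and map data below are invented for illustration; the paper's actual prototype may use different gestures and content.

```python
# Sketch: map touch gestures on tactile-map regions to voice feedback.
# A tap describes the point of interest; a double tap requests a route.

POI = {  # assumed map content, for illustration only
    "entrance": "Main entrance, ground floor",
    "elevator": "Elevator, 10 meters ahead on the left",
}

def feedback(gesture, region):
    """Return the text the voice agent would speak for a gesture."""
    if gesture == "single_tap":
        return POI.get(region, "Unknown region")
    if gesture == "double_tap":
        return f"Route set: from your position to {region}"
    return "Gesture not recognized"

print(feedback("single_tap", "elevator"))
```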

The Importance of Social Intimacy as a Sufficient Condition for Anthropomorphism and Positive User Experience (의인화와 긍정적인 사용자 경험의 충분조건으로서 사회적 친밀감의 중요성)

  • Lee, Da-Young;Han, Kwang-Hee
    • Science of Emotion and Sensibility, v.25 no.3, pp.15-32, 2022
  • This study seeks to clarify the mechanisms of anthropomorphism and positive user experience. It adopts the "computers are social actors" (CASA) paradigm to verify the causal relationship between social response and anthropomorphism and to explicate the paradigm correctly. The intimacy-forming and anthropomorphizing effects of deep self-disclosure in interpersonal relationships were replicated in relationships between humans and conversational agents to induce both social response and anthropomorphism. The mediating effect of intimacy on the anthropomorphizing effect of deep self-disclosure was then examined against psychological models that relate social connection, including intimacy, to anthropomorphism, and we further explored how intimacy and anthropomorphism trigger positive user experiences. The results demonstrated that the deeper the self-disclosure, the more intimate and human the agent was perceived to be and the more positive the user experience was. Moreover, the effect of self-disclosure depth on anthropomorphism and positive user experience was completely mediated by intimacy: when using a computer with interpersonal characteristics, people anthropomorphize it and have a positive experience because they respond socially to objects with social cues. This study bridges the gap between the CASA paradigm and anthropomorphism research, suggesting psychological explanations for the principles of human-computer interaction, and it explicates the mechanism of anthropomorphism and positive user experience, emphasizing the importance of social response, that is, intimacy.