• Title/Summary/Keyword: Artificial Intelligence Speaker (인공지능 스피커)


Assistant Robot with Google Assistant (구글 어시스턴스를 탑재한 비서로봇)

  • Cha-Hun Park;Jae-Hwan Kim;Ho-Beom Kim;Jin-Yeong Kim;Jeong-Mi Son;Jae-Min Jeong
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.419-420 / 2023
  • Recent advances in artificial intelligence and robotics have made it increasingly feasible to build assistant robots, and many companies are adopting them to automate work. In particular, as the population ages, labor shortages are emerging as a serious problem. Current assistant robots handle structured dialogue well but have limitations with unstructured dialogue. To address this problem, this paper presents a general-purpose assistant robot that supports unstructured dialogue and can carry out the actions the user wants. Using a voice recognition module and Google Assistant, the user can ask the robot about schedules, the weather, and so on through a microphone and hear answers through a speaker, enabling unstructured communication, and can also instruct the robot to perform the desired actions.
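As a rough, illustrative sketch of the microphone-to-speaker voice loop the abstract describes, the snippet below wires a speech-recognition step to a placeholder assistant call and a text-to-speech reply. It assumes the widely available `speech_recognition` and `pyttsx3` packages; the `send_to_assistant` helper is hypothetical and stands in for the Google Assistant integration, which is not reproduced here.

```python
# Hypothetical sketch of the voice front end described in the abstract.
# The assistant call is a placeholder, not the authors' implementation.
import speech_recognition as sr
import pyttsx3

def send_to_assistant(text: str) -> str:
    """Placeholder for the Google Assistant / dialogue back end."""
    return f"You asked: {text}"

def main():
    recognizer = sr.Recognizer()
    tts = pyttsx3.init()
    with sr.Microphone() as mic:
        recognizer.adjust_for_ambient_noise(mic)  # calibrate for room noise
        print("Listening...")
        audio = recognizer.listen(mic)
    try:
        # Free Google Web Speech API; Korean commands use language="ko-KR"
        query = recognizer.recognize_google(audio, language="ko-KR")
    except sr.UnknownValueError:
        query = ""
    answer = send_to_assistant(query) if query else "Sorry, I did not catch that."
    tts.say(answer)       # speak the reply through the robot's speaker
    tts.runAndWait()

if __name__ == "__main__":
    main()
```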


Positioning of Smart Speakers by Applying Text Mining to Consumer Reviews: Focusing on Artificial Intelligence Factors (텍스트 마이닝을 활용한 스마트 스피커 제품의 포지셔닝: 인공지능 속성을 중심으로)

  • Lee, Jung Hyeon;Seon, Hyung Joo;Lee, Hong Joo
    • Knowledge Management Research / v.21 no.1 / pp.197-210 / 2020
  • A smart speaker adds an AI assistant to a conventional portable speaker, so that users can give various commands by voice and receive offline services such as control of connected devices. Domestic adoption is accelerating, and the functions and linked services available through smart speakers are expanding into shopping and food ordering. Text-mining-based analysis of customer reviews has been widely used to identify how product functions and attributes affect customer attitudes, to perform sentiment analysis, and to evaluate products: prior work has extracted feature words from reviews and analyzed their impact on ratings, derived topics from reviews and related them to evaluations, and visualized market competition among similar products. Text mining has also been applied to smart speaker reviews to identify key attributes, analyze sentiment, and measure the effect of artificial intelligence attributes on product satisfaction. The purpose of this study is to collect blog posts about users' experiences with smart speakers released in Korea and to analyze customer attitudes by attribute. Customer attitudes are identified and visualized for each smart speaker product, and a positioning map is derived from customers' perceptions of the products across the identified attributes.
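To make the positioning idea concrete, here is a minimal sketch of turning review text into an attribute profile per product and projecting the profiles onto a two-dimensional map. The attribute lexicon, toy reviews, and use of PCA are illustrative assumptions, not the paper's actual extraction and mapping procedure.

```python
# Minimal sketch of a review-based positioning map, assuming a small
# hand-made attribute lexicon (hypothetical) rather than the paper's pipeline.
from collections import Counter
import numpy as np
from sklearn.decomposition import PCA

ATTRIBUTES = {  # hypothetical attribute -> cue words
    "voice_recognition": ["recognize", "understand", "voice"],
    "sound_quality": ["sound", "bass", "audio"],
    "connectivity": ["wifi", "bluetooth", "connect"],
    "assistant": ["assistant", "answer", "question"],
}

def attribute_profile(reviews):
    """Score each attribute by the relative frequency of its cue words."""
    counts = Counter(w for r in reviews for w in r.lower().split())
    total = sum(counts.values()) or 1
    return np.array([sum(counts[w] for w in words) / total
                     for words in ATTRIBUTES.values()])

# Toy reviews per product (placeholders for the collected blog posts)
products = {
    "SpeakerA": ["The assistant answers every question", "Great voice recognition"],
    "SpeakerB": ["Amazing sound and deep bass", "Bluetooth connect is easy"],
    "SpeakerC": ["Voice is understood well", "Audio sound could be better"],
}

profiles = np.vstack([attribute_profile(r) for r in products.values()])
coords = PCA(n_components=2).fit_transform(profiles)  # 2-D positioning map
for name, (x, y) in zip(products, coords):
    print(f"{name}: ({x:.3f}, {y:.3f})")
```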

A Ghost in the Shell? Influences of AI Features on Product Evaluations of Smart Speakers with Customer Reviews (A Ghost in the Shell? 고객 리뷰를 통한 스마트 스피커의 인공지능 속성이 평가에 미치는 영향 연구)

  • Lee, Hong Joo
    • Journal of Information Technology Services / v.17 no.2 / pp.191-205 / 2018
  • With the advancement of artificial intelligence (AI) techniques, many consumer products have adopted AI features to provide proactive and personalized services to customers. One of the most prominent products featuring AI techniques is the smart speaker. At its core, a smart speaker is a portable, wirelessly connected speaker, a product category that already existed in the consumer market; by applying AI techniques, smart speakers can recognize human voices, converse with users, control other connected devices, and provide offline services. The goal of this study is to identify the impact of AI features on customer ratings of these products. We compared customer reviews of a smart speaker with those of portable speakers without AI features. The Amazon Echo was used as the smart speaker, and the JBL Flip 4 Bluetooth Speaker and Ultimate Ears BOOM 2 Panther Limited Edition were used for comparison. These products are in the same price range ($50~100) and were listed as featured products on Amazon.com. All reviews of these products were collected, and words common to all products as well as words unique to the smart speaker were identified. Information gain values were calculated to measure how strongly each word is associated with positive or negative ratings, and positive and negative words across all products and within the Amazon Echo reviews were identified. Topic modeling was applied to the Amazon Echo reviews, and the importance of each topic was measured by summing the information gain values of its words. This study provides a way to identify customer responses to AI features and to measure their importance among the diverse features of a product.
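The information-gain step can be illustrated with a small sketch: score each word's presence against the positive/negative label using mutual information, which is equivalent to information gain for binary features. The toy reviews and labels below are made up, and scikit-learn's `mutual_info_classif` stands in for the paper's own computation.

```python
# Rough sketch of scoring words by information gain with respect to
# positive/negative review labels, on toy data (not the paper's corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

reviews = [
    "alexa understands my voice commands perfectly",
    "great assistant, answers every question",
    "sound is tinny and the speaker feels cheap",
    "bluetooth keeps dropping, bad speaker",
]
labels = [1, 1, 0, 0]  # 1 = positive rating, 0 = negative rating

vectorizer = CountVectorizer(binary=True)  # word presence/absence
X = vectorizer.fit_transform(reviews)

# Mutual information between each word's presence and the rating label
scores = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
ranked = sorted(zip(vectorizer.get_feature_names_out(), scores),
                key=lambda t: -t[1])
for word, score in ranked[:5]:
    print(f"{word}: {score:.3f}")
```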

Deep Learning-Based User Emergency Event Detection Algorithms Fusing Vision, Audio, Activity and Dust Sensors (영상, 음성, 활동, 먼지 센서를 융합한 딥러닝 기반 사용자 이상 징후 탐지 알고리즘)

  • Jung, Ju-ho;Lee, Do-hyun;Kim, Seong-su;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.5 / pp.109-118 / 2020
  • Recently, people have been spending much more time inside their homes because of various diseases. For a single-person household, it is difficult to ask others for help when someone is injured at home or infected with a disease and needs assistance. This study proposes an algorithm to detect emergency events, i.e., situations in which a single-person household needs help from others, such as injuries or disease infections, in the home. It proposes a vision pattern detection algorithm using home CCTV, an audio pattern detection algorithm using an artificial intelligence speaker, an activity pattern detection algorithm using the acceleration sensor of a smartphone, and a dust pattern detection algorithm using an air purifier. For cases where home CCTV cannot be used because of security concerns, it also proposes a fusion method combining the audio, activity, and dust pattern sensors. Data for each algorithm were collected from YouTube and from experiments in order to measure accuracy.
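A late-fusion arrangement like the one described, where features from each sensor stream are encoded separately and combined for classification, might look roughly like the PyTorch sketch below. The feature dimensions, layer sizes, and two-class output are assumptions for illustration, not the authors' architecture; the CCTV-restricted case would simply use a variant trained without the vision branch.

```python
# Illustrative late-fusion classifier over per-sensor feature vectors.
# The feature dimensions are made up; this only shows the fusion pattern.
import torch
import torch.nn as nn

SENSOR_DIMS = {"vision": 128, "audio": 64, "activity": 16, "dust": 4}

class SensorFusionNet(nn.Module):
    def __init__(self, dims):
        super().__init__()
        # One small encoder per sensor stream
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, 32), nn.ReLU())
            for name, d in dims.items()
        })
        # Fused representation -> emergency vs. normal
        self.classifier = nn.Sequential(
            nn.Linear(32 * len(dims), 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, inputs):
        # Encode each stream, concatenate, and classify
        fused = torch.cat([self.encoders[name](inputs[name])
                           for name in self.encoders], dim=-1)
        return self.classifier(fused)

model = SensorFusionNet(SENSOR_DIMS)
batch = {
    "vision": torch.randn(8, 128),   # e.g. features from home CCTV frames
    "audio": torch.randn(8, 64),     # features from AI-speaker audio
    "activity": torch.randn(8, 16),  # smartphone accelerometer features
    "dust": torch.randn(8, 4),       # air-purifier dust readings
}
print(model(batch).shape)            # torch.Size([8, 2])
```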

Customer Attitude to Artificial Intelligence Features: Exploratory Study on Customer Reviews of AI Speakers (인공지능 속성에 대한 고객 태도 변화: AI 스피커 고객 리뷰 분석을 통한 탐색적 연구)

  • Lee, Hong Joo
    • Knowledge Management Research / v.20 no.2 / pp.25-42 / 2019
  • AI speakers, wireless speakers with smart features, have been released by many manufacturers and adopted by many customers. Although smart features such as voice recognition, control of connected devices, and information provision are embedded in many mobile phones, AI speakers sit in the home and play the role of a central entertainment and information provider. Many surveys have investigated the factors that drive adoption of AI speakers and the factors influencing satisfaction. However, most of these surveys are cross-sectional, whereas customer reviews allow customer attitudes toward AI speakers to be tracked longitudinally, and there has been little research on how those attitudes change. In this study, we therefore examine how attitudes toward AI speakers change over time by applying text-mining-based analysis. We collected customer reviews of the Amazon Echo, which has the highest share of the global AI speaker market, from Amazon.com. Since the Amazon Echo already spans two generations, we can analyze the characteristics of the reviews and compare attitudes according to adoption time. We identified the subtopics of the customer reviews, specified the topics related to smart features, analyzed how the share of each topic varied with time, and examined diverse metadata for comparison. The proportions of the topics for general satisfaction and satisfaction with music increased over time, while the proportions of the topics for music quality, speakers, and wireless speakers decreased. Although the proportions of the topics for smart features were similar over time, their share in positive reviews and their importance metrics decreased for the second-generation Amazon Echo. In other words, even though smart features were mentioned at a similar rate in the reviews, their influence on satisfaction decreased over time, especially for the second-generation Amazon Echo.
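The core analysis, estimating topic proportions per review and tracking how their shares move over time, can be sketched as follows with scikit-learn's LDA and a pandas group-by on the review year. The toy corpus, the two-topic setting, and the yearly grouping are assumptions; the paper's actual topics, metadata, and importance metrics are not reproduced.

```python
# Toy sketch of tracking topic share over time in reviews: LDA on a
# bag-of-words, then average topic proportions per review year.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

df = pd.DataFrame({
    "year": [2016, 2016, 2017, 2017, 2018, 2018],
    "review": [
        "love the music and sound quality",
        "alexa answers questions and controls lights",
        "great speaker for music, bass is good",
        "smart assistant sets timers and alarms",
        "very satisfied, use it every day for music",
        "happy with it, voice control works well",
    ],
})

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(df["review"])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # per-review topic proportions

# Average topic share per year to see how emphasis shifts over time
shares = pd.DataFrame(doc_topics, columns=["topic_0", "topic_1"])
shares["year"] = df["year"]
print(shares.groupby("year").mean())
```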

Artificial intelligence wearable platform that supports the life cycle of the visually impaired (시각장애인의 라이프 사이클을 지원하는 인공지능 웨어러블 플랫폼)

  • Park, Siwoong;Kim, Jeung Eun;Kang, Hyun Seo;Park, Hyoung Jun
    • Journal of Platform Technology / v.8 no.4 / pp.20-28 / 2020
  • In this paper, a voice, object, and optical character recognition platform consisting of a voice-recognition-based smart wearable device, a smart device, and a web AI server is proposed as an appropriate technology to help visually impaired people live independently by learning their life cycle in advance. The wearable device for the visually impaired was designed and manufactured with a reverse-neckband structure to increase wearing comfort and the efficiency of object recognition, and the high-sensitivity small microphone and speaker attached to the device support a voice recognition interface together with the app on the linked smart device. The voice, object, and optical character recognition services on the web AI server use open-source software and Google APIs, and experimental results confirmed that the recognition accuracy of the service platform averaged 90% or higher for voice, object, and optical character recognition.
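As one hedged illustration of the web-AI-server side, the sketch below exposes a single OCR endpoint built on the open-source Tesseract engine via `pytesseract` and Flask. The route name, payload format, and choice of engine are assumptions; the paper's platform also covers voice and object recognition and may rely on Google APIs instead.

```python
# Minimal sketch of a web-AI-server OCR endpoint; an illustrative stand-in,
# not the paper's actual service.
import io
from flask import Flask, request, jsonify
from PIL import Image
import pytesseract

app = Flask(__name__)

@app.route("/ocr", methods=["POST"])
def ocr():
    # Image frame uploaded from the wearable device via the smart-device app
    image = Image.open(io.BytesIO(request.files["image"].read()))
    # 'kor+eng' requires the Korean Tesseract language pack to be installed
    text = pytesseract.image_to_string(image, lang="kor+eng")
    return jsonify({"text": text})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```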


Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors; this goes beyond simple word-based similarity calculation and explores a new way of detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of the causes of misinterpretation errors. The results show that initial consonant extraction yielded the strongest results for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they are not flagged as recognition failures, it proposes diverse text separation methods and identifies a novel method that markedly improves performance. Second, if the approach is applied to conversational agents or voice recognition services that need neologism detection, the patterns of errors arising from the voice recognition stage can be specified, so that, even for utterances not categorized as errors, services can be provided according to the results users actually intended.
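The initial-consonant (choseong) extraction that performed best can be sketched directly from the Unicode layout of precomposed Hangul syllables, as below. The character-level similarity measure and the comparison of consecutive utterances are illustrative assumptions here; the paper's detection model and embedding-based similarities are not reproduced.

```python
# Sketch of initial-consonant (choseong) extraction from Hangul syllables,
# plus a simple similarity between consecutive utterances. The similarity
# measure is an illustrative assumption, not the paper's trained model.
from difflib import SequenceMatcher

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")  # 19 initial consonants

def extract_choseong(text: str) -> str:
    """Map each Hangul syllable to its initial consonant; keep other chars."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                  # precomposed Hangul syllable block
            out.append(CHOSEONG[code // 588])  # 588 = 21 medials * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def choseong_similarity(utt1: str, utt2: str) -> float:
    """Similarity of two utterances after initial-consonant extraction."""
    return SequenceMatcher(None, extract_choseong(utt1),
                           extract_choseong(utt2)).ratio()

# Consecutive utterance pair: a near-identical repeat hints at misinterpretation
print(choseong_similarity("오늘 날씨 알려줘", "오늘 날씨 알려줘봐"))
```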

A Multi-speaker Speech Synthesis System Using X-vector (x-vector를 이용한 다화자 음성합성 시스템)

  • Jo, Min Su;Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.675-681 / 2021
  • With the recent growth of the AI speaker market, demand for speech synthesis technology that enables natural conversation with users is increasing, which calls for a multi-speaker speech synthesis system that can generate voices with various tones. Synthesizing natural speech requires training on a large, high-quality speech database, but collecting such a database uttered by many speakers is very difficult in terms of recording time and cost. It is therefore necessary to train the speech synthesis system on a database covering a very large number of speakers with only a small amount of training data per speaker, which in turn requires a technique for naturally expressing the tone and prosody of multiple speakers. In this paper, we propose a technique that builds a speaker encoder using the deep-learning-based x-vector method from speaker recognition and synthesizes a new speaker's tone from a small amount of data through this encoder. In the multi-speaker speech synthesis system, the module that synthesizes a mel-spectrogram from input text is Tacotron2, and the vocoder that generates the synthesized waveform is a WaveNet with a mixture-of-logistic-distributions output. The x-vector extracted from the trained speaker-embedding neural network is fed to Tacotron2 as an additional input to express the desired speaker's tone.
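One common way to inject a fixed-dimensional speaker embedding such as an x-vector into a Tacotron2-style pipeline is to project it, broadcast it over time, and concatenate it with the encoder outputs before attention and decoding. The sketch below shows only that conditioning step; the dimensions and projection are assumptions, and none of the paper's Tacotron2, WaveNet, or x-vector training details are reproduced.

```python
# Illustrative speaker conditioning for a Tacotron2-style encoder output:
# project the x-vector, broadcast over time, concatenate per frame.
import torch
import torch.nn as nn

class SpeakerConditioner(nn.Module):
    def __init__(self, xvector_dim=512, spk_dim=64):
        super().__init__()
        self.project = nn.Linear(xvector_dim, spk_dim)  # compress the x-vector

    def forward(self, encoder_out, xvector):
        # encoder_out: (batch, time, enc_dim); xvector: (batch, xvector_dim)
        spk = torch.tanh(self.project(xvector))               # (batch, spk_dim)
        spk = spk.unsqueeze(1).expand(-1, encoder_out.size(1), -1)
        return torch.cat([encoder_out, spk], dim=-1)          # (batch, time, enc_dim + spk_dim)

cond = SpeakerConditioner()
encoder_out = torch.randn(2, 100, 512)   # fake Tacotron2 encoder outputs
xvector = torch.randn(2, 512)            # fake x-vectors from a speaker encoder
print(cond(encoder_out, xvector).shape)  # torch.Size([2, 100, 576])
```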