• Title/Summary/Keyword: Artificial Intelligence Speaker


Ethical Dilemma on Educational Usage of A.I. Speaker (인공지능 스피커의 교육적 활용에서의 윤리적 딜레마)

  • Han, Jeonghye; Kim, Jong-Wook
    • Journal of Creative Information Culture / v.7 no.1 / pp.11-19 / 2021
  • With the announcement of the national AI strategy, various policies for AI education are being proposed, and AI convergence education for teachers is being actively promoted. In addition, AI speakers are being sold and distributed to homes, and field studies on the educational use of AI speakers have only just begun. This study examines the controversial problems that AI speakers may raise in AI ethics and derives the ethical dilemmas that can arise when AI speakers are used at home or at school. These dilemmas can be used in the Moral Competence Test (MCT), which measures the level of moral judgment of each group of AI speaker users.

State Visualization Design of AI Speakers using Color Field Painting (색면추상 기법을 통한 AI 스피커의 상태 시각화 디자인 연구)

  • Hong, Seung Yoon; Choe, Jong-Hoon
    • The Journal of the Korea Contents Association / v.20 no.2 / pp.572-580 / 2020
  • Recently released AI speakers interact with the user mainly by voice while displaying simple, formulaic visual feedback through a status LED light. This is due to the product characteristics of the speaker, which make varied interaction difficult; moreover, such visual feedback is not standardized across products and thus does not give a consistent user experience. By maximizing the visual elements that can be expressed through color and abstract movement to assist voice feedback, the product can provide the user with an extended experience that includes not only functional but also emotional satisfaction. In this study, after analyzing the interaction methods of existing AI speakers, we examined color communication theory in order to expand the effect of visual feedback, and studied the meaning and expression techniques of Color Field Painting, an art genre that maximizes emotional experience by using only color. Through this, the AI speaker's visual communication function was expanded by designing a way to convey communication status through LED light.

On-Line Blind Channel Normalization for Noise-Robust Speech Recognition

  • Jung, Ho-Young
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.143-151 / 2012
  • A new data-driven method for designing a blind modulation frequency filter that suppresses slow-varying noise components is proposed. The proposed method is based on temporal local decorrelation of the feature vector sequence and operates on an utterance-by-utterance basis. Whereas conventional modulation frequency filtering takes the same form regardless of task and environment conditions, the proposed method provides an adaptive modulation frequency filter that outperforms conventional methods for each utterance. In addition, the method ultimately performs channel normalization in the feature domain, with applications to log-spectral parameters. The performance was evaluated by speaker-independent isolated-word recognition experiments under additive noise. The proposed method achieved outstanding improvements for speech recognition in significantly noisy environments and was also effective across a range of feature representations.

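The channel-normalization idea in the abstract above can be illustrated with the classic per-utterance baseline it builds on: in the log-spectral domain a stationary channel appears as an additive offset, so subtracting each utterance's temporal mean removes it blindly. A minimal sketch (the paper's adaptive modulation frequency filter is more elaborate; this shows only the baseline principle):

```python
import numpy as np

def mean_normalize(log_spectra):
    """Per-utterance log-spectral mean normalization: subtracting the
    temporal mean removes a stationary (slow-varying) convolutional
    channel, which is an additive offset in the log domain."""
    return log_spectra - log_spectra.mean(axis=0, keepdims=True)

# Toy utterance: 100 frames of 13-dim log-spectra plus a fixed channel.
rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 13))
channel = rng.normal(size=(1, 13))      # stationary channel offset
observed = clean + channel

# After normalization the channel is gone: corrupted and clean
# utterances map to identical features.
print(np.allclose(mean_normalize(observed), mean_normalize(clean)))  # True
```

The paper's contribution is to replace this fixed mean-subtraction (a DC-blocking modulation filter) with a filter estimated per utterance from temporal local decorrelation.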

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo; Jung, Ho-Young
    • Phonetics and Speech Sciences / v.12 no.3 / pp.55-63 / 2020
  • Voice conversion can be applied to various voice-processing applications and can also play an important role in data augmentation for speech recognition. The conventional method uses a voice-conversion architecture based on speech synthesis, with the Mel filter bank as its main parameter. The Mel filter bank is well suited to fast neural-network computation but cannot be converted into a high-quality waveform without the aid of a vocoder, and it is not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice-conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on the TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results were 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
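The attention mechanism the abstract describes — aligning target spectral frames with source spectral frames — reduces to scaled dot-product attention. A hedged numpy sketch (the dimensions and the single-head form are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention: each query (target frame) forms a
    softmax-weighted mixture of the values (source frames)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

# Toy spectra: 4 target frames attending over 6 source frames of dim 8.
rng = np.random.default_rng(1)
target_q = rng.normal(size=(4, 8))
source_k = rng.normal(size=(6, 8))
source_v = rng.normal(size=(6, 8))

out, weights = scaled_dot_attention(target_q, source_k, source_v)
print(out.shape)                       # (4, 8): one mixture per target frame
print(weights.sum(axis=-1))            # each row sums to 1
```

In the full model these attention layers sit inside a transformer encoder-decoder, so the source-target alignment is learned jointly with the spectral mapping.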

Artificial intelligence wearable platform that supports the life cycle of the visually impaired (시각장애인의 라이프 사이클을 지원하는 인공지능 웨어러블 플랫폼)

  • Park, Siwoong; Kim, Jeung Eun; Kang, Hyun Seo; Park, Hyoung Jun
    • Journal of Platform Technology / v.8 no.4 / pp.20-28 / 2020
  • In this paper, a voice, object, and optical character recognition platform comprising a voice-recognition-based smart wearable device, smart devices, and a web AI server is proposed as an appropriate technology to help the visually impaired live independently by learning their life cycle in advance. The wearable device for the visually impaired was designed and manufactured with a reverse-neckband structure to increase wearing convenience and object-recognition efficiency. The high-sensitivity small microphone and speaker attached to the wearable device were configured to support a voice-recognition interface through the app of the linked smart device. The voice, object, and optical character recognition service used open-source software and Google APIs on the web AI server, and experimental results confirmed that the recognition accuracy of the service platform averaged 90% or more.


End-to-end speech recognition models using limited training data (제한된 학습 데이터를 사용하는 End-to-End 음성 인식 모델)

  • Kim, June-Woo; Jung, Ho-Young
    • Phonetics and Speech Sciences / v.12 no.4 / pp.63-71 / 2020
  • Speech recognition is one of the areas being actively commercialized using deep learning and machine learning techniques. However, the majority of speech recognition systems on the market are developed on data with limited speaker diversity and tend to perform well only on typical adult speakers, because most speech recognition models are trained on speech databases obtained from adult males and females. This causes problems in recognizing the speech of the elderly, children, and people with dialects. Solving these problems may require retaining a big database or collecting data for speaker adaptation. Instead, this paper proposes a new end-to-end speech recognition method consisting of an acoustically augmented recurrent encoder and a transformer decoder with linguistic prediction. The proposed method can deliver reliable acoustic- and language-model performance under limited-data conditions. It was evaluated on recognizing Korean elderly and child speech with a limited amount of training data and showed better performance compared to a conventional method.
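One common way to augment acoustic features under limited-data conditions, as in the encoder described above, is masking-style augmentation in the style of SpecAugment: random time and frequency regions of the spectrogram are zeroed so the model cannot overfit to any single region. A minimal sketch (whether the paper uses exactly this augmentation is an assumption; the mask sizes here are illustrative):

```python
import numpy as np

def spec_mask(spec, max_f=8, max_t=10, rng=None):
    """Zero one random frequency band and one random time span of a
    (time x frequency) spectrogram. Illustrative SpecAugment-style
    masking; the paper's exact augmentation may differ."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_t, n_f = spec.shape
    f = rng.integers(0, max_f + 1)          # frequency-band width
    f0 = rng.integers(0, n_f - f + 1)
    t = rng.integers(0, max_t + 1)          # time-span length
    t0 = rng.integers(0, n_t - t + 1)
    spec[:, f0:f0 + f] = 0.0
    spec[t0:t0 + t, :] = 0.0
    return spec

# Toy mel spectrogram: 100 frames x 40 mel bands.
mel = np.ones((100, 40))
aug = spec_mask(mel, rng=np.random.default_rng(3))
print(aug.shape)  # unchanged shape; masked regions are zeroed
```

Each training epoch sees a differently masked copy of the same utterance, which effectively multiplies the limited training data.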

Perception of Virtual Assistant and Smart Speaker: Semantic Network Analysis and Sentiment Analysis (가상 비서와 스마트 스피커에 대한 인식과 기대: 의미 연결망 분석과 감성분석을 중심으로)

  • Park, Hohyun; Kim, Jang Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.213-216 / 2018
  • As the advantages of smart devices based on artificial intelligence and voice recognition become more prominent, virtual assistants are gaining popularity. A virtual assistant provides a user experience through a smart speaker and is valued by consumers as the most user-friendly IoT device. The purpose of this study is to investigate whether there are differences in people's perceptions of the voice recognition of the key virtual assistant brands. We collected tweets containing six keywords from three companies that provide virtual assistant services. The authors conducted semantic network analysis on the collected datasets and analyzed people's feelings through sentiment analysis. The results show that people's perceptions differ, mainly regarding the functions and services provided by the virtual assistants and the expectations and usability of those services. Also, people responded positively to most keywords.


Expectation and Expectation Gap towards intelligent properties of AI-based Conversational Agent (인공지능 대화형 에이전트의 지능적 속성에 대한 기대와 기대 격차)

  • Park, Hyunah; Tae, Moonyoung; Huh, Youngjin; Lee, Joonhwan
    • Journal of the HCI Society of Korea / v.14 no.1 / pp.15-22 / 2019
  • The purpose of this study is to investigate users' expectations, and the expectation gaps, regarding the attributes of the smart speaker as an intelligent agent: autonomy, sociality, responsiveness, activeness, time continuity, and goal orientation. To this end, semi-structured interviews were conducted with smart speaker users and analyzed based on grounded theory. The results showed that people have a large expectation gap regarding the sociality and human-likeness of smart speakers due to limitations in technology. The responsiveness of smart speakers was found to have a positive expectation gap. For the memory of time-sequential information, there was an ambivalent expectation gap depending on the degree of information sensitivity and the presentation method. We also found a low level of expectation for the autonomous aspects of smart speakers, and proactive behavior was preferred only when appropriate to the context. This study presents implications for designing ways to interact with smart speakers and for managing expectations.

Artificial Intelligence Microphone Utilization Model in Digital Education Environment (디지털 교육 환경에서의 인공 지능 마이크 활용 모델)

  • Nam, Ki-Bok; Park, Koo-Rack; Kim, Jae-Woong; Lee, Jun-Yeol; Kim, Dong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.17-18 / 2019
  • Recently, much research has been conducted on artificial intelligence, one of the core fields of the Fourth Industrial Revolution. Many companies are releasing products such as AI speakers, but most are configured to serve only as assistants. In places with many people, such as schools, AI speakers used in noisy environments often fail to recognize commands properly, which reduces their practicality, and current AI speakers can handle only simple question-and-answer interactions. Moreover, with the rapid development of AI, products other than AI speakers are being released with built-in AI assistant functions, which may make AI speakers unnecessary. This paper therefore proposes a model for utilizing an artificial intelligence microphone that enables communication even in noisy educational environments such as schools.


Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young; Hyo-Gyeong Park; Yeon-Hwi You; Il-Young Moon
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.557-561 / 2021
  • Metadata has become an essential element for recommending video content to users. However, it is generated manually by video content providers. In this paper, a method for automatically generating metadata was studied to replace the existing manual input method. In addition to the emotion-tag extraction method of the previous study, a method was investigated for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using the ResNet34 artificial neural network, a transfer learning model, and the language of the speakers in the movie was detected through speech recognition. This confirmed the possibility of automatically generating metadata through artificial intelligence.
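The spectrogram that such a CNN classifies is just a short-time Fourier transform of the audio rendered as a 2-D array. A hedged sketch of that front end (the FFT size, hop length, and log compression here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def log_spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram: windowed STFT frames stacked into the
    2-D 'image' that a CNN such as ResNet34 can classify."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return np.log1p(mag)  # (num_frames, n_fft // 2 + 1)

# 1 s of a 440 Hz tone at 8 kHz: the spectrogram should peak near 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

spec = log_spectrogram(tone)
peak_hz = spec.mean(axis=0).argmax() * sr / 256
print(spec.shape)   # (frames, 129)
print(peak_hz)      # within one FFT bin of 440 Hz
```

In a transfer-learning setup the spectrogram is resized to the CNN's expected input shape and a pretrained network is fine-tuned on genre labels.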