• Title/Summary/Keyword: Voice learning

Investigation of Timbre-related Music Feature Learning using Separated Vocal Signals (분리된 보컬을 활용한 음색기반 음악 특성 탐색 연구)

  • Lee, Seungjin
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1024-1034 / 2019
  • Preference for music is determined by a variety of factors, and identifying features that reflect specific factors is important for music recommendation. In this paper, we propose a method for extracting singing-voice-related music features that reflect diverse musical characteristics, using a model trained for singer identification. The model can be trained on music sources that include background accompaniment, but the accompaniment may degrade singer-identification performance. To mitigate this problem, this study first separates the background accompaniment and builds a data set of separated vocals using a model architecture validated in the Signal Separation Evaluation Campaign (SiSEC). Finally, we use the separated vocals to discover singing-voice-related music features that reflect the singer's voice, and we compare the effect of source separation against existing methods that use the music source without separation.
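As a rough sketch of the feature-extraction idea above (the weights, shapes, and pooling choice are illustrative assumptions, not the paper's actual architecture), a timbre feature can be obtained by running the separated-vocal spectrogram through the penultimate layer of a singer-identification network and averaging over time:

```python
import numpy as np

def timbre_embedding(vocal_spec, W_hidden, b_hidden):
    """Pool a hidden-layer representation of a separated-vocal
    spectrogram into a fixed-length singing-voice feature vector.
    vocal_spec: (frames, bins) magnitude spectrogram of the separated
    vocal track; W_hidden/b_hidden: penultimate-layer weights of a
    (hypothetical) singer-identification network."""
    hidden = np.maximum(vocal_spec @ W_hidden + b_hidden, 0.0)  # ReLU
    return hidden.mean(axis=0)  # average over time frames

rng = np.random.default_rng(0)
spec = rng.random((100, 128))            # 100 frames, 128 frequency bins
W = rng.standard_normal((128, 32)) * 0.1
b = np.zeros(32)
feat = timbre_embedding(spec, W, b)
print(feat.shape)                        # (32,)
```

The resulting vector can be compared across tracks (e.g. by cosine similarity) as a singing-voice-related feature for recommendation.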

A Design and Implementation of The Deep Learning-Based Senior Care Service Application Using AI Speaker

  • Mun Seop Yun;Sang Hyuk Yoon;Ki Won Lee;Se Hoon Kim;Min Woo Lee;Ho-Young Kwak;Won Joo Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.23-30 / 2024
  • In this paper, we propose a deep learning-based personalized senior care service application. For user convenience, the proposed application uses speech-to-text technology to convert the user's speech into text and passes it as input to AutoGen, Microsoft's multi-agent framework for conversational large language models. AutoGen uses data from previous conversations between the senior and the chatbot to understand the user's intent and generate a response, and a back-end agent then creates a wish list, a shared calendar, and a greeting message in the other user's voice through a deep learning voice-cloning model. Additionally, the application can perform home IoT services with SKT's AI speaker (NUGU). The proposed application is expected to contribute to future AI-based senior care technology.

Effects of Different Types of Chatbots on EFL Learners' Speaking Competence and Learner Perception (서로 다른 챗봇 유형이 한국 EFL 학습자의 말하기능력 및 학습자인식에 미치는 영향)

  • Kim, Na-Young
    • Cross-Cultural Studies / v.48 / pp.223-252 / 2017
  • This study explores the effects of two types of chatbots - voice-based and text-based - on Korean EFL learners' speaking competence and learner perception. Participants were 80 freshmen taking an English-speaking class at a university in Korea, divided at random into two experimental groups. During the sixteen-week experimental period, participants engaged in ten chat sessions with the two types of chatbots. To closely examine effects on speaking competence, they took the TOEIC speaking test as pre- and post-tests, and structured questionnaire-based surveys were conducted before and after treatment to determine whether perception changed. Findings reveal that both chatbots effectively contributed to improvement in speaking competence among EFL learners; in particular, the voice-based chatbot was as effective as the text-based chatbot. Analysis of the survey results indicates that perception of chatbot-assisted language learning changed positively over time, and most participants preferred the voice-based chatbot over the text-based one. This study provides insight into the use of chatbots in EFL learning, suggesting that EFL teachers should integrate chatbot technology into their classrooms.

Virtual Reality based Situation Immersive English Dialogue Learning System (가상현실 기반 상황몰입형 영어 대화 학습 시스템)

  • Kim, Jin-Won;Park, Seung-Jin;Min, Ga-Young;Lee, Keon-Myung
    • Journal of Convergence for Information Technology / v.7 no.6 / pp.245-251 / 2017
  • This paper presents an English conversation training system with which learners train their English conversation skills by conversing, in voice, with native-speaker characters in a virtual reality environment. The proposed system allows learners to talk with multiple native-speaker characters in various scenarios; it recognizes the learners' spoken voices and generates responses by speech synthesis. Voice interaction with the characters immerses the learners in the conversation situations, and a scoring system that evaluates the learner's pronunciation provides positive feedback to keep learners engaged in the learning context.

Interactive Adaptation of Fuzzy Neural Networks in Voice-Controlled Systems

  • Pulasinghe, Koliya;Watanabe, Keigo;Izumi, Kiyotaka;Kiguchi, Kazuo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2002.10a / pp.42.3-42 / 2002
  • A Fuzzy Neural Network (FNN) is an essential element of a voice-controlled machine owing to its inherent capability to interpret imprecise natural-language commands. To control such a machine, the user's perception of imprecise words is very important, because the meaning of such words is highly subjective. This paper presents a voice-based controller centered on an adaptable FNN that captures the user's perception of imprecise words. The machine's conversational interface facilitates learning through interaction. The system consists of a dialog manager (DM), the conversational interface, and a knowledge base, which absorbs the user's perception and acts as a replica of human understanding of imprecise words,...
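The interactive adaptation described above can be illustrated with a toy fuzzy set for one imprecise command word; the triangular membership function, the word "fast", and the adaptation rate below are assumptions for illustration, not the paper's FNN:

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyTerm:
    """A user-adaptable fuzzy set for an imprecise command word."""
    def __init__(self, a, b, c, rate=0.3):
        self.a, self.b, self.c, self.rate = a, b, c, rate

    def membership(self, x):
        return triangular(x, self.a, self.b, self.c)

    def adapt(self, preferred):
        # Shift the whole set toward the value the user actually meant,
        # as inferred from conversational feedback.
        shift = self.rate * (preferred - self.b)
        self.a += shift
        self.b += shift
        self.c += shift

fast = FuzzyTerm(0.4, 0.7, 1.0)   # "fast" initially peaks at 70% speed
fast.adapt(0.55)                  # user feedback: that was too fast
print(round(fast.b, 3))           # peak moves toward 0.655
```

Repeated corrections gradually align the machine's interpretation of "fast" with this particular user's perception.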

Personalized Speech Classification Scheme for the Smart Speaker Accessibility Improvement of the Speech-Impaired people (언어장애인의 스마트스피커 접근성 향상을 위한 개인화된 음성 분류 기법)

  • SeungKwon Lee;U-Jin Choe;Gwangil Jeon
    • Smart Media Journal / v.11 no.11 / pp.17-24 / 2022
  • With the spread of smart speakers based on voice recognition and deep learning technology, not only non-disabled people but also blind or physically handicapped people can easily control home appliances such as lights and TVs by voice through linked home network services, greatly improving quality of life. However, speech-impaired people cannot use these services because articulation or speech disorders give them inaccurate pronunciation. In this paper, we propose a personalized voice classification technique that lets speech-impaired users operate some of the functions provided by a smart speaker. The goal is to raise the recognition rate and accuracy on sentences spoken by speech-impaired people even with a small amount of data and a short training time, so that the smart speaker's services can actually be used. We fine-tuned a ResNet18 model with data augmentation and the one-cycle learning-rate optimization technique. In an experiment in which each of 30 smart speaker commands was recorded 10 times and the model was trained within 3 minutes, the speech classification recognition rate was about 95.2%.
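The one-cycle learning-rate policy mentioned in the abstract can be sketched in plain Python; the warm-up fraction and the `div`/`final_div` defaults below are illustrative assumptions modeled on common implementations such as `torch.optim.lr_scheduler.OneCycleLR`, not the paper's exact settings:

```python
import math

def one_cycle_lr(step, total_steps, max_lr, pct_warmup=0.3,
                 div=25.0, final_div=1e4):
    """One-cycle schedule: cosine ramp from max_lr/div up to max_lr
    over the first pct_warmup of training, then cosine anneal down
    to max_lr/final_div for the remainder."""
    warm = int(total_steps * pct_warmup)
    if step < warm:
        t = step / max(warm - 1, 1)
        lo = max_lr / div
        return lo + (max_lr - lo) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warm) / max(total_steps - warm - 1, 1)
    lo = max_lr / final_div
    return max_lr - (max_lr - lo) * (1 - math.cos(math.pi * t)) / 2

lrs = [one_cycle_lr(s, 100, 0.01) for s in range(100)]
print(round(max(lrs), 4))   # peak reaches max_lr at the end of warm-up
```

The single high-LR peak followed by a long anneal is what makes very short training runs (minutes, as in the experiment above) converge reliably.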

Design and Implementation of Mobile Communication System for Hearing- impaired Person (청각 장애인을 위한 모바일 통화 시스템 설계 및 구현)

  • Yun, Dong-Hee;Kim, Young-Ung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.5 / pp.111-116 / 2016
  • According to the Ministry of Science, ICT and Future Planning's survey on the information gap, the smartphone ownership rate of disabled people remained at one-third that of non-disabled people, so people with disabilities have significantly less access to information. In this paper, we develop CallHelper, an application that makes mobile voice calls more convenient for hearing-impaired people. CallHelper runs automatically when a call comes in, transcribes the caller's voice to text on the mobile screen, and visualizes the emotion inferred from the caller's voice as emoticons. It also saves the voice, the transcribed text, and the emotion data for later playback.

Voice Activity Detection based on DBN using the Likelihood Ratio (우도비를 이용한 DBN 기반의 음성 검출기)

  • Kim, S.K.;Lee, S.M.
    • Journal of rehabilitation welfare engineering & assistive technology / v.8 no.3 / pp.145-150 / 2014
  • In this paper, we propose a novel scheme to improve the performance of voice activity detection (VAD) based on deep belief networks (DBN) with the likelihood ratio (LR). Instead of the conventional decision rule based on the geometric mean of likelihood ratios, the proposed algorithm applies a DBN trained to minimize the probability of detection error. Experimental results show that the proposed algorithm outperforms the conventional VAD algorithm in various noise environments.
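For illustration, here is a minimal likelihood-ratio VAD under a per-bin Gaussian model, using the conventional geometric-mean decision rule that the paper's DBN replaces; the variances and threshold are hypothetical:

```python
import math

def frame_log_lr(spectrum, noise_var, speech_var):
    """Per-bin Gaussian log-likelihood ratio
    log p(X | speech) / p(X | noise) for one frame of spectral
    magnitudes (a simplified Sohn-style statistical model)."""
    lrs = []
    for x, vn, vs in zip(spectrum, noise_var, speech_var):
        e = x * x
        lrs.append(0.5 * math.log(vn / vs) + e / 2 * (1 / vn - 1 / vs))
    return lrs

def vad_geometric_mean(spectrum, noise_var, speech_var, thresh=0.0):
    """Conventional rule: declare speech when the geometric mean of the
    bin likelihood ratios (i.e. the mean log-LR) exceeds a threshold."""
    lrs = frame_log_lr(spectrum, noise_var, speech_var)
    return sum(lrs) / len(lrs) > thresh

noise = [1.0] * 4    # assumed noise variance per bin
speech = [4.0] * 4   # assumed speech-present variance per bin
print(vad_geometric_mean([3.0] * 4, noise, speech))  # True  (loud frame)
print(vad_geometric_mean([0.5] * 4, noise, speech))  # False (quiet frame)
```

In the paper's scheme, these per-bin log-LRs would instead feed a DBN trained to minimize detection error rather than being averaged against a fixed threshold.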

An interactive teachable agent system for EFL learners (대화형 Teachable Agent를 이용한 영어말하기학습 시스템)

  • Kyung A Lee;Sun-Bum Lim
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.797-802 / 2023
  • In an environment where English is a foreign language, learners can use AI voice chatbots in speaking-practice activities to enhance their motivation, gain opportunities for communication practice, and improve their English speaking ability. In this study, we propose a teachable-agent AI voice chatbot that lower-grade elementary school students can easily use to enhance their learning. To apply the Teachable Agent paradigm to language learning, an activity grounded in tense, context, and memory, we propose a new teachable-agent method that reflects the learner's English pronunciation and level and generates the agent's answers according to the learner's errors, and we implemented a Teachable Agent AI chatbot prototype. We conducted usability evaluations with practicing elementary English teachers and elementary school students to demonstrate the learning effects. The results of this study can be applied to motivate students who are uninterested in learning to participate voluntarily through role switching.

Research on Developing a Conversational AI Callbot Solution for Medical Counselling

  • Won Ro LEE;Jeong Hyon CHOI;Min Soo KANG
    • Korean Journal of Artificial Intelligence / v.11 no.4 / pp.9-13 / 2023
  • In this study, we explored the potential of integrating interactive AI callbot technology into the medical consultation domain as part of a broader service-development initiative. Aimed at enhancing patient satisfaction, the AI callbot was designed to efficiently address queries from hospitals' primary users, especially the elderly and those using phone services. By incorporating an AI-driven callbot into the hospital's customer service center, routine tasks such as appointment modifications and cancellations were handled by the AI Callbot Agent, while tasks requiring more detailed attention or specialization were addressed by human agents, ensuring a balanced and collaborative approach. The speech recognition model was based on the Transformer architecture and fine-tuned to the medical field from a pre-trained model; existing recording files were converted into training data, and a self-supervised learning (SSL) model was implemented. An artificial neural network (ANN) model was used to analyze voice signals and interpret them as text, and after deployment the intent handling was refined through reinforcement learning to continuously improve accuracy. For text-to-speech (TTS), the Transformer model was applied to text analysis, the acoustic model, and the vocoder, and Google's Natural Language API was used to recognize intent. As the research progresses, challenges remain, such as interconnection between various EMR providers, doctors' time slots, appointments at two or more hospitals, and patient usability; nevertheless, straightforward reservation tasks are already handled well, and implementation of the callbot service in hospitals appears to be immediately applicable.