• Title/Summary/Keyword: Speech-Recognition

Consideration for cognitive effects in smart environments for effective UXD(User eXperience Design) (스마트환경의 효과적인 UXD를 위한 인지작용 고찰)

  • Lee, Chang Wook;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.2
    • /
    • pp.397-405
    • /
    • 2013
  • The development of 21st-century technology, in particular wireless Internet technology, has led to the rapid establishment of smart environments. In such environments, users face many smart devices and much smart content. This study analyzes smart environments and smart devices and reports on the cognitive effects they have on users. Cognitive effects play a very important role in understanding user behavior, in technology- and user-centered system design, and in educating users. After a theoretical consideration of UX (User eXperience) and UXD (User eXperience Design), case analyses examine UXD and its cognitive effects from the technical, visual, and interaction perspectives. The results show that design based on the visual aspects of the user experience is effective, that interaction with devices should be supported through sound and device-to-device interaction, and that technologies such as location awareness and speech recognition improve user convenience. This research is intended to aid the understanding of cognitive processes in smart environments and to support effective UXD.

Performance Improvement Methods of a Spoken Chatting System Using SVM (SVM을 이용한 음성채팅시스템의 성능 향상 방법)

  • Ahn, HyeokJu;Lee, SungHee;Song, YeongKil;Kim, HarkSoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.6
    • /
    • pp.261-268
    • /
    • 2015
  • In spoken chatting systems, users' spoken queries are converted to text queries using automatic speech recognition (ASR) engines. If the top-1 results of the ASR engines are incorrect, these errors are propagated to the spoken chatting system. To improve the top-1 accuracy of ASR engines, we propose a post-processing model that rearranges the top-n outputs of ASR engines using a ranking support vector machine (RankSVM). In addition, a large number of chatting sentences is needed to train chatting systems, and if new chatting sentences are not added to the training data regularly, the systems' responses soon become outdated. To resolve this problem, we propose a data collection model that automatically selects chatting sentences from TV and movie scripts using a support vector machine (SVM). In our experiments, the post-processing model showed a 4.4% higher precision and a 6.4% higher recall than the baseline model (without post-processing), and the data collection model achieved a precision of 98.95% and a recall of 57.14%.
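The reranking idea behind a RankSVM can be sketched as follows. This is a minimal illustration, not the paper's implementation: a linear ranker is trained with a pairwise hinge loss so that better ASR hypotheses score above worse ones, and the trained weights reorder an n-best list. The feature set (ASR confidence, language-model score) and all data values are invented for the example.

```python
import numpy as np

def train_rank_svm(pairs, dim, epochs=100, lr=0.1, c=0.01):
    """Train w so that w.x_better > w.x_worse for every (better, worse) pair."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for better, worse in pairs:
            diff = np.asarray(better) - np.asarray(worse)
            if w @ diff < 1.0:            # hinge margin violated
                w += lr * (diff - c * w)  # subgradient step with L2 shrinkage
            else:
                w -= lr * c * w           # regularization only
    return w

def rerank(w, hypotheses):
    """Return hypotheses sorted by the learned ranking score, best first."""
    return sorted(hypotheses, key=lambda h: -(w @ np.asarray(h["feats"])))

# Toy n-best list: feats = [ASR confidence, LM score] (illustrative values).
nbest = [
    {"text": "recognize speech", "feats": [0.6, 0.9]},
    {"text": "wreck a nice beach", "feats": [0.7, 0.2]},
]
# Training pairs from held-out data, each with the known-better item first.
pairs = [([0.6, 0.9], [0.7, 0.2]), ([0.5, 0.8], [0.9, 0.1])]
w = train_rank_svm(pairs, dim=2)
print(rerank(w, nbest)[0]["text"])  # → recognize speech
```

A full RankSVM solves this pairwise problem with a proper quadratic program; the subgradient loop above is only the simplest stand-in that exhibits the same ranking behavior.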

Knowledge Transfer Using User-Generated Data within Real-Time Cloud Services

  • Zhang, Jing;Pan, Jianhan;Cai, Zhicheng;Li, Min;Cui, Lin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.1
    • /
    • pp.77-92
    • /
    • 2020
  • When automatic speech recognition (ASR) is provided as a cloud service, it is easy to collect voice and application-domain data from users. Harnessing these data facilitates the provision of more personalized services. In this paper, we demonstrate a transfer-learning-based knowledge service built with the user-generated data collected through our novel system that delivers a personalized ASR service. First, we discuss the motivation, challenges, and prospects of building such a knowledge-based service-oriented system. Second, we present a Quadruple Transfer Learning (QTL) method that can learn a classification model from a source domain and transfer it to a target domain. Third, we give an overview of the architecture of our system, which collects voice data from mobile users, labels the data via crowdsourcing, uses the collected user-generated data to train different machine learning models, and delivers personalized real-time cloud services. Finally, we use the E-Book data collected from our system to train classification models and apply them in the smart-TV domain; the experimental results show that our QTL method is effective in two classification tasks, confirming that the knowledge transfer provides a value-added service for upper-layer mobile applications in different domains.

An Analysis on Phone-Like Units for Korean Continuous Speech Recognition in Noisy Environments (잡음환경하의 연속 음성인식을 위한 유사음소단위 분석)

  • Shen Guang-Hu;Lim Soo-Ho;Seo Jun-Bae;Kim Joo-Gon;Jung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.123-126
    • /
    • 2004
  • This paper reports a preliminary study on constructing efficient context-dependent acoustic models in noisy environments, comparing and evaluating continuous speech recognition performance according to the number of phone-like units (PLUs) under noise. Previous studies [1,2] showed that for continuous speech recognition, context-dependent models using 39 PLUs that account for allophones achieve better recognition performance than those using 48 PLUs. Building on this result, we conducted a preliminary study toward efficient context-dependent acoustic models in noisy conditions. To cover a variety of noise environments, White, Pink, and LAB noise were added to the speech at signal-to-noise ratios (SNRs) of 5 dB, 10 dB, and 15 dB, and continuous speech recognition experiments were performed for each number of PLUs. As a result, in the clean condition the 39-PLU models achieved word and sentence recognition rates about 7% and 17% higher, respectively, than the 48-PLU models, and in each noise environment they achieved on average 17% and 28% higher word and sentence recognition rates. This confirms that the 39-PLU set is better suited to Korean continuous speech recognition and remains effective in noisy environments.

Effects of Articulator-distance and Tense in Phonological Awareness in Korean: The case of Korean Infants and Toddlers (한국어 음운인식에서의 조음거리와 긴장성 자질의 특성 연구: 영·유아를 중심으로)

  • Kim, Choong-Myung
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.8
    • /
    • pp.424-433
    • /
    • 2015
  • This study investigated differences in auditory preference in a discrimination task with minimal pairs that differ in syllable onset but share the same nucleus, on the basis of articulator distance, in Korean infants and toddlers. We found main effects of articulator distance and age, but no effect of phonation type, particularly with respect to tenseness. The former result is in line with previous studies reporting the order of consonant acquisition based on place of articulation, suggesting that more sensitive responses to contiguous but different phonemes may lead to earlier acquisition of speech sounds at the same place of articulation: bilabial sounds are acquired first, followed by alveolar and then palatal sounds. The latter result showed that tense consonants were recognized at a higher rate than lax consonants, depending on age and sex.

Determinants of Safety and Satisfaction with In-Vehicle Voice Interaction : With a Focus of Agent Persona and UX Components (자동차 음성인식 인터랙션의 안전감과 만족도 인식 영향 요인 : 에이전트 퍼소나와 사용자 경험 속성을 중심으로)

  • Kim, Ji-hyun;Lee, Ka-hyun;Choi, Jun-ho
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.8
    • /
    • pp.573-585
    • /
    • 2018
  • Services for navigation and entertainment through AI-based voice-user-interface (VUI) devices are becoming popular in connected-car systems. Given that VUI agents are developed both by IT companies and by automakers, this study explores the attributes of agent persona and user experience that affect the driver's perceived safety and satisfaction. Participants in a car-simulator experiment performed entertainment and navigation tasks and then rated perceived safety and satisfaction. Regression analysis showed that the credibility of the agent developer, the warmth and attractiveness of the agent persona, and the efficiency and care dimensions of UX had significant effects on perceived safety. The determinants of perceived satisfaction were unity of the automaker and agent maker and gender as predisposing factors, distance in the agent persona, and convenience, efficiency, ease of use, and care in the UX dimension. The contribution of this study lies in identifying the factors required for developing conversational VUIs for the autonomous-driving environment.

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee;Park, Jaehong;Kim, Hanjoo;Yoon, Sungroh
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.5
    • /
    • pp.299-303
    • /
    • 2017
  • By stacking hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning (SparkNet and DeepSpark) are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but DeepSpark delivered approximately 15% higher classification accuracy. In some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.

Contextual In-Video Advertising Using Situation Information (상황 정보를 활용한 동영상 문맥 광고)

  • Yi, Bong-Jun;Woo, Hyun-Wook;Lee, Jung-Tae;Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.8
    • /
    • pp.3036-3044
    • /
    • 2010
  • With the rapid growth of video data services, the demand to provide advertisements or additional information for a particular video scene is increasing. However, the direct use of automated visual analysis or speech recognition on videos has practical limitations at the current level of technology, and video metadata such as the title, category, or summary does not reflect the content of continuously changing scenes. This work presents a new video contextual advertising system that serves relevant advertisements for a given scene by leveraging the scene's situation information inferred from video scripts. Experimental results show that using situation information extracted from scripts leads to better performance and to the display of advertisements more relevant to the user.
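The core matching step of such a system can be sketched as a similarity search between a scene's situation keywords and each ad's keywords. This is an illustrative toy, not the paper's method; the keyword lists and ad names below are invented, and cosine similarity over term-frequency vectors stands in for whatever retrieval model the real system uses.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_ad(scene_terms, ads):
    """Pick the ad whose keyword vector is closest to the scene vector."""
    scene_vec = Counter(scene_terms)
    return max(ads, key=lambda ad: cosine(scene_vec, Counter(ad["terms"])))

scene = ["restaurant", "dinner", "wine", "date"]  # situation terms from the script
ads = [
    {"name": "sports drink", "terms": ["gym", "run", "energy"]},
    {"name": "wine shop", "terms": ["wine", "dinner", "gift"]},
]
print(best_ad(scene, ads)["name"])  # → wine shop
```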

A Study on Classification of Waveforms Using Manifold Embedding Based on Commute Time (컴뮤트 타임 기반의 다양체 임베딩을 이용한 파형 신호 인식에 관한 연구)

  • Hahn, Hee-Il
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.148-155
    • /
    • 2014
  • In this paper, a commute-time embedding is implemented by organizing patches according to a graph-based metric, and its properties are investigated while varying the number of nodes on the graph. It is shown that manifold embedding methods reveal the intrinsic geometric structure when waveforms such as speech or musical instrument signals are embedded in a low-dimensional Euclidean space. Basic manifold embedding algorithms only project the training samples on the graph into an embedding subspace and cannot generalize the learned result to test samples; they are very effective for data clustering but are not appropriate for classification or recognition. In this paper, a commute-time-guided transform is adopted to enhance the generalization ability, and its performance is analyzed by applying it to the classification of six kinds of musical instrument sounds.
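The commute-time embedding itself can be sketched in a few lines. This is a minimal illustration under standard definitions, not the paper's implementation: nodes of a weighted graph are mapped to Euclidean coordinates via the eigendecomposition of the graph Laplacian, so that squared embedding distances equal commute times of the random walk. The toy affinity matrix stands in for the signal patches used in the paper.

```python
import numpy as np

def commute_time_embedding(W):
    """W: symmetric non-negative affinity matrix of a connected graph.
    Returns coordinates whose squared pairwise Euclidean distances equal
    the commute times vol * (L+_ii + L+_jj - 2 L+_ij)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W           # graph Laplacian
    vol = d.sum()                # graph volume (sum of degrees)
    vals, vecs = np.linalg.eigh(L)
    nz = vals > 1e-10            # drop the zero eigenvalue of a connected graph
    return np.sqrt(vol) * vecs[:, nz] / np.sqrt(vals[nz])

# Toy 3-node graph; in the paper the nodes would be waveform patches.
W = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
X = commute_time_embedding(W)

# Check the defining property against the Laplacian pseudo-inverse.
Lp = np.linalg.pinv(np.diag(W.sum(1)) - W)
i, j = 0, 2
ct = W.sum() * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
print(np.isclose(np.sum((X[i] - X[j]) ** 2), ct))  # → True
```

The commute-time-guided transform the paper proposes for generalizing to unseen test samples is a separate step beyond this basic embedding.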

The Prosodic Changes of Korean English Learners in Robot Assisted Learning (로봇보조언어교육을 통한 초등 영어 학습자의 운율 변화)

  • In, Jiyoung;Han, JeongHye
    • Journal of The Korean Association of Information Education
    • /
    • v.20 no.4
    • /
    • pp.323-332
    • /
    • 2016
  • A robot's recognition and diagnosis of pronunciation, together with its own speech output, are the most important interactions in RALL (Robot Assisted Language Learning). This study verifies the effectiveness of robot TTS (Text-to-Speech) technology in helping Korean learners of English acquire a native-like accent by correcting the prosodic errors they commonly make. The F0 range and speaking rate, both prosodic variables, of 4th-grade child English learners were measured and analyzed for changes in accent. From an acoustic-phonetic viewpoint, we compared whether a robot with currently available TTS technology was effective for 4th graders and for 1st graders who had not received formal English instruction from a native speaker. Through repeated exposure to the robot's TTS in RALL, both groups showed changes in speaking rate rather than in F0 range.