• Title/Summary/Keyword: text-to-speech


Improved Character-Based Neural Network for POS Tagging on Morphologically Rich Languages

  • Samat Ali;Alim Murat
    • Journal of Information Processing Systems / v.19 no.3 / pp.355-369 / 2023
  • Since the widespread adoption of deep learning and distributed representations, there have been substantial advancements in part-of-speech (POS) tagging for many languages. When training word representations, morphology and word shape are typically ignored, as these representations rely primarily on capturing syntactic and semantic aspects of words. However, for tasks like POS tagging, notably in morphologically rich and resource-limited language environments, intra-word information is essential. In this study, we introduce a deep neural network (DNN) for POS tagging that learns character-level word representations and combines them with general word representations. Using the proposed approach and omitting hand-crafted features, we achieve 90.47%, 80.16%, and 79.32% accuracy on our own datasets for three morphologically rich languages: Uyghur, Uzbek, and Kyrgyz. The experimental results reveal that the presented character-based strategy greatly improves POS tagging performance for morphologically rich languages (MRL) where character information is significant. Furthermore, compared to the previously reported state-of-the-art POS tagging results for Turkish on the METU Turkish Treebank dataset, the proposed approach improves slightly on the prior work. Overall, the results indicate that character-based representations outperform word-level representations for MRL. Our technique is also robust to out-of-vocabulary issues and performs better on manually edited text.
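
The core idea described above, building a character-level representation of each word and concatenating it with an ordinary word embedding before tagging, can be sketched roughly as follows. This is a minimal PyTorch illustration with assumed layer choices and dimensions, not the authors' exact architecture.

```python
# Minimal sketch: char-level + word-level representations for POS tagging.
# All sizes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class CharWordTagger(nn.Module):
    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=100, char_dim=30, char_hidden=50, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Character BiLSTM builds a representation of each word from its characters.
        self.char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True,
                                 bidirectional=True)
        # Sentence-level BiLSTM over [word embedding ; char-level representation].
        self.tag_lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden,
                                batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids).view(b * s, c, -1)
        _, (h, _) = self.char_lstm(chars)             # h: (2, b*s, char_hidden)
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        feats = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        hidden, _ = self.tag_lstm(feats)
        return self.out(hidden)                       # (batch, seq_len, n_tags)
```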

Kubernetes-based Framework for Improving Traffic Light Recognition Performance: Convergence Vision AI System based on YOLOv5 and C-RNN with Visual Attention (신호등 인식 성능 향상을 위한 쿠버네티스 기반의 프레임워크: YOLOv5와 Visual Attention을 적용한 C-RNN의 융합 Vision AI 시스템)

  • Cho, Hyoung-Seo;Lee, Min-Jung;Han, Yeon-Jee
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.851-853 / 2022
  • Due to the aging of society, the number of drivers aged 65 or older is rising sharply, and as the share of traffic accidents caused by elderly drivers increases, this has emerged as an urgent social problem. In this study, we propose a Kubernetes-based framework that combines object detection and recognition models to recognize traffic lights and announce them via Text-To-Speech (TTS). In the object detection stage, the performance of several YOLOv5 models was compared before use, and in the object recognition stage a C-RNN-based attention-OCR model was used. This model recognizes the entire traffic-light image rather than only the internal LED region, which reduces false detections and raises the recognition rate. As a result, the proposed framework achieved an accuracy of 0.997 and an F1-score of 0.991 on 1,628 test images, demonstrating its validity. This study allows follow-up research to combine models from various fields without restricting deep-learning models to a specific domain, and can help prevent traffic accidents caused by elderly drivers and signal violations.
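
The framework's final step, announcing the recognized signal by TTS, can be illustrated with a small sketch. The pyttsx3 library and the fixed label below are assumptions for illustration; in the actual framework the label would come from the YOLOv5 and attention-OCR services running on Kubernetes.

```python
# Illustrative TTS announcement step, assuming pyttsx3 as the speech engine.
import pyttsx3

def announce(label: str) -> None:
    """Speak the recognized traffic-light state aloud."""
    engine = pyttsx3.init()
    engine.say(f"Traffic light ahead: {label}")
    engine.runAndWait()

# The label would normally come from the detection/recognition services;
# a fixed example keeps the sketch self-contained.
announce("red")
```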

Ship's Maneuvering and Winch Control System with Voice Instruction Based Learning (음성지시에 의한 선박 조종 및 윈치 제어 시스템)

  • Seo, Ki-Yeol;Park, Gyei-Kark
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.6 / pp.517-523 / 2002
  • In this paper, we propose a system that applies the VIBL (Voice Instruction Based Learning) method, which adds speech recognition to the LIBL (Linguistic Instruction Based Learning) method modeled on the way humans learn through natural language, to a ship's steering system, MERCS, and winch appliances. The VIBL method replaces the conventional process in which a linguistic instruction, such as an officer's steering order, is carried out through an able seaman who operates the steering gear, MERCS, and winch appliances. As a concrete learning method, we implement an intelligent steering-control system that embodies a suitable steering-manipulation model of the able seaman, presents appropriate meaning elements and evaluation rules for the ship's steering system, and responds more effectively to the commander's voice instructions using fuzzy inference rules. We also implement a system that recognizes the commander's voice instructions and controls the MERCS and winch appliances. Based on the able seaman's experience, we built the steering-manipulation model, presented the rudder angle, compass-bearing arrival time, and evaluation rules as meaning elements of the stationary state for the intelligent steering system, and refined the steering-manipulation model rules using techniques that recognize the commander's voice instructions, convert them to text, and apply fuzzy inference. Finally, we applied the VIBL method to a speech-recognition ship-control simulator and confirmed its effectiveness.
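
As a rough illustration of the fuzzy-inference idea (not the authors' actual rules), the toy sketch below maps a score derived from a recognized steering order to a crisp rudder angle using triangular membership functions and a weighted-average defuzzification. The terms, angles, and "urgency" score are invented for illustration.

```python
# Toy fuzzy mapping from a recognized steering order to a rudder angle.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic rudder terms and the crisp angle each one represents (degrees).
RUDDER_TERMS = {"easy": 10.0, "half": 20.0, "hard-over": 35.0}

def infer_rudder_angle(urgency: float) -> float:
    """Map an 'urgency' score (0..1) derived from the order to an angle."""
    grades = {
        "easy": triangular(urgency, -0.1, 0.0, 0.5),
        "half": triangular(urgency, 0.2, 0.5, 0.8),
        "hard-over": triangular(urgency, 0.5, 1.0, 1.1),
    }
    total = sum(grades.values()) or 1.0
    return sum(grades[t] * RUDDER_TERMS[t] for t in grades) / total

print(infer_rudder_angle(0.6))   # between "half" and "hard-over"
```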

Design of CNN-based Braille Conversion and Voice Output Device for the Blind (시각장애인을 위한 CNN 기반의 점자 변환 및 음성 출력 장치 설계)

  • Seung-Bin Park;Bong-Hyun Kim
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.87-92 / 2023
  • As times change, information and the means of obtaining it become increasingly diverse. About 80% of the information acquired in daily life comes through vision, but visually impaired people have limited ability to interpret visual materials, which is why Braille, a writing system for the blind, was developed. However, the Braille literacy rate among the blind is only about 5%, and as the demand of blind users for various platforms and materials grows, products for the blind continue to be developed. One example is Braille books, which appear to have more disadvantages than advantages; unlike non-disabled people, the blind still find it very difficult to access information. In this paper, we design a CNN-based Braille conversion and voice output device to make it easier for visually impaired people to obtain information than with conventional methods. The device converts books, text images, or handwritten images that are not available in Braille into Braille through camera recognition, and provides a function that converts them into voice according to the user's needs, aiming to improve quality of life.
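
The Braille-conversion step can be sketched as follows: recognized text is mapped to Unicode Braille cells by setting the raised-dot bits of each character. The letter table here is a small illustrative subset, not the device's full conversion logic.

```python
# Sketch: map recognized letters to Unicode Braille cells (U+2800 block).
def braille_cell(dots) -> str:
    """Return the Unicode Braille character for a set of raised dots (1-6)."""
    offsets = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}
    return chr(0x2800 + sum(offsets[d] for d in dots))

# Grade-1 Braille dot patterns for a few letters (illustrative subset only).
LETTER_DOTS = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5)}

def to_braille(text: str) -> str:
    return "".join(braille_cell(LETTER_DOTS[ch]) for ch in text.lower()
                   if ch in LETTER_DOTS)

print(to_braille("cab"))   # ⠉⠁⠃
```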

Performance Improvement Methods of a Spoken Chatting System Using SVM (SVM을 이용한 음성채팅시스템의 성능 향상 방법)

  • Ahn, HyeokJu;Lee, SungHee;Song, YeongKil;Kim, HarkSoo
    • KIPS Transactions on Software and Data Engineering / v.4 no.6 / pp.261-268 / 2015
  • In spoken chatting systems, users' spoken queries are converted to text queries using automatic speech recognition (ASR) engines. If the top-1 results of the ASR engines are incorrect, these errors are propagated to the spoken chatting systems. To improve the top-1 accuracies of ASR engines, we propose a post-processing model that rearranges the top-n outputs of ASR engines using a ranking support vector machine (RankSVM). In addition, a large number of chatting sentences are needed to train chatting systems; if new chatting sentences are not frequently added to the training data, the systems' responses soon become outdated. To resolve this problem, we propose a data collection model that automatically selects chatting sentences from TV and movie scenarios using a support vector machine (SVM). In the experiments, the post-processing model showed 4.4% higher precision and a 6.4% higher recall rate than the baseline model (without post-processing), and the data collection model showed a precision of 98.95% and a recall rate of 57.14%.
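
The RankSVM post-processing idea, scoring each ASR hypothesis so the n-best list can be rearranged, is commonly implemented by training a linear SVM on pairwise difference vectors. The sketch below shows that pattern with invented two-dimensional features; it illustrates the technique, not the paper's feature set.

```python
# RankSVM-style reranking of an ASR n-best list via pairwise differences.
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(X, y):
    """Build difference vectors for every (better, worse) hypothesis pair."""
    diffs, labels = [], []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                diffs.append(X[i] - X[j]); labels.append(1)
                diffs.append(X[j] - X[i]); labels.append(-1)
    return np.array(diffs), np.array(labels)

# X: one feature row per n-best hypothesis (e.g. ASR confidence, LM score);
# y: graded relevance (higher = closer to the true transcript). Toy values.
X = np.array([[0.9, 0.2], [0.7, 0.8], [0.4, 0.1]])
y = np.array([1, 2, 0])

Xp, yp = pairwise_transform(X, y)
ranker = LinearSVC().fit(Xp, yp)
scores = X @ ranker.coef_.ravel()          # rerank by the learned weights
print("reranked top-1 hypothesis index:", int(np.argmax(scores)))
```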

VOC Summarization and Classification based on Sentence Understanding (구문 의미 이해 기반의 VOC 요약 및 분류)

  • Kim, Moonjong;Lee, Jaean;Han, Kyouyeol;Ahn, Youngmin
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.50-55 / 2016
  • To understand customers' opinions and demands regarding a company's products or services, it is important to consider VOC (Voice of Customer) data; however, it is difficult to understand the context of VOC because of segmented and duplicated sentences and the variety of dialogue contexts. In this article, POS (part of speech) tags and morphemes were selected as language resources because of their semantic importance within documents; based on these, we defined LSPs (Lexico-Semantic-Patterns) to understand the structure and semantics of sentences and extracted summaries consisting of key sentences. LSPs were also used to connect segmented sentences and remove contextual repetition. We further defined LSPs for each category and classified documents into the categories whose LSPs match their main sentences. In the experiments, we summarized and classified VOC documents and compared the results with previous methods.
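
A lexico-semantic pattern can be thought of as an ordered sequence of (POS, lexicon) constraints matched against a tagged sentence. The toy matcher below illustrates that idea; the pattern, tags, and category are invented for illustration, not taken from the paper.

```python
# Toy LSP matcher over a POS-tagged sentence; vocabulary is illustrative.
COMPLAINT_PATTERN = [("NOUN", {"service", "delivery"}), ("VERB", {"delay", "fail"})]

def matches(pattern, tagged_sentence) -> bool:
    """True if the sentence contains the pattern's (POS, lexicon) pairs in order."""
    idx = 0
    for word, pos in tagged_sentence:
        p_pos, p_words = pattern[idx]
        if pos == p_pos and word in p_words:
            idx += 1
            if idx == len(pattern):
                return True
    return False

sentence = [("the", "DET"), ("delivery", "NOUN"), ("did", "AUX"), ("fail", "VERB")]
print(matches(COMPLAINT_PATTERN, sentence))   # True -> key sentence for "complaint"
```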

Auditory-Perceptual and Acoustic Evaluation in Measuring Dysphonia Severity of Vocal Cord Paralysis (성대마비의 음성장애 측정을 위한 청지각적 및 음향학적 평가)

  • Kim, Geun-Hyo;Lee, Yeon-Woo;Park, Hee-June;Bae, In-Ho;Lee, Byung-Joo;Kwon, Soon-Bok
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.28 no.2 / pp.106-111 / 2017
  • Background and Objectives : The purpose of this study was to investigate the criterion-related concurrent validity of two standardized auditory-perceptual assessments and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in patients with vocal cord paralysis (VCP). Materials and Methods : A total of 210 patients with VCP and 236 normal-voice subjects were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk". A 2-second mid-vowel portion of the sustained vowel and two sentences (26 syllables) were recorded. The voice samples were then edited, concatenated, and analyzed with a Praat script. Two standardized auditory-perceptual assessments (GRBAS and CAPE-V) were performed by three raters. Results : The VCP group showed higher AVQI, Grade (G), and Overall Severity (OS) values than the normal-voice group, and the correlations among AVQI, G, and OS ranged from 0.904 to 0.926. In ROC curve analysis, the cutoff values of AVQI, G, and OS were <3.79, <0.00, and <30.00, respectively, and the AUC of each analysis was over 0.89. Conclusion : AVQI and auditory-perceptual evaluation can improve early screening of VCP voice and help establish an effective diagnosis and treatment plan for VCP-related dysphonia.
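
To illustrate how such a cutoff can be derived from ROC analysis, the sketch below runs roc_curve on synthetic AVQI-like scores and picks the threshold that maximizes Youden's J. The numbers are simulated stand-ins, not the study's data.

```python
# Illustrative ROC analysis on synthetic AVQI-like scores.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
avqi_normal = rng.normal(2.5, 0.8, 236)   # stand-in for the 236 normal voices
avqi_vcp = rng.normal(5.0, 1.2, 210)      # stand-in for the 210 VCP patients

y_true = np.r_[np.zeros(236), np.ones(210)]
scores = np.r_[avqi_normal, avqi_vcp]

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC:", auc(fpr, tpr))
# A common way to pick the cutoff: maximize Youden's J = sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print("cutoff:", thresholds[best])
```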


An Analysis Study on Part-of-Speech Features to Improve the Performance of Hate Comment Discrimination (악성댓글 판별의 성능 향상을 위한 품사 자질에 대한 분석 연구)

  • Kim, Hyoung Ju;Min, Moon Jong;Kim, Pan Koo
    • Smart Media Journal / v.10 no.4 / pp.71-79 / 2021
  • One of the social aspects that has changed as Internet use becomes widespread is communication in online spaces. In the past, remote conversations were only possible one-on-one unless people were physically in the same space, but technology now enables communication with large numbers of people remotely through bulletin boards, communities, and social network services. The development of such information and communication networks makes life more convenient, but at the same time the damage caused by rapid information exchange is constantly increasing. Recently, cybercrimes such as sending sexual messages or personal attacks to well-known people on the Internet, including not only entertainers but also influencers, have occurred, and some of those exposed to such cybercrime have taken their own lives. In this paper, in order to reduce the damage caused by malicious comments, we study a method for improving the performance of malicious-comment discrimination through feature extraction based on parts of speech.
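
One simple way to realize the part-of-speech feature idea is to represent each comment by its POS-tag sequence and classify on POS n-grams. The sketch below assumes the comments have already been tagged by a Korean morphological analyzer; the tags, labels, and classifier choice are illustrative only.

```python
# Sketch: classify comments on n-grams of their POS-tag sequences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is the space-joined POS sequence of one comment (toy data).
pos_sequences = ["NOUN JOSA VERB EOMI", "NOUN JOSA ADJ EOMI",
                 "PRONOUN VERB NOUN SUFFIX", "NOUN NOUN VERB EOMI"]
labels = [0, 0, 1, 1]   # 1 = malicious comment (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pos_sequences, labels)
print(model.predict(["PRONOUN VERB NOUN SUFFIX"]))
```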

The Prosodic Changes of Korean English Learners in Robot Assisted Learning (로봇보조언어교육을 통한 초등 영어 학습자의 운율 변화)

  • In, Jiyoung;Han, JeongHye
    • Journal of The Korean Association of Information Education / v.20 no.4 / pp.323-332 / 2016
  • A robot's recognition and diagnosis of pronunciation and its own speech are the most important interactions in RALL (Robot Assisted Language Learning). This study verifies the effectiveness of robot TTS (text-to-speech) technology in helping Korean learners of English acquire a native-like accent by correcting the prosodic errors they commonly make. The F0 range and speaking rate, two prosodic variables, of 4th-grade child English learners were measured and analyzed for changes in accent. From an acoustic-phonetic viewpoint, we compared whether a robot with currently available TTS technology was effective for 4th graders and for 1st graders who had not yet received formal English instruction from a native speaker. After repeated exposure to the robot's TTS in RALL, both groups responded in speaking rate rather than in F0 range.
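
The prosodic variables used in the study, F0 range and speaking rate, can be measured from a recording with the praat-parselmouth library, as in the rough sketch below; the file name and syllable count are assumptions.

```python
# Sketch: measure F0 range and speaking rate with praat-parselmouth.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("learner_utterance.wav")   # assumed file name
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                    # drop unvoiced frames
print("F0 range (Hz):", f0.max() - f0.min())

duration = call(snd, "Get total duration")         # seconds
n_syllables = 12                                   # would come from the transcript
print("speaking rate (syll/s):", n_syllables / duration)
```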

Designing and Evaluating an Audiobook Service Model on Android Platform for the Visually-Impaired (안드로이드 플랫폼 기반 시각장애인용 음성도서 서비스 모델 구축 및 평가)

  • Jang, Won-Hong;Oh, Sam-Gyun
    • Journal of the Korean Society for Information Management / v.32 no.2 / pp.221-236 / 2015
  • This paper describes the process and methodology followed in developing the Android-based LG Sangnam Audiobook service and evaluates its usefulness to the public. The methods included a survey of user needs, analysis of usage statistics, and user interviews. The study found that visually impaired users: 1) were greatly interested in and willing to use smartphones if there were no barriers of cost and access; 2) preferred downloads to streaming services; 3) did not mind performance differences between real and TTS (text-to-speech) voices; 4) showed marked differences in book preferences according to age; 5) made about 14,000 downloads in 2014; and 6) indicated bookmarking and moving between pages and tables of contents as the most important functions when using audiobooks.