• Title/Summary/Keyword: Automatic Speech Recognition

Development of an Embedded System for Ship's Steering Gear using Voice Recognition Module (음성인식모듈을 이용한 선박조타용 임베디드 시스템 개발)

  • Park, Gyei-Kark;Seo, Ki-Yeol;Hong, Tae-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.604-609
    • /
    • 2004
  • Recently, various studies have been made on automatic control systems for small ships in order to improve maneuvering and to reduce labor and workload on board. Automation techniques have developed rapidly to achieve efficient operation of small ships, but ship operation has become more complicated because of the need to handle various gauges and instruments. To solve these problems, speech information processing technology, one of the human interface methods, has been applied to ship system operation in some cases, but no complete system has yet been implemented. Therefore, the purpose of this paper is to implement a control system for ship steering using a voice recognition module.
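
The abstract does not detail the module interface, but the following is a minimal sketch of how such a voice-commanded steering loop might look, assuming a serial-attached recognition module that emits one recognized command token per line; the port name, command vocabulary, and rudder-actuator function are illustrative, not from the paper.

```python
# Minimal sketch of a voice-commanded steering loop (hypothetical hardware).
# Assumes a serial-attached voice recognition module that prints one
# recognized command token per line; the port name, vocabulary, and the
# rudder-control function are illustrative, not from the paper.
import serial  # pyserial

COMMANDS = {
    "port": -5.0,       # turn rudder 5 degrees to port
    "starboard": +5.0,  # turn rudder 5 degrees to starboard
    "midships": 0.0,    # center the rudder
}

def set_rudder_angle(angle_deg: float) -> None:
    """Placeholder for the actual steering-gear actuator interface."""
    print(f"rudder -> {angle_deg:+.1f} deg")

def main() -> None:
    with serial.Serial("/dev/ttyS0", 9600, timeout=1.0) as link:
        angle = 0.0
        while True:
            token = link.readline().decode("ascii", "ignore").strip().lower()
            if token in COMMANDS:
                angle = 0.0 if token == "midships" else angle + COMMANDS[token]
                set_rudder_angle(angle)

if __name__ == "__main__":
    main()
```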

A Review on Advanced Methodologies to Identify the Breast Cancer Classification using the Deep Learning Techniques

  • Bandaru, Satish Babu;Babu, G. Rama Mohan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.420-426
    • /
    • 2022
  • Breast cancer is among the cancers that may be cured if the disease is diagnosed early, before it has spread through other areas of the body. The Automatic Analysis of Diagnostic Tests (AAT) is an automated assistance for physicians that can deliver reliable findings for analyzing critically dangerous diseases. Deep learning, a family of machine learning methods, has grown at an astonishing pace in recent years and is used to search and render diagnoses in fields from banking to medicine. We attempt to create a deep learning algorithm that can reliably diagnose breast cancer in mammograms. We want the algorithm to label an image as cancer or not cancer, allowing use of a full training dataset with either strong clinical annotations or the cancer status only, in which only a few images of cancer or non-cancer are annotated. Even with this technique, the images are annotated with the condition, and an optional portion of the annotated image then acts as the mark; the final stage of the suggested system does not need any labels to be accessible during model training. Furthermore, the results of the review process suggest that deep learning approaches have surpassed the state of the art in tumor identification, feature extraction, and classification. The paper explains three ways in which learning algorithms are applied: training the network from scratch, transferring certain deep learning concepts and constraints into a network, and reducing the number of parameters in the trained nets, all of which help expand the scope of the networks. Researchers in economically developing countries have applied deep learning imaging devices to cancer detection, while cancer incidence has risen sharply in Africa. The Convolutional Neural Network (CNN) is a kind of deep learning that can aid with a variety of tasks, such as speech recognition, image recognition, and classification. To accomplish this goal, in this article we use a CNN to categorize and identify breast cancer images from databases available from the US Centers for Disease Control and Prevention.
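
As a rough illustration of the CNN approach the review describes, here is a minimal binary-classification network sketch in PyTorch; the architecture, input size, and dummy data are assumptions for demonstration, not the specific models surveyed in the paper.

```python
# Minimal binary-classification CNN sketch in PyTorch. The architecture,
# 224x224 grayscale input, and random batch are illustrative assumptions.
import torch
import torch.nn as nn

class MammogramCNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # cancer vs. non-cancer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a dummy batch of grayscale 224x224 images.
model = MammogramCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```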

A Study on the Spoken Korean Citynames Using Multi-Layered Perceptron of Back-Propagation Algorithm (오차 역전파 알고리즘을 갖는 MLP를 이용한 한국 지명 인식에 대한 연구)

  • Song, Do-Sun;Lee, Jae-Gheon;Kim, Seok-Dong;Lee, Haing-Sei
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.6
    • /
    • pp.5-14
    • /
    • 1994
  • This paper reports an experiment on speaker-independent automatic recognition of spoken Korean words using a Multi-Layered Perceptron trained with the error back-propagation algorithm. The target words are 50 city names used as D.D.D. local area codes; 43 of them have 2 syllables and the remaining 7 have 3 syllables. The words were not segmented into syllables or phonemes; instead, feature components extracted from the words at equal intervals were fed to the neural network, which makes the result independent of speech duration. PARCOR coefficients calculated from the frames using linear predictive analysis were employed as the feature components. This paper tried to find the optimum conditions through 4 different experiments: a comparison between total and pre-classified training, the dependency of the recognition rate on the number of frames and the PARCOR order, the change in recognition due to the number of neurons in the hidden layer, and a comparison of methods for composing the output pattern of the output neurons. As a result, a recognition rate of 89.6% was obtained.
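
For readers unfamiliar with this setup, the sketch below shows a one-hidden-layer perceptron trained from scratch with error back-propagation on fixed-length feature vectors, mirroring the frames-times-PARCOR-order input described above; the dimensions and random stand-in data are illustrative, not the paper's corpus.

```python
# From-scratch sketch of a one-hidden-layer MLP trained with error
# back-propagation; frame count, PARCOR order, hidden size, and the
# random training data are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

N_FRAMES, PARCOR_ORDER, N_HIDDEN, N_CLASSES = 20, 12, 40, 50
N_IN = N_FRAMES * PARCOR_ORDER      # fixed-length input per word

W1 = rng.normal(0, 0.1, (N_IN, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_CLASSES))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.1):
    """One backprop update for a single (input, one-hot target) pair."""
    global W1, W2
    h = sigmoid(x @ W1)             # hidden activations
    y = sigmoid(h @ W2)             # output activations
    # Error terms for squared-error loss with sigmoid units.
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return float(((y - target) ** 2).sum())

# Train on random stand-in features for demonstration.
X = rng.normal(size=(200, N_IN))
labels = rng.integers(0, N_CLASSES, size=200)
for epoch in range(5):
    err = 0.0
    for x, c in zip(X, labels):
        t = np.zeros(N_CLASSES); t[c] = 1.0
        err += train_step(x, t)
    print(f"epoch {epoch}: total squared error {err:.2f}")
```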

Pronunciation Variation Patterns of Loanwords Produced by Korean and Grapheme-to-Phoneme Conversion Using Syllable-based Segmentation and Phonological Knowledge (한국인 화자의 외래어 발음 변이 양상과 음절 기반 외래어 자소-음소 변환)

  • Ryu, Hyuksu;Na, Minsu;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.139-149
    • /
    • 2015
  • This paper aims to analyze pronunciation variations of loanwords produced by Korean speakers and to improve the performance of pronunciation modeling of loanwords in Korean by using syllable-based segmentation and phonological knowledge. The loanword text corpus used for our experiment consists of 14.5k words extracted from frequently used words in the set-top box, music, and point-of-interest (POI) domains. First, pronunciations of loanwords in Korean are obtained by manual transcription and used as target pronunciations. The target pronunciations are compared with the standard pronunciations using confusion matrices to analyze pronunciation variation patterns of loanwords. Based on the confusion matrices, three salient pronunciation variations of loanwords are identified: tensification of the fricative [s] and derounding of the rounded vowels [ɥi] and [wɛ]. In addition, a syllable-based segmentation method incorporating phonological knowledge is proposed for loanword pronunciation modeling. Performance of the baseline and the proposed method is measured using phone error rate (PER)/word error rate (WER) and F-score at various context spans. Experimental results show that the proposed method outperforms the baseline. We also observe that performance degrades when the training and test sets come from different domains, which implies that loanword pronunciations are influenced by the data domain. It is noteworthy that pronunciation modeling for loanwords is enhanced by reflecting phonological knowledge. The loanword pronunciation modeling in Korean proposed in this paper can be used for automatic speech recognition in application interfaces such as navigation systems and set-top boxes, and for computer-assisted pronunciation training for Korean learners of English.
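
The PER metric used above is an edit-distance ratio over phone sequences. A minimal sketch follows; the example phone strings are made up for illustration, not taken from the paper's corpus.

```python
# Phone error rate (PER): Levenshtein distance between predicted and
# reference phone sequences, divided by the reference length.
def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Substitutions + insertions + deletions between two sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

def phone_error_rate(ref: list[str], hyp: list[str]) -> float:
    return edit_distance(ref, hyp) / max(len(ref), 1)

# Hypothetical reference vs. predicted phones for one loanword.
reference = ["s*", "o", "p", "a"]   # tensified [s] in the target
predicted = ["s", "o", "p", "a"]
print(f"PER = {phone_error_rate(reference, predicted):.2f}")  # 0.25
```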

Developing an Embedded Method to Recognize Human Pilot Intentions In an Intelligent Cockpit Aids for the Pilot Decision Support System

  • Cha, U-Chang
    • Journal of the Ergonomics Society of Korea
    • /
    • v.17 no.3
    • /
    • pp.23-39
    • /
    • 1998
  • Several recent aircraft accidents occurred due to goal conflicts between human and machine actors. To facilitate the management of cockpit activities in light of these observations, a computational aid, the Agenda Manager (AM), has been developed for use in simulated cockpit environments. Accurate knowledge of pilot intentions in performing cockpit operations is important for improving AM performance; without it, the information from the AM may lead the pilot who uses it in the wrong direction. To provide a reliable flight simulation environment with regard to goal conflicts, a pilot goal communication method (GCM) was developed to facilitate accurate recognition of pilot goals. Embedded within the AM, the GCM was used to recognize pilot goals and to declare them to the AM. Two approaches to the recognition of pilot goals were considered: (1) the use of an Automatic Speech Recognition (ASR) system to recognize overtly or explicitly declared pilot goals, and (2) inference of covertly or implicitly declared pilot goals via an intent inferencing mechanism. The integrated mode of these two methods could overcome misunderstanding of covert goals by use of the overt GCM, and could also overcome the workload concern of the overt mode by use of the covert GCM. Through experiments in a simulated flight environment with real pilot subjects, the proposed GCM demonstrated its capability to recognize pilot intentions with a certain degree of accuracy and to handle incorrectly declared goals, and was validated in terms of subjective workload and pilot flight control performance. The GCM communicating pilot goals was implemented within the AM to provide a rich environment for the study of human-machine interactions in the supervisory control of complex dynamic systems.
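
A loose sketch of the integrated overt/covert arbitration idea follows; the goal names, confidence threshold, and data structures are hypothetical and not taken from the AM implementation.

```python
# Illustrative sketch: prefer an explicit, ASR-declared goal when one is
# available, otherwise fall back to an inferred goal when the inference
# confidence is high enough. All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    confidence: float  # 0..1
    source: str        # "overt" (ASR) or "covert" (intent inference)

def resolve_goal(asr_goal: Optional[Goal],
                 inferred_goal: Optional[Goal],
                 covert_threshold: float = 0.7) -> Optional[Goal]:
    """Arbitrate between overtly declared and covertly inferred goals."""
    if asr_goal is not None:
        return asr_goal                      # explicit declarations win
    if inferred_goal is not None and inferred_goal.confidence >= covert_threshold:
        return inferred_goal                 # accept confident inferences
    return None                              # no reliable goal available

# Example: no spoken declaration, but flight-state inference is confident.
goal = resolve_goal(None, Goal("descend_to_3000ft", 0.85, "covert"))
print(goal)
```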

Utilizing Korean Ending Boundary Tones for Accurately Recognizing Emotions in Utterances (발화 내 감정의 정밀한 인식을 위한 한국어 문미억양의 활용)

  • Jang In-Chang;Lee Tae-Seung;Park Mikyoung;Kim Tae-Soo;Jang Dong-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.505-511
    • /
    • 2005
  • Autonomic machines interacting with humans should have the capability to perceive states of emotion and attitude through implicit messages in order to obtain voluntary cooperation from their clients. Voice is the easiest and most natural way to exchange human messages. Automatic systems capable of understanding states of emotion and attitude have utilized features based on the pitch and energy of uttered sentences. The performance of existing emotion recognition systems can be further improved with the support of the linguistic knowledge that a specific tonal section in a sentence is related to states of emotion and attitude. In this paper, we attempt to improve the emotion recognition rate by adopting such linguistic knowledge about Korean ending boundary tones in an automatic system implemented using pitch-related features and multilayer perceptrons. The results of an experiment on a Korean emotional speech database confirm an improvement of 4%.
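
As a rough illustration of extracting pitch features from the sentence-final section that carries the ending boundary tone, here is a sketch using librosa's pyin pitch tracker; the final-region fraction, descriptor set, and file name are assumptions, not the paper's feature definition.

```python
# Sketch of pitch features from the sentence-final region of an
# utterance, the section carrying the Korean ending boundary tone.
import numpy as np
import librosa

def ending_tone_features(wav_path: str, final_fraction: float = 0.2):
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60.0, fmax=400.0, sr=sr)
    f0 = f0[~np.isnan(f0)]            # keep voiced frames only
    tail = f0[int(len(f0) * (1.0 - final_fraction)):]
    if len(tail) < 2:
        return None
    # Simple contour descriptors of the boundary-tone section.
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]
    return np.array([tail.mean(), tail.std(), slope, tail[-1] - tail[0]])

features = ending_tone_features("utterance.wav")  # hypothetical file
print(features)  # e.g. feed these into an MLP emotion classifier
```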

Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications
    • /
    • v.12 no.1
    • /
    • pp.101-106
    • /
    • 2017
  • Emergency exit signs are installed to indicate escape routes in buildings such as shopping malls, hospitals, industrial sites, government complexes, and various other places, to help people escape easily during emergencies. Under emergency conditions such as smoke, fire, bad lighting, and crowd stampedes, it is difficult for people to recognize the emergency exit signs and emergency doors needed to leave the building. This paper proposes automatic emergency exit sign recognition to find the exit direction using a smart device. The proposed approach aims to develop a computer vision based smartphone application that detects emergency exit signs using the smart device camera and guides the escape direction in visible and audible output formats. In this research, the CAMShift object tracking approach is used to detect the emergency exit sign, and the direction information is extracted using a template matching method. The direction information of the exit sign is stored as text, which is then synthesized into an audible acoustic signal using text-to-speech. The synthesized acoustic signal is rendered on the smart device speaker as escape guidance for the user. The results are analyzed and conclusions drawn from the viewpoints of visual element selection, exit sign appearance design, and sign placement in the building, which is valuable and can be commonly referred to in wayfinder systems.
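
The direction-extraction step could be sketched as below with OpenCV template matching plus an off-the-shelf TTS engine; the template files, score threshold, and two-direction setup are illustrative stand-ins for the paper's smart-device pipeline.

```python
# Sketch: match left/right arrow templates against a detected exit-sign
# region and speak the result. File names and threshold are illustrative.
import cv2
import pyttsx3

def exit_direction(sign_roi_path: str):
    roi = cv2.imread(sign_roi_path, cv2.IMREAD_GRAYSCALE)
    best_dir, best_score = None, 0.0
    for direction, template_path in [("left", "arrow_left.png"),
                                     ("right", "arrow_right.png")]:
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_dir, best_score = direction, score
    return best_dir if best_score > 0.6 else None  # confidence gate

direction = exit_direction("exit_sign_roi.png")    # hypothetical image
if direction:
    engine = pyttsx3.init()
    engine.say(f"Exit to your {direction}")        # audible escape guidance
    engine.runAndWait()
```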

A Study on the Automatic Lexical Acquisition for Multi-lingustic Speech Recognition (다국어 음성 인식을 위한 자동 어휘모델의 생성에 대한 연구)

  • 지원우;윤춘덕;김우성;김석동
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.6
    • /
    • pp.434-442
    • /
    • 2003
  • Software internationalization, the process of making software easier to localize for specific languages, has deep implications when applied to speech technology, where the goal of the task lies in the very essence of the particular language. A great deal of work and fine-tuning has gone into language processing software based on ASCII or a single language, say English, thus making a port to different languages difficult. The inherent identity of a language manifests itself in its lexicon, where its character set, phoneme set, and pronunciation rules are revealed. We propose a decomposition of the lexicon building process into four discrete and sequential steps. As preprocessing to build a lexical model, we translate from a language-specific character code to Unicode. Then: (step 1) transliterating code points from Unicode; (step 2) phonetically standardizing rules; (step 3) implementing grapheme-to-phoneme rules; (step 4) implementing phonological processes.
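
A toy sketch of this four-step pipeline follows, using a Korean loanword as the example; the rule tables are tiny illustrative stand-ins for real transliteration, standardization, G2P, and phonological rule sets.

```python
# Toy sketch of the four-step lexicon-building pipeline.
import unicodedata

def to_unicode(raw: bytes, encoding: str = "euc-kr") -> str:
    """Preprocessing: translate a language-specific code to Unicode."""
    return unicodedata.normalize("NFC", raw.decode(encoding))

def transliterate(text: str) -> str:
    """Step 1: map Unicode code points to a working romanization."""
    table = {"카": "ka", "메": "me", "라": "ra"}   # illustrative only
    return "".join(table.get(ch, ch) for ch in text)

def standardize(romanized: str) -> str:
    """Step 2: apply phonetic standardization rules."""
    return romanized.lower().strip()

def grapheme_to_phoneme(word: str) -> list[str]:
    """Step 3: grapheme-to-phoneme rules (one letter -> one phone here)."""
    g2p = {"k": "k", "a": "a", "m": "m", "e": "e", "r": "r"}
    return [g2p[ch] for ch in word if ch in g2p]

def apply_phonology(phones: list[str]) -> list[str]:
    """Step 4: phonological processes (a toy liquid alternation rule)."""
    return ["l" if p == "r" else p for p in phones]

word = to_unicode("카메라".encode("euc-kr"))      # the loanword "camera"
print(apply_phonology(grapheme_to_phoneme(standardize(transliterate(word)))))
```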

SVM Based Speaker Verification Using Sparse Maximum A Posteriori Adaptation

  • Kim, Younggwan;Roh, Jaeyoung;Kim, Hoirin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.2 no.5
    • /
    • pp.277-281
    • /
    • 2013
  • Modern speaker verification systems based on support vector machines (SVMs) use Gaussian mixture model (GMM) supervectors as their input feature vectors, and maximum a posteriori (MAP) adaptation is the conventional method for generating speaker-dependent GMMs by adapting a universal background model (UBM). MAP adaptation requires an appropriate amount of input speech because of the number of model parameters to be estimated. With limited utterances, MAP adaptation can be unreliable and cause adaptation noise, even though the Bayesian priors used in MAP adaptation smooth the movement between the UBM and the speaker-dependent GMMs. This paper proposes a sparse MAP adaptation method, which is known to perform well in the automatic speech recognition area. By introducing sparse MAP adaptation into the GMM-SVM-based speaker verification system, the adaptation noise can be mitigated effectively. The proposed method utilizes the L0 norm as a regularizer to induce sparsity. Experimental results on the TIMIT database showed that the sparse MAP-based GMM-SVM speaker verification system yields a 42.6% relative reduction in the equal error rate with few additional computations.
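
One simple reading of the sparse-MAP idea is to hard-threshold small mean shifts toward zero after the usual relevance-factor MAP update; the sketch below implements that reading on a toy diagonal-covariance UBM. All sizes, the relevance factor, the threshold, and the data are illustrative, and the paper's exact L0 formulation may differ.

```python
# Numpy sketch of sparse MAP mean adaptation for a diagonal-covariance
# UBM, followed by supervector construction for an SVM. The L0-style
# sparsity is approximated by hard-thresholding small mean shifts.
import numpy as np

rng = np.random.default_rng(0)
K, D = 8, 4                      # mixtures, feature dimension (toy sizes)
weights = np.full(K, 1.0 / K)
means = rng.normal(size=(K, D))  # UBM means
covs = np.ones((K, D))           # diagonal covariances

def sparse_map_adapt(frames, r=16.0, threshold=0.05):
    """Adapt UBM means toward the utterance; zero out small shifts."""
    # Posterior responsibilities under the diagonal-Gaussian UBM.
    diff = frames[:, None, :] - means[None, :, :]          # (T, K, D)
    logp = -0.5 * ((diff ** 2) / covs).sum(-1) \
           - 0.5 * np.log(2 * np.pi * covs).sum(-1) + np.log(weights)
    post = np.exp(logp - logp.max(1, keepdims=True))
    post /= post.sum(1, keepdims=True)                     # (T, K)
    n = post.sum(0)                                        # soft counts
    ex = (post.T @ frames) / np.maximum(n[:, None], 1e-8)  # first moments
    alpha = (n / (n + r))[:, None]                         # adaptation weight
    shift = alpha * (ex - means)                           # MAP mean shift
    shift[np.abs(shift) < threshold] = 0.0                 # L0-style sparsity
    return means + shift

utterance = rng.normal(size=(50, D))               # stand-in feature frames
supervector = sparse_map_adapt(utterance).ravel()  # SVM input feature
print(supervector.shape)                           # (K*D,) = (32,)
```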

Design of Smart Home Network System based on ZigBee Topology (ZigBee 토폴로지를 이용한 스마트 홈 네트워크 시스템 설계)

  • Liu, Dan;Kim, Gwang-Jun;Lee, Jin-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.3
    • /
    • pp.537-543
    • /
    • 2012
  • A smart home system integrates automatic control systems, computer network systems, and network communication technology into an intelligent home control system. An intelligent home gives users more convenient means of managing domestic equipment: household devices can be controlled throughout the house by wireless remote control, touch-screen phone, the Internet, or speech recognition, and scene operations can make multiple devices act in concert. In this paper, we propose a system in which the various kinds of intelligent household equipment can communicate with each other and interact according to their states without requiring user commands, bringing the user the greatest degree of efficiency, convenience, comfort, and safety.
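
As a rough illustration of the state-driven device linkage described above, here is a hypothetical event-bus sketch; the device names and the scene rule are invented, and a real system would carry these events over the ZigBee network rather than in-process.

```python
# Illustrative sketch: devices publish state changes and simple scene
# rules react without explicit user commands. All names are hypothetical.
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()

# Scene rule: when the front door opens after dark, turn on hall lights.
def on_door_opened(payload: dict) -> None:
    if payload.get("dark", False):
        bus.publish("light.set", {"room": "hall", "on": True})

bus.subscribe("door.opened", on_door_opened)
bus.subscribe("light.set", lambda p: print(f"lights {p['room']}: {p['on']}"))

bus.publish("door.opened", {"dark": True})  # -> lights hall: True
```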