• Title/Summary/Keyword: phonetic system

313 search results

Implementation of HMM Based Speech Recognizer with Medium Vocabulary Size Using TMS320C6201 DSP (TMS320C6201 DSP를 이용한 HMM 기반의 음성인식기 구현)

  • Jung, Sung-Yun;Son, Jong-Mok;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1E
    • /
    • pp.20-24
    • /
    • 2006
  • In this paper, we focus on the real-time implementation of a medium-vocabulary speech recognition system, with its application to mobile phones in mind. First, we developed a PC-based variable-vocabulary word recognizer whose program memory and acoustic models are kept as small as possible. To reduce the memory size of the acoustic models, linear discriminant analysis and phonetic tied mixtures were applied in the feature selection process and in training the HMMs, respectively. In addition, a state-based Gaussian selection method with real-time cepstral normalization was used to reduce the computational load and to make recognition robust. We then verified real-time operation of the implemented recognition system on a TMS320C6201 EVM board. The implemented system uses about 610 kbytes of memory, including both program and data memory. The recognition rate was 95.86% for the ETRI 445DB, and 96.4%, 97.92%, and 87.04% for three kinds of name databases collected over mobile phones.
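
The "real time cepstral normalization" mentioned above can be sketched as a running cepstral mean subtraction, updated frame by frame so no look-ahead is needed. This is a minimal illustration of the general technique, not the paper's exact method; the forgetting factor ALPHA is an assumption.

```python
ALPHA = 0.95  # exponential forgetting factor (hypothetical, not from the paper)

def cmn_stream(frames):
    """Subtract a running cepstral mean from each incoming feature frame."""
    mean = None
    out = []
    for frame in frames:
        if mean is None:
            mean = list(frame)  # initialize the mean with the first frame
        else:
            mean = [ALPHA * m + (1 - ALPHA) * x for m, x in zip(mean, frame)]
        out.append([x - m for x, m in zip(frame, mean)])
    return out
```

For a stationary input the running mean converges to the frame values, so the normalized output stays near zero.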

Classification of Pornographic Videos Using Audio Information (오디오 신호를 이용한 음란 동영상 판별)

  • Kim, Bong-Wan;Choi, Dae-Lim;Bang, Man-Won;Lee, Yong-Ju
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.207-210
    • /
    • 2007
  • As the Internet becomes prevalent in our lives, harmful content on the Internet has been increasing, which has become a very serious problem. Among such content, pornographic video is particularly harmful to children. To prevent exposure to it, many filtering systems based on keyword- or image-based methods have been built. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. We use MFCCs together with Mel-Cepstrum Modulation Energy (MCME), a modulation energy calculated on the time trajectory of the Mel-frequency cepstral coefficients (MFCC), as the feature vector, and a Gaussian Mixture Model (GMM) as the classifier. In our experiments, the proposed system correctly classified 97.5% of pornographic data and 99.5% of non-pornographic data. We expect the proposed method can be used as a component of a more accurate classification system that uses video and audio information simultaneously.
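
The GMM classification step above amounts to scoring a feature vector under two trained models and picking the higher likelihood. A minimal sketch, using diagonal-covariance components and the common max-component approximation to the full log-sum; all model parameters here are made-up placeholders, not trained values.

```python
import math

def log_gauss(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at x."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def gmm_loglik(x, components):
    """GMM log-likelihood; components is a list of (weight, mean, var)."""
    # max over components approximates the exact log-sum-exp
    return max(math.log(w) + log_gauss(x, m, v) for w, m, v in components)

def classify(x, model_a, model_b):
    """Return 'A' if model_a scores higher on x, else 'B'."""
    return "A" if gmm_loglik(x, model_a) > gmm_loglik(x, model_b) else "B"
```

In the paper's setting the two models would be trained on MCME/MFCC vectors from pornographic and non-pornographic audio respectively.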


Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2E
    • /
    • pp.43-50
    • /
    • 2009
  • To make the synthetic speech generated by a Korean TTS (Text-To-Speech) system more natural, we have to know all the possible prosodic rules of spoken Korean. These rules should be found in linguistic and phonetic information, or in real speech. In general, all of these rules are integrated into the prosody-generation algorithm of a TTS system, but such an algorithm cannot cover all the prosodic rules of a language and is never perfect, so the naturalness of the synthesized speech falls short of what we expect. ANNs (Artificial Neural Networks), however, can be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we prepared the prosodic patterns of all the phonemic segments in a prosodic corpus. The corpus includes meaningful sentences designed to represent all the possible prosodic rules; they were made by selecting series of words from a list of PB (phonetically balanced) isolated words. The sentences were read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we extracted the prosodic pattern of each phoneme and assigned these patterns as target and test patterns for the ANNs. The ANNs thus learn prosody from natural speech and, given the phoneme string of a sentence as input, generate the prosodic pattern of the central phonemic segment of the string as their output.
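
The mapping the ANN learns above — phoneme-context features in, a prosodic target out — can be sketched as a single forward pass through a one-hidden-layer network. The weights and the feature encoding here are illustrative placeholders, not the paper's trained network.

```python
import math

def forward(x, w_hidden, w_out):
    """One-hidden-layer forward pass: tanh hidden units, linear output.

    x        -- phoneme-context feature vector (e.g. one-hot encoded context)
    w_hidden -- one weight row per hidden unit
    w_out    -- output weights producing a scalar prosodic target (e.g. F0)
    """
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return sum(wo * hi for wo, hi in zip(w_out, h))
```

Training would adjust `w_hidden` and `w_out` so the output matches the prosodic pattern extracted from the recorded speech for each phoneme context.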

The design and implementation of Web Component for Korean to Roman transcription (국어 로마자 전사표기 웹 컴포넌트 설계 및 구현)

  • Kim Hongsop
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.71-76
    • /
    • 2004
  • In this paper, a web-based automatic transcription component is designed and implemented for the mechanical conversion of Korean text under the revised Korean-to-Roman transcription rules. Specifically, we propose a system architecture and algorithms that first transliterate Korean into phonetic symbols by applying phonological principles, and then transcribe the result into Roman letters automatically. The component operates under the web server's script mechanism, and a dictionary for exceptional usage is designed as an auxiliary function that can operate either inside or outside the web server. The overall system architecture is presented in UML, as a specification, and as pseudocode. The proposed architecture is implemented as an encapsulated, object-oriented component that can easily be adapted and modified in an Internet environment, and this system offers many advantages for software development: improved efficiency, library reuse, and extensibility.
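
The two-stage pipeline described above — phonological rules first, then letter-by-letter romanization — can be sketched as below. The rule list and letter map are tiny illustrative fragments (here the liquidization rule ㄴㄹ → ㄹㄹ, as in 신라 → [실라] → "silla"), not the component's actual tables.

```python
# Stage 1 rules: phonological changes applied before romanization.
# (Illustrative fragment; the real component implements the full rule set.)
PHONOLOGICAL_RULES = [("nl", "ll")]  # e.g. liquidization: ㄴㄹ -> ㄹㄹ

# Stage 2 map: phonetic symbol -> Roman letter (fragment only).
ROMAN = {"s": "s", "i": "i", "l": "l", "a": "a", "n": "n"}

def to_phonetic(jamo):
    """Apply phonological rules to a jamo string, yielding phonetic symbols."""
    for src, dst in PHONOLOGICAL_RULES:
        jamo = jamo.replace(src, dst)
    return jamo

def romanize(jamo):
    """Transcribe a jamo string to Roman letters via its phonetic form."""
    return "".join(ROMAN[ch] for ch in to_phonetic(jamo))
```

A dictionary lookup for exceptional words, as the paper describes, would be consulted before these rules are applied.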


A System of English Vowel Transcription Based on Acoustic Properties (영어 모음음소의 표기체계에 관한 연구)

  • Kim, Dae-Won
    • Proceedings of the KSLP Conference
    • /
    • 2003.11a
    • /
    • pp.170-173
    • /
    • 2003
  • There are more than five systems for transcribing English vowels. Because of this diversity, teachers and students of English are confronted with no small number of problems with the English vowel symbols used in English-Korean dictionaries, English textbooks, and books on phonetics and phonology. This study was designed to suggest criteria for the phonemic transcription of English vowels on the basis of the vowels' phonetic properties, and a system of English vowel transcription based on those criteria, in order to minimize the problems caused by inter-system differences. A speaker (phonetician) of RP English uttered a series of isolated minimal pairs containing the vowels in question. The suggested vowel symbols are as follows: 1) Simple vowels: /iː/ in beat, /ɪ/ bit, /ɛ/ bet, /æ/ bat, /ɑː/ father, /ɒ‖ɑ/ bod, /ɔː/ bawd, /ʊ/ put, /uː/ boot, /ʌ/ but, /ə/ about, and /ɜː‖ɜːr/ bird. 2) Diphthongs: /aɪ/ in bite, /aʊ/ bout, /ɔɪ/ boy, /əʊ‖oʊ/ boat, /eɪ/ bait, /eə‖eər/ air, /ʊə‖ʊər/ poor, /ɪə‖ɪər/ beer. Where two symbols are shown for the vowel in a single word, the first is appropriate for most speakers of British English and the second for most speakers of American English.


The Vowel System of American English and Its Regional Variation (미국 영어 모음 체계의 몇 가지 지역 방언적 차이)

  • Oh, Eun-Jin
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.69-87
    • /
    • 2006
  • This study aims to describe the vowel system of present-day American English and to discuss some of its phonetic variation due to regional differences. Fifteen speakers of American English from various regions of the United States produced the monophthongs of English. Vowel duration and the frequencies of the first and second formants were measured. The results indicate that the distinction between the vowels [ɔ] and [ɑ] has merged in most parts of the U.S., except for some speakers from the eastern and southeastern parts, resulting in a general loss of the phonemic distinction between the two vowels. This merger can be interpreted as the result of the relatively small functional load of the [ɔ]-[ɑ] contrast, and of the back vowel space being smaller than the front vowel space. The study also shows that the F2 frequencies of the high back vowel [u] were extremely high for most speakers from the eastern U.S., resulting in an overall reduction of their acoustic space for high vowels. From the viewpoint of the Adaptive Dispersion Theory proposed by Liljencrants & Lindblom (1972) and Lindblom (1986), the high back vowel [u] appears to have been fronted to satisfy economy of articulatory gesture to some extent without blurring the contrast between [i] and [u] in the high vowel region.
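
A merger analysis like the one above boils down to comparing two vowel categories' mean positions in F1/F2 space. A minimal sketch; the distance threshold is an assumption for illustration, not a value from the study.

```python
import math

MERGE_THRESHOLD_HZ = 100.0  # hypothetical decision threshold

def mean_formants(tokens):
    """Average (F1, F2) over a list of measured tokens."""
    n = len(tokens)
    return (sum(t[0] for t in tokens) / n, sum(t[1] for t in tokens) / n)

def is_merged(vowel_a_tokens, vowel_b_tokens):
    """Treat two vowel categories as merged if their mean F1/F2 points are close."""
    f1a, f2a = mean_formants(vowel_a_tokens)
    f1b, f2b = mean_formants(vowel_b_tokens)
    return math.hypot(f1a - f1b, f2a - f2b) < MERGE_THRESHOLD_HZ
```

A real study would additionally test overlap of the token distributions per speaker rather than compare means alone.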


A System of English Vowel Transcription Based on Acoustic Properties (영어 모음음소의 표기체계에 관한 연구)

  • Kim, Dae-Won
    • Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.73-79
    • /
    • 2003
  • There are more than five systems for transcribing English vowels. Because of this diversity, teachers and students of English are confronted with no small number of problems with the English vowel symbols used in English-Korean dictionaries, English textbooks, and books on phonetics and phonology. This study was designed to suggest criteria for the phonemic transcription of English vowels on the basis of the vowels' phonetic properties, and a system of English vowel transcription based on those criteria, in order to minimize the problems caused by inter-system differences. A speaker (phonetician) of RP English uttered a series of isolated minimal pairs containing the vowels in question. The suggested vowel symbols are as follows: (1) Simple vowels: /iː/ in beat, /ɪ/ bit, /ɛ/ bet, /æ/ bat, /ɑː/ father, /ɒ‖ɑ/ bod, /ɔː/ bawd, /ʊ/ put, /uː/ boot, /ʌ/ but, /ə/ about, and /ɜː‖ɜːr/ bird. (2) Diphthongs: /aɪ/ in bite, /aʊ/ bout, /ɔɪ/ boy, /əʊ‖oʊ/ boat, /eɪ/ bait, /eə‖eər/ air, /ʊə‖ʊər/ poor, /ɪə‖ɪər/ beer. Where two symbols are shown for the vowel in a single word, the first is appropriate for most speakers of British English and the second for most speakers of American English.


Implementation of Text-to-Audio Visual Speech Synthesis Using Key Frames of Face Images (키프레임 얼굴영상을 이용한 시청각음성합성 시스템 구현)

  • Kim MyoungGon;Kim JinYoung;Baek SeongJoon
    • MALSORI
    • /
    • no.43
    • /
    • pp.73-88
    • /
    • 2002
  • In this paper, a key-frame-based lip-synch algorithm using RBFs (radial basis functions) is presented for natural facial synthesis. For lip synthesis, we derive viseme range parameters from the phoneme and duration information produced by the text-to-speech (TTS) system, and we extract the viseme information corresponding to each phoneme from an AV DB. We apply a dominance function to model the coarticulation phenomenon, and bilinear interpolation to reduce computation time. Lip-synch is then performed by playing the images interpolated between phonemes together with the TTS speech output.
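
The interpolation step above — blending lip parameters between two viseme key frames — can be sketched as a simple linear blend. In the paper's method each viseme would additionally be weighted by its dominance function to model coarticulation; here a single blend factor `t` stands in for that.

```python
def interpolate_viseme(key_a, key_b, t):
    """Linearly interpolate two lip-parameter vectors (0 <= t <= 1).

    key_a, key_b -- lip parameters of the two viseme key frames
    t            -- position between the frames (0 = key_a, 1 = key_b)
    """
    return [(1 - t) * a + t * b for a, b in zip(key_a, key_b)]
```

Playing the frames produced for successive values of `t`, synchronized to the phoneme durations from the TTS system, yields the lip-synch animation.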


A Study on the Characteristics of Segmental-Feature HMM (분절특징 HMM의 특성에 관한 연구)

  • Yun Young-Sun;Jung Ho-Young
    • MALSORI
    • /
    • no.43
    • /
    • pp.163-178
    • /
    • 2002
  • In this paper, we discuss the characteristics of the Segmental-Feature HMM (SFHMM) and summarize previous studies of it. Previous studies took several approaches to reducing the number of parameters; however, as the number of parameters decreased, system performance fell as well. We therefore consider a fast computation approach that preserves the number of parameters. In this paper, we present a new segment comparison method that speeds up the computation of the SFHMM without loss of performance. The proposed method uses a three-frame calculation rather than the full five frames of a given segment. The experimental results show that the proposed system performs better than those of the previous studies.
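
The three-frame idea above can be sketched as scoring a segment on only its first, middle, and last frames instead of all five. Here `frame_score` stands in for a per-frame observation log-probability; this is an illustration of the subsampling principle, not the paper's exact scoring formula.

```python
def segment_score(frames, frame_score, use_all=False):
    """Average per-frame scores over a segment.

    frames      -- the frames of one segment (five in the paper's setting)
    frame_score -- callable giving one frame's observation score
    use_all     -- False: use only first, middle, and last frames
    """
    idx = range(len(frames)) if use_all else [0, len(frames) // 2, len(frames) - 1]
    picked = [frames[i] for i in idx]
    return sum(frame_score(f) for f in picked) / len(picked)
```

The three-frame path touches 3/5 of the frames per segment, which is where the computational saving comes from.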


A Korean TTS System for Educational Purpose (교육용 한국어 TTS 플랫폼 개발)

  • Lee Jungchul;Lee Sangho
    • MALSORI
    • /
    • no.50
    • /
    • pp.41-50
    • /
    • 2004
  • Recently, there has been considerable progress in natural language processing and digital signal processing components, and this progress has improved the synthetic speech quality of many commercial TTS systems. But many obstacles to the practical application of TTS still remain. To resolve these problems, cooperative research among the related areas is highly required, and a common Korean TTS platform is essential to promote such activities. This platform offers a general framework for building Korean speech synthesis systems, and full C/C++ sources for its modules help researchers implement and test their own algorithms. In this paper we describe the features of the Korean TTS platform to be developed and the development plan.
