• Title/Summary/Keyword: Speech sound segmentation

Search results: 11

Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok; Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to divide video into meaningful segments, i.e., content-based video segmentation. Since the audio in the sound track is synchronized with the image sequence of the video program, the speech signal in the sound track can be used to segment the video into meaningful units. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition; accurate time information for video segmentation is then obtained from the recognition process. In a segmentation experiment on TV news programs, the method successfully produced 56 video summaries from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation. A minimal sketch of the caption-to-time alignment idea follows this entry.

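The time-stamping idea can be illustrated with a minimal sketch, assuming a recognizer or forced aligner that already emits word-level timestamps. The data layout and the `caption_blocks_to_segments` helper below are hypothetical and only show the general mechanism, not the authors' implementation.

```python
# Sketch: map each closed-caption block (e.g., one news story) to a time
# interval by greedily consuming a word-level alignment of the sound track.
from typing import List, Tuple

Word = Tuple[str, float, float]  # (word, start_sec, end_sec)

def caption_blocks_to_segments(caption_blocks: List[List[str]],
                               aligned_words: List[Word]) -> List[Tuple[float, float]]:
    """Each (non-empty) caption block gets the start time of its first word
    and the end time of its last word from the aligned word stream."""
    segments = []
    idx = 0
    for block in caption_blocks:
        start = aligned_words[idx][1]
        idx = min(idx + len(block), len(aligned_words)) - 1
        end = aligned_words[idx][2]
        segments.append((start, end))
        idx += 1
    return segments

# Toy example: two caption blocks over a six-word alignment.
words = [("news", 0.0, 0.4), ("one", 0.4, 0.7), ("ends", 0.7, 1.1),
         ("story", 5.0, 5.4), ("two", 5.4, 5.8), ("ends", 5.8, 6.2)]
print(caption_blocks_to_segments([["news", "one", "ends"],
                                  ["story", "two", "ends"]], words))
# -> [(0.0, 1.1), (5.0, 6.2)]
```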

A Method of Sound Segmentation in Time-Frequency Domain Using Peaks and Valleys in Spectrogram for Speech Separation (음성 분리를 위한 스펙트로그램의 마루와 골을 이용한 시간-주파수 공간에서 소리 분할 기법)

  • Lim, Sung-Kil; Lee, Hyon-Soo
    • The Journal of the Acoustical Society of Korea / v.27 no.8 / pp.418-426 / 2008
  • In this paper, we propose an algorithm for frequency-channel segmentation using peaks and valleys in the spectrogram. A frequency-channel segment is a local group of channels in the frequency domain that is likely to have arisen from the same sound source. The proposed algorithm operates on the smoothed spectrum of the input sound: peaks in the smoothed spectrum determine the centers of segments, and valleys determine their boundaries. To evaluate the suitability of the proposed segmentation algorithm before the grouping stage is applied, we compare results synthesized with an ideal mask to those obtained with the proposed algorithm. Simulations are performed on speech mixed with narrowband noise, wideband noise, and other speech signals. A minimal sketch of the peak/valley rule is given below this entry.
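
The peak/valley rule can be sketched in a few lines, assuming SciPy is available and a single spectral frame is given; this illustrates the principle only, not the authors' implementation, and the smoothing width is an arbitrary choice.

```python
# Sketch: smooth one spectral frame, take peaks as segment centers and
# valleys (peaks of the negated spectrum) as segment boundaries.
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import uniform_filter1d

def segment_channels(spectrum: np.ndarray, smooth_width: int = 9):
    """Return (peak_bins, valley_bins) for one smoothed spectral frame."""
    smoothed = uniform_filter1d(spectrum, size=smooth_width)
    peaks, _ = find_peaks(smoothed)      # candidate segment centers
    valleys, _ = find_peaks(-smoothed)   # candidate segment boundaries
    return peaks, valleys

# Toy spectrum: two spectral humps separated by a dip.
freq_bins = np.linspace(0, 1, 256)
toy = np.exp(-((freq_bins - 0.3) / 0.05) ** 2) + 0.7 * np.exp(-((freq_bins - 0.7) / 0.05) ** 2)
centers, boundaries = segment_channels(toy)
print(centers, boundaries)
```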

Effects of the Orthographic Representation on Speech Sound Segmentation in Children Aged 5-6 Years (5~6세 아동의 철자표상이 말소리분절 과제 수행에 미치는 영향)

  • Maeng, Hyeon-Su; Ha, Ji-Wan
    • Journal of Digital Convergence / v.14 no.6 / pp.499-511 / 2016
  • The aim of this study was to examine the effect of orthographic representation on speech sound segmentation performance. Children's performance on the orthographic representation task and the speech sound segmentation task was positively correlated for words with phoneme-grapheme correspondence and negatively correlated for words without phoneme-grapheme correspondence. For words with phoneme-grapheme correspondence, there was no difference in performance between the groups with high and low orthographic representation, whereas for words without phoneme-grapheme correspondence, the low-level group performed significantly better than the high-level group. The most frequent errors in both groups were orthographic conversion errors, and such errors were significantly more common in the high-level group. This study suggests that, from the time they begin to acquire orthographic knowledge, children draw on that knowledge when performing phonological awareness tasks.

Research about auto-segmentation via SVM (SVM을 이용한 자동 음소분할에 관한 연구)

  • 권호민; 한학용; 김창근; 허강인
    • Proceedings of the IEEK Conference / 2003.07e / pp.2220-2223 / 2003
  • In this paper, we use Support Vector Machines (SVMs), a recently proposed learning method, to divide continuous speech into phoneme units (initial, medial, and final sounds), and then perform continuous speech recognition on the result. Phoneme boundaries are determined by an algorithm that takes the most frequent class label within a short interval. Recognition is performed with a continuous hidden Markov model (CHMM), and the automatic segmentation is compared with phoneme boundaries obtained by visual inspection. The experiments confirm that the proposed SVM-based method is more effective for initial sounds than Gaussian Mixture Models (GMMs). A rough sketch of this frame-classification scheme follows this entry.

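A rough sketch of frame-wise SVM classification followed by a maximum-frequency (majority-vote) boundary rule is shown below; the synthetic 13-dimensional "MFCC" frames, window length, and class labels are assumptions made for illustration, not the paper's actual features or data.

```python
# Sketch: classify each speech frame into initial/medial/final classes with
# an SVM, then place boundaries where the majority label over a short
# interval changes.
import numpy as np
from sklearn.svm import SVC

def majority_smooth(labels: np.ndarray, win: int = 5) -> np.ndarray:
    """Replace each frame label by the most frequent label in a short window."""
    half = win // 2
    out = labels.copy()
    for i in range(len(labels)):
        window = labels[max(0, i - half): i + half + 1]
        out[i] = np.bincount(window).argmax()
    return out

def boundaries(labels: np.ndarray) -> np.ndarray:
    """Frame indices where the smoothed class label changes."""
    return np.where(np.diff(labels) != 0)[0] + 1

# Toy data: 300 random 13-dim "MFCC" frames with synthetic class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13)) + np.repeat([0.0, 2.0, 4.0], 100)[:, None]
y = np.repeat([0, 1, 2], 100)               # 0=initial, 1=medial, 2=final

clf = SVC(kernel="rbf").fit(X, y)           # frame-level SVM
pred = majority_smooth(clf.predict(X))
print(boundaries(pred))                     # expected near frames 100 and 200
```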

Korean Speech Segmentation and Recognition by Frame Classification via GMM (GMM을 이용한 프레임 단위 분류에 의한 우리말 음성의 분할과 인식)

  • 권호민; 한학용; 고시영; 허강인
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.18-21 / 2003
  • Dividing continuous speech into short intervals of uniform phonemic quality has generally been considered a difficult problem. In this paper, we use a Gaussian Mixture Model (GMM), a probability density model, to divide speech into phoneme units (initial, medial, and final sounds), and then perform continuous speech recognition on the result. Phoneme boundaries are determined by an algorithm that takes the most frequent class label within a short interval. Recognition is performed with a continuous hidden Markov model (CHMM), and the automatic segmentation is compared with phoneme boundaries obtained by visual inspection. The experimental results confirm that the presented method performs comparatively well for automatic segmentation of Korean speech. A minimal sketch of per-class GMM frame scoring follows this entry.

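A minimal sketch of the per-class GMM scoring step is given below, using synthetic frames and labels assumed purely for illustration; boundary smoothing could then be applied exactly as in the SVM sketch above.

```python
# Sketch: fit one Gaussian mixture per class (initial/medial/final) and label
# each frame by the class whose mixture gives the highest log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 13)) + np.repeat([0.0, 2.0, 4.0], 100)[:, None]
y = np.repeat([0, 1, 2], 100)

# One small GMM per class, trained on that class's frames only.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in range(3)]

# score_samples returns the per-frame log-likelihood under each class model.
loglik = np.stack([g.score_samples(X) for g in gmms], axis=1)   # shape (300, 3)
frame_labels = loglik.argmax(axis=1)
print((frame_labels == y).mean())                               # frame accuracy on toy data
```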

Phoneme segmentation and Recognition using Support Vector Machines (Support Vector Machines에 의한 음소 분할 및 인식)

  • Lee, Gwang-Seok; Kim, Deok-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.981-984 / 2010
  • In this paper, we use Support Vector Machines (SVMs), a learning method, to segment continuous speech into phoneme units (initial, medial, and final sounds), and then perform continuous speech recognition on the result. Phoneme boundaries are determined by an algorithm that takes the most frequent class label within a short interval. Speech recognition is performed with a continuous hidden Markov model (CHMM), and the automatic segmentation is compared with phoneme boundaries obtained by visual inspection. The simulation results confirm that the proposed SVM-based method is more effective for initial sounds than Gaussian Mixture Models (GMMs).

A Study on Consonant/Vowel/Unvoiced Consonant Phonetic Value Segmentation and Recognition of Korean Isolated Word Speech (한국어 고립 단어 음성의 자음/모음/유성자음 음가 분할 및 인식에 관한 연구)

  • Lee, Jun-Hwan; Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society / v.7 no.6 / pp.1964-1972 / 2000
  • Acoustically, Korean produces phonetic values that differ from the underlying phonemes because of the language's own sound-change properties. Therefore, constructing an extended recognition system for Korean requires a study of a Korean rule-based system, which can then serve as post-processing for a Korean recognition system. In this paper, a text-based Korean rule-based system incorporating Korean sound-change rules is constructed. Based on the text-based phonetic values produced by this system, preliminary segmentation boundary points with non-uniform blocks are extracted from Korean isolated word speech. By merging and recognizing the non-uniform blocks between the extracted boundary points, the feasibility of recognizing Korean speech in the form of phonetic values is investigated.

Retrieval of Player Event in Golf Videos Using Spoken Content Analysis (음성정보 내용분석을 통한 골프 동영상에서의 선수별 이벤트 구간 검색)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.674-679 / 2009
  • This paper proposes a method for retrieving player events in golf videos by combining two functions: detection of player names in the speech information and detection of sound events in the audio information. The system consists of an indexing module and a retrieval module. At indexing time, audio segmentation and noise reduction are applied to the audio stream demultiplexed from the golf video, and the noise-reduced speech is fed into a speech recognizer, which outputs spoken descriptors. Player names and sound events are indexed by these spoken descriptors. At search time, the text query is converted into phoneme sequences, and the lists for each query term are retrieved through a description matcher to identify full and partial phrase hits. For player-name retrieval, the paper compares word-based, phoneme-based, and hybrid approaches. A minimal sketch of phoneme-sequence matching is given below this entry.
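
The full/partial phrase-hit step on the retrieval side can be sketched as simple phoneme-subsequence matching; the index contents, clip identifiers, and the `min_partial` threshold below are hypothetical, and the real system's description matcher is certainly more elaborate.

```python
# Sketch: a query term converted to a phoneme sequence is matched against
# indexed spoken descriptors, reporting full hits and partial hits.
from typing import Dict, List, Tuple

# Hypothetical index: descriptor id -> recognized phoneme sequence.
index: Dict[str, List[str]] = {
    "clip_017": ["t", "ai", "g", "er", "w", "u", "d", "z"],
    "clip_042": ["p", "a", "k", "s", "e", "r", "i"],
}

def find_hits(query_phones: List[str], index: Dict[str, List[str]],
              min_partial: int = 3) -> List[Tuple[str, str]]:
    """Return (descriptor_id, 'full'|'partial') for phoneme-sequence matches."""
    hits = []
    q = len(query_phones)
    for doc_id, phones in index.items():
        for start in range(len(phones) - min_partial + 1):
            matched = 0
            while (matched < q and start + matched < len(phones)
                   and phones[start + matched] == query_phones[matched]):
                matched += 1
            if matched == q:
                hits.append((doc_id, "full"))
                break
            if matched >= min_partial:
                hits.append((doc_id, "partial"))
                break
    return hits

print(find_hits(["t", "ai", "g", "er"], index))     # full hit in clip_017
print(find_hits(["p", "a", "k", "s", "o"], index))  # partial hit in clip_042
```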

A Study of Correlation Between Phonological Awareness and Word Identification Ability of Hearing Impaired Children (청각장애 아동의 음운인식 능력과 단어확인 능력의 상관연구)

  • Kim, Yu-Kyung; Kim, Mun-Jung; Ahn, Jong-Bok; Seok, Dong-Il
    • Speech Sciences / v.13 no.3 / pp.155-167 / 2006
  • Children with hearing impairment possess poor underlying perceptual knowledge of the sound system and show delayed development of the segmental organization of that system. The purpose of this study was to investigate the relationship between phonological awareness and word identification ability in children with hearing impairment. Fourteen children with moderately severe hearing loss participated, and all tasks were administered individually. The phonological awareness tests consisted of syllable blending, syllable segmentation, syllable deletion, body-coda discrimination, phoneme blending, phoneme segmentation, and phoneme deletion. Closed-set monosyllabic words (12 items) and lists 1 and 2 of the open-set monosyllabic words in EARS-K were used for word identification. The results were as follows. First, among the phonological awareness tasks, closed-set word identification showed a high positive correlation with body-coda discrimination, phoneme blending, and phoneme deletion, while open-set word identification showed a high positive correlation with phoneme blending, phoneme deletion, and phoneme segmentation. Second, by level of phonological awareness, closed-set word identification was highly positively correlated with body-coda awareness and phoneme awareness, whereas open-set word identification was highly positively correlated only with phoneme awareness.

Ortho-phonic Alphabet Creation by the Musical Theory and its Segmental Algorithm (악리론으로 본 정음창제와 정음소 분절 알고리즘)

  • Chin, Yong-Ohk; Ahn, Cheong-Keung
    • Speech Sciences / v.8 no.2 / pp.49-59 / 2001
  • Phoneme segmentation is a very difficult problem in speech sound processing because a segmentation algorithm must cope with many kinds of allophones and with coarticulation; as a result, systems for speech recognition and voice retrieval have complex structures. To address this, we discuss the possibility of a new segmentation algorithm based on the plus-or-minus-one-third tripartitioning (삼분손익) of the twelve temperaments (12 율려), first proposed by Prof. T. S. Han; it is closely related to both Eastern and Western musical theory. He has also suggested three consonant and three vowel phonemes in Hunminjeongeum (훈민정음), invented by King Sejong in the 15th century. In this paper, we propose to name this unit the ortho-phonic phoneme (OPP/정음소), a term that carries the meaning of absoluteness and independence. The OPP is also applicable to other languages, for example via the IPA. Finally, we argue that this algorithm can be applied to languages in general and is useful for building speech recognition and retrieval systems.
