• Title/Summary/Keyword: Speaker identification


A Study on Improvement of Speaker Identification with Time axis Scaling (시간축 스케일링에 의한 화자 식별 개선에 관한 연구)

  • 정형교
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06c / pp.123-126 / 1998
  • Conventional speaker recognition systems based on DTW suffer from DTW's well-known drawback of excessive computation. This paper therefore proposes a method for a text-dependent speaker recognition system in which, before performing DTW for individual speakers with their pitch distributions, a time-axis scaling preprocessing step is carried out in advance to reduce the computation needed at recognition time, and DTW is then performed on the input signal only against the reduced set of reference patterns. In experiments, the proposed method reduced average processing time by 87.5% with almost no loss in recognition rate.
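The DTW matching this abstract builds on can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the paper's pitch-based time-axis scaling is not specified here, so it is stood in for by a simple uniform resampling assumption.

```python
import numpy as np

def dtw_distance(ref, test):
    """Classic dynamic time warping distance between two feature
    sequences, each a 2-D array of shape [frames, dims]."""
    n, m = len(ref), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(ref[i - 1] - test[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def time_scale(seq, target_len):
    """Uniform time-axis scaling by linear interpolation over frame
    indices (a stand-in for the paper's pitch-based scaling)."""
    idx = np.linspace(0, len(seq) - 1, target_len)
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return seq[lo] * (1 - frac) + seq[hi] * frac
```

Scaling every reference pattern toward the input's length before the full DTW pass shrinks the alignment search, which is the spirit of the preprocessing the abstract describes.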


Speaker Identification using Neural Network (신경회로망을 이용한 화자 식별)

  • 황영수
    • Proceedings of the Acoustical Society of Korea Conference / 1998.08a / pp.383-387 / 1998
  • This paper studies speaker identification using a neural network. For speaker identification, we examined identification performance using ARTMAP, a neural network known for strong pattern-recognition performance. The data used in the speaker identification experiments were vowels from 25.6 ms and 51.2 ms segments. Experimental results showed recognition rates from 80.7% to 98% depending on the input vowel, and the vowel '이' (/i/) gave the best speaker identification results.


Speaker Identification using Incremental Neural Network and LPCC (Incremental Neural Network 과 LPCC을 이용한 화자인식)

  • 허광승;박창현;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.12a / pp.341-344 / 2002
  • Speech carries speaker-specific characteristics. This paper introduces a speaker recognition system using neural-network-based incremental learning. Sentences recorded through a computer are transformed into the frequency domain by an FFT, and vowels are extracted using formants, which carry the vowels' characteristics. The extracted vowels are processed with LPC to obtain coefficients that capture the speaker's characteristics. Through the LPCC process and vector quantization, ten feature points are fed as input for training, and speaker recognition is performed by a neural network whose hidden and output layers grow with the number of speakers.
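The LPC-to-cepstrum feature step described in this abstract can be sketched as below, assuming the standard autocorrelation method with the Levinson-Durbin recursion; the vector quantization and the incremental network themselves are omitted, and none of this is the authors' implementation.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients of one windowed frame via the autocorrelation
    method and the Levinson-Durbin recursion. Returns a with a[0] = 1,
    so the analysis filter is A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] += k * a[:i][::-1]
        err *= 1.0 - k * k
    return a

def lpcc(a, n_ceps=10):
    """Cepstral coefficients of 1/A(z) computed recursively from the
    LPC polynomial a (the usual LPC-to-LPCC recursion)."""
    c = np.zeros(n_ceps + 1)
    for m in range(1, n_ceps + 1):
        acc = a[m] if m < len(a) else 0.0
        for j in range(1, m):
            acc += (j / m) * c[j] * (a[m - j] if m - j < len(a) else 0.0)
        c[m] = -acc
    return c[1:]
```

Running `lpcc(lpc(frame))` per frame and then quantizing the resulting vectors to a small codebook would yield fixed-size feature points of the kind the abstract feeds to its network.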

Analysis of unfairness of artificial intelligence-based speaker identification technology (인공지능 기반 화자 식별 기술의 불공정성 분석)

  • Shin Na Yeon;Lee Jin Min;No Hyeon;Lee Il Gu
    • Convergence Security Journal / v.23 no.1 / pp.27-33 / 2023
  • Digitalization driven by COVID-19 has rapidly advanced artificial intelligence-based voice recognition technology. However, if datasets are biased against some groups, this technology causes unfair social problems such as race and gender discrimination, and degrades the reliability and security of artificial intelligence services. In this work, we compare and analyze accuracy-based unfairness in biased data environments using VGGNet (Visual Geometry Group Network), ResNet (Residual Neural Network), and MobileNet, which are representative CNN (Convolutional Neural Network) models. Experimental results show that ResNet34 achieved the highest Top-1 accuracy for women and men, at 91% and 89.9% respectively, while ResNet18 showed the smallest accuracy difference between genders, at 1.8%. The per-model difference in accuracy between genders leads to differences in service quality and unfair outcomes between men and women when the service is used.
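The fairness measure this abstract compares, a per-group accuracy gap, can be computed as in the sketch below; the array names and example data are illustrative, not the paper's dataset.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Top-1 accuracy per group plus the absolute accuracy gap
    between the best- and worst-served groups."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    vals = list(accs.values())
    return accs, max(vals) - min(vals)
```

A smaller gap (such as the 1.8% reported for ResNet18) indicates more uniform service quality across groups even when overall accuracy is lower.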

A Case Study on Closed Captions: Focusing on <Squid Game> on Netflix (넷플릭스 <오징어 게임> 폐쇄자막 연구)

  • Jeong, Sua;Lee, Jimin
    • The Journal of the Convergence on Culture Technology / v.10 no.2 / pp.279-285 / 2024
  • This study aims to evaluate the accuracy and completeness of Korean and English closed captions for Netflix's "Squid Game" and to present implications based on the findings. To achieve this, the closed captioning guidelines of the U.S. Federal Communications Commission, DCMP, and the Korea Communications Commission were identified and analyzed. The analysis of the subtitles for the entire "Squid Game" series reveals that, while Korean closed captions accurately present slang and titles, they present non-existent information in speaker identification. In English closed captions, speaker identification guidelines are followed well, but omissions of slang and title mistranslations are observed. In terms of completeness, both Korean and English closed captions are found to omit certain audio parts. To address these issues, the study suggests strengthening the QA process, establishing a system to communicate original-text problems during translation, and utilizing general English subtitles.

Phonological Status of Korean /w/: Based on the Perception Test

  • Kang, Hyun-Sook
    • Phonetics and Speech Sciences / v.4 no.3 / pp.13-23 / 2012
  • The sound /w/ has traditionally been regarded as an independent segment in Korean regardless of the phonological contexts in which it occurs. There have been, however, some questions regarding whether it is an independent phoneme in /CwV/ context (cf. Kang 2006). The present pilot study examined how Korean /w/ is realized in /S*wV/ context by performing perception tests. Our assumption was that if Korean /w/ is part of the preceding complex consonant, like /Cʷ/, it should be more or less uniformly articulated and perceived as such; if /w/ is an independent segment, it will be realized with speaker variability. Experiments I and II examined the rates of identification as "labialized" for the spliced original stimuli /S*-V/ and /Sʷ*-ʷV/, and for the cross-spliced stimuli /Sʷ*-V/ and /S*-ʷV/. The results showed that the round qualities of /w/ are perceived at significantly different temporal points, with speaker and context variability. We therefore conclude that /w/ in /S*wV/ context is an independent segment, not a part of the preceding segment. A full-scale production test should be performed in the future to verify the conclusion suggested in this paper.

Noise Effects on Foreign Language Learning (소음이 외국어 학습에 미치는 영향)

  • Lim, Eun-Su;Kim, Hyun-Gi;Kim, Byung-Sam;Kim, Jong-Kyo
    • Speech Sciences / v.6 / pp.197-217 / 1999
  • In a noisy classroom, the acoustic-phonetic features of the teacher's speech and the perceptual features of learners change in comparison with a quiet environment. Acoustic analyses were carried out on a set of French monosyllables consisting of 17 consonants and three vowels /a, e, i/, produced by one male speaker talking in quiet and in 50, 60, and 70 dB SPL of masking noise over headphones. The acoustic analyses showed consistent differences in the energy and formant center-frequency amplitude of consonants and vowels, the F1 frequency of vowels, and the duration of voiceless stops, suggesting increased vocal effort. The perceptual experiments, in which 18 female undergraduate students learning French served as subjects, were conducted in quiet and in 50 and 60 dB of masking noise. Consonant identification scores were higher for Lombard speech than for normal speech, suggesting that the speaker's vocal effort helps overcome the masking effect of noise. With increased noise level, the perceptual responses to the French consonants tended to become more complex, and subjective reaction scores to the noise, expressed in vocabulary conveying an 'unpleasant' sensation, tended to be higher. From the viewpoint of L2 (second language) acquisition, the influence of L1 (first language) on L2 observed in the perceptual results supports the interference theory.


Comparison of Korean Speech De-identification Performance of Speech De-identification Model and Broadcast Voice Modulation (음성 비식별화 모델과 방송 음성 변조의 한국어 음성 비식별화 성능 비교)

  • Seung Min Kim;Dae Eol Park;Dae Seon Choi
    • Smart Media Journal / v.12 no.2 / pp.56-65 / 2023
  • In broadcasts such as news and investigative programs, voices are modulated to protect the identity of informants. Pitch adjustment is a commonly used modulation method, but it allows the voice to be easily restored to the original by re-adjusting the pitch. Since broadcast voice modulation therefore cannot properly protect the speaker's identity and is weak from a security standpoint, a new voice modulation method is needed to replace it. In this paper, using the Lightweight speech de-identification model as the evaluation target, we compare its speech de-identification performance with the pitch-modulation method used in broadcasting. Of the six modulation methods in the Lightweight model, we evaluated the de-identification performance on Korean speech with a human test and an EER (Equal Error Rate) test for three methods, McAdams, Resampling, and Vocal Tract Length Normalization (VTLN), compared against broadcast pitch modulation. Experimental results show that the VTLN modulation method achieved higher de-identification performance in both the human tests and the EER tests. The modulation methods of the Lightweight model therefore offer sufficient de-identification performance for Korean speech and can replace the security-weak broadcast voice modulation.
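The EER criterion used in this abstract can be computed from sets of genuine (same-speaker) and impostor (different-speaker) verification scores; the sketch below is a generic illustration of the metric, not the paper's evaluation code.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false-acceptance rate
    (impostor scores above threshold) equals the false-rejection rate
    (genuine scores below threshold). Sweeps candidate thresholds and
    returns the error rate at the closest FAR/FRR crossing."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    best_diff, eer = float("inf"), 1.0
    for t in thresholds:
        far = float(np.mean(impostor >= t))
        frr = float(np.mean(genuine < t))
        if abs(far - frr) < best_diff:
            best_diff = abs(far - frr)
            eer = (far + frr) / 2.0
    return eer
```

For de-identification, a higher EER against a speaker-verification system indicates stronger anonymization, since the modulated voice becomes harder to link to the original speaker.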

A Method for Clustering Noun Phrases into Coreferents for the Same Person in Novels Translated into Korean (한국어 번역 소설에서 인물명 명사구의 동일인물 공통참조 클러스터링 방법)

  • Park, Taekeun;Kim, Seung-Hoon
    • Journal of Korea Multimedia Society / v.20 no.3 / pp.533-542 / 2017
  • Novels include various character names, depending on the genre and the spatio-temporal background of the novels and the nationality of characters. Besides, characters and their names in a novel are created by the author's pen and imagination. As a result, any proper noun dictionary cannot include all kinds of character names. In addition, the novels translated into Korean have character names consisting of two or more nouns (such as "Harry Potter"). In this paper, we propose a method to extract noun phrases for character names and to cluster the noun phrases into coreferents for the same character name. In the extraction of noun phrases, we utilize KKMA morpheme analyzer and CPFoAN character identification tool. In clustering the noun phrases into coreferents, we construct a directed graph with the character names extracted by CPFoAN and the extracted noun phrases, and then we create name sets for characters by traversing connected subgraphs in the directed graph. With four novels translated into Korean, we conduct a survey to evaluate the proposed method. The results show that the proposed method will be useful for speaker identification as well as for constructing the social network of characters.
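The clustering step this abstract describes, traversing connected subgraphs to collect each character's name set, can be sketched as follows. This is an illustrative simplification: the paper builds a directed graph from CPFoAN/KKMA output, while here the edges are plain name-variant pairs and components are found over an undirected view.

```python
from collections import defaultdict

def coreference_clusters(edges):
    """Group noun phrases into coreferent sets: each edge links a noun
    phrase to a character name it refers to, and each connected
    component of the resulting graph becomes one character's name set."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        clusters.append(comp)
    return clusters
```

For example, edges ("Harry", "Harry Potter"), ("Mr. Potter", "Harry Potter"), ("Ron", "Ron Weasley") yield two clusters, one per character.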

Modified HMM Decoder based on Observation Confidence for Speaker Identification (화자인식을 위한 관측신뢰도 기반 변형된 HMM 디코더)

  • Tariquzzaman, Md.;Min, So-Hui;Kim, Jin-Yeong;Na, Seung-Yu
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.443-446 / 2007
  • Speech signals are distorted by noise or by the characteristics of the transmission channel, and distorted speech severely degrades the performance of speech recognition and speaker recognition. To overcome this problem, this paper adapts the SNR-based confidence weighting technique previously applied to Gaussian mixture models (GMM) [1][2] to a hidden Markov model (HMM) decoder. The HMM decoder is modified by weighting the per-state observation probabilities with the confidence measure presented in [1]. To verify the performance of the proposed method, text-dependent speaker identification experiments were conducted on the Korean mobile-phone speech database for speaker recognition built by ETRI. Experimental results confirmed that the proposed method substantially improves the speaker recognition rate over the conventional method.
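The decoder modification described here, scaling each frame's observation likelihood by a reliability weight, can be sketched in a Viterbi pass as below. The SNR-based confidence measure itself is defined in the cited references, so it appears here only as a given per-frame weight; this is an illustration of the weighting idea, not the authors' system.

```python
import numpy as np

def viterbi_weighted(log_A, log_pi, log_B, conf):
    """Best-path Viterbi score where each frame's observation
    log-likelihoods log_B[t] (shape [T, N] over N states) are scaled
    by a confidence weight conf[t] in [0, 1], so unreliable (e.g.
    low-SNR) frames influence the decoded path less."""
    T, N = log_B.shape
    delta = log_pi + conf[0] * log_B[0]
    for t in range(1, T):
        # delta[i] + log_A[i, j]: best score of reaching state j from i
        delta = np.max(delta[:, None] + log_A, axis=0) + conf[t] * log_B[t]
    return float(np.max(delta))
```

With conf fixed at 1 this reduces to standard Viterbi decoding; as conf drops toward 0 for a noisy frame, the path decision for that frame is driven by the transition model instead of the distorted observation.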
