• Title/Summary/Keyword: Speaker recognition

Pitch Period Detection Algorithm Using Rotation Transform of AMDF (AMDF의 회전변환을 이용한 피치 주기 검출 알고리즘)

  • Seo, Hyun-Soo;Bae, Sang-Bum;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.1019-1022 / 2005
  • As information and communication technology develops rapidly, a great deal of research related to speech signal processing has been conducted. The pitch period is an important factor in many application fields such as speech recognition, speaker identification, and speech analysis and synthesis. Accordingly, many pitch detection algorithms have been proposed in both the time and frequency domains. The AMDF (average magnitude difference function), one of the time-domain pitch detection algorithms, chooses the time interval from valley to valley as the pitch period, but selecting the valley point for pitch detection increases the complexity of the algorithm. In this paper, we therefore propose a pitch detection algorithm using a rotation transform of the AMDF, which takes the global minimum valley point as the pitch period and sets a threshold for the phoneme in the beginning portion to exclude it from pitch period selection, and we compare the proposed method with existing methods through simulation.

  • PDF
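The global-minimum-valley idea in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the linear tilt compensation below merely stands in for the rotation transform, and the frame length, search band, and synthetic test tone are all assumptions.

```python
import numpy as np

def amdf(frame, max_lag):
    """Average magnitude difference function for lags 1..max_lag."""
    n = len(frame)
    return np.array([np.abs(frame[:n - lag] - frame[lag:]).mean()
                     for lag in range(1, max_lag + 1)])

def pitch_period(frame, fs, f_lo=70.0, f_hi=400.0):
    """Take the global minimum valley of a tilt-compensated AMDF as the
    pitch period, with no explicit valley-picking logic."""
    max_lag = int(fs / f_lo)                       # longest period searched
    min_lag = int(fs / f_hi)                       # shortest period searched
    d = amdf(frame, max_lag)
    slope = (d[-1] - d[0]) / (len(d) - 1)          # overall tilt of the AMDF
    d_rot = d - slope * np.arange(len(d))          # level it (stand-in for rotation)
    return min_lag + int(np.argmin(d_rot[min_lag:])) + 1  # index k is lag k+1

fs = 8000
t = np.arange(int(0.05 * fs)) / fs                 # one 50 ms frame
frame = np.sin(2 * np.pi * 125 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)
print(pitch_period(frame, fs))                     # ~64 samples, i.e. ~125 Hz
```

Leveling the AMDF's slope is what lets a plain `argmin` replace the valley-selection logic whose complexity the abstract criticizes.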

A Study on the Robust Pitch Period Detection Algorithm in Noisy Environments (소음환경에 강인한 피치주기 검출 알고리즘에 관한 연구)

  • Seo Hyun-Soo;Bae Sang-Bum;Kim Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2006.05a / pp.481-484 / 2006
  • Pitch period detection algorithms are applied to various speech signal processing fields such as speech recognition, speaker identification, and speech analysis and synthesis, and many pitch detection algorithms in the time and frequency domains have been studied to date. The AMDF (average magnitude difference function), one such pitch period detection algorithm, chooses the time interval from valley point to valley point as the pitch period. The AMDF is fast to compute, but selecting the valley point for pitch detection increases the complexity of the algorithm. To be applied in the real world, pitch period detection algorithms must be robust against noise such as that generated in subway environments. In this paper, we propose a modified AMDF algorithm that detects the global minimum valley point as the pitch period of speech signals, and we use speech signals recorded in noisy environments as test signals.

  • PDF

Major Character Extraction using Character-Net (Character-Net을 이용한 주요배역 추출)

  • Park, Seung-Bo;Kim, Yoo-Won;Jo, Geun-Sik
    • Journal of Internet Computing and Services / v.11 no.1 / pp.85-102 / 2010
  • In this paper, we propose Character-Net, a novel method of analyzing video and representing the relationships among characters based on their contexts in the video sequences. As a huge amount of video content is generated every day, technologies for searching and summarizing that content have become an issue, and a number of studies have been proposed on extracting semantic information from videos or scenes. Generally, the stories of videos such as TV serials or commercial movies progress through their characters. Accordingly, the relationships between the characters and their contexts must be identified to summarize a video. To deal with these issues, we propose Character-Net, which supports the extraction of major characters in video. We first identify the characters appearing in a group of video shots and subsequently extract the speaker and listeners in those shots. Finally, the characters are represented as a network of graphs presenting the relationships among them. We present empirical experiments to demonstrate Character-Net and evaluate the performance of major character extraction.
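The final step described in the abstract can be illustrated with a toy sketch: given per-shot speaker/listener annotations (which Character-Net derives automatically from the video), build a weighted directed network and rank characters. The input format and the simple appearance-count ranking are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def build_character_net(shots):
    """Build a speaker->listener network from annotated shots and rank
    characters by how many shots they appear in. `shots` is a list of
    (speaker, [listeners]) pairs; the ranking rule is an assumption."""
    edges = defaultdict(int)        # (speaker, listener) -> dialogue count
    appearances = defaultdict(int)  # character -> shots appeared in
    for speaker, listeners in shots:
        appearances[speaker] += 1
        for listener in listeners:
            appearances[listener] += 1
            edges[(speaker, listener)] += 1
    major = sorted(appearances, key=appearances.get, reverse=True)
    return dict(edges), major

shots = [("Ann", ["Bob"]), ("Bob", ["Ann", "Eve"]),
         ("Ann", ["Eve"]), ("Eve", ["Ann"])]
edges, major = build_character_net(shots)
print(major[0])   # Ann appears in every shot
```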

Comparison of Classification Performance Between Adult and Elderly Using Acoustic and Linguistic Features from Spontaneous Speech (자유대화의 음향적 특징 및 언어적 특징 기반의 성인과 노인 분류 성능 비교)

  • SeungHoon Han;Byung Ok Kang;Sunghee Dong
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.365-370 / 2023
  • This paper compares the performance of classifying speech data into two groups, adult and elderly, based on the acoustic and linguistic characteristics that change with aging, such as changes in respiratory patterns, phonation, pitch, frequency, and language expression ability. For acoustic features, we used attributes related to the frequency, amplitude, and spectrum of speech. For linguistic features, we extracted hidden-state vector representations containing contextual information from the transcriptions of speech utterances using KoBERT, a Korean pre-trained language model that has shown excellent performance in natural language processing tasks. The classification performance of each model trained on acoustic and linguistic features was evaluated, and the F1 scores of each model for the two classes were examined after addressing the class imbalance problem by down-sampling. The experimental results showed that linguistic features classified adult and elderly speech better than acoustic features, and that even when the class proportions were equal, the classification performance for adults was higher than that for the elderly.
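The down-sampling step mentioned in the abstract can be sketched as follows; the toy data sizes and the random seed are arbitrary, and the paper's exact balancing procedure is not specified beyond "down-sampling".

```python
import numpy as np

def downsample_to_balance(X, y, seed=0):
    """Randomly down-sample every class to the size of the smallest one,
    so per-class F1 scores are compared on balanced data."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    keep = np.concatenate([rng.choice(np.where(y == c)[0], size=n, replace=False)
                           for c in classes])
    return X[keep], y[keep]

X = np.arange(28).reshape(14, 2)     # 14 toy feature vectors
y = np.array([0] * 10 + [1] * 4)     # imbalanced labels: 10 vs 4
Xb, yb = downsample_to_balance(X, y)
print(np.bincount(yb))               # [4 4]
```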

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na;Bowon Lee
    • Phonetics and Speech Sciences / v.15 no.2 / pp.43-51 / 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances as well as the age and gender of the speakers. Dialect classification is an important technique for speech analysis; for example, an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep learning based on Mel-Frequency Cepstral Coefficient (MFCC) features has been the dominant approach. We focus on the acoustic differences between regions and conduct dialect classification based on features derived from those differences. Specifically, we extract underexplored additional features, namely the speed and the pauses of speech utterances, along with metadata including the age and gender of the speakers. Experimental results show that our approach yields higher accuracy than the method using only MFCC features, especially with the speech rate feature: accuracy improved from 91.02% to 97.02% when all the features proposed in this paper were incorporated.
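The pause-length feature can be illustrated with a simple energy-based silence detector. The frame sizes, threshold rule, and definition of a pause below are assumptions, not the paper's; the speech-rate feature would additionally require a syllable count from the transcript.

```python
import numpy as np

def pause_features(signal, fs, frame_ms=25, hop_ms=10, thresh_ratio=0.1):
    """Detect pauses as runs of low-energy frames and summarize them."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    energy = np.array([np.sum(signal[i:i + frame] ** 2)
                       for i in range(0, len(signal) - frame, hop)])
    silent = energy < thresh_ratio * energy.max()
    pauses, run = [], 0                   # pause lengths in seconds
    for s in silent:
        if s:
            run += 1
        elif run:
            pauses.append(run * hop_ms / 1000.0)
            run = 0
    if run:
        pauses.append(run * hop_ms / 1000.0)
    return {"num_pauses": len(pauses),
            "mean_pause_s": float(np.mean(pauses)) if pauses else 0.0,
            "pause_ratio": sum(pauses) * fs / len(signal)}

# Toy utterance: 0.3 s tone, 0.2 s silence, 0.3 s tone.
fs = 8000
tone = lambda d: np.sin(2 * np.pi * 220 * np.arange(int(d * fs)) / fs)
speech = np.concatenate([tone(0.3), np.zeros(int(0.2 * fs)), tone(0.3)])
print(pause_features(speech, fs)["num_pauses"])   # 1
```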

Fundamental Frequency Estimation of Voiced Speech Signals Based on the Inflection Point Detection (변곡점 검출에 기반한 음성의 기본 주파수 추정)

  • Byeonggwan Iem
    • Journal of IKEEE / v.27 no.4 / pp.472-476 / 2023
  • The fundamental frequency and pitch period are major characteristics of speech signals. They are used in many speech applications such as speech coding, speech recognition, and speaker identification. In this paper, inflection points are used to estimate the pitch period, the inverse of the fundamental frequency. Inflection points are defined as points where local maxima, local minima, or slope changes occur. The speech signal is preprocessed with a low-pass filter to remove unnecessary inflection points caused by high-frequency components, and only the inflection points at local maxima are used to obtain the pitch period. While existing pitch estimation methods process speech signals blockwise, the proposed method detects inflection points sample by sample and produces pitch period and fundamental frequency estimates over time. Computer simulations show the usefulness of the proposed method as a fundamental frequency estimator.
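The pipeline in the abstract (low-pass filtering, then sample-by-sample detection of local maxima whose spacing gives the pitch period) can be sketched as below. The moving-average filter and the synthetic test tone are stand-ins; the paper's actual filter design is not reproduced here.

```python
import numpy as np

def lowpass(x, taps=31):
    """Moving-average low-pass filter (a simple stand-in for the paper's
    preprocessing, which removes high-frequency inflection points)."""
    return np.convolve(x, np.ones(taps) / taps, mode="same")

def f0_track(x, fs):
    """Estimate pitch periods sample by sample from the intervals
    between successive local maxima of the filtered signal."""
    y = lowpass(x)
    maxima = [n for n in range(1, len(y) - 1) if y[n - 1] < y[n] >= y[n + 1]]
    periods = np.diff(maxima)            # pitch periods in samples
    return periods, fs / periods         # and the f0 track in Hz

fs = 8000
t = np.arange(int(0.2 * fs)) / fs        # 200 ms of a 200 Hz tone
periods, f0 = f0_track(np.sin(2 * np.pi * 200 * t), fs)
print(np.median(periods), np.median(f0)) # ~40 samples, ~200 Hz
```

Unlike a blockwise method, each interval between maxima immediately yields one period estimate, so the f0 track follows the signal sample by sample.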

Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea / v.20 no.4 / pp.35-41 / 2001
  • This paper describes linear discriminant analysis and common vector extraction for speech recognition. A voice signal contains the psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different, which makes it difficult to extract common properties within the same speech class (word or phoneme). Linear algebra methods such as the KLT (Karhunen-Loeve transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. Their method extracts an optimized common vector from the speech signals used for training and achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals that can be used for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, along with a novel method for normalizing the size of the common vector. The results show improved performance of the algorithm and recognition accuracy 2% better than the conventional method.

  • PDF
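The common vector idea from Bilginer et al.'s method can be sketched with NumPy: any training vector of a class, minus its projection onto the subspace spanned by the within-class difference vectors, yields the same class-common vector. The SVD route to an orthonormal basis and the toy data are illustrative choices, not the paper's implementation (which further adds discriminant information across classes).

```python
import numpy as np

def common_vector(samples):
    """Common vector of one speech class: a training vector minus its
    projection onto the difference subspace spanned by a_i - a_1."""
    A = np.asarray(samples, dtype=float)   # rows are training vectors
    diffs = A[1:] - A[0]                   # spanners of the difference subspace
    _, s, Vt = np.linalg.svd(diffs, full_matrices=False)
    basis = Vt[s > 1e-10 * s.max()]        # orthonormal rows, numerically nonzero
    return A[0] - basis.T @ (basis @ A[0]) # remove the within-class variation

# Toy class: a fixed common part [2, 3, 0, 0] plus variation confined to
# the last two coordinates.
cls = np.array([[2., 3., 1., 0.],
                [2., 3., 0., 1.],
                [2., 3., 2., 2.],
                [2., 3., -1., 3.]])
print(common_vector(cls))   # ~[2. 3. 0. 0.]
```

Because every a_i differs from a_1 only by a vector inside the difference subspace, the result is independent of which training vector is chosen as the reference.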

Applying Social Strategies for Breakdown Situations of Conversational Agents: A Case Study using Forewarning and Apology (대화형 에이전트의 오류 상황에서 사회적 전략 적용: 사전 양해와 사과를 이용한 사례 연구)

  • Lee, Yoomi;Park, Sunjeong;Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.21 no.1 / pp.59-70 / 2018
  • With the breakthrough of speech recognition technology, conversational agents have become pervasive through smartphones and smart speakers. The accuracy of speech recognition has developed to a near-human level, but it still shows limitations in understanding the underlying meaning or intention of words, or in understanding long conversations. Accordingly, users experience various errors when interacting with conversational agents, which may negatively affect the user experience. In addition, for smart speakers with voice as the main interface, the lack of system feedback and transparency has been reported as a main issue. Therefore, there is a strong need for research on how users can better understand the capability of conversational agents and mitigate negative emotions in error situations. In this study, we applied two social strategies, forewarning and apology, to a conversational agent and investigated how these strategies affect users' perceptions of the agent in breakdown situations. For the study, we created a series of demo videos of a user interacting with a conversational agent. After watching the demo videos, participants were asked to evaluate through an online survey how much they liked and trusted the agent. The responses of 104 participants were analyzed and found to be contrary to our expectations based on the literature. The results showed that forewarning gave a negative impression to users, especially regarding the reliability of the agent, and that apology in a breakdown situation did not affect users' perceptions. In the follow-up in-depth interviews, participants explained that they perceived the smart speaker as a machine rather than a human-like object, and that for this reason the social strategies did not work. These results show that social strategies should be applied according to the perceptions users have of agents.

Considerations for Helping Korean Students Write Better Technical Papers in English (한국 대학생들의 영어 기술 논문 작성 능력 향상을 위한 고찰)

  • Kim, Yee-Jin;Pak, Bo-Young;Lee, Chang-Ha;Kim, Moon-Kyum
    • Journal of Engineering Education Research / v.10 no.3 / pp.64-78 / 2007
  • For Korean researchers, English is essential. In fact, this is the case for any researcher who is a non-native English speaker, as recognition and success are predicated on being published, and the publications that reach the broadest audiences are in English. Unfortunately, university science and engineering programs in Korea often do not provide formal coursework to help students attain greater competence in English composition. Aggravating this situation is the general lack of literature covering this specific pedagogical issue: while there is plenty of information to help native speakers with technical writing, and much covering general English composition for EFL learners, there is very little information available to help EFL learners become better technical writers. Thus, the purpose of this report is twofold. First, as most Korean educators in science and engineering are not well acquainted with the pedagogical issues of EFL writing, this report provides a general introduction to some relevant issues. It reviews the importance of contrastive rhetoric as well as some considerations for choosing the appropriate teaching approach, class arrangement, and use of computer-assisted learning tools. Second, a course proposal is discussed. Based on a review of student writing samples as well as student responses to a self-assessment questionnaire, the proposed course is intended to balance the needs of Korean EFL learners to develop the grammar, process, and genre skills involved in technical writing. Although the scope of this report is modest, by sharing the considerations made in developing an EFL technical writing course it seeks to provide a small example to a field that is perhaps lacking examples.

A study on the Jeonwonsasiga of Shin Gye Young - focused on the 'Jesuk' - (신계영의 <전원사시가> 고찰 - '제석(除夕)'의 의미를 중심으로 -)

  • Kim Sang-Jean
    • Sijohaknonchong / v.24 / pp.113-137 / 2006
  • This paper is a study of the Jeonwonsasiga of Shin Gye Young. Jeonwonsasiga is a yeonsijo (linked sijo) in the sasiga (four-seasons song) line, singing of spring, summer, autumn, winter, and Jesuk. Previous studies have treated Jeonwonsasiga in relation to other literary works or to the history of siga in the 17th century. A distinctive characteristic of Jeonwonsasiga is that it sings of spring, summer, autumn, winter, and Jesuk in that order. Jesuk, New Year's Eve, stands for the watch night. Spring, summer, autumn, and winter make up the four seasons of the year, while Jesuk belongs to the four divisions of a single day, and Jeonwonsasiga treats the two as equal. This paper therefore focuses on Jesuk. The role of Jesuk is to hold the seasons of the year and the divisions of the day together: on the surface, the first through eighth songs of Jeonwonsasiga sing of the four seasons of the year, but in their inner story they sing of the divisions of a single day, 'Dan Ju Mo Ya' (dawn, daytime, evening, and night). Jesuk thus plays a finishing role in the work, and it provides an opportunity for the speaker's recognition of the jeonwon (pastoral retreat). The ten songs of Jeonwonsasiga have value as a yeonsijo with a continual structure, and for this the sense of proportion of the 17th-century yeonsijo deserves attention.

  • PDF