• Title/Summary/Keyword: Speaker

Search Results: 1,679

Comparison of Korean Speech De-identification Performance of Speech De-identification Model and Broadcast Voice Modulation (음성 비식별화 모델과 방송 음성 변조의 한국어 음성 비식별화 성능 비교)

  • Seung Min Kim;Dae Eol Park;Dae Seon Choi
    • Smart Media Journal / v.12 no.2 / pp.56-65 / 2023
  • In broadcasts such as news and reporting programs, voices are modulated to protect the identity of informants. Pitch adjustment is a commonly used voice modulation method, but it allows the original voice to be easily restored by readjusting the pitch. Because broadcast voice modulation methods therefore cannot properly protect a speaker's identity and are weak in terms of security, a new voice modulation method is needed to replace them. In this paper, using the Lightweight speech de-identification model as the evaluation target, we compare its speech de-identification performance with that of the pitch-based broadcast voice modulation method. Among the six modulation methods of the Lightweight speech de-identification model, we evaluated three, McAdams, Resampling, and Vocal Tract Length Normalization (VTLN), against broadcast voice modulation on Korean speech, using both a human test and an EER (Equal Error Rate) test. The experimental results show that the VTLN modulation method achieved higher de-identification performance in both the human test and the EER test. We conclude that the modulation methods of the Lightweight model have sufficient de-identification performance for Korean speech and can replace the security-weak broadcast voice modulation.
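
The core security claim above is that pitch-based broadcast modulation is trivially reversible: whoever guesses the shift can apply the inverse shift and recover a voice close to the original. The sketch below is a minimal Python illustration, assuming librosa and scikit-learn are available and using a synthetic tone in place of real broadcast audio; it shows that reversibility and one common way to estimate the EER from speaker-verification scores, and is not the paper's implementation.

```python
import numpy as np
import librosa
from sklearn.metrics import roc_curve

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = 0.1 * np.sin(2 * np.pi * 220 * t).astype(np.float32)  # stand-in for speech

# Broadcast-style modulation: shift the pitch up by 4 semitones.
modulated = librosa.effects.pitch_shift(voice, sr=sr, n_steps=4)

# An attacker who guesses the shift simply inverts it.
restored = librosa.effects.pitch_shift(modulated, sr=sr, n_steps=-4)
print("correlation with original:", np.corrcoef(voice, restored)[0, 1])

# EER from speaker-verification scores (1 = same-speaker trial).
def equal_error_rate(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

labels = np.array([1, 1, 1, 0, 0, 0])               # toy trial labels
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.2, 0.1])   # toy similarity scores
print("EER:", equal_error_rate(labels, scores))
```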

Experiences of Military Prostitute and Im/Possibility of Representation: Re-writing History from a Postcolonial Feminist Perspective (기지촌 여성의 경험과 윤리적 재현의 불/가능성: 탈식민주의 페미니스트 역사 쓰기)

  • Lee, Na-Young
    • Women's Studies Review / v.28 no.1 / pp.79-120 / 2011
  • The purpose of this paper is to illuminate the implications of feminist oral history from a postcolonial feminist perspective, critically reexamining the relationships between hearer and speaker, representer and narrator, the said and the unsaid, and secrecy and silence. Based upon the oral (life) history of a U.S. military prostitute (yanggongju), I tried to show the experiences of a historically excluded and marginalized 'Other,' and then to critically reevaluate the meaning of encountering the 'Other,' not only through the research process but also in post/colonial Korean society. The narrative of an old woman of the "kijichon" (a former prostitute at a U.S. military base) shows how a woman has navigated the boundaries between inevitability/coincidence, the enforced/the voluntary, prostitution/intimacy, and military prostitute/military bride, while continually negotiating, as well as coming into conflict, with various myths and ideologies of the 'normative woman,' 'nationhood,' and 'normal family.' In addition, her narrative, which ruptures our own stereotypical images of a military prostitute, not only proves the possibility of reconstructing the self-identity of a subaltern woman, but also redirects the research focus from the research object to the research subject (ourselves). Consequently, the implication for feminist oral history is that feminist researchers who wish to represent the experiences of others should first ask 'what/how we can hear,' 'why we want to know others,' and 'who we are,' while simultaneously asking whether the subaltern woman can speak.

Perception of military officers towards the military adaptation of adults who stutter and the associated factors (말더듬 성인의 군대 적응 정도에 대한 군지휘관의 인식 양상 및 관련 요인 분석)

  • Hye-rin Park;Jin Park
    • Phonetics and Speech Sciences / v.15 no.1 / pp.55-64 / 2023
  • This study investigated the factors influencing military officers' perceptions of how well persons who stutter can adapt to the army. In total, 89 participants were randomly assigned to one of three conditions ("fluent speech"=23, "mildly stuttered speech"=34, and "severely stuttered speech"=32). The participants were asked to listen to and rate each sample in terms of the speaker's communicative functioning (i.e., speech fluency, intelligibility, naturalness, and speech rate), personal traits (i.e., likeability, anxiety level, intellectual level, and sociability), and the perceived degree of adaptability to the army. The results showed significant differences between "fluent speech" and "severely stuttered speech" in the perceived communicative functioning and the perceived adaptability to the army. There were also significant differences in the same variables between "mildly stuttered speech" and "severely stuttered speech," but no significant differences between "mildly stuttered speech" and "fluent speech." A Pearson correlation test further showed strong correlations between the perceived communicative functioning, in particular speech fluency, and the perceived adaptability to the army. These results suggest that communicative functioning influences how well persons who stutter are perceived to adapt to the army. The relationship between perceived communicative functioning and perceived adaptability to the army is discussed further.
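
As a small worked illustration of the correlation step mentioned in the abstract, the snippet below computes a Pearson correlation between hypothetical listener ratings of speech fluency and of perceived adaptability to the army; the numbers are invented for illustration and are not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical 1-9 listener ratings for ten speech samples (not the study's data).
fluency_ratings      = [8, 7, 9, 3, 4, 6, 2, 5, 7, 8]
adaptability_ratings = [7, 8, 9, 4, 3, 6, 2, 5, 6, 9]

r, p = pearsonr(fluency_ratings, adaptability_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```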

A perceptual study on the correlation between the meaning of Korean polysemic ending and its boundary tone (동형다의 종결어미의 의미와 경계성조의 상관성에 대한 지각연구)

  • Youngsook Yune
    • Phonetics and Speech Sciences / v.14 no.4 / pp.1-10 / 2022
  • The Korean polysemic ending '-(eu)lgeol' can have two different meanings, 'guess' and 'regret.' These are expressed by different boundary-tone types: a rising tone for guess and a falling tone for regret. The sentence-final boundary-tone type is therefore the most salient prosodic feature. However, besides tone type, the pitch difference between the final and penultimate syllables of '-(eu)lgeol' can also affect semantic discrimination. To investigate this, we conducted a perception test using two sentences that were morphologically and syntactically identical but spoken with different boundary-tone types by a Korean native speaker. From these two sentences, experimental stimuli were generated by artificially raising or lowering the pitch of the boundary syllable by 1 quarter tone (Qt) while fixing the pitch of the penultimate syllable and the boundary-tone type. Thirty Korean native speakers participated in three levels of the perception test, in which they were asked to mark whether each experimental sentence was perceived as guess or regret. The results revealed that, regardless of boundary-tone type, the larger the pitch difference between the final and penultimate syllables in the positive direction, the more likely the sentence was perceived as guess, and the more negative the pitch difference, the more likely it was perceived as regret.
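
A minimal sketch of the kind of stimulus manipulation described above, assuming Praat is driven from Python via parselmouth, that the stimulus lives in a hypothetical file stimulus.wav, and that the boundary syllable spans roughly 1.20-1.45 s; the paper's exact resynthesis procedure is not specified in the abstract.

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("stimulus.wav")           # hypothetical recorded sentence
manipulation = call(snd, "To Manipulation", 0.01, 75, 600)
pitch_tier = call(manipulation, "Extract pitch tier")

# Raise the boundary syllable (assumed to span 1.20-1.45 s) by one quarter tone.
one_qt = 2 ** (0.5 / 12)                          # 1 Qt = half a semitone
call(pitch_tier, "Multiply frequencies", 1.20, 1.45, one_qt)

call([pitch_tier, manipulation], "Replace pitch tier")
stimulus = call(manipulation, "Get resynthesis (overlap-add)")
stimulus.save("stimulus_plus_1qt.wav", "WAV")
```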

Improvement of Character-net via Detection of Conversation Participant (대화 참여자 결정을 통한 Character-net의 개선)

  • Kim, Won-Taek;Park, Seung-Bo;Jo, Geun-Sik
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.241-249 / 2009
  • Recently, a number of studies on video annotation and representation have been proposed to analyze video for searching and abstraction. In this paper, we present a method to extract the picture elements of conversational participants in video and an enhanced representation of the characters using those elements, collectively called Character-net. Because the previous Character-net determines conversational participants only from the characters detected during a script's holding time, it suffers from a serious limitation: some listeners cannot be detected as participants. The participants who complete the story of a video are a very important factor in understanding the context of the conversation. The picture elements for detecting conversational participants consist of six elements: subtitle, scene, order of appearance, characters' eyes, patterns, and lip motion. In this paper, we present how to use those elements to detect conversational participants and how to improve the representation of Character-net. The conversational participants can be detected accurately when the proposed elements are combined and satisfy specific conditions. The experimental evaluation shows that the proposed method brings significant advantages in terms of both improving the detection of conversational participants and enhancing the representation of Character-net.
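
The abstract does not spell out the specific conditions under which the six picture elements are combined, so the following is a purely illustrative, hypothetical rule in Python showing how such per-character cues might be merged into a participant decision; the field names and the rule itself are assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class CharacterCues:
    """Per-character cues for one conversation segment (illustrative names only)."""
    in_subtitle: bool      # the character is referenced by the subtitle/script
    same_scene: bool       # appears in the same scene as the current speaker
    appearance_order: int  # order of appearance within the segment
    eye_contact: bool      # gaze directed toward the speaker
    pattern_match: bool    # fits a shot pattern such as shot/reverse-shot
    lip_motion: bool       # visible lip motion

def is_participant(c: CharacterCues) -> bool:
    # Hypothetical combination: speaking cues, or listening cues in the same scene.
    speaks = c.lip_motion or c.in_subtitle
    listens = c.same_scene and (c.eye_contact or c.pattern_match)
    return speaks or listens

print(is_participant(CharacterCues(False, True, 2, True, False, False)))  # True
```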

Age classification of emergency callers based on behavioral speech utterance characteristics (발화행태 특징을 활용한 응급상황 신고자 연령분류)

  • Son, Guiyoung;Kwon, Soonil;Baik, Sungwook
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.6 / pp.96-105 / 2017
  • In this paper, we investigated speaker age classification by analyzing voice calls to an emergency call center. We classified adult and elderly callers using behavioral speech utterance features and an SVM (Support Vector Machine), a machine learning classifier. Two behavioral speech utterance features were selected through analysis of the emergency call center data: silent pause and turn-taking latency. The criteria for age classification, selected through analysis of these behavioral speech utterances, proved statistically significant (p < 0.05). We then analyzed 200 datasets (adult: 100, elderly: 100) with 5-fold cross-validation using the SVM classifier. As a result, we achieved 70% accuracy using the two behavioral speech utterance features, which is higher than the accuracy obtained with a single feature. These results suggest behavioral speech utterance features as a new method for age classification; in future work, we plan to combine acoustic features (MFCC) with new behavioral speech utterance features extracted from real voice data. This will contribute to the development of an emergency situation judgment system based on age classification.
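
A minimal sketch of the described classification setup, an SVM with 5-fold cross-validation over two features per caller, written with scikit-learn on synthetic stand-in data; the feature values and class separation are invented, not the emergency-center data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two features: mean silent-pause length (s)
# and mean turn-taking latency (s); 100 adult and 100 elderly callers.
adult   = np.column_stack([rng.normal(0.4, 0.1, 100), rng.normal(0.6, 0.2, 100)])
elderly = np.column_stack([rng.normal(0.7, 0.1, 100), rng.normal(1.0, 0.2, 100)])
X = np.vstack([adult, elderly])
y = np.array([0] * 100 + [1] * 100)   # 0 = adult, 1 = elderly

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy:", scores.mean())
```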

The fundamental frequency (f0) distribution of Korean speakers in a dialogue corpus using Praat and R (Praat과 R로 분석한 한국인 대화 음성 말뭉치의 fundamental frequency(f0)값 분포)

  • Byunggon Yang
    • Phonetics and Speech Sciences / v.15 no.3 / pp.17-25 / 2023
  • This study examines the fundamental frequency (f0) distribution of 2,740 Korean speakers in a dialogue speech corpus. Praat and R were used to collect and analyze the acoustic f0 data after removing extreme values based on the interquartile f0 range of the intonational phrases produced by each individual speaker. Results showed that the average f0 value of all speakers was 185 Hz and the median value was 187 Hz. The f0 data showed a positively skewed distribution (skewness 0.11), and the kurtosis was -0.09, which is close to a normal distribution. The pitch values of daily conversations varied over a range of 238 Hz. Further examination of the male and female groups showed distinct median f0 values: 114 Hz for males and 199 Hz for females, and a t-test between the two groups yielded a significant difference. The skewness of the distribution was 1.24 for the male group and 0.58 for the female group. The kurtosis was 5.21 and 3.88 for the male and female groups, respectively, so the male group's distribution appeared leptokurtic. A regression analysis between median f0 and age yielded a slope of 0.15 for the male group and -0.586 for the female group, indicating a divergent relationship. In conclusion, a normative f0 distribution of different Korean age and sex groups can be examined in a conversational speech corpus recorded by a massive number of participants. However, more rigorous data might be required to define the relation between age and f0 values.
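
The study used Praat and R; the sketch below reproduces the same per-speaker steps in Python via parselmouth (an assumption made for consistency with the other examples here): extract f0, drop unvoiced frames, trim extreme values with an interquartile-range rule (the 1.5 x IQR multiplier is an assumption), and compute the summary statistics reported above. The file name is hypothetical.

```python
import numpy as np
import parselmouth
from scipy.stats import skew, kurtosis

snd = parselmouth.Sound("speaker_0001.wav")        # hypothetical dialogue recording
pitch = snd.to_pitch(time_step=0.01, pitch_floor=75, pitch_ceiling=600)
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                    # drop unvoiced frames

# Remove extreme values with an IQR rule (multiplier assumed, not from the paper).
q1, q3 = np.percentile(f0, [25, 75])
iqr = q3 - q1
f0 = f0[(f0 >= q1 - 1.5 * iqr) & (f0 <= q3 + 1.5 * iqr)]

print("mean f0:", f0.mean(), "median f0:", np.median(f0))
print("skewness:", skew(f0), "excess kurtosis:", kurtosis(f0))
```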

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results mainly by using acted speech recorded by skilled actors in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using a VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, which consists of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
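
A minimal sketch of the described pipeline, converting a 1-D signal to a log-mel spectrogram and feeding it to a VGG classifier with a 7-class output head, assuming librosa, PyTorch, and torchvision; the random waveform stands in for an AI-Hub utterance, and details such as the mel resolution, input resizing, and channel replication are assumptions rather than the paper's settings.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Stand-in waveform; real use would load an utterance from the AI-Hub corpus.
sr = 16000
y = np.random.randn(sr * 3).astype(np.float32)

# 1-D signal -> 2-D log-mel spectrogram "image".
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
logmel = librosa.power_to_db(mel)

# Normalize and replicate to 3 channels so the VGG input shape matches.
x = torch.tensor(logmel).unsqueeze(0)                    # (1, 128, T)
x = (x - x.mean()) / (x.std() + 1e-8)
x = x.unsqueeze(0).repeat(1, 3, 1, 1)                    # (batch, 3, 128, T)
x = nn.functional.interpolate(x, size=(224, 224))        # VGG's expected size

model = vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, 7)                 # 7 emotion classes
logits = model(x)
print(logits.shape)   # torch.Size([1, 7])
```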

Automatic Recognition of Pitch Accent Using Distributed Time-Delay Recursive Neural Network (분산 시간지연 회귀신경망을 이용한 피치 악센트 자동 인식)

  • Kim Sung-Suk
    • The Journal of the Acoustical Society of Korea / v.25 no.6 / pp.277-281 / 2006
  • This paper presents a method for the automatic recognition of pitch accents over syllables. The proposed method is based on the time-delay recursive neural network (TDRNN), a neural network classifier with two different representations of dynamic context: the delayed input nodes allow the representation of an explicit trajectory F0(t) along time, while the recursive nodes provide long-term context information that reflects the characteristics of pitch accentuation in spoken English. We apply the TDRNN to pitch accent recognition in two forms: in the normal TDRNN, all of the prosodic features (pitch, energy, duration) are used as an entire set in a single TDRNN, whereas in the distributed TDRNN, the network consists of several TDRNNs, each taking a single prosodic feature as input. The final output of the distributed TDRNN is a weighted sum of the outputs of the individual TDRNNs. We used the Boston Radio News Corpus (BRNC) for experiments on speaker-independent pitch accent recognition. The experimental results show that the distributed TDRNN exhibits an average recognition accuracy of 83.64% over both pitch events and non-events.
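
The original TDRNN predates today's deep-learning toolkits, so the following is only a loose PyTorch re-sketch of the distributed variant described above: one small branch per prosodic feature, each combining a tapped-delay window with a recurrent layer, and a learned weighted sum over the branch outputs. Layer sizes and the use of a convolution/GRU are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FeatureTDRNN(nn.Module):
    """One per-feature branch: a tapped-delay window over the input frames
    followed by a recurrent layer, loosely mirroring the TDRNN idea."""
    def __init__(self, delay: int = 5, hidden: int = 16):
        super().__init__()
        self.delay_window = nn.Conv1d(1, hidden, kernel_size=delay, padding=delay // 2)
        self.recurrent = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # accented vs. not accented

    def forward(self, x):                 # x: (batch, frames) one prosodic feature
        h = self.delay_window(x.unsqueeze(1))      # (batch, hidden, frames)
        h, _ = self.recurrent(h.transpose(1, 2))   # (batch, frames, hidden)
        return self.out(h[:, -1])                  # score at the syllable end

class DistributedTDRNN(nn.Module):
    """Distributed variant: one branch per prosodic feature, combined by a
    learned weighted sum, as the abstract describes."""
    def __init__(self, n_features: int = 3):
        super().__init__()
        self.branches = nn.ModuleList(FeatureTDRNN() for _ in range(n_features))
        self.weights = nn.Parameter(torch.ones(n_features) / n_features)

    def forward(self, feats):             # feats: (batch, n_features, frames)
        outs = [b(feats[:, i]) for i, b in enumerate(self.branches)]
        return sum(w * o for w, o in zip(self.weights, outs))

model = DistributedTDRNN()
pitch_energy_duration = torch.randn(4, 3, 30)    # 4 syllables, 3 features, 30 frames
print(model(pitch_energy_duration).shape)        # torch.Size([4, 2])
```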

Non-Keyword Model for the Improvement of Vocabulary Independent Keyword Spotting System (가변어휘 핵심어 검출 성능 향상을 위한 비핵심어 모델)

  • Kim, Min-Je;Lee, Jung-Chul
    • The Journal of the Acoustical Society of Korea / v.25 no.7 / pp.319-324 / 2006
  • We propose two new methods for non-keyword modeling to improve the performance of a speaker- and vocabulary-independent keyword spotting system. The first method is decision-tree-based state-level clustering of monophones instead of monophone clustering based on the K-means algorithm. The second method is multi-state, multiple-mixture modeling of non-keywords at the syllable level rather than a single-state multiple-mixture model. To evaluate our methods, we used the ETRI speech DB for training and for a keyword spotting test (closed test). We also conducted an open test spotting 100 keywords in 400 sentences uttered by 4 speakers in an office environment. The experimental results showed that the decision-tree-based state clustering method improves keyword spotting performance by 28%/29% (closed/open test) over the K-means-based monophone clustering method, and that multi-state non-keyword modeling at the syllable level improves performance by 22%/2% (closed/open test) over the single-state non-keyword model. These results show that the two proposed methods improve keyword spotting performance.
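
As a rough illustration of the second idea, multi-state versus single-state multiple-mixture non-keyword (filler) models, the sketch below fits both with hmmlearn's GMMHMM on synthetic MFCC-like frames; hmmlearn and the synthetic data are assumptions, not the paper's HMM toolkit or the ETRI speech DB.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)

# Synthetic 13-dim "MFCC-like" frames standing in for non-keyword speech.
frames = rng.normal(size=(2000, 13))
lengths = [200] * 10          # ten training utterances of 200 frames each

# Single-state, multiple-mixture filler model (the baseline in the abstract).
single_state = GMMHMM(n_components=1, n_mix=8, covariance_type="diag", n_iter=10)
single_state.fit(frames, lengths)

# Multi-state (syllable-level) multiple-mixture filler model (the proposal).
multi_state = GMMHMM(n_components=3, n_mix=8, covariance_type="diag", n_iter=10)
multi_state.fit(frames, lengths)

test = rng.normal(size=(200, 13))
print("single-state log-likelihood:", single_state.score(test))
print("multi-state  log-likelihood:", multi_state.score(test))
```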