• Title/Summary/Keyword: Speech Spectrogram


Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.8
    • /
    • pp.65-72
    • /
    • 2022
  • Due to COVID-19, counseling has shifted from face-to-face to non-face-to-face formats, and the importance of non-face-to-face counseling is increasing. Its advantage is that sessions can be held online anytime, anywhere, safe from COVID-19. However, because non-verbal expressions are hard to convey, it is difficult to understand the client's state of mind. It is therefore important to recognize emotions by accurately analyzing text and voice during non-face-to-face counseling. In this paper, text data is vectorized using FastText after separating the text into jamo (consonants and vowels), and voice data is vectorized by extracting Log Mel Spectrogram and MFCC features. We propose a multi-emotion recognition model that feeds the vectorized data to an LSTM to recognize five emotions, with performance measured by RMSE. In the experiments, the proposed model achieved an RMSE of 0.2174, the lowest error compared with models using text or voice data alone.
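For readers who want to reproduce the voice-feature step, the following is a minimal sketch of Log Mel Spectrogram and MFCC extraction using librosa; the file name, sample rate, and frame parameters are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of the Log-Mel/MFCC feature extraction the abstract
# describes; file name, sample rate, and mel/MFCC sizes are assumptions.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)           # mono speech clip

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                         # Log Mel Spectrogram
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # MFCC features

# Stack both feature sets frame by frame as an LSTM input sequence:
# shape (time_steps, feature_dim)
features = np.concatenate([log_mel, mfcc], axis=0).T
print(features.shape)
```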

The Effect of Helium Gas Intake on the Characteristics Change of the Acoustic Organs for Voice Signal Analysis Parameter Application (음성신호 분석 요소의 적용으로 헬륨가스 흡입이 음성 기관의 특성 변화에 미치는 영향)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • The KIPS Transactions: Part B
    • /
    • v.18B no.6
    • /
    • pp.397-404
    • /
    • 2011
  • In this paper, we carried out experiments applying voice-analysis parameters to measure how the characteristics of the articulators change when helium gas is inhaled. Divers breathe helium in place of nitrogen to avoid the air embolism that nitrogen gas can fatally cause in the body. However, helium produces a squeaky voice with low articulation, making the diver's speech difficult to interpret. We therefore measured and analyzed pitch and spectrograms to assess the influence on the acoustic organs before and after helium inhalation.
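A before/after pitch and spectrogram comparison of the kind described can be sketched with librosa; the file names and pitch search range below are assumptions.

```python
# Minimal sketch of a pitch/spectrogram comparison before and after helium
# inhalation; the two recordings are hypothetical stand-ins.
import librosa
import numpy as np

for name in ("before_helium.wav", "after_helium.wav"):
    y, sr = librosa.load(name, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=600, sr=sr)
    spec_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)))
    print(name, "median pitch (Hz):", np.nanmedian(f0),
          "spectrogram frames:", spec_db.shape[1])
```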

The Study of Tonsil Affected Voice Quality after Tonsillectomy (편도적출술로 음성변화가 올 수 있는 편도 상태에 관한 연구)

  • 안철민;정덕희
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.9 no.1
    • /
    • pp.32-37
    • /
    • 1998
  • Tonsillectomy is one of the operations performed most commonly in the otolaryngology field. Tonsillectomy causes many changes, including changes in vocal range, tone, voice quality, and resonance, and some patients suffer from these voice problems afterward. However, these problems have been studied little until now. We therefore studied the anatomical findings that affect voice quality when tonsillectomy is performed. Using transnasal fiberscopy, we evaluated the voice in two groups, one with a normal pharyngeal space and one with tonsils bulging medially into the pharyngeal cavity, by perceptual evaluation, nasalance score, nasality, oral formant, and nasal formant. We used a computerized speech analysis system, the nasometer, and the spectrogram in the CSL program. We found no differences between the two groups in perceptual evaluation, but the objective measures differed: in Group 2, the nasalance score and nasality on nasometric analysis increased significantly, and the oral formant on the spectrogram changed significantly after tonsillectomy. The authors conclude that tonsils bulging medially into the pharynx, evaluated through the nasal cavity by fiberscopy, can affect voice quality after tonsillectomy, and that this evaluation is especially important in singers.

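The study's formant measurements were made with the CSL program; a rough scripted equivalent can be sketched with the parselmouth Praat bindings, where the file name and measurement time are hypothetical.

```python
# Hedged sketch of an oral-formant measurement analogous to the analysis
# above, using the parselmouth Praat bindings (an assumption; the study
# itself used the CSL program and a nasometer).
import parselmouth

snd = parselmouth.Sound("sustained_vowel.wav")             # hypothetical file
formants = snd.to_formant_burg(maximum_formant=5500)
t = snd.duration / 2                                       # mid-point of the vowel
f1 = formants.get_value_at_time(1, t)
f2 = formants.get_value_at_time(2, t)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```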

Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.212-220
    • /
    • 2024
  • With the recent advancement of society, AI technology has made significant strides, especially in the fields of computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay commands within a vehicle based on voice commands. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object and entity recognition, to identify specific users. For voice command recognition, a machine learning model based on spectrogram voice analysis is employed to identify specific commands. This design aims to enhance security and convenience by preventing anyone other than registered users from accessing vehicles and IoT devices. The system converts camera input into YOLO inputs to determine whether a person is present. Additionally, it collects voice data through a microphone embedded in the device or computer and converts it into spectrogram data used as input to the voice recognition machine learning system. The camera image data and voice data undergo inference through pre-trained models, enabling the recognition of simple commands within a limited space based on the inference results. This study demonstrates the feasibility of constructing a device management system within a confined space that enhances security and user convenience through a simple real-time system model. Finally, our work aims to provide practical solutions in various application fields, such as smart homes and autonomous vehicles.
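As a sketch of the voice-command path, the snippet below converts an audio clip into a log-magnitude spectrogram shaped as classifier input; the file name, command labels, and STFT parameters are assumptions.

```python
# Illustrative sketch of the voice-command path described above: audio is
# converted to a spectrogram and shaped for a pre-trained classifier.
# The file name and command labels are assumptions, not from the paper.
import numpy as np
import librosa

COMMANDS = ["open", "close", "start", "stop"]              # hypothetical labels

def audio_to_spectrogram(y: np.ndarray) -> np.ndarray:
    """Convert a raw waveform to a log-magnitude spectrogram."""
    spec = np.abs(librosa.stft(y, n_fft=512, hop_length=128))
    return librosa.amplitude_to_db(spec)

y, sr = librosa.load("command.wav", sr=16000)              # stands in for mic input
x = audio_to_spectrogram(y)[np.newaxis, ..., np.newaxis]   # (1, freq, time, 1)
# x can now be passed to a pre-trained CNN, e.g. model.predict(x) in Keras.
print(x.shape)
```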

Coding History Detection of Speech Signal using Deep Neural Network (심층 신경망을 이용한 음성 신호의 부호화 이력 검출)

  • Cho, Hyo-Jin;Jang, Won;Shin, Seong-Hyeon;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.23 no.1
    • /
    • pp.86-92
    • /
    • 2018
  • In this paper, we propose a method for detecting the coding history of a digital speech signal. In digital speech communication and storage, the signal is encoded to reduce the number of bits. Therefore, given a speech waveform, we need to detect its coding history so that we can determine whether the signal is an original or a coded one and, if coded, how many times it was coded. We propose a coding history detection method for the 12.2 kbps AMR codec that distinguishes original, single coding, and double coding. The proposed method extracts a speech-specific feature vector from the given speech and models the feature vector with a deep neural network. We confirm that the proposed feature vector provides better coding history detection performance than a feature vector computed from the general spectrogram.
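A minimal Keras sketch of a three-class coding-history classifier (original / single coding / double coding) follows; the feature dimension and layer sizes are assumptions, not the paper's configuration.

```python
# Hypothetical DNN for three-class coding-history detection; the input is
# assumed to be a fixed-length speech feature vector as in the abstract.
import tensorflow as tf

FEATURE_DIM = 128                                          # assumed feature size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),        # 3 coding histories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```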

Comparison of the pronunciation of word-initial liquids between generations in Korean (세대 간 어두 유음의 발음 양상 비교)

  • Yun, Eunmi;Sim, Hyeran;Park, Seegyoon;Kim, Hyungi;Kang, Jinseok
    • Phonetics and Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.7-15
    • /
    • 2017
  • The purpose of this study was to investigate how word-initial liquid sounds in Korean differ across generations. Five women in their 50s and seven in their 20s participated in the experiment. We examined FL (the formants of liquids) and sustained voice duration using the Praat software. Three native English speakers were asked to judge the Korean speakers' recorded speech samples, marking [l] or [r] on an evaluation sheet. The results of the two experiments revealed three important findings. First, there was a statistically significant difference between the two groups in the FL of the words 'racket' and 'ruby.' Second, we found statistically significant differences in 'rhythm', 'ruby', and 'litter' when measuring duration in the acoustic data. Third, there was no difference in pronunciation between the two groups according to the phonemes of the original language. These results show that the duration of word-initial liquids and the phoneme of the original language are difficult to use as indicators for distinguishing word-initial liquids between generations. They also show that the pronunciation of Korean word-initial liquid sounds varies across generations.
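The study used Praat directly; a comparable scripted measurement of liquid formants and token duration can be sketched with the parselmouth bindings, where the file and measurement point are hypothetical.

```python
# Sketch of a Praat-style measurement of liquid formants and duration using
# parselmouth; the token file and the time point are illustrative assumptions.
import parselmouth

snd = parselmouth.Sound("racket_token.wav")                # hypothetical token
formants = snd.to_formant_burg()
t = 0.05                                                   # assumed point in the liquid
f1 = formants.get_value_at_time(1, t)
f2 = formants.get_value_at_time(2, t)
f3 = formants.get_value_at_time(3, t)
print(f"duration = {snd.duration:.3f} s, "
      f"F1/F2/F3 = {f1:.0f}/{f2:.0f}/{f3:.0f} Hz")
```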

Experimental Phonetic Study of Kyungsang and Cholla Dialect Using Power Spectrum and Laryngeal Fiberscope (파워스펙트럼 및 후두내시경을 이용한 방언 음성(方言 音聲)의 실험적 연구(實驗的 硏究): 경상방언 및 전라방언을 중심으로)

  • Kim, Hyun-Gi;Lee, Eung-Young;Hong, Ki-Hwan
    • Speech Sciences
    • /
    • v.9 no.2
    • /
    • pp.25-47
    • /
    • 2002
  • In the information society, human language activity has driven the development of communication systems between humans and machines. The aim of this study was to analyze dialectal speech in Korea. One hundred Kyungsang and one hundred Cholla informants participated in this study. A CSL and a flexible laryngeal fiberscope were used to analyze the acoustic and glottal gestures of all the vowels and consonants. Test words containing each vowel and each consonant were presented on picture cards and letter cards, and the dialogue between the examiner and the informants was recorded in a question-and-answer manner. The acoustic results for the two dialects were as follows. Kyungsang and Cholla informants showed neutralization between /e/ and /ɛ/; however, the apertures of the Kyungsang vowels /i, w, u, o/ were higher than those of the Cholla vowels. The Kyungsang diphthongs /wi/ and /ɛ/ were realized as the simple vowels /i/ and /ɛ/ in the Cholla dialect. The VOT of the Cholla dialect was longer than that of the Kyungsang dialect. The fricative frequency of the Kyungsang dialect was about 1000 Hz higher than that of the Cholla dialect. The glottal widths in the fiberscopic images showed that the consonant durations of the Kyungsang and Cholla dialects correlated with the acoustic durations on the spectrogram.

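A power-spectrum measurement like the one behind the fricative comparison can be sketched with SciPy; the file name and window length are assumptions.

```python
# Hedged sketch of a power-spectrum measurement (e.g., locating the dominant
# fricative energy); the token file and Welch window size are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

sr, y = wavfile.read("fricative_token.wav")                # hypothetical token
y = y.astype(np.float64)
if y.ndim > 1:                                             # mix stereo to mono
    y = y.mean(axis=1)

freqs, psd = welch(y, fs=sr, nperseg=1024)                 # power spectral density
peak_hz = freqs[np.argmax(psd)]
print(f"spectral peak at {peak_hz:.0f} Hz")
```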

An Alteration Rule of Formant Transition for Improvement of Korean Demisyllable Based Synthesis by Rule (한국어 반음절단위 규칙합성의 개선을 위한 포만트천이의 변경규칙)

  • Lee, Ki-Young;Choi, Chang-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.4
    • /
    • pp.98-104
    • /
    • 1996
  • This paper proposes an alteration rule that compensates for the formant transitions of connected vowels, to improve the unnatural synthesized continuous speech produced when demisyllables are concatenated without coarticulated formant transitions in demisyllable-based synthesis by rule. To fill in each formant transition part, a database of 42 stationary vowels, segmented from the stable part of each vowel, is appended to the Korean demisyllable database, and the resonance circuit used in formant synthesis is employed to change the formant frequencies of the speech signals. To evaluate the speech synthesized with this rule, we applied the alteration rule to the connected vowels of demisyllable-based synthesized speech and compared spectrograms and MOS test scores against the original speech and against demisyllable-based synthesized speech without the rule. The results show that the proposed rule synthesizes more natural speech.

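The resonance circuit mentioned above can be illustrated with a Klatt-style second-order digital resonator, a standard building block of formant synthesis; the target frequency and bandwidth below are assumptions, not the paper's values.

```python
# Minimal sketch of a two-pole digital resonator that emphasizes a target
# formant frequency, as used in classical formant synthesis.
import numpy as np
from scipy.signal import lfilter

def formant_resonator(x, sr, freq, bw):
    """Klatt-style resonator centered on `freq` (Hz) with bandwidth `bw` (Hz)."""
    t = 1.0 / sr
    c = -np.exp(-2 * np.pi * bw * t)
    b = 2 * np.exp(-np.pi * bw * t) * np.cos(2 * np.pi * freq * t)
    a = 1 - b - c                                          # unity gain at `freq`
    return lfilter([a], [1, -b, -c], x)

sr = 16000
x = np.random.randn(sr)                                    # stand-in excitation
y = formant_resonator(x, sr, freq=500, bw=80)              # emphasize ~500 Hz
```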

An Interdisciplinary Study of A Leaders' Voice Characteristics: Acoustical Analysis and Members' Cognition

  • Hahm, SangWoo;Park, Hyungwoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4849-4865
    • /
    • 2020
  • The traditional role of leaders is to influence members and motivate them to achieve shared goals in organizations. In practice, however, leaders such as top managers and chief executive officers do not always directly meet or influence other company members; they tend to have the greatest impact on members through formal speeches, company procedures, and the like. As such, official speech is directly related to the motivation of company employees. In an official speech, not only the content of the speech but also the voice characteristics of the speaker have an important influence on listeners, as different vocal characteristics can have different effects on the listener. Therefore, depending on the voice characteristics of a leader, the cognition of the members may change, and the degree to which members are influenced and motivated will differ. This study identifies how members may perceive a speech differently according to the different voice characteristics of leaders in formal speeches. Further, different perceptions of voices influence members' cognition of the leader, for example, how trustworthy the leader appears. The study analyzed recorded speeches of leaders and extracted features of their speaking style through digital speech signal analysis. Parameters were then extracted and analyzed with time-domain, frequency-domain, and spectrogram-domain methods, and we also analyzed the parameters for use in natural language processing. We investigated which leaders' voice characteristics had more influence on members or were more effective. A person's voice characteristics can be changed; therefore, leaders who seek to influence members in formal speeches should adopt effective voice characteristics to motivate followers.
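The three analysis domains named above (time, frequency, spectrogram) can be illustrated with librosa; the file name and parameter choices are assumptions.

```python
# Illustrative extraction of one parameter per analysis domain from a
# recorded speech file; the file name is a hypothetical stand-in.
import numpy as np
import librosa

y, sr = librosa.load("leader_speech.wav", sr=16000)        # hypothetical recording

# Time domain: energy and zero-crossing rate
rms = librosa.feature.rms(y=y).mean()
zcr = librosa.feature.zero_crossing_rate(y).mean()

# Frequency domain: fundamental-frequency statistics
f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
f0_mean, f0_std = np.nanmean(f0), np.nanstd(f0)

# Spectrogram domain: spectral centroid over the magnitude spectrogram
centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()

print(f"RMS={rms:.4f}, ZCR={zcr:.4f}, "
      f"F0={f0_mean:.1f}±{f0_std:.1f} Hz, centroid={centroid:.0f} Hz")
```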

Speech emotion recognition using attention mechanism-based deep neural networks (주목 메커니즘 기반의 심층신경망을 이용한 음성 감정인식)

  • Ko, Sang-Sun;Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.6
    • /
    • pp.407-412
    • /
    • 2017
  • In this paper, we propose a speech emotion recognition method using a deep neural network based on an attention mechanism. The proposed method combines CNNs (Convolutional Neural Networks), a GRU (Gated Recurrent Unit), DNNs (Deep Neural Networks), and an attention mechanism. The spectrogram of a speech signal contains characteristic patterns that depend on the emotion, so we modeled these patterns by applying tuned Gabor filters as the convolutional filters of a typical CNN. In addition, we applied the attention mechanism with the CNN and an FC (Fully-Connected) layer to obtain attention weights that take the context of the extracted features into account, and used them for emotion recognition. To verify the proposed method, we conducted emotion recognition experiments on six emotions. The experimental results show that the proposed method achieves higher speech emotion recognition performance than conventional methods.
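A minimal Keras sketch of attention-based pooling over CNN/GRU features, in the spirit of the model described above, is shown below; all dimensions are assumptions, and the tuned Gabor convolutional filters are not reproduced.

```python
# Hypothetical CNN -> GRU -> attention-pooling emotion classifier; the input
# is assumed to be a sequence of spectrogram frames.
import tensorflow as tf

TIME, FEAT, N_EMOTIONS = 100, 64, 6                        # assumed dimensions

inputs = tf.keras.Input(shape=(TIME, FEAT))                # spectrogram frames
h = tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu")(inputs)
h = tf.keras.layers.GRU(64, return_sequences=True)(h)

# Attention: score each time step, normalize over time, pool the weighted sum.
scores = tf.keras.layers.Dense(1)(h)                       # (batch, TIME, 1)
weights = tf.keras.layers.Softmax(axis=1)(scores)
context = tf.reduce_sum(weights * h, axis=1)               # (batch, 64)

outputs = tf.keras.layers.Dense(N_EMOTIONS, activation="softmax")(context)
model = tf.keras.Model(inputs, outputs)
model.summary()
```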