• Title/Summary/Keyword: 음성 특성 (voice characteristics)


A Study on the Correlation between Sound Characteristics and Sasang Constitution by CSL (CSL을 통한 음향특성과 사상체질간의 상관성 연구)

  • Shin, Mi-ran;Kim, Dal-lae
    • Journal of Sasang Constitutional Medicine / v.11 no.1 / pp.137-157 / 1999
  • The purpose of this study is to help classify Sasang constitution through its correlation with sound characteristics, under the supposition that Sasang constitution correlates with the sound spectrogram. Comparison and analysis yielded the following results. 1. In the survey, Soeumin described their voices as low-toned, smooth, and quiet; Soyangin as high, clear, fast, and erratic in delivery; Taeumin as low, thick, and muddy. 2. Taeyangin were significantly slower than the other groups in composition-reading time, significantly lower in Formant frequency 1, significantly discriminated from Soeumin in Formant frequency 5, and significantly lower than the others in Bandwidth 2. Soeumin were significantly lower than Taeyangin in Pitch Maximum and in Pitch Maximum minus Pitch Minimum. Taeyangin were significantly higher than the others in Energy mean. 3. With the specified feature list, the discrimination rate in the multi-dimensional four-class minimum-distance results was higher than with the 13-item lists. The discrimination rate for three constitutions (excluding Soyangin) was higher than for all four in the one-way ANOVA and discriminant analysis in SPSS/PC+. CART estimated the Sasang constitution discrimination rate more accurately than any other method. These results suggest a correlation between sound spectrogram and Sasang constitution, and classification through sound spectrogram analysis may serve as an aid to the objectification of Sasang constitution classification.
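As a rough illustration of the CART step reported above, the sketch below fits a decision tree to a few acoustic features (pitch maximum, pitch range, energy mean, F1). The feature values and labels are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: CART-style classification of Sasang constitution from
# acoustic features. All feature values below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# rows: speakers; columns: [pitch_max (Hz), pitch_range (Hz), energy_mean (dB), F1 (Hz)]
X = np.array([
    [220.0,  80.0, 62.1, 610.0],   # hypothetical Soeumin speaker
    [310.0, 150.0, 70.4, 690.0],   # hypothetical Soyangin speaker
    [200.0,  60.0, 68.9, 580.0],   # hypothetical Taeumin speaker
    [190.0,  55.0, 75.2, 520.0],   # hypothetical Taeyangin speaker
])
y = ["Soeumin", "Soyangin", "Taeumin", "Taeyangin"]

cart = DecisionTreeClassifier(criterion="gini", max_depth=3)
cart.fit(X, y)
print(cart.predict([[230.0, 90.0, 64.0, 600.0]]))  # classify a new speaker
```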


A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.8 / pp.3992-3998 / 2013
  • This paper concerns a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer so that it can generate varied synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation speech synthesizer and implemented a synthesizer that generates varied speech through the same unit-selection process as a normal synthesizer. A markup language is used to mark emotion in the input text. Emotional speech is generated when the input text can be matched in the emotional speech corpus at the full length of an intonation phrase; otherwise, normal speech is generated. The BIs (Break Indices) of emotional speech are more irregular than those of normal speech, which makes the BIs generated by the synthesizer difficult to use as they are. To solve this problem we applied Variable Break [3] modeling. A Japanese speech synthesizer was used for the experiments. As a result, we obtained natural emotional synthesized speech using the break-prediction module of the normal speech synthesizer.
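The phrase-level fallback described above can be sketched as follows; the corpus structures and helper names are hypothetical, and the input is assumed to be pre-segmented into intonation phrases.

```python
# Minimal sketch of the fallback logic: an intonation phrase is taken
# from the emotional corpus only when it matches there at full phrase
# length; otherwise the normal corpus is used.
from typing import Dict, List

def synthesize_phrase(phrase: str, emotion: str,
                      emotional_corpus: Dict[str, Dict[str, bytes]],
                      normal_corpus: Dict[str, bytes]) -> bytes:
    units = emotional_corpus.get(emotion, {})
    if phrase in units:           # full intonation-phrase match found
        return units[phrase]      # -> emotional speech
    return normal_corpus[phrase]  # -> fall back to normal speech

def synthesize(phrases: List[str], emotion: str,
               emotional_corpus, normal_corpus) -> bytes:
    # phrases: input text already segmented into intonation phrases
    return b"".join(
        synthesize_phrase(p, emotion, emotional_corpus, normal_corpus)
        for p in phrases)
```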

Speech Enhancement for Voice Commander in Car Environment (차량환경에서 음성명령어기 사용을 위한 음성개선방법)

  • 백승권;한민수;남승현;이봉호;함영권
    • Journal of Broadcast Engineering / v.9 no.1 / pp.9-16 / 2004
  • In this paper, we present a speech enhancement method that serves as a pre-processor for a voice commander in a car environment. For friendly and safe use of a voice commander in a running car, non-stationary audio signals such as music and non-candidate speech should be reduced. Our technique is a two-microphone-based one. It consists of two parts: Blind Source Separation (BSS) and Kalman filtering. First, BSS operates as a spatial filter to deal with non-stationary signals; then car noise is reduced by Kalman filtering as a temporal filter. Algorithm performance was tested on speech recognition, and the results show that our two-microphone-based technique is a good candidate pre-processor for a voice commander.
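A minimal sketch of the two-stage chain, with FastICA standing in for the paper's BSS stage and a scalar random-walk Kalman filter as the temporal stage. The noise parameters and the choice of separated source are illustrative, not from the paper.

```python
# Two-stage enhancement sketch: spatial filter (BSS) then temporal
# filter (Kalman). FastICA is one concrete BSS algorithm, used here
# only as a stand-in.
import numpy as np
from sklearn.decomposition import FastICA

def enhance(mic1: np.ndarray, mic2: np.ndarray) -> np.ndarray:
    X = np.stack([mic1, mic2], axis=1)            # (n_samples, 2 mics)
    sources = FastICA(n_components=2, random_state=0).fit_transform(X)
    target = sources[:, 0]     # assume source 0 is the speech channel

    # scalar Kalman filter: state = clean sample, random-walk model
    q, r = 1e-4, 1e-2          # illustrative process / measurement noise
    x, p = 0.0, 1.0
    out = np.empty_like(target)
    for i, z in enumerate(target):
        p += q                 # predict
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update state with measurement z
        p *= (1.0 - k)
        out[i] = x
    return out
```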

An Efficient Voice Activity Detection Method using Bi-Level HMM (Bi-Level HMM을 이용한 효율적인 음성구간 검출 방법)

  • Jang, Guang-Woo;Jeong, Mun-Ho
    • The Journal of the Korea institute of electronic communication sciences / v.10 no.8 / pp.901-906 / 2015
  • We present a method for VAD (Voice Activity Detection) using a Bi-level HMM. Conventional methods require additional post-processing or rule-based delayed frames. To cope with this problem, we applied to VAD a Bi-level HMM, which inserts a state layer into a typical HMM, and used the posterior ratio of voice states to detect voice periods. Taking MFCCs (Mel-Frequency Cepstral Coefficients) as observation vectors, we performed experiments on voice data at different SNRs and achieved satisfactory results compared with well-known methods.
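A simplified sketch of posterior-ratio VAD with a plain two-state HMM; the paper's inserted state layer is omitted for brevity, and the emission likelihoods are assumed to come from per-state models over MFCC vectors.

```python
# Posterior-ratio VAD sketch: forward-backward posteriors over a
# two-state (non-speech / speech) HMM, then threshold the ratio.
import numpy as np

def voice_posteriors(b: np.ndarray, A: np.ndarray, pi: np.ndarray) -> np.ndarray:
    # b: (T, 2) emission likelihoods per frame, A: (2, 2) transitions
    T, S = b.shape
    alpha = np.zeros((T, S)); beta = np.zeros((T, S))
    alpha[0] = pi * b[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # normalized forward pass
        alpha[t] = (alpha[t - 1] @ A) * b[t]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # normalized backward pass
        beta[t] = A @ (b[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                       # state posteriors
    return gamma / gamma.sum(axis=1, keepdims=True)

def detect(b, A, pi, thresh=1.0):
    # frame is voiced when P(voice) / P(non-voice) exceeds a threshold
    g = voice_posteriors(b, A, pi)
    return (g[:, 1] / np.maximum(g[:, 0], 1e-12)) > thresh
```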

Analysis of Eigenvalues of Covariance Matrices of Speech Signals in Frequency Domain (음성 신호의 주파수 영역에서의 공분산행렬의 고유값 분석)

  • Kim, Seonil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.47-50 / 2015
  • Speech signals consist of consonant and vowel signals, but vowels last much longer than consonants, so the correlations between blocks within a speech signal can be assumed to be very high. Each speech signal is divided into blocks of 128 samples, and an FFT is applied to each block. The low-frequency region of each FFT result is taken, the covariance matrix between blocks of the speech signal is computed, and finally the eigenvalues of that matrix are obtained. The distribution of these eigenvalues across various speech files is studied, as are the differences between speech signals and car noise signals.
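The procedure is concrete enough to sketch directly; the width of the retained low-frequency band (16 bins) is an illustrative choice.

```python
# Split into 128-sample blocks, FFT each block, keep a low-frequency
# band, form the covariance matrix between blocks, take eigenvalues.
import numpy as np

def block_eigenvalues(x: np.ndarray, block: int = 128, low_bins: int = 16) -> np.ndarray:
    n_blocks = len(x) // block
    blocks = x[:n_blocks * block].reshape(n_blocks, block)
    spectra = np.abs(np.fft.rfft(blocks, axis=1))[:, :low_bins]
    cov = np.cov(spectra)            # (n_blocks, n_blocks) covariance between blocks
    return np.linalg.eigvalsh(cov)   # eigenvalues in ascending order
```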


Extending StarGAN-VC to Unseen Speakers Using RawNet3 Speaker Representation (RawNet3 화자 표현을 활용한 임의의 화자 간 음성 변환을 위한 StarGAN의 확장)

  • Bogyung Park;Somin Park;Hyunki Hong
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.303-314 / 2023
  • Voice conversion, a technology that allows an individual's speech data to be regenerated with the acoustic properties (tone, cadence, gender) of another, has countless applications in education, communication, and entertainment. This paper proposes an approach based on the StarGAN-VC model that generates realistic-sounding speech without requiring parallel utterances. To overcome the constraints of the existing StarGAN-VC model, which uses one-hot vectors for source and target speaker information, this paper extracts feature vectors of target speakers using a pre-trained RawNet3. This yields a latent space where voice conversion can be performed without direct speaker-to-speaker mappings, enabling an any-to-any structure. In addition to the loss terms of the original StarGAN-VC model, the Wasserstein distance is used as a loss term to ensure that generated voice segments match the acoustic properties of the target voice, and the Two Time-Scale Update Rule (TTUR) is used to facilitate stable training. Experimental results show that the proposed method outperforms previous methods, including the StarGAN-VC network on which it is based.
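A minimal sketch of the TTUR and Wasserstein pieces mentioned above: the critic is stepped on a faster time scale (larger learning rate) than the generator, and the critic loss is the Wasserstein objective. Network shapes and learning rates are placeholders; the 256-dimensional speaker embedding is assumed to come from a pre-trained RawNet3.

```python
import torch

# stand-in generator: mel frame (80) + speaker embedding (256) -> mel frame
G = torch.nn.Linear(80 + 256, 80)
# stand-in critic over mel frames
D = torch.nn.Linear(80, 1)

# TTUR: critic updated on a faster time scale than the generator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.999))

def critic_loss(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Wasserstein critic objective: maximize D(real) - D(fake)
    return -(D(real).mean() - D(fake).mean())
```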

Speech synthesis using acoustic Doppler signal (초음파 도플러 신호를 이용한 음성 합성)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.35 no.2 / pp.134-142 / 2016
  • In this paper, a method for synthesizing speech signals from 40 kHz ultrasonic signals reflected from the articulatory muscles is introduced and its performance evaluated. When ultrasound is radiated toward the articulating face, Doppler effects caused by movements of the lips, jaw, and chin are observed: signals whose frequencies differ from the transmitted frequency appear in the received signals. These ADS (Acoustic-Doppler Signals) were used to estimate speech parameters in this study. Prior to synthesis, a quantitative correlation analysis between ADS and speech signals was carried out for each frequency bin; the results validated the feasibility of ADS-based speech synthesis. ADS-to-speech transformation was achieved by joint Gaussian mixture model-based conversion rules. Experimental results from five subjects showed that filter-bank energies and LPC (Linear Predictive Coefficient) cepstrum coefficients are the optimal features for ADS and speech, respectively. In a subjective evaluation where synthesized speech was obtained using excitation sources extracted from the original speech signals, the ADS-to-speech conversion method yielded a 72.2 % average recognition rate.
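A compact sketch of the joint-GMM mapping family used here: fit a GMM on stacked [ADS; speech] feature vectors, then map an ADS vector to the posterior-weighted conditional mean of the speech part. Dimensions and the component count are illustrative, not the paper's settings.

```python
# Joint-GMM feature conversion sketch (Gaussian conditioning per component).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(ads_feats: np.ndarray, speech_feats: np.ndarray,
                  n_components: int = 8) -> GaussianMixture:
    Z = np.hstack([ads_feats, speech_feats])   # (n_frames, dx + dy)
    return GaussianMixture(n_components, covariance_type="full").fit(Z)

def convert(gmm: GaussianMixture, x: np.ndarray, dx: int) -> np.ndarray:
    K = gmm.n_components
    resp = np.zeros(K)
    cond = []
    for k in range(K):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mux, muy = mu[:dx], mu[dx:]
        Sxx, Syx = S[:dx, :dx], S[dx:, :dx]
        d = x - mux
        # responsibility of component k under the ADS marginal N(mux, Sxx)
        # (shared (2*pi)^(dx/2) constant cancels after normalization)
        resp[k] = gmm.weights_[k] * np.exp(
            -0.5 * d @ np.linalg.solve(Sxx, d)) / np.sqrt(np.linalg.det(Sxx))
        # conditional mean of the speech part given x
        cond.append(muy + Syx @ np.linalg.solve(Sxx, d))
    resp /= resp.sum()
    return sum(r * c for r, c in zip(resp, cond))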

The Influence of Perceived Value on Continuance Use Intention in Voice Commerce Context (비대면 음성 쇼핑의 인지된 가치, 지속이용의도에 미치는 영향 관계에 관한 연구)

  • Kim, Hyo-Jung
    • Journal of Digital Convergence / v.20 no.4 / pp.225-234 / 2022
  • Voice commerce has emerged as a key channel for consumer search and purchase. This study examines the continuance use intention of voice commerce by applying the value-based adoption model. An online survey was conducted with 470 consumers who had experience with voice commerce, either buying goods or using a food-delivery service through it. SPSS 23.0 and Amos 23.0 were used for descriptive analysis, correlation analysis, confirmatory factor analysis, and structural equation modeling. The results are as follows. First, usefulness and response accuracy significantly influenced the perceived value of voice commerce. Second, functional risk significantly influenced the perceived value of voice commerce. Third, perceived value significantly influenced the continuance use intention of voice commerce. These results enhance understanding of voice commerce users and provide insight for voice commerce service providers.

Fundamental Frequency Estimation of Voiced Speech Signals Based on the Inflection Point Detection (변곡점 검출에 기반한 음성의 기본 주파수 추정)

  • Byeonggwan Iem
    • Journal of IKEEE / v.27 no.4 / pp.472-476 / 2023
  • The fundamental frequency and pitch period are major characteristics of speech signals, used in many speech applications such as speech coding, speech recognition, and speaker identification. In this paper, inflection points are used to estimate the pitch period, the inverse of the fundamental frequency. Inflection points are defined as points where local maxima, local minima, or slope changes occur. The speech signal is preprocessed with a low-pass filter to remove unnecessary inflection points caused by high-frequency components, and only the inflection points at local maxima are used to obtain the pitch period. While existing pitch estimation methods process speech signals blockwise, the proposed method detects inflection points sample by sample and produces pitch period/fundamental frequency estimates over time. Computer simulations show the usefulness of the proposed method as a fundamental frequency estimator.
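A runnable sketch of the sample-wise idea, keeping only local-maximum inflection points after a low-pass pre-filter; the cutoff frequency and filter order are illustrative choices, not the paper's.

```python
# Inflection-point pitch sketch: low-pass filter, find local maxima,
# estimate F0 from the spacing of consecutive maxima.
import numpy as np
from scipy.signal import butter, lfilter

def pitch_from_maxima(x: np.ndarray, fs: float, cutoff: float = 900.0) -> np.ndarray:
    b, a = butter(4, cutoff / (fs / 2))           # low-pass pre-filter
    y = lfilter(b, a, x)
    # local maxima: sample strictly larger than both neighbors
    idx = np.flatnonzero((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) + 1
    periods = np.diff(idx) / fs                   # seconds between maxima
    return 1.0 / periods                          # running F0 estimates over time
```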

Enhancing Korean Alphabet Unit Speech Recognition with Neural Network-Based Alphabet Merging Methodology (한국어 자모단위 음성인식 결과 후보정을 위한 신경망 기반 자모 병합 방법론)

  • Solee Im;Wonjun Lee;Gary Geunbae Lee;Yunsu Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.659-663 / 2023
  • To improve Korean speech recognition performance, this paper organizes the recognition process into two stages: a jamo-level (consonant/vowel) speech recognition model and a neural-network-based jamo merging model. Because Korean is a compositional script, about 2,900 syllable units are needed for speech recognition; this degrades recognition performance on syllables that rarely appear in the training set and raises training cost. To mitigate this, recognition can instead be performed over 51 jamo units (ㄱ-ㅎ, ㅏ-ㅞ), after which the jamo-level results are merged into syllable-level Hangul [1]. Rule-based merging of the jamo-level output is possible by considering initial, medial, and final jamo, but if the recognition output contains misrecognized jamo, the final merged result contains errors. To solve this, we propose a neural-network-based jamo merging model, which converts separated jamo-level input into complete Hangul sentences and, in doing so, can also correct misrecognized jamo into the proper Hangul sentence. Experiments were conducted on the Korean speech corpus KsponSpeech, with Wav2Vec2.0 as the speech recognition model. Compared with the existing rule-based merging method, the proposed merging model achieved relative improvements of 17.2 % in character error rate (CER) and 13.1 % in word error rate (WER).
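The rule-based merging baseline that the paper compares against reduces to the standard Unicode Hangul composition formula; a sketch follows (the neural merging model replaces this step so that misrecognized jamo can also be repaired).

```python
# Rule-based jamo merging: combine initial/medial/final jamo indices
# into a precomposed Hangul syllable via the Unicode formula
#   code = 0xAC00 + (cho * 21 + jung) * 28 + jong
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def compose(cho: str, jung: str, jong: str = "") -> str:
    code = 0xAC00 + (CHO.index(cho) * 21 + JUNG.index(jung)) * 28 + JONG.index(jong)
    return chr(code)

print(compose("ㅎ", "ㅏ", "ㄴ"))  # -> "한"
```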
