• Title/Summary/Keyword: Spectrogram


Music and Voice Separation Using Log-Spectral Amplitude Estimator Based on Kernel Spectrogram Models Backfitting (커널 스펙트럼 모델 backfitting 기반의 로그 스펙트럼 진폭 추정을 적용한 배경음과 보컬음 분리)

  • Lee, Jun-Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.227-233 / 2015
  • In this paper, we propose music and voice separation using kernel spectrogram model backfitting based on a log-spectral amplitude estimator. The existing method separates sources from an estimate of the desired source obtained with a Wiener filter designed under the MSE (Mean Square Error) criterion. By applying the log-spectral amplitude estimator instead of the MSE criterion used in the existing method, we obtain cleaner separated music and voice signals. Experimental results reveal that the proposed method outperforms the existing methods.
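The two gain functions contrasted in this abstract can be sketched in a few lines of numpy. This is a generic illustration of the MSE-optimal (Wiener) gain versus the standard Ephraim-Malah log-spectral amplitude gain, not the paper's kernel backfitting implementation; variable names and the decision-directed example are ours:

```python
import numpy as np
from scipy.special import exp1

def wiener_gain(xi):
    """Wiener (MSE-optimal) gain for a priori SNR xi."""
    return xi / (1.0 + xi)

def lsa_gain(xi, gamma):
    """Log-spectral amplitude (LSA) gain, Ephraim-Malah form.

    xi: a priori SNR, gamma: a posteriori SNR per time-frequency bin.
    G = xi/(1+xi) * exp(0.5 * E1(v)), with v = xi/(1+xi) * gamma.
    """
    v = xi / (1.0 + xi) * gamma
    return xi / (1.0 + xi) * np.exp(0.5 * exp1(v))

# Example: three SNR regimes, from weak to strong bins.
xi = np.array([0.1, 1.0, 10.0])
gamma = xi + 1.0  # a common simplifying approximation
gw = wiener_gain(xi)
gl = lsa_gain(xi, gamma)
```

Since exp(0.5·E1(v)) ≥ 1, the LSA gain suppresses low-SNR bins less aggressively than the Wiener gain, which is one reason LSA-based separation tends to sound less distorted.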

Footstep Detection and Classification Algorithms based Seismic Sensor (진동센서 기반 걸음걸이 검출 및 분류 알고리즘)

  • Kang, Youn Joung;Lee, Jaeil;Bea, Jinho;Lee, Chong Hyun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.1 / pp.162-172 / 2015
  • In this paper, we propose an adaptive footstep detection algorithm and an algorithm for classifying the activities of the detected footsteps. The proposed algorithm can detect and classify whole-body movement as well as individual and irregular activities, since it does not rely on the continuous footstep signals used by most previous research. For classifying movement, we use feature vectors obtained from the FFT frequency spectrum, the CWT, an AR model, and AR spectrogram images. With an SVM classifier, we obtain over 90% classification accuracy for single-footstep activities when feature vectors from AR spectrogram images are used.
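The AR-model features named above are conventionally estimated with the Yule-Walker equations. A minimal numpy sketch of that step, under our own assumptions (the paper's exact model order and feature pipeline are not given):

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients of signal x via Yule-Walker.

    Solves R a = r, where R is the Toeplitz autocorrelation matrix,
    so that x[n] ~ sum_k a[k] * x[n-1-k].
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Example: synthetic AR(2) process x[n] = 1.5 x[n-1] - 0.7 x[n-2] + noise
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + e[n]
a = yule_walker(x, 2)  # estimate should be close to [1.5, -0.7]
```

An "AR spectrogram" image is then built by sliding a window along the seismic signal and stacking the AR spectra of successive frames.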

Development of Speech-Language Therapy Program kMIT for Aphasic Patients Following Brain Injury and Its Clinical Effects (뇌 손상 후 실어증 환자의 언어치료 프로그램 kMIT의 개발 및 임상적 효과)

  • Kim, Hyun-Gi;Kim, Yun-Hee;Ko, Myoung-Hwan;Park, Jong-Ho;Kim, Sun-Sook
    • Speech Sciences / v.9 no.4 / pp.237-252 / 2002
  • Melodic Intonation Therapy (MIT) has been applied to nonfluent aphasic patients on the basis of hemispheric lateralization. However, its application to other languages requires adaptation because of prosodic and rhythmic differences. The purpose of this study is to develop a Korean Melodic Intonation Therapy (kMIT) program running on a personal computer and to examine its clinical effects on nonfluent aphasic patients. The algorithm comprises analog voice signal acquisition, PCM, the AMDF, the short-time autocorrelation function, and center clipping. The main menu presents pitch, waveform, sound intensity, and speech files in a window, and aphasic patients' intonation patterns are overlaid on selected kMIT patterns. Three aphasic patients, with or without kMIT training, participated in this study. Four affirmative sentences and two interrogative sentences were uttered on CSL under stimulus from the speech therapist (ST). VOT, VD, Hold, and TD were measured on the spectrogram, and articulation disorders and intonation patterns were also evaluated objectively on the spectrogram. The results indicated that the nonfluent aphasic patients in the kMIT training group showed clinical improvement in speech intelligibility, based on VOT and TD values, articulation evaluation, and prosodic pattern changes.
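The AMDF step named in the algorithm can be illustrated with a short pitch-estimation sketch. This is a simplified, generic version (function names and parameter ranges are ours, not kMIT's):

```python
import numpy as np

def amdf_pitch(frame, fs, fmin=70.0, fmax=400.0):
    """Estimate the pitch (Hz) of a voiced frame with the AMDF.

    AMDF(k) = mean(|x[n] - x[n+k]|); the lag that minimizes it over
    the plausible pitch range gives the fundamental period.
    """
    lo, hi = int(fs / fmax), int(fs / fmin)
    amdf = np.array([np.mean(np.abs(frame[:-k] - frame[k:]))
                     for k in range(lo, hi + 1)])
    return fs / (lo + int(np.argmin(amdf)))

# Example: a 200 Hz sine sampled at 16 kHz, 40 ms frame
fs = 16000
t = np.arange(0, 0.04, 1 / fs)
f0 = amdf_pitch(np.sin(2 * np.pi * 200 * t), fs)
```

Center clipping, also listed in the algorithm, is typically applied before this step to flatten the valleys and sharpen the AMDF minimum.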


A Comparative Study of Western Singer's Voice and a Pansori Singer's Voice Based on Glottal Image and Acoustic Characteristics (성대형태 및 음향발현에서 성악 발성 및 판소리 발성의 비교 연구)

  • Kim, Sun-Sook
    • Speech Sciences / v.11 no.2 / pp.165-177 / 2004
  • Western singers' voices have been studied in music science since the early 20th century; however, Korean traditional singers' voices have not yet been studied scientifically. This study aims to identify the physiological and acoustic characteristics of Pansori singers' voices, with Western singers participating for comparison. Ten Western singers and ten Pansori singers took part in this study. The subjects spoke and sang seven simple vowels /a, e, i, o, u, c, w/. Glottal images were analyzed with Scope View, and the acoustic characteristics of the speech and singing voices were analyzed with CSL. The results are as follows: (1) Glottal gestures of Pansori singers showed asymmetric vocal folds. (2) The sung vowel formants of Pansori singers showed breathiness on the spectrogram. (3) The singer's formant of the Western singers appeared around 3 kHz, whereas the Pansori singers' formant appeared in a lower frequency region. Vibrato modulation was about 6 cycles per second for the Western singers, while the Pansori singers showed no deep vibrato modulation on the spectrogram.


The Study of Tonsil Affected Voice Quality after Tonsillectomy (편도적출술로 음성변화가 올 수 있는 편도 상태에 관한 연구)

  • 안철민;정덕희
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.9 no.1 / pp.32-37 / 1998
  • Tonsillectomy is one of the most commonly performed operations in the otolaryngology field. It can produce many changes, including in vocal range, tone, voice quality, and resonance, and some patients suffer from such voice problems after tonsillectomy. However, these problems have been little studied until now. We therefore investigated the anatomical findings that affect voice quality when tonsillectomy is performed. Using transnasal fiberscopy, we divided patients into two groups, one showing a normal pharyngeal space and the other showing tonsils bulging medially into the pharyngeal cavity, and evaluated their voices with perceptual evaluation, nasalance score, nasality, oral formant, and nasal formant. We used a computerized speech analysis system, the nasometer, and the spectrogram in the CSL program. We found no differences in perceptual evaluation between the two groups, but the objective measures did differ: in Group 2, nasalance score and nasality on the nasometric analysis increased significantly after tonsillectomy, and the oral formant on the spectrogram changed significantly. The authors conclude that tonsils bulging medially into the pharynx, evaluated through the nasal cavity by fiberscopy, can affect voice quality after tonsillectomy, and that this evaluation is especially important in singers.
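The nasalance score used above is conventionally the ratio of nasal-channel energy to total (nasal + oral) energy, expressed as a percentage. A schematic numpy sketch of that definition, not the nasometer's exact algorithm:

```python
import numpy as np

def nasalance(nasal, oral):
    """Nasalance score (%): nasal energy over total energy.

    `nasal` and `oral` are simultaneously recorded signals from the
    nasometer's nasal and oral microphone channels.
    """
    en = np.sum(np.square(nasal, dtype=float))
    eo = np.sum(np.square(oral, dtype=float))
    return 100.0 * en / (en + eo)

# Example: equal energy in both channels gives a score of 50%.
score = nasalance(np.ones(100), np.ones(100))
```

In practice the score is computed per frame over a spoken passage and averaged, so that oral and nasal segments both contribute.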


An Experimental Study of Korean Dialectal Speech (한국어 방언 음성의 실험적 연구)

  • Kim, Hyun-Gi;Choi, Young-Sook;Kim, Deok-Su
    • Speech Sciences / v.13 no.3 / pp.49-65 / 2006
  • Recently, advances in digital speech signal processing have drastically expanded the communication boundary between human beings and machines. The aim of this study is to collect dialectal speech in Korea on a large scale and to establish a digital speech database as a basis for further research on Korean dialects and for the creation of value-added networks. 528 informants across the country participated in this study. The acoustic characteristics of vowels and consonants were analyzed using the power spectrum and spectrogram of CSL. Test words were presented on picture cards and letter cards containing each vowel and each consonant in word-initial position. Formants were plotted on a vowel chart, and diphthong transitions were compared across dialects. Spectral times, VOT, VD, and TD were measured on a spectrogram for stop consonants, and fricative frequency, intensity, and lateral formants (LF1, LF2, LF3) for fricative consonants. Nasal formants (NF1, NF2, NF3) were analyzed for the different nasalities of nasal consonants. The acoustic characteristics of dialectal speech showed that young-generation speakers did not distinguish between close-mid /e/ and open-mid /ɛ/. The diphthongs /we/ and /wj/ were realized as simple vowels or diphthongs depending on the dialect. The sibilant /s/ showed aspiration preceding the fricative noise. The lateral /l/ was realized as the variant /r/ in Kyungsang dialectal speech. The durations of nasal consonants in Chungchong dialectal speech were the longest among the dialects.


Implementation of Cough Detection System Using IoT Sensor in Respirator

  • Shin, Woochang
    • International journal of advanced smart convergence / v.9 no.4 / pp.132-138 / 2020
  • Worldwide, the number of coronavirus disease 2019 (COVID-19) confirmed cases is rapidly increasing. Although vaccines and treatments for COVID-19 are being developed, the disease is unlikely to disappear completely. By attaching a smart sensor to the respirator worn by medical staff, Internet of Things (IoT) technology and artificial intelligence (AI) technology can be used to automatically detect the staff's infection symptoms. When medical staff show symptoms of the disease, appropriate medical treatment can be provided to protect them from greater risk. In this study, we design and develop a system that detects coughing, a typical symptom of respiratory infectious diseases, by applying IoT and AI technology to a respirator. Because the cough sound is distorted inside the respirator, accuracy is hard to guarantee for an AI model trained on ordinary cough sounds. Therefore, coughing and non-coughing sounds were recorded using a sensor attached to a respirator, and AI models were trained and their performance evaluated on these data. Mel-spectrogram conversion was used to classify the sound data efficiently, and the developed cough recognition system achieved a sensitivity of 95.12%, a specificity of 100%, and an overall accuracy of 97.94%.
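The Mel-spectrogram conversion mentioned above rests on a triangular mel filterbank applied to FFT power spectra. A minimal numpy sketch using the standard HTK mel-scale formula, mel = 2595·log10(1 + f/700); the parameter values are illustrative, not the authors':

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs):
    """Triangular filterbank mapping an FFT power spectrum
    (n_fft//2 + 1 bins) onto n_mels mel bands."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):             # rising edge of the triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):             # falling edge
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

fb = mel_filterbank(n_mels=40, n_fft=512, fs=16000)
```

Multiplying this matrix by each frame's power spectrum (and taking the log) yields the log mel-spectrogram image fed to the classifier.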

Proposal of a new method for learning of diesel generator sounds and detecting abnormal sounds using an unsupervised deep learning algorithm

  • Hweon-Ki Jo;Song-Hyun Kim;Chang-Lak Kim
    • Nuclear Engineering and Technology / v.55 no.2 / pp.506-515 / 2023
  • This study seeks a method for learning the post-start-up engine sound of a diesel generator installed in a nuclear power plant with an unsupervised deep learning algorithm (a CNN autoencoder), and a new method for predicting diesel generator failure based on it. To learn the sound of a diesel generator with a deep learning algorithm, sound data recorded before and after the start-up of two diesel generators were used. The 20-min and 2-h recordings were cut into 7-s segments, and each segment was converted into a spectrogram image, yielding 1200 and 7200 spectrogram images, respectively. Using two different deep learning algorithms (a CNN autoencoder and binary classification), we investigated whether the post-start sounds of the diesel generators were learned as normal. The post-start sounds could be accurately classified as normal and the pre-start sounds as abnormal. It was also confirmed that the deep learning algorithm could detect virtual abnormal sounds created by mixing unusual sounds into the post-start sounds. This study showed that the unsupervised anomaly detection algorithm achieved roughly 3% higher accuracy than the binary classification algorithm.
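The reconstruction-error logic behind autoencoder anomaly detection can be sketched with a linear (PCA) stand-in for the CNN autoencoder: train on normal data only, then flag inputs whose reconstruction error exceeds a threshold. This is a simplified analogue under synthetic data, not the paper's model:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Linear stand-in for an autoencoder: the top-k PCA subspace.

    A real system would train a convolutional encoder/decoder on
    spectrogram images; the encode/decode/error logic is the same.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, V):
    Z = (X - mu) @ V.T          # encode
    Xh = Z @ V + mu             # decode
    return np.mean((X - Xh) ** 2, axis=1)

rng = np.random.default_rng(1)
# "Normal" data lies on a 10-dim subspace of a 30-dim feature space.
normal = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 30))
mu, V = fit_linear_autoencoder(normal, k=10)
threshold = reconstruction_error(normal, mu, V).max()
# "Abnormal" data does not fit the learned subspace.
anomaly = rng.standard_normal((5, 30)) * 3.0
flags = reconstruction_error(anomaly, mu, V) > threshold
```

The key property, shared with the CNN autoencoder, is that only normal sounds are needed for training; anything the model cannot reconstruct well is flagged.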

Preprocessing performance of convolutional neural networks according to characteristic of underwater targets (수중 표적 분류를 위한 합성곱 신경망의 전처리 성능 비교)

  • Kyung-Min, Park;Dooyoung, Kim
    • The Journal of the Acoustical Society of Korea / v.41 no.6 / pp.629-636 / 2022
  • We present a preprocessing method for an underwater target detection model based on a convolutional neural network. The acoustic signature of a ship is expressed ambiguously because of the strong signal power at low frequencies. To address this problem, we combine various feature scaling methods with spectrogram methods in the feature preprocessing. We define a simple convolutional neural network model and train it to measure preprocessing performance. Through experiments, we found that combining the log Mel-spectrogram with standardization and robust scaling gave the best classification performance.
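The standardization and robust scaling steps named above can be sketched directly in numpy. This is a generic illustration; the axis convention (scaling each mel band across frames) and epsilon values are our assumptions, not the authors' code:

```python
import numpy as np

def standardize(S):
    """Per-band standardization: zero mean, unit variance per row."""
    return (S - S.mean(axis=1, keepdims=True)) / (S.std(axis=1, keepdims=True) + 1e-8)

def robust_scale(S):
    """Per-band robust scaling: subtract the median, divide by the IQR,
    so that strong low-frequency outliers dominate less."""
    med = np.median(S, axis=1, keepdims=True)
    q1, q3 = np.percentile(S, [25, 75], axis=1, keepdims=True)
    return (S - med) / (q3 - q1 + 1e-8)

# Example: a synthetic log mel-spectrogram (40 mel bands x 100 frames)
rng = np.random.default_rng(0)
S = rng.standard_normal((40, 100)) * 5.0 - 30.0
Z = standardize(S)
R = robust_scale(S)
```

Robust scaling uses the median and interquartile range instead of mean and variance, which is exactly what helps when a few low-frequency bins carry disproportionate power.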

Infant cry recognition using a deep transfer learning method (딥 트랜스퍼 러닝 기반의 아기 울음소리 식별)

  • Bo, Zhao;Lee, Jonguk;Atif, Othmane;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.971-974 / 2020
  • Infants express their physical and emotional needs to the outside world mainly through crying. However, most parents find it challenging to understand the reason behind their babies' cries. Failure to correctly understand the cause of a baby's cry and take appropriate action can affect the cognitive and motor development of newborns undergoing rapid brain development. In this paper, we propose an infant cry recognition system based on deep transfer learning to help parents identify crying babies' needs the way a specialist would. The proposed system transforms the waveform of the cry signal into a log-mel spectrogram, then uses the VGGish model, pre-trained on AudioSet, to extract a 128-dimensional feature vector from the spectrogram. Finally, a softmax function classifies the extracted feature vector to recognize the corresponding type of cry. The experimental results show that our method achieves good performance, exceeding 0.96 in precision, recall, and F1-score.
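The final classification stage described above can be sketched as a linear layer plus softmax over the 128-dimensional embedding. The weights here are random and the cry-type labels are hypothetical placeholders, not the paper's trained model or class set:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(features, W, b, labels):
    """Map a 128-d embedding to a cry-type label via linear layer + softmax."""
    probs = softmax(features @ W + b)
    return labels[int(np.argmax(probs))], probs

# Hypothetical setup: 128-d VGGish-style embedding, 4 cry types
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 4)) * 0.01
b = np.zeros(4)
labels = ["hungry", "pain", "sleepy", "discomfort"]
label, probs = classify(rng.standard_normal(128), W, b, labels)
```

In the transfer-learning setup, only this small head is trained on cry data; the VGGish feature extractor stays frozen.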