• Title/Summary/Keyword: Spectrogram

A study on data augmentation methods for sound data classification (소리 데이터 분류에 대한 데이터 증대 방법 연구)

  • Chang, Il-Sik; Park, Goo-man
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1308-1310 / 2022
  • Sound data classification is being studied in various directions, from the classification of simple sounds to emotion recognition. Data augmentation is important as a way to mitigate data scarcity and overfitting in deep neural networks. In this paper, three sound datasets (UrbanSound8K, RAVDESS, IRMAS) are used; the sound data pass through a mel-spectrogram conversion step before being fed into the networks. The input signals are trained with several neural network architectures (Bidirectional LSTM, Bidirectional LSTM with Attention, Multi-Head Attention, CNN), and the classification accuracy before and after data augmentation is measured for each network. Comparing the results of the data augmentation methods across diverse datasets and network architectures should yield insight into their effectiveness.

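Mel-spectrogram inputs like those above are typically augmented before training. As a minimal sketch, assuming SpecAugment-style time/frequency masking (only one of many augmentation options, and not necessarily the method compared in the paper; all names and mask widths are illustrative):

```python
import numpy as np

def spec_augment(mel, num_freq_masks=1, num_time_masks=1,
                 max_freq_width=8, max_time_width=20, rng=None):
    """Zero out random frequency and time bands of a mel spectrogram
    (SpecAugment-style masking; the widths here are illustrative)."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = mel.copy()
    n_mels, n_frames = out.shape
    for _ in range(num_freq_masks):
        w = int(rng.integers(1, max_freq_width + 1))
        f0 = int(rng.integers(0, n_mels - w + 1))
        out[f0:f0 + w, :] = 0.0          # frequency mask
    for _ in range(num_time_masks):
        w = int(rng.integers(1, max_time_width + 1))
        t0 = int(rng.integers(0, n_frames - w + 1))
        out[:, t0:t0 + w] = 0.0          # time mask
    return out

mel = np.random.default_rng(1).random((64, 128))  # stand-in 64-mel, 128-frame input
aug = spec_augment(mel)
```

The augmented copy keeps the input shape, so it can be fed to the same network as the original.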

A Multi-band Loss Function for Improving Time-Domain Autoencoder (시간 영역 오토인코더의 성능 개선을 위한 다중 대역 손실 함수)

  • Lim, Yujin; Yu, Jeongchan; Seo, Eunmi; Park, Hochong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.78-79 / 2021
  • This paper proposes a multi-band loss function for improving the performance of a time-domain autoencoder. Conventional compression-and-reconstruction models based on time-domain autoencoders concentrate the loss in the low band, fail to generate high-band signals, and effectively output a down-sampled signal. To solve this, we propose a multi-band loss function that separates the loss by frequency band so that the weight of each band can be adjusted. After training an autoencoder with the proposed loss function on speech signals, we confirmed via spectrograms that no down-sampling occurs and the high-band signal is reconstructed.

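The idea of splitting the reconstruction loss by band and reweighting it can be sketched on magnitude spectra as follows (a toy version with hypothetical band edges and weights; the paper's exact formulation may differ):

```python
import numpy as np

def multiband_loss(x, y, band_edges, weights, n_fft=512):
    """Weighted sum of per-band spectral MSE between reference x and
    reconstruction y. band_edges are rfft-bin indices; the two-band
    split and the weights below are illustrative, not the paper's."""
    X = np.abs(np.fft.rfft(x, n_fft))
    Y = np.abs(np.fft.rfft(y, n_fft))
    loss = 0.0
    for (lo, hi), w in zip(zip(band_edges[:-1], band_edges[1:]), weights):
        loss += w * np.mean((X[lo:hi] - Y[lo:hi]) ** 2)
    return loss

t = np.linspace(0, 1, 512, endpoint=False)
ref = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
rec = np.sin(2 * np.pi * 50 * t)       # low band reconstructed, high band missing
edges = [0, 64, 257]                   # low band / high band (257 rfft bins)
low_heavy  = multiband_loss(ref, rec, edges, [1.0, 0.1])
high_heavy = multiband_loss(ref, rec, edges, [1.0, 2.0])
```

Because the reconstruction error here sits entirely in the high band, raising the high-band weight raises the loss, which is exactly the lever the paper uses to stop the model from settling for a down-sampled output.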

Listenable Explanation for Heatmap in Acoustic Scene Classification (음향 장면 분류에서 히트맵 청취 분석)

  • Suh, Sangwon; Park, Sooyoung; Jeong, Youngho; Lee, Taejin
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.727-731 / 2020
  • Analyzing the cause of a neural network's prediction is necessary for trusting the model. In computer vision, model interpretation methods have been proposed that visualize, as saliency maps or heatmaps, the evidence on which a model based its prediction. In audio, however, visual interpretation on a spectrogram is not intuitive, and it is hard to understand which actual sounds drove the decision. This study therefore proposes a listening-based analysis system for heatmaps, and conducts heatmap listening experiments on an acoustic scene classification model to examine whether human-understandable explanations of the network's predictions can be provided.

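One simple way to make a heatmap "listenable" (an illustrative sketch of the general idea, not the authors' system) is to use it as a soft mask on the STFT and invert, so that only the highlighted time-frequency regions remain audible:

```python
import numpy as np
from scipy.signal import stft, istft

def listen_to_heatmap(audio, heatmap, fs, nperseg=256):
    """Resynthesize only the time-frequency regions highlighted by a
    relevance heatmap: mask the STFT with the normalized heatmap and
    invert it, keeping the original phase."""
    f, t, Z = stft(audio, fs=fs, nperseg=nperseg)
    h = heatmap / max(heatmap.max(), 1e-12)       # normalize to [0, 1]
    _, y = istft(Z * h, fs=fs, nperseg=nperseg)
    return y

fs = 8000
x = np.random.default_rng(0).standard_normal(fs)  # 1 s of noise as stand-in audio
f, t, Z = stft(x, fs=fs, nperseg=256)
hm = np.zeros(Z.shape)
hm[:32, :] = 1.0                                  # pretend "relevant" = lowest bins
y = listen_to_heatmap(x, hm, fs)
```

Playing `y` back then lets a listener judge whether the region the model attended to actually contains the sound event in question.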

Arrhythmia classification based on meta-transfer learning using 2D-CNN model (2D-CNN 모델을 이용한 메타-전이학습 기반 부정맥 분류)

  • Kim, Ahyun; Yeom, Sunhwoong; Kim, Kyungbaek
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.550-552 / 2022
  • As Internet of Things (IoT) devices have become widespread, long-term monitoring and data collection in wearable environments has become possible, and research on biosignal processing and ECG analysis has grown active. However, ECG data suffer from class imbalance due to the irregular occurrence of arrhythmia beats, from low signal quality caused by noise such as muscle tremor and weak signals, and from the small size of public training datasets. This paper converts the 1-D ECG signal into a 2-D spectrogram image to minimize the influence of noise, and combines the advantages of transfer learning and meta learning to enable fast learning despite class imbalance and scarce data. Accordingly, this paper proposes a 2D-CNN meta-transfer-learning-based arrhythmia classification method using ECG spectrogram images.
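The 1-D-to-2-D conversion step can be sketched as follows; the parameters are illustrative (fs=360 Hz matches common public ECG recordings such as MIT-BIH, but the paper's actual settings are not given here):

```python
import numpy as np
from scipy.signal import spectrogram

def ecg_to_spectrogram_image(ecg, fs=360, nperseg=64, noverlap=48):
    """Turn a 1-D ECG segment into a 2-D log-spectrogram "image" for a
    2D-CNN. Window length and overlap are illustrative choices."""
    f, t, Sxx = spectrogram(ecg, fs=fs, nperseg=nperseg, noverlap=noverlap)
    img = 10 * np.log10(Sxx + 1e-12)                   # log scale, avoid log(0)
    img = (img - img.min()) / (img.max() - img.min())  # min-max normalize to [0, 1]
    return img

rng = np.random.default_rng(0)
# stand-in 2 s "ECG": a slow oscillation plus noise, 360 Hz sampling
beat = np.sin(2 * np.pi * 1.2 * np.arange(720) / 360) + 0.05 * rng.standard_normal(720)
img = ecg_to_spectrogram_image(beat)
```

The resulting normalized 2-D array can be fed to a 2D-CNN exactly like a grayscale image.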

Orthographic Influence in the Perception and Production of English Intervocalic Consonants: A Pilot Study (영어 모음사이 자음의 인지와 발화에서 철자의 영향: 파일럿 연구)

  • Cho, Mi-Hui; Chung, Ju-Yeon
    • The Journal of the Korea Contents Association / v.9 no.12 / pp.459-466 / 2009
  • While Korean allows the same consonant at the coda of one syllable and the onset of the following syllable, English does not allow geminate consonants in the same intervocalic position. Due to this difference between Korean and English, Korean learners of English tend to incorrectly produce geminate consonants for English geminate graphemes, as in "summer" with its doubleton ⟨mm⟩. Based on this observation, a pilot study was designed to investigate how Korean learners of English perceive and produce English doubleton and singleton graphemes. Twenty Korean college students performed a forced-choice perception test and a production test on 36 real-word stimuli consisting of (near-)minimal pairs of singleton and doubleton graphemes. The results showed that accuracy rates for words with singleton graphemes were higher than for words with doubleton graphemes in both perception and production, because the subjects misperceived and misproduced the doubleton graphemes as geminates under orthographic influence. In addition, the low error rates for words with voiced stops were accounted for by transfer from Korean. Further, spectrographic analyses showed more production errors in doubleton-grapheme words than in singleton-grapheme words. Finally, pedagogical implications are provided.

Differentiation of Adductor-Type Spasmodic Dysphonia from Muscle Tension Dysphonia Using Spectrogram (스펙트로그램을 이용한 내전형 연축성 발성 장애와 근긴장성 발성 장애의 감별)

  • Noh, Seung Ho; Kim, So Yean; Cho, Jae Kyung; Lee, Sang Hyuk; Jin, Sung Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.28 no.2 / pp.100-105 / 2017
  • Background and Objectives: Adductor-type spasmodic dysphonia (ADSD) is a neurogenic disorder and a focal laryngeal dystonia, while muscle tension dysphonia (MTD) is a functional voice disorder. Both ADSD and MTD may involve excessive supraglottic contraction and compensation, resulting in a strained voice quality with spastic voice breaks. The aim of this study was to determine the utility of spectrogram analysis in differentiating ADSD from MTD. Materials and Methods: From 2015 through 2017, 17 patients with ADSD and 20 with MTD who underwent acoustic recording and phonatory function studies were enrolled. Jitter (frequency perturbation) and shimmer (amplitude perturbation) were obtained using MDVP (Multi-Dimensional Voice Program), and the GRBAS scale was used for perceptual evaluation. Two speech therapists blindly evaluated a wide-band (11,250 Hz) spectrogram using 4-point (0-3) scales for four spectral findings: abrupt voice breaks, irregular wide-spaced vertical striations, well-defined formants, and high-frequency spectral noise. Results: Jitter, shimmer, and GRBAS scores did not differ significantly between the two groups (p>0.05). Abrupt voice breaks and irregular wide-spaced vertical striations were significantly more prominent in ADSD than in MTD (p<0.01), while high-frequency spectral noise was significantly higher in MTD than in ADSD (p<0.01). Well-defined formants did not differ between the two groups. Conclusion: Wide-band spectrograms provide visual-perceptual information that can differentiate ADSD from MTD. Spectrogram analysis is a useful diagnostic tool for differentiating ADSD from MTD where perceptual analysis and clinical evaluation alone are insufficient.

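Jitter and shimmer, as reported by tools like MDVP, quantify cycle-to-cycle perturbation of pitch period and peak amplitude. A minimal sketch of one common "local" definition (the exact MDVP formulas may differ):

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter (%) and shimmer (%) from consecutive pitch periods
    and peak amplitudes: mean absolute difference between neighbouring
    cycles, relative to the overall mean."""
    p = np.asarray(periods, dtype=float)
    a = np.asarray(amplitudes, dtype=float)
    jitter = np.mean(np.abs(np.diff(p))) / np.mean(p) * 100
    shimmer = np.mean(np.abs(np.diff(a))) / np.mean(a) * 100
    return jitter, shimmer

# a perfectly periodic voice gives zero jitter/shimmer
j0, s0 = jitter_shimmer([5.0, 5.0, 5.0], [1.0, 1.0, 1.0])
# perturbed periods/amplitudes give nonzero values
j1, s1 = jitter_shimmer([5.0, 5.2, 4.9], [1.0, 0.9, 1.05])
```

Since the study found these perturbation measures insufficient to separate ADSD from MTD, the spectrogram ratings above carry the diagnostic weight.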

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.509-514 / 2020
  • A conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic-feature generation by an acoustic model, and speech generation. A deep-learning TTS system, in contrast, is composed of a Text2Mel process that generates a spectrogram from text and a vocoder that synthesizes speech signals from the spectrogram. In this paper, toward an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process, and as vocoders we introduce WaveNet, WaveRNN, and WaveGlow, implementing them to verify and compare their performance. Experimental results show that WaveNet has the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet with a model size of several tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several gigabytes in size and its MOS is the worst of the three vocoders. From these results, this paper presents reference criteria for selecting the appropriate method according to the hardware environment when deploying a TTS system.

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.640-645 / 2021
  • A deep-learning-based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes speech signals from the spectrogram. With deep learning, the intelligibility and naturalness of synthesized speech have improved to near-human levels, but inference for speech synthesis is very slow compared to conventional methods. Inference speed can be improved by applying non-autoregressive methods, which generate speech samples in parallel, independently of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as non-autoregressive vocoder technologies, and implement them to verify whether they can run in real time. The measured real-time factors (RTF) show that all presented methods are fully capable of real-time processing. Moreover, the trained models are about tens to hundreds of megabytes in size, except for WaveGlow, so they can be deployed in embedded environments where memory is limited.
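The real-time criterion above is the real-time factor (RTF): synthesis time divided by the duration of the generated audio, with RTF < 1 meaning faster than real time. A minimal sketch with a placeholder synthesizer standing in for a Text2Mel + vocoder pipeline:

```python
import time

def real_time_factor(synthesize, text, sample_rate):
    """RTF = wall-clock synthesis time / duration of the produced audio.
    `synthesize` is any callable returning a sequence of samples."""
    start = time.perf_counter()
    samples = synthesize(text)
    elapsed = time.perf_counter() - start
    return elapsed / (len(samples) / sample_rate)

def toy_tts(text):
    # placeholder synthesizer: 0.1 s of silence per character at 22.05 kHz
    return [0.0] * (2205 * len(text))

rtf = real_time_factor(toy_tts, "hello", 22050)
```

Because the toy synthesizer returns almost instantly, its RTF is far below 1; a real vocoder is benchmarked the same way on actual inference.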

Design and Implementation of BNN based Human Identification and Motion Classification System Using CW Radar (연속파 레이다를 활용한 이진 신경망 기반 사람 식별 및 동작 분류 시스템 설계 및 구현)

  • Kim, Kyeong-min; Kim, Seong-jin; NamKoong, Ho-jung; Jung, Yun-ho
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.211-218 / 2022
  • Continuous-wave (CW) radar has advantages in reliability and accuracy over other sensors such as cameras and lidar. In addition, a binarized neural network (BNN) dramatically reduces memory usage and complexity compared to other deep-learning networks. This paper therefore proposes a BNN-based human identification and motion classification system using CW radar. After a signal is received from the CW radar, a spectrogram is generated through a short-time Fourier transform (STFT). Based on this spectrogram, we propose an algorithm that detects whether a person is approaching the radar. We also designed an optimized BNN model that achieves 90.0% accuracy for human identification and 98.3% for motion classification. To accelerate BNN operation, we designed a BNN hardware accelerator on a field-programmable gate array (FPGA). The accelerator was implemented with 1,030 logic elements, 836 registers, and 334.904 Kbit of block memory, and real-time operation was confirmed with a total computation time of 6 ms from inference to transferring the result.
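The core BNN property such accelerators exploit is that a dot product of {-1, +1} vectors reduces to XNOR plus popcount. A small unpacked sketch (real hardware packs bits into words; the function names are illustrative, not from the paper):

```python
import numpy as np

def binarize(x):
    """Sign binarization to {-1, +1}, as used in BNNs (zero maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bin, w_bin):
    """Dot product of two {-1,+1} vectors via XNOR + popcount:
    dot = 2 * popcount(XNOR(a, w)) - n."""
    a = a_bin > 0                    # map {-1,+1} -> {0,1} bits
    w = w_bin > 0
    matches = np.sum(~(a ^ w))       # XNOR, then popcount of matching bits
    return 2 * int(matches) - len(a_bin)

rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(16))
w = binarize(rng.standard_normal(16))
assert xnor_popcount_dot(x, w) == int(np.dot(x.astype(int), w.astype(int)))
```

Replacing multiply-accumulate with XNOR and popcount is what makes the FPGA implementation so small and fast relative to a full-precision network.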

A Comparative Study of the Speech Signal Parameters for the Consonants of Pyongyang and Seoul Dialects - Focused on "ㅅ/ㅆ" (평양 지역어와 서울 지역어의 자음에 대한 음성신호 파라미터들의 비교 연구 - "ㅅ/ ㅆ"을 중심으로)

  • So, Shin-Ae; Lee, Kang-Hee; You, Kwang-Bock; Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.6 / pp.927-937 / 2018
  • In this paper, a comparative study of the consonants of the Pyongyang and Seoul dialects of Korean is performed from a signal-processing perspective, which can serve as a basis for engineering applications. To date, most speech-signal studies have focused primarily on vowels, which play an important role in language evolution. In any language, however, the number of consonants is greater than the number of vowels, so research on consonants is also important. Building on the vowel studies of the Pyongyang dialect conducted with phonological and experimental phonetic methods, this paper studies the consonants with engineering methods. The alveolar consonants, which show many differences in phonetic value between the Pyongyang and Seoul dialects, were used as the experimental data. The major speech-signal parameters (formant frequencies, pitch, and spectrogram) were measured, and the phonetic values of the two dialects were compared with respect to Korean /시/ and /씨/. This study can serve as a basis for future voice recognition and voice synthesis.