• Title/Summary/Keyword: Spectrogram

Intonation Training System (Visual Analysis Tool) and the application of French Intonation for Korean Learners (컴퓨터를 이용한 억양 교육 프로그램 개발 : 프랑스어 억양 교육을 중심으로)

  • Yu, Chang-Kyu;Son, Mi-Ra;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.49-62
    • /
    • 1999
  • This study concerns the Visual Analysis Tool (VAT), an educational program for training foreign-language intonation on a personal computer. The VAT runs on an IBM-PC 386 compatible or higher and displays the spectrogram, waveform, intensity, and pitch contour. The system supports waveform zoom in/out and documentation of measured values; in this paper, the intensity and pitch-contour information were used. Twelve French sentences were recorded from a French conversation tape, and three Korean learners participated in the study. They spoke the twelve sentences repeatedly and tried to reproduce the native speaker's pitch contour by visually matching their own contour to it. The sentences were recorded again once the participants had become familiar with the intonation, intensity, and pauses. Before and after training, the native speaker's and the learners' productions were compared in terms of pitch contour (rising or falling), pitch value, energy, total sentence duration, and rhythmic-group boundaries. The results were as follows: 1) In declarative sentences, the native speaker's pitch contour falls at the end of the sentence, but the participants' contours were flat before training. 2) In interrogatives, the native speaker's pitch contour rose at the end of the sentence, except in wh-questions (qu'est-ce que), and the pitch value varied a great deal. In interrogative 'S + V' sentences, the pitch contour rose higher than in other sentence types and varied considerably. 3) In exclamatory sentences, the pitch contour was mountain-shaped, but the participants could not make it fall either before or after training.

  • PDF
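The core mechanism this abstract describes, extracting a pitch (F0) contour and comparing a learner's contour to a native speaker's, can be sketched in a few lines. This is an illustrative autocorrelation-based estimator, not the VAT's actual algorithm; the function names and the mean-absolute-difference metric are assumptions for illustration.

```python
import numpy as np

def estimate_f0(frame, sr, f_min=75.0, f_max=500.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / f_max)          # shortest plausible pitch period
    lag_max = int(sr / f_min)          # longest plausible pitch period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

def contour_distance(learner_f0, native_f0):
    """Mean absolute F0 difference (Hz) between two equal-length contours."""
    return float(np.mean(np.abs(np.asarray(learner_f0) - np.asarray(native_f0))))

# Synthetic check: a 220 Hz tone should yield an F0 estimate near 220 Hz.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr     # one 40 ms analysis frame
f0 = estimate_f0(np.sin(2 * np.pi * 220 * t), sr)
```

Running `estimate_f0` frame by frame over an utterance yields the kind of contour the learners visually matched against the native speaker's.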

Design of Area-efficient Feature Extractor for Security Surveillance Radar Systems (보안 감시용 레이다 시스템을 위한 면적-효율적인 특징점 추출기 설계)

  • Choi, Yeongung;Lim, Jaehyung;Kim, Geonwoo;Jung, Yunho
    • Journal of IKEEE
    • /
    • v.24 no.1
    • /
    • pp.200-207
    • /
    • 2020
  • In this paper, an area-efficient feature extractor for security surveillance radar systems is proposed, and FPGA-based implementation results are presented. To reduce memory requirements, only features extracted from the Doppler profile of each FFT window are used, while those extracted from the frame-length total spectrogram are excluded. The proposed feature extractor was designed in Verilog-HDL and implemented on a Xilinx Zynq-7000 FPGA device. Implementation results show that the proposed design reduces logic-slice and memory requirements by 58.3% and 98.3%, respectively, compared with existing research. In addition, a security surveillance radar system with the proposed feature extractor was implemented, and experiments to classify cars, bicycles, humans, and kickboards were performed. These experiments confirm a classification accuracy of 93.4%.
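The memory saving described above comes from computing features one Doppler profile (one FFT window) at a time instead of buffering a frame-length spectrogram. A minimal software sketch of that idea follows; the specific features (peak bin, spectral centroid, bandwidth, total power) are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def doppler_profile_features(profile):
    """Features from a single Doppler profile (one FFT window).

    Only one profile is held at a time, so no spectrogram buffer spanning
    the whole frame is needed.
    """
    mag = np.abs(profile)
    power = mag ** 2
    total = power.sum()
    bins = np.arange(len(mag))
    centroid = (bins * power).sum() / total
    bandwidth = np.sqrt(((bins - centroid) ** 2 * power).sum() / total)
    return {
        "peak_bin": int(np.argmax(mag)),
        "centroid": float(centroid),
        "bandwidth": float(bandwidth),
        "power": float(total),
    }

# One 128-point FFT window of a synthetic return with all energy at bin 10.
n = 128
x = np.exp(2j * np.pi * 10 * np.arange(n) / n)
feats = doppler_profile_features(np.fft.fft(x))
```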

Voice Conversion using Generative Adversarial Nets conditioned by Phonetic Posterior Grams (Phonetic Posterior Grams에 의해 조건화된 적대적 생성 신경망을 사용한 음성 변환 시스템)

  • Lim, Jin-su;Kang, Cheon-seong;Kim, Dong-Ha;Kim, Kyung-sup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.369-372
    • /
    • 2018
  • This paper proposes a non-parallel voice conversion network that converts between an unmapped pair of source and target voices. Conventional voice conversion studies used learning methods that minimize the spectrogram distance error; these approaches not only lose spectrogram resolution by averaging pixels, but also rely on parallel data, which is hard to collect. This research uses Phonetic Posterior Grams (PPGs), which encode the phonetic content of the input voice, together with a GAN learning method to generate clearer voices. To evaluate the proposed method, we conducted a MOS test against a GMM-based model and found that performance improved compared to the conventional methods.

  • PDF

Principal component analysis based frequency-time feature extraction for seismic wave classification (지진파 분류를 위한 주성분 기반 주파수-시간 특징 추출)

  • Min, Jeongki;Kim, Gwantea;Ku, Bonhwa;Lee, Jimin;Ahn, Jaekwang;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.6
    • /
    • pp.687-696
    • /
    • 2019
  • Conventional features for seismic classification focus on strong seismic events and are not suitable for classifying micro-seismic waves. We propose a feature extraction method based on histograms and Principal Component Analysis (PCA) in frequency-time space that is suitable for classifying strong, micro, and artificial seismic waves, as well as noise. The proposed method employs histogram- and PCA-based features, concatenating frequency and time information, for binary classification tasks consisting of strong-micro-artificial/noise, micro/noise, and micro/artificial seismic waves. Using recent earthquake data from 2017 to 2018, the effectiveness of the proposed feature extraction method is demonstrated by comparison with existing methods.
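The pipeline this abstract outlines, histogram features over the frequency and time axes of a spectrogram followed by PCA, can be sketched as below. This is a simplified stand-in under stated assumptions: the marginal-energy histograms and the PCA-via-SVD helpers are illustrative, not the paper's exact formulation.

```python
import numpy as np

def freq_time_feature(spec, n_bins=16):
    """Concatenate frequency- and time-marginal energy histograms of a
    spectrogram (freq x time) into one normalized feature vector."""
    freq_marginal = spec.sum(axis=1)   # energy per frequency bin
    time_marginal = spec.sum(axis=0)   # energy per time frame
    f_hist = np.histogram(freq_marginal, bins=n_bins)[0]
    t_hist = np.histogram(time_marginal, bins=n_bins)[0]
    v = np.concatenate([f_hist, t_hist]).astype(float)
    return v / v.sum()

def pca_fit(X, n_components):
    """PCA via SVD on the mean-centered rows of X (samples x features)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

# Toy data: 20 random spectrograms standing in for seismic observations.
rng = np.random.default_rng(0)
X = np.stack([freq_time_feature(rng.random((64, 100))) for _ in range(20)])
mean, comps = pca_fit(X, n_components=4)
Z = pca_transform(X, mean, comps)
```

The reduced vectors `Z` would then feed a binary classifier for each of the class pairs listed in the abstract.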

Acoustic Analysis of a Jing Based on Drive Point and Blow Strength (징의 타격 위치와 강도에 따른 음향 분석)

  • Cho, Sangjin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.4
    • /
    • pp.328-334
    • /
    • 2015
  • This paper describes an acoustic analysis of the Jing, a Korean percussion instrument, according to drive point and blow strength, focusing on the softening and beat phenomena. Three blow strengths (very strong, strong, and weak) and three drive points (center, up, and right) are applied, and the spectrogram function built into MATLAB is used to analyze the softening and beats of the recorded sounds. The stronger the blow to the center of the Jing, the more clearly softening is observed. Frequency shifting increases in proportion to blow strength and frequency, and it stands out on the harmonics in contrast to the other partials. The beats of the Jing can be classified into early and late beats. The beats produced by striking off-center are distributed over a wider frequency band than those produced by striking the center. In addition, for the center-driven Jing, the early beat is affected by a few specific partials that develop around the harmonics.
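The kind of analysis described here, computing a spectrogram and watching partial frequencies drift over time, can be reproduced outside MATLAB with a plain STFT. This is a generic sketch, not the paper's analysis script; the peak-tracking helper is an assumption for illustration.

```python
import numpy as np

def stft_spectrogram(x, n_fft=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT, analogous to
    MATLAB's spectrogram function."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop: i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)

def peak_track(spec, sr, n_fft=1024):
    """Frequency (Hz) of the strongest bin in each frame; a drift in this
    track over time is how a shift such as softening would show up."""
    return np.argmax(spec, axis=0) * sr / n_fft

# Synthetic check: a steady 440 Hz tone should give a flat track near 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = stft_spectrogram(np.sin(2 * np.pi * 440 * t))
track = peak_track(spec, sr)
```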

Light weight architecture for acoustic scene classification (음향 장면 분류를 위한 경량화 모형 연구)

  • Lim, Soyoung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.979-993
    • /
    • 2021
  • Acoustic scene classification (ASC) categorizes an audio file based on the environment in which it was recorded, a task long studied in the Detection and Classification of Acoustic Scenes and Events (DCASE) challenges. In this study, we address the real-world constraint that ASC models should have low complexity, and we compare several models that apply light-weight techniques. First, a base CNN model was proposed using log mel-spectrogram, delta, and delta-delta features. Then, depthwise separable convolutions and linear bottleneck inverted residual blocks were applied to the convolutional layers, and quantization was applied to the models to obtain low-complexity variants. The low-complexity models performed similarly to, or slightly worse than, the base model, but the model size was reduced substantially, from 503 KB to 42.76 KB.
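The delta and delta-delta features mentioned above are standard regression-based time derivatives of the log mel-spectrogram. A minimal sketch of the usual formulation (the window width and edge padding are the common defaults, assumed rather than taken from the paper):

```python
import numpy as np

def delta(feat, width=2):
    """Regression-based delta over the time axis of a (bands x frames)
    feature matrix: d_t = sum_n n*(c_{t+n} - c_{t-n}) / (2*sum_n n^2)."""
    padded = np.pad(feat, ((0, 0), (width, width)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, width + 1))
    T = feat.shape[1]
    d = np.zeros_like(feat, dtype=float)
    for n in range(1, width + 1):
        d += n * (padded[:, width + n: width + n + T]
                  - padded[:, width - n: width - n + T])
    return d / denom

# Toy log-mel matrix: 4 bands, 10 frames, rising linearly in time,
# stacked with its deltas and delta-deltas as a 3-channel CNN input.
logmel = np.tile(np.arange(10, dtype=float), (4, 1))
feats = np.stack([logmel, delta(logmel), delta(delta(logmel))])
```

For a linear ramp the interior delta is exactly 1 and the delta-delta is 0, which is a quick sanity check on the implementation.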

Characteristics of Vowel Formants, Voice Intensity, and Fundamental Frequency of Female with Amyotrophic Lateral Sclerosis using Spectrograms (스펙트로그램을 이용한 근위축성측삭경화증 여성 화자의 모음 포먼트, 음성강도, 기본주파수의 변화)

  • Byeon, Haewon
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.9
    • /
    • pp.193-198
    • /
    • 2019
  • This study analyzed the changes in vowel formants, voice intensity, and fundamental frequency over 11 months, using acoustic spectrogram analysis, in a woman diagnosed with amyotrophic lateral sclerosis (ALS). The test items were the vowels /a, i, u/ and the diphthong sequences /h + ja + da/, /h + wi + da/, and /h + ɰi + da/. Speech data were collected through a word-reading task presented on a monitor using the 'Alvin' program, with the Nyquist frequency set to 5,500 Hz and the sampling rate to 11,000 Hz. The recordings were analyzed with spectrograms for vowel formants, voice intensity, and fundamental frequency. The analysis showed that fundamental frequency and intensity decreased as the ALS progressed, and that the formant slope of the diphthongs decreased more than the formants of the monophthongs changed. These results suggest that the vowel distortion in ALS as the disease progresses is due to reduced mobility of the tongue and jaw.

A Study on the Classification of Fault Motors using Sound Data (소리 데이터를 이용한 불량 모터 분류에 관한 연구)

  • Il-Sik, Chang;Gooman, Park
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.885-896
    • /
    • 2022
  • Motor failure in manufacturing plays an important role in future after-sales service and reliability. Motor failure is detected by measuring sound, current, and vibration. The data used in this paper are the sounds of a car side-mirror motor gearbox, divided into three classes. The sound data are converted to Mel-spectrograms and fed to the network models. Various methods were applied: data augmentation to improve fault-motor classification performance, and several methods for class imbalance, namely resampling, reweighting, changes to the loss function, and a two-stage scheme of representation learning followed by classification. In addition, curriculum learning and self-paced learning were compared across five network models (Bidirectional LSTM Attention, Convolutional Recurrent Neural Network, Multi-Head Attention, Bidirectional Temporal Convolution Network, and Convolutional Neural Network), and the optimal configuration for motor sound classification was found.
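Of the class-imbalance remedies the abstract lists, reweighting is the easiest to show concretely. The sketch below uses inverse-frequency ("balanced") class weights in a weighted cross-entropy; this is one common form of reweighting, assumed for illustration rather than taken from the paper.

```python
import numpy as np

def balanced_class_weights(labels, n_classes):
    """Weight each class by n_samples / (n_classes * count_c), so rare
    classes contribute more to the loss."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return len(labels) / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean weighted negative log-likelihood over a batch.
    probs: (batch, n_classes) predicted probabilities."""
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * -np.log(picked)))

# Imbalanced 3-class toy labels: class 0 is twice as frequent as 1 and 2.
labels = np.array([0, 0, 0, 0, 1, 1, 2, 2])
w = balanced_class_weights(labels, 3)

# Perfect one-hot predictions give zero loss regardless of weighting.
probs = np.zeros((8, 3))
probs[np.arange(8), labels] = 1.0
loss = weighted_cross_entropy(probs, labels, w)
```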

Shooting sound analysis using convolutional neural networks and long short-term memory (합성곱 신경망과 장단기 메모리를 이용한 사격음 분석 기법)

  • Kang, Se Hyeok;Cho, Ji Woong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.312-318
    • /
    • 2022
  • This paper proposes a model that classifies gun types and sound-source location information using a deep neural network. The proposed classification model is composed of convolutional neural networks (CNN) and long short-term memory (LSTM). To train and test the model, we use the Gunshot Audio Forensic Dataset generated by a project supported by the National Institute of Justice (NIJ). The acoustic signals are transformed into Mel-spectrograms and used as training and test data for the proposed model. The model is compared with a control model consisting of convolutional neural networks only, and achieves an accuracy of more than 90 %.
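Several entries in this listing feed Mel-spectrograms to networks; the construction behind that transform is a triangular filterbank on the Mel scale applied to a power spectrum. A self-contained sketch of the standard construction (the parameter values are illustrative, not the paper's):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular Mel filters mapping an (n_fft//2 + 1)-bin power
    spectrum to n_mels Mel bands."""
    pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0),
                                n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):           # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

fb = mel_filterbank(sr=16000, n_fft=512, n_mels=40)
# One power-spectrum frame mapped to 40 Mel bands.
mel_frame = fb @ np.random.default_rng(0).random(257)
```

Applying `fb` to every STFT frame and taking the log yields the log Mel-spectrogram used as network input.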

Acceleration signal-based haptic texture recognition according to characteristics of object surface material using conformer model (Conformer 모델을 이용한 물체 표면 재료의 특성에 따른 가속도 신호 기반 햅틱 질감 인식)

  • Hyoung-Gook Kim;Dong-Ki Jeong;Jin-Young Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.3
    • /
    • pp.214-220
    • /
    • 2023
  • In this paper, we propose a method to improve texture recognition from haptic acceleration signals, which represent the texture characteristics of object surface materials, using a Conformer model that combines the advantages of a convolutional neural network and a transformer. In the proposed method, the three-axis acceleration signals generated by impact sound and vibration while a person contacts the surface of an object material with a tool such as a stylus are combined into one-dimensional acceleration data, and a logarithmic Mel-spectrogram is extracted from the haptic acceleration signal in the same way as from an audio signal. The Conformer is then applied to the extracted logarithmic Mel-spectrogram to learn the main local and global frequency features for recognizing the textures of various object materials. Experiments on the Lehrstuhl für Medientechnik (LMT) haptic texture dataset, consisting of 60 materials, show that the proposed method recognizes the texture of object surface materials more effectively than existing methods.
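The first preprocessing step here, collapsing three acceleration axes into one signal, can be sketched with a simple per-sample magnitude combination. This is an energy-preserving stand-in for schemes such as DFT321 and is not necessarily the paper's exact combination method; the subsequent log-spectrum line is likewise illustrative.

```python
import numpy as np

def combine_axes(acc):
    """acc: (N, 3) three-axis acceleration. Returns a 1-D signal whose
    per-sample energy equals the summed energy of the three axes."""
    return np.linalg.norm(acc, axis=1)

# Toy 3-axis recording standing in for a stylus-on-surface capture.
rng = np.random.default_rng(0)
acc = rng.standard_normal((4096, 3))
x = combine_axes(acc)

# From here the pipeline treats x like audio: window, FFT, log magnitude
# (a Mel filterbank would follow before the Conformer).
log_spec = np.log(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-10)
```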