• Title/Summary/Keyword: Spectrogram

Tempo-oriented music recommendation system based on human activity recognition using accelerometer and gyroscope data (가속도계와 자이로스코프 데이터를 사용한 인간 행동 인식 기반의 템포 지향 음악 추천 시스템)

  • Shin, Seung-Su;Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.39 no.4 / pp.286-291 / 2020
  • In this paper, we propose a system that recommends music through tempo-oriented music classification and sensor-based human activity recognition. The proposed method indexes music files using tempo-oriented classification and recommends music suited to the user's recognized activity. For accurate classification, a dynamic classification based on the modulation spectrum is combined with a sequence classification based on the Mel-spectrogram. In addition, accelerometer and gyroscope data from an ordinary smartphone are fed to deep spiking neural networks to improve activity recognition performance. Finally, recommendation is performed through a mapping table that relates the recognized activity to the indexed music files. The experimental results show that the proposed system is suitable for practical mobile devices with a music player.
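
A minimal sketch of the two feature types this abstract combines: a Mel-spectrogram for the sequence classification and a modulation spectrum for the tempo-oriented dynamic classification. The function names, parameters, and the file `track.wav` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
import librosa

def mel_spectrogram(path, sr=22050, n_mels=128):
    # Log-scaled Mel-spectrogram: the input of the sequence classifier.
    y, sr = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)      # (n_mels, frames)

def modulation_spectrum(mel_db, n_mod=64):
    # An FFT along the time axis of each Mel band captures periodic
    # energy fluctuations, i.e. tempo-related modulation content.
    mod = np.abs(np.fft.rfft(mel_db, axis=1))[:, :n_mod]
    return mod / (mod.max() + 1e-9)

mel = mel_spectrogram("track.wav")
mod = modulation_spectrum(mel)
print(mel.shape, mod.shape)
```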

Variation Analysis of Spectrogram for Indicators Design of Musicality Evaluation (음악성 평가 지표 설계를 위한 성도 모양의 변화 분석)

  • Kim, Bong-Hyun;Cho, Dong-Uk
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.8 / pp.2110-2116 / 2009
  • The culture industry has attracted great interest in modern society as a field that, together with the health and medical industries, can improve quality of life. In particular, the music industry, which rests on popular support, is recognized for an artistic value that combines popularity and originality with easy accessibility. In this paper, we design indicators for evaluating the musical talent of a singer, who plays a key part in this industry. To this end, we apply spectrogram analysis elements to the changes of vocal tract shape in a singer's voice and in ordinary people's voices for the same song, and compare and analyze the resulting waveform patterns of the two groups. By collecting singer and ordinary voices for the same passage and analyzing the change of vocal tract shape over time, we design an indicator that can evaluate musicality.
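
The abstract does not specify the exact analysis elements; as a rough illustration, the comparison could start from spectrograms of the two recordings and a simple frame-wise distance. The file names and the distance measure below are assumptions, not the authors' method:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def log_spec(path):
    sr, x = wavfile.read(path)
    if x.ndim > 1:                       # mix stereo down to mono
        x = x.mean(axis=1)
    f, t, S = spectrogram(x, fs=sr, nperseg=1024, noverlap=512)
    return 10 * np.log10(S + 1e-10)

singer, public = log_spec("singer.wav"), log_spec("public.wav")
n = min(singer.shape[1], public.shape[1])    # crude length alignment
dist = np.mean(np.abs(singer[:, :n] - public[:, :n]))
print(f"mean spectrogram difference: {dist:.2f} dB")
```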

Deep Learning Music genre automatic classification voting system using Softmax (소프트맥스를 이용한 딥러닝 음악장르 자동구분 투표 시스템)

  • Bae, June;Kim, Jangyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.1 / pp.27-32 / 2019
  • Research that implements classification, one of the abilities humans excel at, through deep learning algorithms includes unimodal models, multi-modal models, and multi-modal methods using music videos. In this study, better results were obtained with a system that splits each song's spectrum into short samples and votes on the per-sample results. Among deep learning algorithms, CNN showed performance superior to RNN for music genre classification, and performance improved further when CNN and RNN were applied together. Voting on the CNN result for each short sample gave better results than the previous model, and the model with an added softmax layer performed best. With the explosive growth of digital media and of streaming services, the need for automatic music genre classification is increasing. Future research will need to reduce the proportion of unclassified songs and develop algorithms for the final classification of those songs.
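
A minimal sketch of the voting scheme described here: slice a track into short samples, classify each with a softmax-terminated CNN, and vote on the per-sample results. `cnn_predict` is a stand-in for the trained network, and the genre labels are illustrative:

```python
import numpy as np

GENRES = ["classical", "jazz", "rock", "hiphop"]  # illustrative labels

def cnn_predict(sample):
    # Placeholder for the trained CNN: returns softmax probabilities.
    logits = np.random.randn(len(GENRES))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify_by_voting(track, sr=22050, sample_sec=3):
    hop = sr * sample_sec
    samples = [track[i:i + hop] for i in range(0, len(track) - hop, hop)]
    votes = np.zeros(len(GENRES))
    for s in samples:
        votes[np.argmax(cnn_predict(s))] += 1     # one hard vote per sample
    return GENRES[int(np.argmax(votes))]

track = np.random.randn(22050 * 30)               # stand-in 30 s signal
print(classify_by_voting(track))
```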

A study on the target detection method of the continuous-wave active sonar in reverberation based on beamspace-domain multichannel nonnegative matrix factorization (빔공간 다채널 비음수 행렬 분해에 기초한 잔향에서의 지속파 능동 소나 표적 탐지 기법에 대한 연구)

  • Lee, Seokjin
    • The Journal of the Acoustical Society of Korea / v.37 no.6 / pp.489-498 / 2018
  • In this paper, a target detection method based on beamspace-domain multichannel nonnegative matrix factorization is studied for the case where an echo of a continuous-wave ping is received from a low-Doppler target in a reverberant environment. If the receiver of the continuous-wave active sonar moves, the frequency range of the reverberation is broadened by the Doppler effect, so the low-Doppler target echo suffers interference from the reverberation. The developed algorithm decomposes the multichannel spectrogram of the received signal into frequency bases, time bases, and beamformer gains using beamspace-domain multichannel nonnegative matrix factorization, and then estimates the frequency, time, and bearing of the target echo by choosing a proper basis. To analyze the performance of the developed algorithm, simulations were performed in various signal-to-reverberation conditions. The results show that the proposed algorithm can estimate the frequency, time, and bearing, but performance degrades in low signal-to-reverberation conditions. The simulation results suggest that modifying the selection algorithm for the target echo basis could enhance the performance.
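
The full method factorizes a beamspace-domain multichannel spectrogram into frequency bases, time bases, and beamformer gains; the sketch below shows only the core single-channel NMF step (standard multiplicative updates for the Euclidean cost) on a magnitude spectrogram, with the multichannel and beamspace parts omitted:

```python
import numpy as np

def nmf(V, rank=8, n_iter=200, eps=1e-9):
    # Lee-Seung multiplicative updates: V (F x T) ~ W (F x rank) @ H (rank x T).
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank))            # frequency bases
    H = rng.random((rank, T))            # time activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.randn(257, 400))    # stand-in |STFT| magnitudes
W, H = nmf(V)
# A target echo would then be sought among the bases, e.g. by matching
# the known ping frequency in W and localized activity in H.
print(W.shape, H.shape)
```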

Speech enhancement system using the multi-band coherence function and spectral subtraction method (다중 주파수 밴드 간섭함수와 스펙트럼 차감법을 이용한 음성 향상 시스템)

  • Oh, Inkyu;Lee, Insung
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.406-413 / 2019
  • This paper proposes a speech enhancement method that combines a gain function with the spectral subtraction method for a two-microphone array with close spacing. A speech enhancement method that uses a gain function estimated from the SNR (Signal-to-Noise Ratio) based on the multi-frequency-band coherence function degrades when the input noises of the two channels are highly correlated. In the proposed method, a weighted gain function is formed by combining that gain function with the gain from spectral subtraction. The performance of the proposed method was evaluated by comparing PESQ (Perceptual Evaluation of Speech Quality) scores, an objective quality measure standardized by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector). In the PESQ tests, an improvement of up to 0.217 in PESQ score was obtained in various background noise environments.
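
A minimal sketch of the combination idea: a coherence-based gain estimated from the two microphone channels is blended with a spectral subtraction (Wiener-style) gain by a weight alpha. The blending rule and all signal shapes are illustrative assumptions; the paper derives its own weighted combination:

```python
import numpy as np

def coherence_gain(X1, X2, eps=1e-9):
    # Magnitude-squared coherence per bin, averaged over frames
    # (per-frame coherence is identically 1, so averaging is essential).
    num = np.abs((X1 * np.conj(X2)).mean(axis=0)) ** 2
    den = (np.abs(X1) ** 2).mean(axis=0) * (np.abs(X2) ** 2).mean(axis=0) + eps
    return num / den                                   # shape: (bins,)

def spectral_subtraction_gain(X, noise_psd, eps=1e-9):
    snr = np.maximum((np.abs(X) ** 2).mean(axis=0) / (noise_psd + eps) - 1, 0)
    return snr / (snr + 1.0)                           # Wiener-style gain

# Stand-in STFT frames for two closely spaced microphones.
frames, bins = 32, 257
X1 = np.random.randn(frames, bins) + 1j * np.random.randn(frames, bins)
X2 = np.random.randn(frames, bins) + 1j * np.random.randn(frames, bins)
noise_psd = np.full(bins, 2.0)

alpha = 0.5                                            # illustrative weight
gain = (alpha * coherence_gain(X1, X2)
        + (1 - alpha) * spectral_subtraction_gain(X1, noise_psd))
enhanced = X1 * gain                                   # per-bin gain on channel 1
```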

Autoencoder-based signal modulation and demodulation method for sonobuoy signal transmission and reception (소노부이 신호 송수신을 위한 오토인코더 기반 신호 변복조 기법)

  • Park, Jinuk;Seok, Jongwon;Hong, Jungpyo
    • The Journal of the Acoustical Society of Korea / v.41 no.4 / pp.461-467 / 2022
  • A sonobuoy is a disposable device that collects underwater acoustic information; it is designed to transmit the signals collected in a particular area to nearby aircraft or ships and to sink to the seabed upon completion of its mission. In a conventional sonobuoy signal transmission and reception system, the collected signals are modulated and transmitted using techniques such as frequency division modulation or Gaussian frequency shift keying, and are received and demodulated by an aircraft or a ship. However, this approach has the disadvantages of a large amount of information to transmit and low security due to the relatively simple modulation and demodulation methods. Therefore, in this paper, we propose a method that uses an autoencoder to encode the transmission signal into a low-dimensional latent vector, transmits the latent vector to the aircraft or ship, and decodes the received latent vector, improving signal security and reducing the amount of transmitted information by roughly a factor of one hundred compared to the conventional method. Inspection of the sample spectrograms reconstructed by the proposed method in simulation confirmed that the original signal can be restored from the low-dimensional latent vector.
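
A minimal sketch of the transmission idea: an autoencoder compresses a spectrogram into a low-dimensional latent vector (the quantity actually transmitted) and the receiver's decoder reconstructs the spectrogram. Layer sizes and the latent dimension are illustrative; the paper's architecture and its roughly hundredfold compression are not reproduced here:

```python
import torch
import torch.nn as nn

class SpecAutoencoder(nn.Module):
    def __init__(self, n_freq=128, n_frames=64, latent=32):
        super().__init__()
        d = n_freq * n_frames
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, d),
            nn.Unflatten(1, (n_freq, n_frames)))

    def forward(self, x):
        z = self.encoder(x)               # latent vector: what gets transmitted
        return self.decoder(z), z

model = SpecAutoencoder()
spec = torch.rand(8, 128, 64)             # batch of stand-in spectrograms
recon, z = model(spec)
loss = nn.functional.mse_loss(recon, spec)  # reconstruction objective
print(z.shape, loss.item())
```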

Shooting sound analysis using convolutional neural networks and long short-term memory (합성곱 신경망과 장단기 메모리를 이용한 사격음 분석 기법)

  • Kang, Se Hyeok;Cho, Ji Woong
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.312-318 / 2022
  • This paper proposes a model that classifies the type of gun and information about the sound source location using a deep neural network. The proposed classification model is composed of convolutional neural networks (CNN) and long short-term memory (LSTM). For training and testing the model, we use the Gunshot Audio Forensic Dataset generated by a project supported by the National Institute of Justice (NIJ). The acoustic signals are transformed into Mel-spectrograms and provided as training and test data for the proposed model. The model is compared with a control model consisting of convolutional neural networks only. The proposed model achieves an accuracy higher than 90 %.
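
A minimal sketch of a CNN + LSTM classifier of the kind described: convolutional layers extract local time-frequency features from the Mel-spectrogram, an LSTM models their temporal order, and a linear head predicts the gun type. All layer sizes and the class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4),
                            hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, n_mels, frames)
        f = self.cnn(x)                    # (batch, 32, n_mels/4, frames/4)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)  # to time-major
        out, _ = self.lstm(f)
        return self.head(out[:, -1])       # classify from the last step

model = CnnLstmClassifier()
mel = torch.rand(4, 1, 64, 128)            # stand-in Mel-spectrogram batch
print(model(mel).shape)                    # (4, 10) class logits
```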

Adhesive Area Detection System of Single-Lap Joint Using Vibration-Response-Based Nonlinear Transformation Approach for Deep Learning (딥러닝을 이용하여 진동 응답 기반 비선형 변환 접근법을 적용한 단일 랩 조인트의 접착 면적 탐지 시스템)

  • Min-Je Kim;Dong-Yoon Kim;Gil Ho Yoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.1 / pp.57-65 / 2023
  • A vibration-response-based detection system using a nonlinear transformation approach for deep learning was used to investigate the adhesive areas of single-lap joints. In industry and engineering, it is difficult to assess the condition of an invisible part within a structure that cannot easily be disassembled, such as the adhesive areas of adhesively bonded structures. To address this issue, a detection method was devised that uses a nonlinear transformation to determine the adhesive areas of various single-lap-jointed specimens from the vibration response of a reference specimen. In this study, a frequency response function with a nonlinear transformation was employed to identify the vibration characteristics, and a virtual spectrogram was used for classification in convolutional neural network based deep learning. Moreover, a vibration experiment, an analytical solution, and a finite element analysis were performed to verify the developed method on aluminum, carbon fiber composite, and ultra-high-molecular-weight polyethylene specimens.
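
The abstract does not spell out the transformation, so the sketch below only illustrates the general pipeline it names: estimate a frequency response function (an H1 estimator here), apply a nonlinear (log-magnitude) transform, and stack repeated measurements into a 2-D "virtual spectrogram" image for a CNN. Every numeric choice is an assumption:

```python
import numpy as np
from scipy.signal import csd, welch

def frf_log_mag(excitation, response, fs=10240, nperseg=1024):
    f, Pxy = csd(excitation, response, fs=fs, nperseg=nperseg)
    f, Pxx = welch(excitation, fs=fs, nperseg=nperseg)
    H = Pxy / Pxx                            # H1 FRF estimate
    return f, np.log10(np.abs(H) + 1e-12)    # nonlinear transform

# Stack FRFs from repeated excitations into an image-like array.
rows = []
for _ in range(32):                          # 32 stand-in measurements
    x = np.random.randn(8192)                # excitation signal
    y = np.random.randn(8192)                # vibration response
    f, row = frf_log_mag(x, y)
    rows.append(row)
virtual_spectrogram = np.stack(rows)         # (measurements, freq bins)
print(virtual_spectrogram.shape)             # CNN input image
```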

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.557-561 / 2021
  • Metadata has become an essential element for recommending video content to users, yet it is generated manually by video content providers. In this paper, a method for automatically generating metadata was studied as a replacement for the existing manual input method. In addition to the emotion tag extraction method of our previous study, we investigated a method for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a ResNet34 artificial neural network with transfer learning, and the language of the speakers in the movie was detected through speech recognition. These results confirm the feasibility of automatically generating metadata through artificial intelligence.
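
A minimal sketch of the transfer-learning half of this pipeline: a torchvision ResNet34 pretrained on ImageNet with its final layer replaced to predict genres from spectrogram images. The genre count and the channel replication are assumptions about the input preparation, not details from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

N_GENRES = 8                                 # illustrative label count

# Pretrained backbone; the head is re-initialized for genre labels.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, N_GENRES)

# Single-channel spectrograms replicated to the 3 channels ResNet expects.
spec = torch.rand(4, 1, 224, 224)            # stand-in spectrogram batch
logits = model(spec.repeat(1, 3, 1, 1))
print(logits.shape)                          # (4, N_GENRES)
```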

A study on improving the performance of the machine-learning based automatic music transcription model by utilizing pitch number information (음고 개수 정보 활용을 통한 기계학습 기반 자동악보전사 모델의 성능 개선 연구)

  • Daeho Lee;Seokjin Lee
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.207-213 / 2024
  • In this paper, we study how to improve the performance of a machine-learning-based automatic music transcription model by adding musical information to the input data. The added information is the number of pitches occurring in each time frame, obtained by counting the notes activated in the ground truth. The obtained pitch count information is concatenated to the log Mel-spectrogram that is the input of the existing model. Using an automatic music transcription model composed of four types of blocks that predict four types of musical information, we demonstrate that the simple approach of appending to the existing input the pitch count information corresponding to the information each block predicts helps in training the model. To evaluate the performance improvement, experiments were conducted on the MIDI Aligned Piano Sounds (MAPS) data; when all pitch count information was used, performance improved by 9.7 % in frame-based F1 score and by 21.8 % in note-based F1 score including offsets.
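
A minimal sketch of the input augmentation described: count the active notes per time frame in the ground-truth piano roll and concatenate that count as an extra feature column of the log Mel-spectrogram input. The shapes are illustrative (229 Mel bins and 88 pitches are common transcription conventions, not necessarily the paper's):

```python
import numpy as np

n_frames, n_mels, n_pitches = 500, 229, 88
log_mel = np.random.randn(n_frames, n_mels)              # stand-in input
piano_roll = np.random.rand(n_frames, n_pitches) > 0.97  # ground truth notes

pitch_count = piano_roll.sum(axis=1, keepdims=True)      # notes per frame
augmented = np.concatenate([log_mel, pitch_count], axis=1)
print(augmented.shape)                                   # (500, 230)
```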