• Title/Summary/Keyword: Spectrogram

An Analysis of Timbre Comparison between Jeongak Daegeum and Sanjo Daegeum (정악대금과 산조대금의 음색 특징 분석)

  • Sung, Ki-Young
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.3
    • /
    • pp.229-236
    • /
    • 2020
  • In this paper, the timbre of the Daegeum, one of Korea's most representative wind instruments, is analyzed. The Daegeum is widely used in two forms: the Jeongak Daegeum, which is played in royal and wind music, and the Sanjo Daegeum, which is played mainly in Sanjo, Sinawi, and folk music. The two instruments serve different genres because changes in the pipe length and the locations of the finger holes allow the Sanjo Daegeum to be played faster than the Jeongak Daegeum and to apply a wider range of techniques, and because the resulting difference in tone lets performers choose the instrument that best harmonizes with the music. For the timbre analysis of the Jeongak Daegeum and Sanjo Daegeum, the composition of the overtones was examined visually through a spectrogram and a spectrum analyzer, using recordings made by playing the octave-low, flat, and octave-high positions with the same power. The results show that the Jeongak Daegeum, which is rich in low-pitched sound, harmonizes with solemn music such as royal music, while the Sanjo Daegeum, which has a relatively clear high-pitched sound, is well suited to bright music such as solo music.
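
A minimal sketch of the kind of overtone comparison described above, assuming two mono recordings (the file names are placeholders) and standard librosa STFT settings rather than the paper's measurement setup:

```python
# Sketch: visual comparison of overtone structure from two recordings via a
# spectrogram and a time-averaged spectrum. File names and FFT parameters are
# placeholders; the paper's recording setup is not reproduced here.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

def overtone_views(path, n_fft=4096, hop=512):
    y, sr = librosa.load(path, sr=None, mono=True)
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    spec_db = librosa.amplitude_to_db(S, ref=np.max)   # spectrogram in dB
    avg_db = spec_db.mean(axis=1)                      # time-averaged spectrum
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    return sr, spec_db, freqs, avg_db

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for col, path in enumerate(["jeongak_daegeum.wav", "sanjo_daegeum.wav"]):
    sr, spec_db, freqs, avg_db = overtone_views(path)
    librosa.display.specshow(spec_db, sr=sr, hop_length=512,
                             x_axis="time", y_axis="hz", ax=axes[0, col])
    axes[0, col].set(title=path, ylim=(0, 5000))
    axes[1, col].plot(freqs, avg_db)                   # overtone peaks show as ridges
    axes[1, col].set(xlim=(0, 5000), xlabel="Hz", ylabel="dB")
plt.tight_layout()
plt.show()
```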

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.181-193
    • /
    • 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprising (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in the observed anomaly patterns is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled training data is available.
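
The 3-channel feature engineering could look roughly like the following sketch; the canvas size, spectrogram settings, and the synthetic acceleration record are assumptions, not the authors' implementation:

```python
# Sketch: build a 3-channel "image" from a 1-D acceleration record:
# (i) time history, (ii) spectrogram, (iii) probability density function.
# Image size and spectrogram parameters are illustrative only.
import numpy as np
from scipy import signal
from skimage.transform import resize

def to_three_channel_image(x, fs, size=(100, 100)):
    # channel 1: time history rendered as a trace on a blank canvas
    canvas = np.zeros(size)
    cols = np.linspace(0, size[1] - 1, len(x)).astype(int)
    rows = np.interp(x, (x.min(), x.max()), (size[0] - 1, 0)).astype(int)
    canvas[rows, cols] = 1.0

    # channel 2: log spectrogram magnitude, resized to the canvas shape
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)
    spec = resize(np.log1p(Sxx), size, anti_aliasing=True)
    spec = (spec - spec.min()) / (np.ptp(spec) + 1e-12)

    # channel 3: histogram-based PDF estimate, tiled across the image width
    pdf, _ = np.histogram(x, bins=size[0], density=True)
    pdf = pdf / (pdf.max() + 1e-12)
    pdf_img = np.tile(pdf[::-1, None], (1, size[1]))

    return np.stack([canvas, spec, pdf_img], axis=-1)   # H x W x 3 CNN input

fs = 100.0                                   # Hz, hypothetical sampling rate
x = np.random.randn(6000)                    # stand-in for an acceleration record
img = to_three_channel_image(x, fs)
print(img.shape)                             # (100, 100, 3)
```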

SHM data anomaly classification using machine learning strategies: A comparative study

  • Chou, Jau-Yu;Fu, Yuguang;Huang, Shieh-Kung;Chang, Chia-Ming
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.77-91
    • /
    • 2022
  • Various monitoring systems have been implemented in civil infrastructure to ensure structural safety and integrity. In long-term monitoring, these systems generate large amounts of data in which anomalies are not unusual and can pose unique challenges for structural health monitoring applications such as system identification and damage detection. Developing efficient techniques to recognize anomalies in monitoring data is therefore essential. In this study, several machine learning techniques are explored and implemented to detect and classify various types of data anomalies. A field dataset consisting of one month of acceleration data obtained from a long-span cable-stayed bridge in China is employed to examine the machine learning techniques for automated data anomaly detection. These techniques include a statistics-based pattern recognition network, a spectrogram-based convolutional neural network, an image-based time history convolutional neural network, an image-based time-frequency hybrid convolutional neural network (GoogLeNet), and the proposed ensemble neural network model. The ensemble model deliberately combines different machine learning models to enhance anomaly classification performance. The results show that all these techniques can successfully detect and classify six types of data anomalies (missing, minor, outlier, square, trend, and drift). Moreover, both the image-based time history convolutional neural network and GoogLeNet are further investigated for autonomous online anomaly classification and are found to classify anomalies effectively with decent performance. In terms of accuracy, the proposed ensemble neural network model outperforms the other machine learning techniques. This study also evaluates the proposed ensemble neural network model on a blind test dataset. The results show that the ensemble model is effective for data anomaly detection and remains applicable when the signal characteristics change over time.
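
The abstract does not spell out how the ensemble combines its members; a common and minimal choice is soft voting over the per-class probabilities of already-trained classifiers, sketched below with hypothetical member models:

```python
# Sketch: a simple soft-voting ensemble over already-trained anomaly classifiers.
# The paper's exact combination rule is not given here; averaging per-class
# probabilities of the member models is one common choice.
import numpy as np

ANOMALY_CLASSES = ["normal", "missing", "minor", "outlier", "square", "trend", "drift"]

def ensemble_predict(models, batch):
    """models: list of objects exposing predict_proba(batch) -> (N, C) arrays."""
    probs = [m.predict_proba(batch) for m in models]   # one (N, C) array per model
    mean_prob = np.mean(probs, axis=0)                 # soft vote
    return mean_prob.argmax(axis=1), mean_prob

# usage (hypothetical trained members, e.g. a statistics-based classifier,
# a spectrogram CNN, and a time-history CNN wrapped with predict_proba):
# labels, confidence = ensemble_predict([stat_model, spec_cnn, hist_cnn], X_test)
```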

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.175-182
    • /
    • 2022
  • In daily life we encounter information in many formats, such as multimedia and text, and we consume it in common routines such as reading the news, listening to the radio, and watching videos. Problems arise, however, when a particular type of content is needed. For example, a listener who wants jazz may find that every radio channel is playing pop music mixed with advertisements and eventually gives up the search. Such problems can be solved by an automatic audio classification system. Deep Learning (DL) models make such classification possible, but they are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi because they typically require substantial computational power, such as a graphics processing unit (GPU). To address this problem, we propose a low-complexity DL model for Audio Event Detection (AED). We extract Mel-spectrograms of dimension 128×431×1 from the audio signals and apply normalization. Three data augmentation methods are applied: frequency masking, time masking, and mixup. We design a Convolutional Neural Network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1], and further reduce the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. Our model achieves a validation loss of 0.33 and an accuracy of 90.34% with a model size of 132.50 KB.
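
A sketch of the feature extraction and the three augmentations named above; the sample rate, mask widths, and mixup alpha are assumptions rather than the paper's settings:

```python
# Sketch: 128-band Mel-spectrogram extraction plus the three augmentations the
# abstract mentions (frequency masking, time masking, mixup). Sample rate and
# mask widths are assumptions, not the paper's values.
import numpy as np
import librosa

def mel_input(path, sr=22050, n_mels=128):
    y, _ = librosa.load(path, sr=sr, mono=True)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    S_db = librosa.power_to_db(S, ref=np.max)
    S_db = (S_db - S_db.mean()) / (S_db.std() + 1e-8)   # normalization
    return S_db[..., np.newaxis]                        # shape (128, T, 1)

def freq_mask(x, max_width=16):
    f = np.random.randint(0, max_width)
    f0 = np.random.randint(0, x.shape[0] - f)
    x = x.copy(); x[f0:f0 + f, :, :] = 0.0
    return x

def time_mask(x, max_width=40):
    t = np.random.randint(0, max_width)
    t0 = np.random.randint(0, x.shape[1] - t)
    x = x.copy(); x[:, t0:t0 + t, :] = 0.0
    return x

def mixup(x1, y1, x2, y2, alpha=0.2):
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

For the float16 size reduction, post-training float16 quantization as offered by TensorFlow Lite (restricting the converter's supported types to float16) is one way such a reduction is commonly achieved.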

The Edge Computing System for the Detection of Water Usage Activities with Sound Classification (음향 기반 물 사용 활동 감지용 엣지 컴퓨팅 시스템)

  • Seung-Ho Hyun;Youngjoon Chee
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.2
    • /
    • pp.147-156
    • /
    • 2023
  • Efforts have been made to employ smart-home sensors to monitor the indoor activities of elderly people living alone and to assess the feasibility of a safe and healthy lifestyle. However, the bathroom remains a blind spot. In this study, we developed and evaluated a new edge computing device that automatically detects water-usage activities in the bathroom and records the activity log on a cloud server. Three kinds of sound generated during water use, toilet flushing, showering, and washing at the wash basin, were recorded and cut into 1-second clips. These clips were then converted into two-dimensional images using the MEL-spectrogram. Sound data augmentation techniques were adopted to obtain a better learning effect from a small number of samples; these techniques, some applied in the time domain and others in the frequency domain, increased the size of the training set by a factor of 30. A deep learning model called a CRNN, which combines a Convolutional Neural Network with a Recurrent Neural Network, was employed. The edge device was implemented on a Raspberry Pi 4 equipped with a condenser microphone and amplifier to run the pre-trained model in real time. The detected activities were recorded as text-based activity logs on a Firebase server. Performance was evaluated in two bathrooms for the three water-usage activities, resulting in accuracies of 96.1% and 88.2% and F1 scores of 96.1% and 87.8%, respectively. Most classification errors occurred for the washing sound. In conclusion, this system shows potential for long-term recording of the activities of elderly people living alone as a lifelog on a cloud server.
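
A compact CRNN of the kind described, sketched in Keras with placeholder layer sizes and a hypothetical 1-second, 64-band MEL-spectrogram input shape:

```python
# Sketch: a compact CRNN (convolution front-end + recurrent layer) for
# classifying 1-second MEL-spectrogram clips into three water-use classes.
# Layer sizes and input shape are placeholders, not the paper's model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_crnn(input_shape=(64, 44, 1), n_classes=3):
    inp = layers.Input(shape=input_shape)                  # (mel bins, frames, 1)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # collapse the frequency axis so the time axis can feed a recurrent layer
    x = layers.Permute((2, 1, 3))(x)                       # (frames, bins, channels)
    x = layers.Reshape((input_shape[1] // 4, -1))(x)
    x = layers.GRU(64)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_crnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```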

Adhesive Area Detection System of Single-Lap Joint Using Vibration-Response-Based Nonlinear Transformation Approach for Deep Learning (딥러닝을 이용하여 진동 응답 기반 비선형 변환 접근법을 적용한 단일 랩 조인트의 접착 면적 탐지 시스템)

  • Min-Je Kim;Dong-Yoon Kim;Gil Ho Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.36 no.1
    • /
    • pp.57-65
    • /
    • 2023
  • A vibration-response-based detection system using a nonlinear transformation approach for deep learning was used to investigate the adhesive areas of single-lap joints. In industrial and engineering settings, it is difficult to assess the condition of parts hidden inside structures that cannot easily be disassembled, including the adhesive areas of adhesively bonded structures. To address these issues, a detection method was devised that uses a nonlinear transformation to determine the adhesive areas of various single-lap-jointed specimens from the vibration response of a reference specimen. In this study, a frequency response function with nonlinear transformation was employed to identify the vibration characteristics, and a virtual spectrogram was used for classification with a convolutional neural network. Moreover, a vibration experiment, an analytical solution, and a finite element analysis were performed to verify the developed method with aluminum, carbon fiber composite, and ultra-high-molecular-weight polyethylene specimens.
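
A minimal sketch of an H1-style frequency response function estimate from excitation and response signals, as a stand-in for the measurement step; the paper's nonlinear transformation and virtual-spectrogram construction are not reproduced, and the signals below are synthetic:

```python
# Sketch: H1-style frequency response function (FRF) estimate from an excitation
# and a measured vibration response. The nonlinear transformation step of the
# paper is not reproduced here; signals are synthetic stand-ins.
import numpy as np
from scipy import signal

fs = 2048.0                                   # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
excitation = np.random.randn(t.size)          # stand-in for the measured input force
response = np.random.randn(t.size)            # stand-in for the measured acceleration

# H1 estimator: cross-spectrum of input/output over auto-spectrum of the input
f, Pxy = signal.csd(excitation, response, fs=fs, nperseg=1024)
_, Pxx = signal.welch(excitation, fs=fs, nperseg=1024)
frf = Pxy / Pxx

frf_db = 20 * np.log10(np.abs(frf) + 1e-12)   # magnitude in dB for imaging / CNN input
```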

Classification of bearded seals signal based on convolutional neural network (Convolutional neural network 기법을 이용한 턱수염물범 신호 판별)

  • Kim, Ji Seop;Yoon, Young Geul;Han, Dong-Gyun;La, Hyoung Sul;Choi, Jee Woong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.2
    • /
    • pp.235-241
    • /
    • 2022
  • Several studies using Convolutional Neural Networks (CNNs) have been conducted to detect and classify the sounds of marine mammals in underwater acoustic data collected through passive acoustic monitoring. In this study, the feasibility of automatically classifying bearded seal sounds was confirmed using a CNN model applied to underwater acoustic spectrogram images collected from August 2017 to August 2018 in the East Siberian Sea. When only clear seal calls were used as the training dataset, overfitting due to memorization occurred. When some of the training data were replaced with data containing noise and the model was re-evaluated on the entire training set, the model generalized better than before and overfitting was prevented, achieving an accuracy of 0.9743, a precision of 0.9783, and a recall of 0.9520. In conclusion, the performance of the classification model for bearded seal signals improved when noise was included in the training data.
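
The reported figures correspond to standard classification metrics; a minimal sketch of computing them with scikit-learn on placeholder labels:

```python
# Sketch: computing the accuracy / precision / recall figures the abstract cites
# for a binary "bearded seal call vs. other" classifier. Labels here are dummies.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # placeholder ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])   # placeholder CNN predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```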

Sources separation of passive sonar array signal using recurrent neural network-based deep neural network with 3-D tensor (3-D 텐서와 recurrent neural network기반 심층신경망을 활용한 수동소나 다중 채널 신호분리 기술 개발)

  • Sangheon Lee;Dongku Jung;Jaesok Yu
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.4
    • /
    • pp.357-363
    • /
    • 2023
  • In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common approach of spectrogram analysis based on the Short-Time Fourier Transform has been criticized for its complex parameter optimization and loss of phase information. We propose a Triple-path Recurrent Neural Network, building on the success of the Dual-path Recurrent Neural Network in long time-series signal processing, to handle three-dimensional tensors derived from multi-channel sensor input signals. By dividing the input signals into short chunks and forming a 3-D tensor, the method accounts for relationships within and between chunks and across channels, enabling both local and global feature learning. The proposed technique demonstrates improved Root Mean Square Error and Scale-Invariant Signal-to-Noise Ratio compared to the existing method.
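
The chunking step that produces the 3-D tensor could look like the following sketch; the chunk length, hop, and channel count are assumptions:

```python
# Sketch: turning a multi-channel passive-sonar record into the 3-D tensor
# (channel x chunk x chunk-length) that a triple-path RNN would consume.
# Chunk length and hop are assumptions; 50 % overlap is a common choice.
import numpy as np

def chunk_to_3d_tensor(x, chunk_len=256, hop=128):
    """x: (n_channels, n_samples) array -> (n_channels, n_chunks, chunk_len)."""
    n_ch, n_samp = x.shape
    n_chunks = 1 + (n_samp - chunk_len) // hop
    idx = np.arange(chunk_len)[None, :] + hop * np.arange(n_chunks)[:, None]
    return x[:, idx]                        # fancy indexing -> (n_ch, n_chunks, chunk_len)

signals = np.random.randn(4, 16000)         # 4 hydrophone channels, dummy samples
tensor = chunk_to_3d_tensor(signals)
print(tensor.shape)                         # (4, 124, 256)
```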

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users, but it is currently generated manually by video content providers. In this paper, we study a method for generating metadata automatically in place of the existing manual input process. Building on the emotion-tag extraction method of our previous study, we investigate automatically generating genre and country-of-production metadata from movie audio. The genre is extracted from the audio spectrogram using a ResNet34 neural network with transfer learning, and the language spoken in the movie is detected through speech recognition. These results confirm the feasibility of automatically generating metadata through artificial intelligence.
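
A sketch of the transfer-learning setup described for genre classification, using a pretrained torchvision ResNet34 adapted to single-channel spectrogram images; the number of genre classes and the 1-channel input change are assumptions:

```python
# Sketch: transfer learning with a pretrained ResNet34 for genre classification
# from spectrogram images. The number of genre classes and the single-channel
# input adaptation are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

n_genres = 8                                              # hypothetical label count
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)

# adapt the first conv to 1-channel spectrogram input and the head to the genres
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, n_genres)

# freeze the pretrained backbone; train only the replaced layers
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("conv1", "fc"))

spec_batch = torch.randn(2, 1, 224, 224)                  # dummy spectrogram images
logits = model(spec_batch)
print(logits.shape)                                        # torch.Size([2, 8])
```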

A study on improving the performance of the machine-learning based automatic music transcription model by utilizing pitch number information (음고 개수 정보 활용을 통한 기계학습 기반 자동악보전사 모델의 성능 개선 연구)

  • Daeho Lee;Seokjin Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.207-213
    • /
    • 2024
  • In this paper, we study how to improve the performance of a machine-learning-based automatic music transcription model by adding musical information to the input data. The added information is the number of pitches sounding in each time frame, obtained by counting the notes active in the ground-truth annotation. This pitch-count information is concatenated to the log Mel-spectrogram that serves as the input of the existing model. Using an automatic music transcription model that contains four types of blocks, each predicting a different kind of musical information, we demonstrate that simply adding the pitch-count information corresponding to each block's prediction target to the existing input helps in training the model. To evaluate the performance improvement, experiments were conducted on the MIDI Aligned Piano Sounds (MAPS) dataset; when all pitch-count information was used, the frame-based F1 score improved by 9.7 % and the note-based F1 score including offsets improved by 21.8 %.
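
A minimal sketch of the input augmentation described above, with dummy shapes: the per-frame pitch count is taken from a ground-truth piano roll and appended to the log Mel-spectrogram:

```python
# Sketch: append the number of active pitches per frame (counted from the
# ground-truth piano roll) to the log Mel-spectrogram input. Shapes are dummies;
# 229 Mel bins and 88 piano keys are common AMT choices, assumed here.
import numpy as np

n_mels, n_frames, n_keys = 229, 400, 88        # assumed dimensions
log_mel = np.random.randn(n_frames, n_mels)    # stand-in for the log-Mel input
piano_roll = (np.random.rand(n_frames, n_keys) > 0.97).astype(np.float32)  # ground truth

pitch_count = piano_roll.sum(axis=1, keepdims=True)         # notes active per frame
pitch_count /= n_keys                                        # scale to [0, 1]

model_input = np.concatenate([log_mel, pitch_count], axis=1)  # (n_frames, n_mels + 1)
print(model_input.shape)                                       # (400, 230)
```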