• Title/Abstract/Keyword: Sound classification

The Noise Influence Assessment according to the Change of the Offset Type Print Machine's Power (옵셋 인쇄기계 동력규모 변화에 따른 소음 영향 평가)

  • Gu, Jinhoi;Kwon, Myunghee;Lee, Wooseok;Lee, Jaewon;Park, Hyungkyu;Kim, Samsu;Yun, Heekyung;Lee, Kyumok;Jung, Daekwan;Seo, Chungyoul
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.24 no.9
    • /
    • pp.682-686
    • /
    • 2014
  • Recently, the related industries have suggested that the classification criteria for noise emission facilities need to be revised, because there were many reasonable grounds for revising the criteria for these facilities. Accordingly, the classification criterion for print machines as noise emission facilities was raised from 50 HP to 100 HP in 2013. However, raising this criterion can have adverse effects, such as greater noise emission. In this paper, we therefore measured the sound power level of print machines as a function of machine power to assess these adverse effects. The measurements were performed according to KS I ISO 9614-2(1996). The correlation between the sound power level and the power of the print machines was analyzed by regression analysis. We found that the sound power level of a print machine can increase by about 1.3 dB when its power increases from 50 HP to 100 HP, and by about 1.0 dB for each increase of 1,000 SPH (sheets per hour) in printing speed. The noise emission characteristics of print machines studied in this paper will be useful for designing noise reduction plans in the future.
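
As an illustration of the regression analysis described in this abstract, the sketch below fits an ordinary least-squares model of sound power level against machine power and printing speed. The measurement values are hypothetical stand-ins, not the paper's data, and scikit-learn is assumed for the fit.

```python
# Minimal sketch of the regression analysis described above, using hypothetical
# measurements (the paper's raw data are not reproduced here).
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical measurement conditions: [machine power in HP, printing speed in SPH]
X = np.array([
    [50, 8000],
    [60, 9000],
    [75, 10000],
    [100, 11000],
    [100, 13000],
], dtype=float)
# Hypothetical measured sound power levels in dB
Lw = np.array([92.0, 92.5, 93.1, 93.6, 95.3])

model = LinearRegression().fit(X, Lw)
hp_coef, sph_coef = model.coef_

# Change in sound power level when machine power rises from 50 HP to 100 HP
print(f"dLw for 50 -> 100 HP: {hp_coef * 50:.2f} dB")
# Change in sound power level per 1,000 SPH increase in printing speed
print(f"dLw per 1,000 SPH:   {sph_coef * 1000:.2f} dB")
```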

Dimension-Reduced Audio Spectrum Projection Features for Classifying Video Sound Clips

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.3E
    • /
    • pp.89-94
    • /
    • 2006
  • For audio indexing and targeted search of specific audio or corresponding visual contents, the MPEG-7 standard has adopted a sound classification framework in which dimension-reduced Audio Spectrum Projection (ASP) features are used to train continuous hidden Markov models (HMMs) for the classification of various sounds. MPEG-7 employs Principal Component Analysis (PCA) or Independent Component Analysis (ICA) for the dimension reduction. Other well-established techniques include Non-negative Matrix Factorization (NMF), Linear Discriminant Analysis (LDA) and Discrete Cosine Transformation (DCT). In this paper we compare the performance of the different dimension reduction methods with Gaussian mixture models (GMMs) and HMMs in classifying video sound clips.
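
A simplified stand-in for the kind of comparison described above: PCA-based dimension reduction followed by per-class GMM likelihood scoring, assuming scikit-learn. It does not implement the MPEG-7 ASP feature pipeline itself; synthetic feature frames are used in place of real audio features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_classes, dim, reduced_dim = 3, 64, 12

# Synthetic "spectral" feature frames for each sound class
train = {c: rng.normal(loc=c, size=(400, dim)) for c in range(n_classes)}

# Fit one PCA on all training frames, then one GMM per class in the reduced space
pca = PCA(n_components=reduced_dim).fit(np.vstack(list(train.values())))
gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(pca.transform(x))
        for c, x in train.items()}

def classify(frames):
    """Assign a clip (frames x dim) to the class with the highest average log-likelihood."""
    z = pca.transform(frames)
    scores = {c: g.score(z) for c, g in gmms.items()}
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=2, size=(100, dim))))  # expected: 2
```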

A Survey on Foreign and Domestic Interior Noise Criteria for Walls and Floors (공동주택 내부소음 기준과 바닥 및 벽체 차음성능 기준 고찰)

  • Kim, Sun-Woo;Song, Min-Jeong
    • KIEAE Journal
    • /
    • v.4 no.3
    • /
    • pp.37-44
    • /
    • 2004
  • In this study, foreign and domestic noise criteria for walls, floors, and water supply facilities were reviewed, with the following results: the regulations can be divided into two types, one specifying thickness and the other specifying sound insulation performance. Green Building regulations are based on the law and include sound classification systems. Since such regulations have not yet been established in Korea, noise regulations for water supply and drainage facilities and a domestic guideline on interior noise levels are needed. Foreign regulations are stricter than domestic ones, and they include sound classification systems to provide better acoustic conditions for inhabitants.

SVM-based Drone Sound Recognition using the Combination of HLA and WPT Techniques in Practical Noisy Environment

  • He, Yujing;Ahmad, Ishtiaq;Shi, Lin;Chang, KyungHi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.10
    • /
    • pp.5078-5094
    • /
    • 2019
  • In recent years, the development of drone technologies has promoted the widespread commercial application of drones. However, the ability of drones to carry explosives and other destructive materials may pose serious threats to public safety. In order to reduce the threats from illegal drones, acoustic feature extraction and classification technologies are introduced for drone sound identification. In this paper, we introduce the acoustic feature vector extraction method of harmonic line association (HLA) and subband power feature extraction based on the wavelet packet transform (WPT). We propose a feature vector extraction method that combines HLA and WPT to extract more sophisticated characteristics of sound. Moreover, to identify drone sounds, support vector machine (SVM) classification, with its parameters optimized by a genetic algorithm (GA), is employed on the extracted feature vectors. The sounds of four drones and other kinds of sounds present in outdoor environments are used to evaluate the performance of the proposed method. The experimental results show that with the proposed method the identification probability can reach up to 100 % in trials, and robustness against noise is also significantly improved.
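
The sketch below illustrates the WPT subband-power feature idea on synthetic signals, assuming PyWavelets for the wavelet packet transform and scikit-learn for the SVM. The paper's HLA features and GA-based parameter optimization are not reproduced; a plain grid search stands in for the GA.

```python
import numpy as np
import pywt
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def wpt_subband_powers(signal, wavelet="db4", level=4):
    """Log power of each wavelet-packet subband at the given decomposition level."""
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.mean(n.data ** 2) + 1e-12) for n in nodes])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
# Hypothetical training clips: class 0 = "drone-like" tone in noise, class 1 = noise only
clips = [np.sin(2 * np.pi * 180 * t) + 0.5 * rng.normal(size=t.size) for _ in range(20)] + \
        [rng.normal(size=t.size) for _ in range(20)]
X = np.array([wpt_subband_powers(c) for c in clips])
y = np.array([0] * 20 + [1] * 20)

# SVM hyperparameters chosen by grid search (the paper optimizes these with a GA)
svm = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=3)
svm.fit(X, y)
print(svm.best_params_, svm.score(X, y))
```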

Automatic Tag Classification from Sound Data for Graph-Based Music Recommendation (그래프 기반 음악 추천을 위한 소리 데이터를 통한 태그 자동 분류)

  • Kim, Taejin;Kim, Heechan;Lee, Soowon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.10
    • /
    • pp.399-406
    • /
    • 2021
  • With the steady growth of the content industry, the need for research on automatically recommending content suited to individual tastes is increasing. To improve the accuracy of automatic content recommendation, recommendation techniques that use users' preference histories for content need to be fused with techniques that use content metadata or features extracted from the content itself. In this work, we propose a new graph-based music recommendation method that trains an LSTM-based classification model to automatically extract appropriate tag words from sound data and applies the extracted tags, together with users' preferred music lists and music metadata, to graph-based music recommendation. Experimental results show that the proposed method outperforms existing recommendation methods in terms of recommendation accuracy.
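
As a rough illustration of the tag-classification component, the sketch below defines an LSTM classifier over frame-level audio features, assuming PyTorch. Feature extraction and the graph-based recommendation step are out of scope here, and random tensors stand in for real MFCC sequences and tag labels.

```python
import torch
import torch.nn as nn

class TagClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_tags=50):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tags)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)          # final hidden state summarizes the clip
        return self.head(h[-1])           # one logit per candidate tag

model = TagClassifier()
criterion = nn.BCEWithLogitsLoss()        # multi-label: a clip may carry several tags
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 200, 40)               # 8 clips, 200 frames, 40 features each
y = torch.randint(0, 2, (8, 50)).float()  # hypothetical tag targets
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```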

Performance assessments of feature vectors and classification algorithms for amphibian sound classification (양서류 울음 소리 식별을 위한 특징 벡터 및 인식 알고리즘 성능 분석)

  • Park, Sangwook;Ko, Kyungdeuk;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.6
    • /
    • pp.401-406
    • /
    • 2017
  • This paper presents a performance assessment of several key algorithms for amphibian species sound classification. First, 9 target species, including endangered species, are defined and a database of their sounds is built. For the performance assessment, three feature vectors, MFCC (Mel Frequency Cepstral Coefficient), RCGCC (Robust Compressive Gammachirp filterbank Cepstral Coefficient), and SPCC (Subspace Projection Cepstral Coefficient), and three classifiers, GMM (Gaussian Mixture Model), SVM (Support Vector Machine), and DBN-DNN (Deep Belief Network - Deep Neural Network), are considered. In addition, an i-vector based classification system, which is widely used for speaker recognition, is assessed for this task. Experimental results indicate that SPCC-SVM achieved the best performance at 98.81 %, while the other methods also attained good performance above 90 %.
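
The sketch below illustrates only the MFCC + SVM pairing from the comparison above, assuming librosa for feature extraction and scikit-learn for the classifier. The RCGCC, SPCC, DBN-DNN and i-vector systems are not shown, and synthetic waveforms stand in for the amphibian recordings.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(wav, sr=22050, n_mfcc=13):
    """Mean and std of MFCCs over time, giving one fixed-length vector per clip."""
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
sr = 22050
t = np.linspace(0, 1.0, sr)
# Hypothetical two-species toy set: different dominant call frequencies plus noise
calls = {0: 800.0, 1: 2400.0}
X, y = [], []
for label, f0 in calls.items():
    for _ in range(10):
        wav = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)
        X.append(clip_features(wav, sr))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print(clf.score(np.array(X), np.array(y)))
```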

Evaluation of heavy-weight impact sounds generated by impact ball through classification (주파수 특성 분류를 통한 임팩트 볼 중량충격음의 주관적 평가)

  • Kim, Jae-Ho;Lee, Pyoung-Jik;Jeon, Jin-Yong
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.05a
    • /
    • pp.1142-1146
    • /
    • 2007
  • In this study, a subjective evaluation of heavy-weight floor impact sounds through classification was conducted. Heavy-weight impact sounds generated by an impact ball were recorded through dummy heads in apartment buildings. The recordings were classified according to the frequency characteristics of the floor impact sounds, which are influenced by floor structures with different boundary conditions and composite materials. The characteristics of the floor impact noise were investigated by paired comparison tests and semantic differential tests. Sound sources for the auditory experiments were selected based on the actual noise levels with perceptual level differences. The results showed that the roughness and fluctuation strength, as well as the loudness, of the heavy-weight impact noise had a major effect on annoyance.
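
Paired-comparison judgments of the kind mentioned above are often scaled with a Bradley-Terry model; the sketch below shows that generic analysis on hypothetical win counts, not the paper's own data or its specific scaling method.

```python
import numpy as np

# wins[i, j] = number of times sound i was judged more annoying than sound j (hypothetical)
wins = np.array([
    [0, 7, 9],
    [3, 0, 8],
    [1, 2, 0],
], dtype=float)
n = wins + wins.T                      # comparisons made between each pair

p = np.ones(wins.shape[0])             # latent "annoyance" strengths
for _ in range(200):                   # minorization-maximization updates
    for i in range(len(p)):
        denom = sum(n[i, j] / (p[i] + p[j]) for j in range(len(p)) if j != i)
        p[i] = wins[i].sum() / denom
    p /= p.sum()

print(np.round(p, 3))                  # relative annoyance scale values
```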

Cardiac Disorder Classification Using Heart Sounds Acquired by a Wireless Electronic Stethoscope (무선 전자청진 심음을 이용한 심장질환 분류)

  • Kwak, Chul;Lee, Yun-Kyung;Kwon, Oh-Wook
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.101-102
    • /
    • 2007
  • Heart diseases are critical and should be detected as soon as possible. A stethoscope is a simple device for detecting cardiac disorders, but it requires considerable experience with heart sounds. We evaluate a cardiac disorder classifier using heart sounds recorded by a digital wireless stethoscope developed in this work. The classifier uses hidden Markov models with circular state transitions to model the heart sounds. We train the classifier using two kinds of data: one recorded with our stethoscope and the other sampled from a clean heart sound database. In classification experiments using 165 sound clips, the classifier achieves a classification accuracy of 82 % over 6 cardiac disorder categories.
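
The sketch below illustrates an HMM with a circular state topology (e.g. S1 → systole → S2 → diastole → back to S1), assuming the hmmlearn package. In a classifier of this kind, one model would be trained per disorder class and a clip assigned to the class whose model scores highest; random vectors stand in for real heart-sound features here.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

n_states, n_features = 4, 13
stay, advance = 0.9, 0.1

# Circular transition matrix: each state either stays or moves to the next state,
# and the last state wraps back to the first (closing the cardiac cycle).
transmat = np.zeros((n_states, n_states))
for i in range(n_states):
    transmat[i, i] = stay
    transmat[i, (i + 1) % n_states] = advance

model = GaussianHMM(n_components=n_states, covariance_type="diag",
                    init_params="mc", n_iter=20)
model.startprob_ = np.full(n_states, 1.0 / n_states)
model.transmat_ = transmat

# Hypothetical frame-level feature sequences (e.g. MFCC-like vectors) for one class
rng = np.random.default_rng(0)
seqs = [rng.normal(size=(300, n_features)) for _ in range(5)]
model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])

# Classification would pick the per-class model with the highest score
print(model.score(seqs[0]))
```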

A Study on the Improvement of Online Services for Movie Sound Effects: Focusing on the K-Sound Library (영화 효과음원 온라인 서비스 개선방안 연구 : K-Sound Library 를 중심으로)

  • HyunTae Kim;Jung-eun Lee;SeulBi Lee;Geon Kim;Soojung Kim
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.23 no.2
    • /
    • pp.49-67
    • /
    • 2023
  • In recent years, the film industry in South Korea has experienced a period of prosperity, evidenced by the numerous awards won at major international film festivals. Furthermore, growing global interest in K-content and the expansion of the OTT industry following the COVID-19 pandemic are providing favorable conditions for the development of the domestic film industry. Sound effects play a crucial role in conveying the atmosphere and emotions of a film, making them an essential element of film production. In response, the Jeonju IT & CT Industry Promotion Agency has been promoting the development of Korean-style sound effects since 2013. Furthermore, the agency launched an online service, a sound effect archive called the "K-Sound Library," in 2021. However, the service has not been widely utilized because of issues with how the database was constructed and problems with the system itself. Therefore, this study aims to identify the K-Sound Library's problems through interviews with sound effects specialists about this first online sound effect archive service in South Korea. Based on the interviews and analyses of foreign cases, the study suggests ways to improve the usability of the search service and the sound effects classification system.

Parallel Network Model of Abnormal Respiratory Sound Classification with Stacking Ensemble

  • Nam, Myung-woo;Choi, Young-Jin;Choi, Hoe-Ryeon;Lee, Hong-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.11
    • /
    • pp.21-31
    • /
    • 2021
  • As the COVID-19 pandemic rapidly changes healthcare around the globe, the need for smart healthcare that allows for remote diagnosis is increasing. The current classification of respiratory diseases is costly and requires a face-to-face visit with a skilled medical professional, so the pandemic significantly hinders monitoring and early diagnosis. Therefore, the ability to accurately classify and diagnose respiratory sounds using deep learning-based AI models is essential to modern medicine as a remote alternative to the traditional stethoscope. In this study, we propose a deep learning-based respiratory sound classification model using data collected from medical experts. The sound data were preprocessed with a band-pass filter, and the relevant respiratory audio features were extracted as Log-Mel Spectrograms and Mel Frequency Cepstral Coefficients (MFCC). Subsequently, a parallel CNN model was trained on these two inputs, and stacking ensemble techniques combining various machine learning classifiers were used to classify and detect abnormal respiratory sounds efficiently and with high accuracy. The model proposed in this paper classified abnormal respiratory sounds with an accuracy of 96.9 %, which is approximately 6.1 % higher than the classification accuracy of the baseline model.
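
The sketch below shows one possible form of the parallel two-input CNN described above, assuming PyTorch: one branch consumes a log-Mel spectrogram, the other an MFCC matrix, and their embeddings are concatenated before classification. The stacking ensemble of machine-learning meta-classifiers is omitted from this sketch.

```python
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # 32-dim embedding per input
    )

class ParallelRespNet(nn.Module):
    def __init__(self, n_classes=4):                  # hypothetical label set, e.g. normal/crackle/wheeze/both
        super().__init__()
        self.mel_branch = branch()
        self.mfcc_branch = branch()
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, mel, mfcc):
        z = torch.cat([self.mel_branch(mel), self.mfcc_branch(mfcc)], dim=1)
        return self.classifier(z)

model = ParallelRespNet()
mel = torch.randn(2, 1, 128, 256)    # hypothetical log-Mel spectrograms
mfcc = torch.randn(2, 1, 40, 256)    # hypothetical MFCC matrices
print(model(mel, mfcc).shape)        # -> torch.Size([2, 4])
```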