Title/Summary/Keyword: Filter-Bank


Performance Improvement of Cardiac Disorder Classification Based on Automatic Segmentation and Extreme Learning Machine (자동 분할과 ELM을 이용한 심장질환 분류 성능 개선)

  • Kwak, Chul;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.28 no.1 / pp.32-43 / 2009
  • In this paper, we improve the performance of cardiac disorder classification from continuous heart sound signals by using automatic segmentation and an extreme learning machine (ELM). The accuracy of conventional cardiac disorder classification systems degrades because murmurs and click sounds contained in abnormal heart sound signals cause incorrect or missing starting points of the first (S1) and second (S2) heart pulses in the automatic segmentation stage. To reduce the performance degradation due to segmentation errors, we find the positions of the S1 and S2 pulses, modify them using the time difference between successive S1 or S2 pulses, and extract a single period of the heart sound signal. We then obtain a feature vector consisting of the mel-scaled filter bank energy coefficients and the envelope of uniform-sized sub-segments of the single-period heart sound signal. To classify the heart disorders, we use an ELM with a single hidden layer, as sketched below. In classification experiments with 9 cardiac disorder categories, the proposed method achieves a classification accuracy of 81.6%, the highest among the ELM, multi-layer perceptron (MLP), support vector machine (SVM), and hidden Markov model (HMM) classifiers.
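The ELM classifier lends itself to a compact illustration: the hidden-layer weights are drawn at random and never trained, and only the output weights are fit in closed form by least squares. The sketch below is a minimal NumPy version; the hidden-layer size, the tanh activation, and the toy feature dimensions are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, seed=0):
    """Fit an ELM. X: (n_samples, n_features), Y: one-hot labels (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.standard_normal(n_hidden)                 # random, fixed biases
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)                # predicted class indices

# Toy usage: random data standing in for the heart-sound feature vectors
X = np.random.randn(100, 40)                          # e.g. 40-dim filter-bank features
Y = np.eye(9)[np.random.randint(0, 9, 100)]           # 9 disorder categories, one-hot
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because only the least-squares step involves fitting, training is essentially a single linear solve, which is what makes ELMs attractive when many classifier configurations must be compared.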

A New Wideband Speech/Audio Coder Interoperable with ITU-T G.729/G.729E (ITU-T G.729/G.729E와 호환성을 갖는 광대역 음성/오디오 부호화기)

  • Kim, Kyung-Tae;Lee, Min-Ki;Youn, Dae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.81-89 / 2008
  • Wideband speech, characterized by a bandwidth of about 7 kHz (50-7000 Hz), provides a substantial quality improvement in terms of naturalness and intelligibility. Although higher data rates are required, its applications have extended to audio and video conferencing, high-quality multimedia communications over mobile links or packet-switched transmissions, and digital AM broadcasting. In this paper, we present a new bandwidth-scalable coder for wideband speech and audio signals. The proposed coder splits the 8 kHz signal bandwidth into two narrow bands, and a different coding scheme is applied to each band. The lower-band signal is coded using the ITU-T G.729/G.729E coder, and the higher-band signal is compressed using a new algorithm based on the gammatone filter bank with an invertible auditory model (a rough sketch of gammatone analysis follows). Owing to the split-band architecture and the completely independent coding schemes for each band, the decoder output can be selected to be narrowband or wideband according to the channel condition. Subjective tests showed that, for wideband speech and audio signals, the proposed coder at 14.2/18 kbit/s produces quality superior to ITU-T 24 kbit/s G.722.1 with a shorter algorithmic delay.
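For readers unfamiliar with gammatone analysis, the sketch below builds a small gammatone filter bank by direct FIR convolution. The fourth-order impulse response and Glasberg-Moore ERB bandwidths are textbook defaults; the coder's actual invertible auditory model and channel layout are not described in the abstract, so everything here is an assumption for illustration only.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """Impulse response of a gammatone filter centred at fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    return (t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)
            * np.cos(2 * np.pi * fc * t))

def gammatone_bank(x, fs, centres):
    """Filter x through one gammatone channel per centre frequency."""
    return np.stack([np.convolve(x, gammatone_ir(fc, fs), mode='same')
                     for fc in centres])

# Toy usage: analyse the higher band (4-7 kHz) at fs = 16 kHz
fs = 16000
x = np.random.randn(fs)                       # 1 s of placeholder audio
centres = np.linspace(4000, 7000, 8)          # 8 hypothetical channels
subbands = gammatone_bank(x, fs, centres)     # shape: (8, len(x))
```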

Using noise filtering and sufficient dimension reduction method on unstructured economic data (노이즈 필터링과 충분차원축소를 이용한 비정형 경제 데이터 활용에 대한 연구)

  • Jae Keun Yoo;Yujin Park;Beomseok Seo
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.119-138 / 2024
  • Text indicators are increasingly valuable in economic forecasting but are often hindered by noise and high dimensionality. This study explores post-processing techniques, specifically noise filtering and dimensionality reduction, to normalize text indicators and enhance their utility through empirical analysis. Predictive target variables for the empirical analysis include the monthly leading index cyclical variations, the BSI (business survey index) all-industry sales performance and all-industry sales outlook, as well as the quarterly real GDP SA (seasonally adjusted) growth rate and real GDP YoY (year-on-year) growth rate. The study employs the Hodrick-Prescott filter, which is widely used in econometrics for noise filtering (a minimal sketch follows), together with sufficient dimension reduction, a nonparametric dimensionality reduction methodology, on the unstructured text data. The results reveal that noise filtering of text indicators significantly improves predictive accuracy for both monthly and quarterly variables, particularly when the dataset is large. Moreover, applying dimensionality reduction further enhances predictive performance. These findings imply that post-processing techniques such as noise filtering and dimensionality reduction are crucial for enhancing the utility of text indicators and can contribute to improving the accuracy of economic forecasts.
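As a point of reference, the Hodrick-Prescott filter separates a series y into trend and cycle by solving the penalized least-squares system (I + lambda * D'D) tau = y, where D is the second-difference operator. A minimal NumPy sketch follows; the smoothing parameters are the conventional textbook choices, not necessarily those used in the study.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: returns (trend, cycle) for series y."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # (n-2, n) second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    cycle = y - trend                          # the filtered-out cyclical/noise part
    return trend, cycle

# Toy usage: a noisy random-walk-plus-drift standing in for a text indicator
y = np.cumsum(np.random.randn(120)) + 0.1 * np.arange(120)
trend, cycle = hp_filter(y, lam=129600)        # 129600 is a common monthly choice
```

The classic lambda = 1600 applies to quarterly data; monthly series conventionally use a much larger value, which is why the toy call above passes 129600.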

Compression Sensing Technique for Efficient Structural Health Monitoring - Focusing on Optimization of CAFB and Shaking Table Test Using Kobe Seismic Waveforms (효율적인 SHM을 위한 압축센싱 기술 - Kobe 지진파형을 이용한 CAFB의 최적화 및 지진응답실험 중심으로)

  • Heo, Gwang-Hee;Lee, Chin-Ok;Seo, Sang-Gu;Jeong, Yu-Seung;Jeon, Joon-Ryong
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.24 no.2 / pp.23-32 / 2020
  • The compression sensing technology CAFB was developed to acquire the raw signal of a target structure by compressing it into a signal of an intended frequency range. For compression sensing, the CAFB can be optimized with various reference signals depending on the desired frequency range of the target structure. In addition, an optimized CAFB should be able to efficiently compress the effective structural responses of the target structure even in sudden or dangerous conditions such as earthquakes. In this paper, the target frequency range for efficient structural health monitoring of relatively flexible structures was set below 10 Hz, and the corresponding optimization method of the CAFB and its response performance under seismic conditions were evaluated experimentally. To this end, the CAFB was first optimized using the Kobe seismic waveform and embedded in a wireless IDAQ system. Seismic response tests were then conducted on a two-span bridge using the Kobe seismic waveform. Using the IDAQ system with the built-in CAFB, the seismic response of the two-span bridge was acquired wirelessly, and the obtained compressed signal was compared against the raw signal. The experimental results show that the compressed signal exhibited excellent response performance and data compression relative to the raw signal, and that the CAFB could effectively compress and sense the effective structural response of the structure even in seismic situations. In conclusion, this paper presents an optimization method that fits the CAFB to the intended frequency range (below 10 Hz), and the CAFB proved to be an economical and efficient data compression sensing technology for the instrumentation and monitoring of seismic conditions. A simplified illustration of band-limited compression appears below.
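The abstract does not spell out the CAFB's internal design, so the sketch below is only a loose stand-in for the general idea of band-limited compression sensing: keep the response content below the 10 Hz band of interest and resample it at a correspondingly lower rate. The filter order, the zero-phase filtering choice, and the decimation rule are all assumptions, not the CAFB's actual mechanism.

```python
import numpy as np
from scipy import signal

def compress_below_10hz(x, fs, f_cut=10.0):
    """Low-pass the raw response at f_cut, then decimate to ~4*f_cut samples/s."""
    sos = signal.butter(4, f_cut, btype='low', fs=fs, output='sos')
    filtered = signal.sosfiltfilt(sos, x)            # zero-phase low-pass
    factor = max(1, int(fs // (4 * f_cut)))          # decimation factor
    return filtered[::factor], fs / factor           # compressed signal, new rate

# Toy usage: a 100 Hz-sampled response with energy above and below 10 Hz
fs = 100
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
compressed, fs_c = compress_below_10hz(x, fs)        # half the samples, 30 Hz tone removed
```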

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a landmark victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning drew attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, the traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique are as follows. A CNN recognizes features by reading values that are adjacent to one another; in business data, however, the proximity of fields carries little meaning because the fields are usually independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, we dropped neurons with probability 0.5 in each hidden layer; a rough sketch of the CNN-with-dropout configuration follows at the end of this entry. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well on a binary classification problem to which they have rarely been applied, as well as in the fields where their effectiveness is proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement it brings. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
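The CNN-with-dropout configuration described above (a convolutional filter spanning all fields at once, an added hidden layer, and dropout at rate 0.5) can be sketched roughly in Keras. The layer widths, optimizer, and field count below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16                                   # hypothetical count of input fields
n_filters = 32                                  # "number of output data (filters)"

model = keras.Sequential([
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(n_filters, kernel_size=n_fields),  # one window over all fields at once
    layers.Flatten(),
    layers.Dropout(0.5),                        # drop neurons with probability 0.5
    layers.Dense(64, activation='relu'),        # added hidden layer for decisions
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),      # binary target: opens an account or not
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Toy usage with random data standing in for the bank telemarketing records
X = np.random.randn(256, n_fields, 1)
y = np.random.randint(0, 2, 256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

With kernel_size equal to the number of fields, the convolution produces a single position per filter, so each filter effectively learns one whole-record feature, matching the rationale given in the abstract.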