• Title/Summary/Keyword: frequency feature

Search results: 1,045

A Weighted Fuzzy Min-Max Neural Network for Pattern Classification (패턴 분류 문제에서 가중치를 고려한 퍼지 최대-최소 신경망)

  • Kim Ho-Joon; Park Hyun-Jung
    • Journal of KIISE: Software and Applications, v.33 no.8, pp.692-702, 2006
  • In this study, a weighted fuzzy min-max (WFMM) neural network model for pattern classification is proposed. The model has a modified FMM neural network structure in which a weight concept is added to represent the frequency factor of feature values in the learning data set. We first present a new activation function for the network, defined as a hyperbox membership function. We then introduce a new learning algorithm for the model that consists of three processes: hyperbox creation/expansion, hyperbox overlap test, and hyperbox contraction. A weight adaptation rule considering the frequency factors is defined for the learning process. Finally, we describe a feature analysis technique using the proposed model, in which four kinds of relevance factors among feature values, feature types, hyperboxes, and pattern classes are proposed to analyze the relative importance of each feature in a given problem. Two practical applications, Fisher's Iris data and the Cleveland medical data, have been used for the experiments, and the effectiveness of the proposed method is discussed through the experimental results.
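As a rough illustration of the hyperbox idea behind FMM-style models, the minimal sketch below uses a classic Simpson-style membership function; it is not the paper's weighted activation, and the min/max points `v`, `w` and slope parameter `gamma` are assumed names.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Degree (0..1) to which pattern x lies in the hyperbox with min point v
    and max point w; gamma controls how quickly membership decays outside."""
    below = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    above = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    return float(np.mean((below + above) / 2.0))

# 2-D hyperbox covering [0.2, 0.4] x [0.3, 0.6]
v, w = np.array([0.2, 0.3]), np.array([0.4, 0.6])
print(hyperbox_membership(np.array([0.3, 0.5]), v, w))  # point inside -> 1.0
print(hyperbox_membership(np.array([0.9, 0.9]), v, w))  # point outside -> lower value
```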

API Grouping Based Flow Analysis and Frequency Analysis Technique for Android Malware Classification (안드로이드 악성코드 분류를 위한 Flow Analysis 기반의 API 그룹화 및 빈도 분석 기법)

  • Shim, Hyunseok; Park, Jungsoo; Doan, Thien-Phuc; Jung, Souhwan
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.6, pp.1235-1242, 2019
  • Although several machine learning techniques have been applied to Android malware categorization, analysis remains difficult because of problems such as overfitting and the inclusion of code that is never executed. In this paper, we introduce a tool we implemented to address these problems. The tool consists of approximately 1,500 lines of Java code and performs flow analysis on the set of APIs, i.e., on the control flow graph. It groups APIs by their relationships and analyzes only code that is actually executed. Using the tool, we grouped 39,032 APIs into 4,972 groups, or 12,123 groups when class names are included. We collected 7,000 APKs from 7 malware families to evaluate our feature reduction technique, and we further reduced the features by selecting only APIs whose frequency exceeds 20%. The final feature set for the collected APKs contains 263 features.
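A minimal Python sketch of the frequency-based reduction step described above; the authors' tool is written in Java, and the API-group names and helper function below are hypothetical.

```python
from collections import Counter

def select_frequent_apis(apk_api_sets, min_freq=0.20):
    """Keep API groups that occur in more than min_freq of the APKs
    (the abstract's final step uses a 20% frequency threshold)."""
    counts = Counter(api for apis in apk_api_sets for api in set(apis))
    threshold = min_freq * len(apk_api_sets)
    return {api for api, c in counts.items() if c > threshold}

# Hypothetical toy example with three APKs and made-up API-group names;
# a 50% threshold is used here only because the toy set is so small.
apks = [
    {"net.http", "crypto.cipher", "telephony.sms"},
    {"net.http", "crypto.cipher"},
    {"net.http", "io.file"},
]
print(select_frequent_apis(apks, min_freq=0.5))   # {'net.http', 'crypto.cipher'}
```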

Selecting Good Speech Features for Recognition

  • Lee, Young-Jik; Hwang, Kyu-Woong
    • ETRI Journal, v.18 no.1, pp.29-41, 1996
  • This paper describes a method to select suitable features for speech recognition using an information-theoretic measure. Conventional speech recognition systems heuristically choose a portion of the frequency components, cepstrum, mel-cepstrum, energy, and their time differences of the speech waveform as their speech features. However, such systems cannot perform well if the selected features are not suitable for speech recognition. Since the recognition rate is usually the only performance measure of a speech recognition system, it is hard to judge how suitable a selected feature is. To solve this problem, it is essential to analyze the feature itself and measure how good it is. Good speech features should contain all of the class-related information and as little class-irrelevant variation as possible. In this paper, we suggest a method to measure the class-related information and the amount of class-irrelevant variation based on Shannon's information theory. Using this method, we compare the mel-scaled FFT, cepstrum, mel-cepstrum, and wavelet features of the TIMIT speech data. The results show that, among these features, the mel-scaled FFT is the best feature for speech recognition according to the proposed measure.
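One simple way to make this kind of comparison concrete (a sketch of an information-theoretic feature score, not the paper's exact measure) is to estimate the mutual information between a discretized feature and the class label:

```python
import numpy as np

def mutual_information(feature, labels, n_bins=16):
    """I(feature; class) in bits, with the feature quantized into n_bins."""
    bins = np.digitize(feature, np.histogram_bin_edges(feature, bins=n_bins))
    classes = {c: i for i, c in enumerate(set(labels))}
    joint = np.zeros((n_bins + 2, len(classes)))
    for b, y in zip(bins, labels):
        joint[b, classes[y]] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Toy example: a feature that separates two classes carries more class information.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 500)
good = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
bad = rng.normal(0, 1, 1000)
print(mutual_information(good, y), mutual_information(bad, y))
```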

Parts-based Feature Extraction of Speech Spectrum Using Non-Negative Matrix Factorization (Non-Negative Matrix Factorization을 이용한 음성 스펙트럼의 부분 특징 추출)

  • 박정원; 김창근; 허강인
    • Proceedings of the IEEK Conference, 2003.11a, pp.49-52, 2003
  • In this paper, we propose a new speech feature parameter based on NMF (Non-Negative Matrix Factorization). NMF can represent multi-dimensional data through effective dimensionality reduction by matrix factorization under a non-negativity constraint, and the reduced data capture parts-based features of the input. We verify the usefulness of the NMF algorithm for speech feature extraction by applying it to the Mel-scaled filter bank outputs to obtain the feature parameters. Recognition experiments confirm that the proposed feature parameter gives better recognition performance than the commonly used MFCC (mel-frequency cepstral coefficients).
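A minimal sketch of this kind of pipeline using scikit-learn's NMF; the filter-bank matrix below is a random stand-in and the number of components is an assumption, not the authors' setting.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
mel_spec = rng.random((40, 200))      # stand-in for Mel filter-bank output: 40 bands x 200 frames

nmf = NMF(n_components=12, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(mel_spec)       # 40 x 12 spectral "parts" (basis vectors)
H = nmf.components_                   # 12 x 200 activations -> per-frame feature vectors
print(W.shape, H.shape)
```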

Patterns Recognition Using Translation-Invariant Wavelet Transform (위치이동에 무관한 웨이블릿 변환을 이용한 패턴인식)

  • Kim, Kuk-Jin; Cho, Seong-Won; Kim, Jae-Min; Lim, Cheol-Su
    • Journal of the Korean Institute of Intelligent Systems, v.13 no.3, pp.281-286, 2003
  • The wavelet transform can effectively represent the local characteristics of a signal in the space-frequency domain, but feature vectors extracted with the standard wavelet transform are not translation-invariant. This paper describes a new, translation-invariant feature extraction method based on the wavelet transform. An iris recognition method built on this feature extraction is robust to noise. Experiments show that the proposed method yields superior iris recognition performance.
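The paper's construction is not reproduced here, but one common route to translation-tolerant wavelet features (a sketch assuming the PyWavelets package) is to keep only subband energies of the undecimated, stationary wavelet transform:

```python
import numpy as np
import pywt  # PyWavelets

def swt_energy_features(signal, wavelet="db4", level=3):
    """Energy of each detail subband of the stationary wavelet transform;
    signal length must be divisible by 2**level."""
    coeffs = pywt.swt(signal, wavelet, level=level)   # [(cA, cD), ...] per level
    return np.array([np.sum(cD ** 2) for _, cD in coeffs])

rng = np.random.default_rng(1)
x = rng.random(256)
shifted = np.roll(x, 17)
print(swt_energy_features(x))
print(swt_energy_features(shifted))   # nearly identical subband energies
```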

Music Genre Classification Based on Timbral Texture and Rhythmic Content Features

  • Baniya, Babu Kaji; Ghimire, Deepak; Lee, Joonwhon
    • Proceedings of the Korea Information Processing Society Conference, 2013.05a, pp.204-207, 2013
  • Music genre classification is an essential component of a music information retrieval system. Two components are important for good genre classification: audio feature extraction and the classifier. This paper incorporates two kinds of features for genre classification, timbral texture and rhythmic content features. Timbral texture comprises several spectral and Mel-Frequency Cepstral Coefficient (MFCC) features. Before choosing a timbral feature, we examine which features contribute least to genre discrimination, which facilitates reducing the feature dimension. For the timbral features, central moments up to the 4th order and the covariance components of mutual features are included to improve the overall classification result. For the rhythmic content, features extracted from the beat histogram are selected. An Extreme Learning Machine (ELM) with bagging is used as the classifier. Based on the proposed feature sets and classifier, experiments are performed on the well-known GTZAN dataset with ten music genres. The proposed method achieves better classification accuracy than existing approaches.
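A sketch of how such a timbral-texture summary is commonly formed (an assumed form; the paper's exact feature set, MFCC extraction, and ELM classifier are not reproduced):

```python
import numpy as np
from scipy import stats

def timbral_texture_features(mfcc_frames):
    """mfcc_frames: (n_frames, n_coeffs) MFCC matrix for one audio clip.
    Returns per-coefficient central moments up to 4th order plus the
    upper-triangular covariance terms between coefficients."""
    mean = mfcc_frames.mean(axis=0)
    var = mfcc_frames.var(axis=0)
    skew = stats.skew(mfcc_frames, axis=0)
    kurt = stats.kurtosis(mfcc_frames, axis=0)
    cov = np.cov(mfcc_frames, rowvar=False)
    upper = cov[np.triu_indices_from(cov, k=1)]   # mutual-feature covariances
    return np.concatenate([mean, var, skew, kurt, upper])

rng = np.random.default_rng(0)
mfcc = rng.normal(size=(400, 13))                  # stand-in for 400 frames of 13 MFCCs
print(timbral_texture_features(mfcc).shape)        # (130,)
```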

Multi-step wind speed forecasting synergistically using generalized S-transform and improved grey wolf optimizer

  • Ruwei Ma; Zhexuan Zhu; Chunxiang Li; Liyuan Cao
    • Wind and Structures, v.38 no.6, pp.461-475, 2024
  • A reliable wind speed forecasting method is crucial for applications in wind engineering. In this study, the generalized S-transform (GST) is applied to wind speed forecasting to uncover the time-frequency characteristics of non-stationary wind speed data. The improved grey wolf optimizer (IGWO) is employed to optimize the adjustable parameters of the GST so as to obtain the best time-frequency resolution. A hybrid method based on the IGWO-optimized GST is then proposed and validated for multi-step non-stationary wind speed forecasting. The historical wind speed is used as the first input feature, while the dynamic time-frequency characteristics obtained by the IGWO-optimized GST serve as the second input feature. A comparative experiment with six competing methods demonstrates that the proposed method performs best in terms of prediction accuracy and stability. The superiority of the GST over other time-frequency analysis methods is also examined in a further experiment. It can be concluded that introducing the IGWO-optimized GST deeply exploits the time-frequency characteristics and effectively improves prediction accuracy.
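For reference, a compact sketch of the standard (non-generalized) discrete S-transform in its frequency-domain form; the GST's extra window parameters and the IGWO optimization from the paper are not included, and the window handling here is an approximation.

```python
import numpy as np

def s_transform(x):
    """Standard discrete Stockwell (S-) transform of a real signal x,
    computed in the frequency domain; rows are frequencies 0..N/2."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.arange(N)
    m_shift = np.where(m <= N // 2, m, m - N)          # centered spectral offsets
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                                   # zero-frequency row
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m_shift ** 2 / n ** 2)
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)      # localized spectrum at frequency index n
    return S

t = np.linspace(0.0, 1.0, 256, endpoint=False)
sig = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 50 * t ** 2)  # toy non-stationary signal
print(np.abs(s_transform(sig)).shape)                   # (129, 256): frequency x time
```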

An Empirical Study on Improving the Performance of Text Categorization Considering the Relationships between Feature Selection Criteria and Weighting Methods (자질 선정 기준과 가중치 할당 방식간의 관계를 고려한 문서 자동분류의 개선에 대한 연구)

  • Lee Jae-Yun
    • Journal of the Korean Society for Library and Information Science, v.39 no.2, pp.123-146, 2005
  • This study aims to find consistent strategies for feature selection and feature weighting that can improve the effectiveness and efficiency of a kNN text classifier. Feature selection criteria and feature weighting methods are as important as the classification algorithm for achieving good text categorization performance, yet most earlier studies chose conflicting strategies for the two. In this study, the performance of several feature selection criteria is measured while accounting for the storage space of inverted index records and the classification time. The classification experiments examine IDF as a feature selection criterion and conventional feature selection criteria, e.g. mutual information, as feature weighting methods. The results suggest that by using measures which prefer low-frequency features both as the feature selection criterion and as the feature weighting method, the classification speed can be increased by a factor of three to five without losing classification accuracy.
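A small sketch (an assumed concrete realization with scikit-learn, not the study's experimental setup) of using the same low-frequency-preferring measure, IDF, for both term selection and term weighting before kNN classification:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

docs = ["cheap loans apply now", "meeting agenda attached",
        "win cheap prizes now", "project meeting notes attached"]
labels = [1, 0, 1, 0]                     # toy spam / not-spam labels

vec = TfidfVectorizer()                    # TF-IDF weighting
X = vec.fit_transform(docs)

idf = vec.idf_                             # selection criterion: keep high-IDF (low-frequency) terms
keep = np.argsort(idf)[-10:]               # top 10 terms by IDF
X_sel = X[:, keep]

knn = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(X_sel, labels)
print(knn.predict(vec.transform(["cheap prizes now"])[:, keep]))   # -> [1]
```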

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao; Ke Wang; Jinjing Zhang; Jialong Zhang; Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.8, pp.2068-2082, 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. We therefore propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Because depth HF components and HF color features are structurally homogeneous, only the HF color features are used to enhance the depth HF features, while the LF color features are not used. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, all the depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. These enhanced depth features are then fed into a multistage depth map fusion reconstruction block, in which a cross enhancement module interleaves features between different convolution groups to fully exploit the spatial correlation of the depth map. Experimental results show that the proposed method is superior to many recent DMSR methods in terms of both root mean square error and mean absolute deviation.
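A toy sketch of the high-/low-frequency split idea, realized here as a simple Gaussian-blur/residual decomposition on NumPy arrays; the paper's learned decomposition, recursion, and fusion blocks are not reproduced, and `alpha` is a hypothetical mixing weight.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_hf_lf(feat, sigma=2.0):
    """Split a 2-D feature map into a blurred low-frequency part and the
    high-frequency residual."""
    lf = gaussian_filter(feat, sigma=sigma)
    return feat - lf, lf                     # (HF, LF)

rng = np.random.default_rng(0)
depth_feat = rng.random((64, 64))
color_feat = rng.random((64, 64))

depth_hf, depth_lf = split_hf_lf(depth_feat)
color_hf, _ = split_hf_lf(color_feat)        # LF color component is discarded

# Only the HF color component guides the depth HF component.
alpha = 0.5                                  # hypothetical mixing weight
enhanced = depth_lf + (depth_hf + alpha * color_hf)
print(enhanced.shape)
```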

Emotion recognition from speech using Gammatone auditory filterbank

  • Le, Ba-Vui; Lee, Young-Koo; Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference, 2011.06a, pp.255-258, 2011
  • This paper describes an application of a Gammatone auditory filterbank to emotion recognition from speech. The Gammatone filterbank, a bank of Gammatone filters, is used as a preprocessing stage before feature extraction to obtain the features most relevant to emotion recognition from speech. In the feature extraction step, the energy of the output signal of each filter is computed and combined with those of all the other filters to produce a feature vector for the learning step. A feature vector is estimated over a short time window of the input speech signal to exploit its time-domain dependence. In the learning step, a Hidden Markov Model (HMM) is trained for each emotion class and used to recognize the emotion of a particular input utterance. In the experiments, feature extraction based on the Gammatone filterbank (GTF) gives better results than features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for both speech recognition and emotion recognition from speech.
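A rough sketch of Gammatone filterbank energy features using SciPy's gammatone filter design; the number of channels, the center frequencies, and the frame handling below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.signal import gammatone, lfilter

def gtf_energy_features(frame, fs, center_freqs):
    """Per-channel log energy of one speech frame after Gammatone filtering."""
    feats = []
    for fc in center_freqs:
        b, a = gammatone(fc, "iir", fs=fs)       # 4th-order Gammatone filter at fc
        y = lfilter(b, a, frame)
        feats.append(np.log(np.sum(y ** 2) + 1e-10))
    return np.array(feats)

fs = 16000
frame = np.random.default_rng(0).normal(size=400)          # stand-in for a 25 ms frame
center_freqs = np.geomspace(100, 7000, num=24)              # 24 channels (assumed)
print(gtf_energy_features(frame, fs, center_freqs).shape)   # (24,)
```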