• Title/Summary/Keyword: Pearson Feature Selection


A Feature Set Selection Approach Based on Pearson Correlation Coefficient for Real Time Attack Detection (실시간 공격 탐지를 위한 Pearson 상관계수 기반 특징 집합 선택 방법)

  • Kang, Seung-Ho;Jeong, In-Seon;Lim, Hyeong-Seok
    • Convergence Security Journal / v.18 no.5_1 / pp.59-66 / 2018
  • The performance of a network intrusion detection system using machine learning depends heavily on the composition and the size of the feature set. The detection accuracy, such as the detection rate or the false positive rate, relies on the feature composition, while the time it takes to train and detect depends on the size of the feature set. Therefore, for the system to detect intrusions in real time, the feature set to be used should be small as well as appropriately composed. In this paper, we show that the size of the feature set can be further reduced without decreasing the detection rate by using the Pearson correlation coefficient between features along with the multi-objective genetic algorithm that was used to shrink the feature set in previous work. To evaluate the proposed method, experiments classifying 10 kinds of attacks and benign traffic were performed on the NSL-KDD data set.
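
The redundancy-filtering idea in this abstract (dropping features whose pairwise Pearson correlation is high) can be sketched in pure Python. The threshold value and the greedy keep-first policy below are illustrative assumptions, not the paper's exact procedure, and features are assumed non-constant:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_redundant(features, threshold=0.9):
    """Greedily keep a feature only if its |r| with every already-kept
    feature stays below the threshold. `features` maps name -> values."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# toy example: f2 is a linear copy of f1, so it is filtered out
feats = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [2, 4, 6, 8, 10],   # perfectly correlated with f1
    "f3": [5, 1, 4, 2, 3],
}
print(drop_redundant(feats))  # ['f1', 'f3']
```

A multi-objective genetic algorithm, as in the paper, would then search subsets of the surviving features, trading off subset size against detection rate.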


Performance Improvement of Freight Logistics Hub Selection in Thailand by Coordinated Simulation and AHP

  • Wanitwattanakosol, Jirapat;Holimchayachotikul, Pongsak;Nimsrikul, Phatchari;Sopadang, Apichat
    • Industrial Engineering and Management Systems / v.9 no.2 / pp.88-96 / 2010
  • This paper presents a two-phase quantitative framework to aid the decision-making process for the effective selection of an efficient freight logistics hub from 8 alternatives in Thailand on the North-South economic corridor. Phase 1 employs both multiple regression and Pearson feature selection to find the important criteria, as defined by the logistics hub score, and to reduce the number of criteria by eliminating the less important ones. The result of Pearson feature selection indicated that only 5 of 15 criteria affected the logistics hub score. Moreover, a genetic algorithm (GA) was constructed from the original 15-criteria data set to find the relationship between the logistics criteria and the freight logistics hub score. The statistical tools and the GA-based data mining identified the same 5 important criteria affecting the logistics hub score. Phase 2 performs a fuzzy stochastic AHP analysis with the five important criteria. This approach helps gain insight into how imprecision in judgment ratios may affect the alternatives and how the best alternative may be identified with a certain confidence. The main objective of the paper is to find the best alternative for selecting a freight logistics hub under proper criteria. The experimental results show that, using this approach, Chiang Mai province is the best place at a 95% confidence level.
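
Pearson feature selection against a target score, as used in Phase 1, amounts to ranking candidate criteria by the absolute value of their correlation with the logistics hub score. A minimal sketch; the criterion names and the `top_k` cutoff are illustrative assumptions:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_criteria(criteria, score, top_k):
    """Rank candidate criteria by |r| against the target score
    and keep the top_k strongest. `criteria` maps name -> values."""
    ranked = sorted(criteria,
                    key=lambda c: abs(pearson(criteria[c], score)),
                    reverse=True)
    return ranked[:top_k]

# toy data: "cost" tracks the hub score perfectly, "noise" does not
hub_score = [10, 20, 30, 40]
print(rank_criteria({"cost": [1, 2, 3, 4], "noise": [3, 1, 4, 1]},
                    hub_score, top_k=1))  # ['cost']
```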

Feature Selection for Classification of Mass Spectrometric Proteomic Data Using Random Forest (단백체 스펙트럼 데이터의 분류를 위한 랜덤 포리스트 기반 특성 선택 알고리즘)

  • Ohn, Syng-Yup;Chi, Seung-Do;Han, Mi-Young
    • Journal of the Korea Society for Simulation / v.22 no.4 / pp.139-147 / 2013
  • This paper proposes a novel feature selection method for mass spectrometric proteomic data based on Random Forest. The method includes an effective preprocessing step that filters out a large number of redundant, highly correlated features, and applies a tournament strategy to obtain an optimal feature subset. Experiments on three public datasets (Ovarian 4-3-02, Ovarian 7-8-02, and Prostate) show that the new method achieves high performance compared with widely used methods, along with a balanced rate of specificity and sensitivity.
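
The abstract does not spell out the tournament strategy, so the following is only a generic tournament-style subset search, sketched under the assumption that candidate feature subsets are compared pairwise by a scoring function; the toy `score` and feature names are illustrative, not the paper's:

```python
import random

def tournament_select(subsets, score_fn, rounds=3, rng=None):
    """Generic tournament: each round, pair up candidate feature subsets,
    keep the higher-scoring one of each pair, and repeat until one
    candidate remains or the round budget is exhausted."""
    rng = rng or random.Random(0)
    pool = list(subsets)
    for _ in range(rounds):
        if len(pool) <= 1:
            break
        rng.shuffle(pool)
        pool = [max(pool[i:i + 2], key=score_fn) for i in range(0, len(pool), 2)]
    return max(pool, key=score_fn)

relevant = {"m1", "m2"}   # hypothetical "informative" mass peaks

def score(subset):
    # toy score: reward features in the relevant set, penalize subset size
    return sum(f in relevant for f in subset) - 0.1 * len(subset)

candidates = [{"m1"}, {"m1", "m2"}, {"m3"}, {"m1", "m3", "m4"}]
print(sorted(tournament_select(candidates, score)))  # ['m1', 'm2']
```

In the paper's setting, `score_fn` would be a classifier's cross-validated performance on the candidate subset rather than this toy heuristic.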

Compositional Feature Selection and Its Effects on Bandgap Prediction by Machine Learning (기계학습을 이용한 밴드갭 예측과 소재의 조성기반 특성인자의 효과)

  • Chunghee Nam
    • Korean Journal of Materials Research / v.33 no.4 / pp.164-174 / 2023
  • The bandgap characteristics of semiconductor materials are an important factor when utilizing them for various applications. In this study, based on data provided by AFLOW (Automatic-FLOW for Materials Discovery), the bandgap of a semiconductor material was predicted using only the material's compositional features. The compositional features were generated using the Python modules 'Pymatgen' and 'Matminer'. Pearson correlation coefficients (PCC) between the compositional features were calculated, and features with a correlation coefficient larger than 0.95 were removed in order to avoid overfitting. The bandgap prediction performance was compared using the metrics of R2 score and root-mean-squared error. Predicting the bandgap with random forest and XGBoost as representatives of ensemble algorithms, it was found that XGBoost gave better results after cross-validation and hyper-parameter tuning. To investigate the effect of compositional feature selection on the bandgap prediction of the machine learning model, the prediction performance was studied according to the number of features based on feature importance methods. It was found that there were no significant changes in prediction performance beyond an appropriate number of features. Furthermore, artificial neural networks were employed to compare the prediction performance by adjusting the number of features guided by the PCC values, resulting in a best R2 score of 0.811. By comparing and analyzing the bandgap distribution and prediction performance according to the material groups containing specific elements (F, N, Yb, Eu, Zn, B, Si, Ge, Fe, Al), various information for material design was obtained.

A Study on the prediction of BMI(Benthic Macroinvertebrate Index) using Machine Learning Based CFS(Correlation-based Feature Selection) and Random Forest Model (머신러닝 기반 CFS(Correlation-based Feature Selection)기법과 Random Forest모델을 활용한 BMI(Benthic Macroinvertebrate Index) 예측에 관한 연구)

  • Go, Woo-Seok;Yoon, Chun Gyeong;Rhee, Han-Pil;Hwang, Soon-Jin;Lee, Sang-Woo
    • Journal of Korean Society on Water Environment / v.35 no.5 / pp.425-431 / 2019
  • Recently, attention has been drawn to the good quality of water resources as well as to water welfare, in order to improve the quality of life. This study predicts the benthic macroinvertebrate index (BMI), an indicator of aquatic ecological health, using the machine-learning-based CFS (Correlation-based Feature Selection) method and a random forest model, and compares the measured and predicted values of the BMI. From the data collected from the Han River's branches over 10 years, 1,312 records were extracted and utilized. Pearson correlation analysis of these data showed a lack of correlation between any single factor and the BMI, so the CFS method for multiple regression analysis was introduced. This study selected 10 factors (water temperature, DO, electrical conductivity, turbidity, BOD, $NH_3-N$, T-N, $PO_4-P$, T-P, and average flow rate) considered to be related to the BMI, and the random forest model was built on these ten factors. To prove the validity of the model, $R^2$, %Difference, NSE (Nash-Sutcliffe Efficiency), and RMSE (Root Mean Square Error) were used; the resulting values were 0.9438, -0.997, and 0.992, and the accuracy rate was at the 71.6% level. These results can suggest a future direction for water resource management and a pre-review function for water ecology prediction.
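
The CFS method mentioned here ranks feature subsets by a merit score that rewards feature-target correlation and penalizes inter-feature correlation (Hall's formulation). A minimal sketch; the average-correlation values in the example are illustrative, not from the paper:

```python
import math

def cfs_merit(k, avg_feat_target_corr, avg_feat_feat_corr):
    """CFS subset merit (Hall): for a subset of k features with average
    feature-target correlation r_cf and average pairwise feature-feature
    correlation r_ff,
        merit = k * r_cf / sqrt(k + k*(k-1)*r_ff)
    Higher merit = more predictive, less redundant subset."""
    return (k * avg_feat_target_corr) / math.sqrt(k + k * (k - 1) * avg_feat_feat_corr)

# a subset of 10 independent factors outranks an equally target-correlated
# but mutually redundant subset of the same size
print(cfs_merit(10, 0.4, 0.0) > cfs_merit(10, 0.4, 0.2))  # True
```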

The ensemble approach in comparison with the diverse feature selection techniques for estimating NPPs parameters using the different learning algorithms of the feed-forward neural network

  • Moshkbar-Bakhshayesh, Khalil
    • Nuclear Engineering and Technology / v.53 no.12 / pp.3944-3951 / 2021
  • Several arguments, such as the no-free-lunch theorem, indicate that there is no universal feature selection (FS) technique that outperforms all others. Moreover, some approaches, such as using a synthetic dataset in the presence of a large number of FS techniques, are very tedious and time-consuming. In this study, to tackle the dependency of estimation accuracy on the selected FS technique, a methodology based on a heterogeneous ensemble is proposed. The performance of the major learning algorithms of the neural network (i.e., the FFNN-BR and the FFNN-LM) in combination with diverse FS techniques (i.e., the NCA, the F-test, Kendall's tau, the Pearson, the Spearman, and the Relief) and different combination techniques of the heterogeneous ensemble (i.e., the Min, the Median, the Arithmetic mean, and the Geometric mean) is considered. The target parameters/transients of the Bushehr nuclear power plant (BNPP) are examined as the case study. The results show that the Min combination technique gives the most accurate estimation. Therefore, if the number of FS techniques is m and the number of learning algorithms is n, the heterogeneous ensemble may reduce the search space for acceptable estimation of the target parameters from n × m to n × 1. The proposed methodology gives a simple and practical approach for more reliable and more accurate estimation of the target parameters compared to methods such as the use of a synthetic dataset or trial and error.
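
The four combination techniques named above (Min, Median, Arithmetic mean, Geometric mean) each collapse the per-FS-technique estimates of one target parameter into a single ensemble estimate. A minimal sketch; the estimate values are toy assumptions, and the geometric mean assumes positive estimates:

```python
import math
import statistics

def combine(predictions, method):
    """Combine per-model estimates of one target parameter.
    `predictions` holds one value per FS-technique/learning-algorithm pair."""
    if method == "min":
        return min(predictions)
    if method == "median":
        return statistics.median(predictions)
    if method == "mean":
        return statistics.mean(predictions)
    if method == "geomean":  # requires strictly positive predictions
        return math.exp(statistics.mean(math.log(p) for p in predictions))
    raise ValueError(f"unknown combination method: {method}")

estimates = [101.2, 99.8, 100.5, 103.0]  # one estimate per FS/model pair
print(combine(estimates, "min"))  # 99.8 -- the Min rule, found most accurate in the paper
```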

Classifying Cancer Using Partially Correlated Genes Selected by Forward Selection Method (전진선택법에 의해 선택된 부분 상관관계의 유전자들을 이용한 암 분류)

  • 유시호;조성배
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.83-92 / 2004
  • A gene expression profile is numerical data of gene expression levels measured on a microarray. Generally, each specific tissue shows different expression levels in related genes, so cancer can be classified using gene expression profiles. Because not all genes are relevant to classification, it is necessary to select related genes; this is called feature selection. This paper proposes a new gene selection method using the forward selection method from regression analysis. The method reduces redundant information among the selected genes for more efficient classification. We used a k-nearest-neighbor classifier and tested the method on a colon cancer dataset. The results were compared with Pearson's coefficient and Spearman's coefficient methods, and the proposed method showed better performance, with 90.3% classification accuracy. The method was also successfully applied to a lymphoma dataset.
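
Forward selection, as used here, greedily adds the gene that most improves a subset score and stops when no addition helps. A sketch with a hypothetical relevance/redundancy score; the gene names and numbers are illustrative, not from the paper:

```python
def forward_select(features, score_fn, max_features):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion most improves the subset score; stop when nothing helps."""
    selected = []
    best = score_fn(selected)
    while len(selected) < max_features:
        gains = [(score_fn(selected + [f]), f) for f in features if f not in selected]
        if not gains:
            break
        top_score, top_feat = max(gains)
        if top_score <= best:
            break
        selected.append(top_feat)
        best = top_score
    return selected

def toy_score(subset):
    # hypothetical per-gene relevance minus a penalty for redundant pairs
    rel = {"g1": 0.9, "g2": 0.85, "g3": 0.3}
    red = {frozenset(("g1", "g2")): 0.95}  # g1 and g2 carry the same signal
    gain = sum(rel[f] for f in subset)
    penalty = sum(red.get(frozenset((a, b)), 0.0)
                  for i, a in enumerate(subset) for b in subset[i + 1:])
    return gain - penalty

# g2 is highly relevant but redundant with g1, so it is skipped
print(forward_select(["g1", "g2", "g3"], toy_score, 3))  # ['g1', 'g3']
```

In the paper's setting, the score would come from the regression fit (and ultimately k-NN accuracy) rather than this toy table.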

Analyzing Factors Contributing to Research Performance using Backpropagation Neural Network and Support Vector Machine

  • Ermatita, Ermatita;Sanmorino, Ahmad;Samsuryadi, Samsuryadi;Rini, Dian Palupi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.153-172 / 2022
  • In this study, the authors analyze factors contributing to research performance using a Backpropagation Neural Network and a Support Vector Machine. The analysis of factors contributing to lecturer research performance starts from defining the features. The next stage is to collect datasets based on the defined features, and then to transform the raw dataset into data ready to be processed. After the data are transformed, the next stage is feature selection; beforehand, the target feature is determined, namely research performance. Feature selection consists of Chi-Square selection (U) and the Pearson correlation coefficient (CM). It produces eight factors contributing to lecturer research performance: Scientific Papers (U: 154.38, CM: 0.79), Number of Citations (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). To analyze the factors, two data mining classifiers were involved: Backpropagation Neural Networks (BPNN) and Support Vector Machine (SVM), evaluated with accuracy scores of 95 percent for BPNN and 92 percent for SVM. The essence of this analysis is not to find the highest accuracy score, but rather to determine whether the factors can pass the test phase with the expected results. The findings of this study reveal which factors have a significant impact on research performance and which do not.
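
The Chi-Square selection score (U) reported above can be computed from a feature-vs-target contingency table. The table below is a toy example; how the paper bins each factor against the research-performance target is an assumption left open:

```python
def chi_square(table):
    """Chi-square statistic for an r x c contingency table
    (rows: feature levels, columns: target classes)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# toy table: feature level (rows) vs high/low research performance (columns)
print(round(chi_square([[20, 10], [10, 20]]), 3))  # 6.667
```

A large statistic means the feature's distribution differs across target classes, which is why high-U factors such as Scientific Papers rank first; the Pearson CM score from the earlier sketches captures the complementary linear association.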

Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning

  • Sugiyama, Masashi;Liu, Song;du Plessis, Marthinus Christoffel;Yamanaka, Masao;Yamada, Makoto;Suzuki, Taiji;Kanamori, Takafumi
    • Journal of Computing Science and Engineering / v.7 no.2 / pp.99-111 / 2013
  • Approximating a divergence between two probability distributions from their samples is a fundamental challenge in statistics, information theory, and machine learning. A divergence approximator can be used for various purposes, such as two-sample homogeneity testing, change-point detection, and class-balance estimation. Furthermore, an approximator of a divergence between the joint distribution and the product of marginals can be used for independence testing, which has a wide range of applications, including feature selection and extraction, clustering, object matching, independent component analysis, and causal direction estimation. In this paper, we review recent advances in divergence approximation. Our emphasis is that directly approximating the divergence without estimating probability distributions is more sensible than a naive two-step approach of first estimating probability distributions and then approximating the divergence. Furthermore, despite the overwhelming popularity of the Kullback-Leibler divergence as a divergence measure, we argue that alternatives such as the Pearson divergence, the relative Pearson divergence, and the $L^2$-distance are more useful in practice because of their computationally efficient approximability, high numerical stability, and superior robustness against outliers.
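
For reference, the Pearson divergence named above is the f-divergence with $f(t) = \frac{1}{2}(t-1)^2$, and the relative Pearson divergence replaces the denominator density with a mixture, which keeps the density ratio bounded (standard definitions from the density-ratio estimation literature; the mixture weight $\alpha$ is a design parameter):

```latex
\mathrm{PE}(p\|q) = \frac{1}{2}\int q(x)\left(\frac{p(x)}{q(x)} - 1\right)^{2}\,dx,
\qquad
\mathrm{PE}_{\alpha}(p\|q) = \mathrm{PE}\bigl(p \,\big\|\, \alpha p + (1-\alpha)\,q\bigr),
\quad 0 \le \alpha < 1.
```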

Assessment of Landslide Susceptibility in Jecheon Using Deep Learning Based on Exploratory Data Analysis (데이터 탐색을 활용한 딥러닝 기반 제천 지역 산사태 취약성 분석)

  • Sang-A Ahn;Jung-Hyun Lee;Hyuck-Jin Park
    • The Journal of Engineering Geology / v.33 no.4 / pp.673-687 / 2023
  • Exploratory data analysis is the process of observing and understanding data collected from various sources to identify their distributions and correlations through their structures and characteristics. This process can be used to identify correlations among conditioning factors and to select the most effective factors for analysis. It can help in the assessment of landslide susceptibility, because landslides are usually triggered by multiple factors whose impacts vary by region. This study compared two stages of exploratory data analysis to examine the impact of the data exploration procedure on the landslide prediction model's performance with respect to factor selection. Deep-learning-based landslide susceptibility analysis used either a combination of selected factors or all 23 factors. During the data exploration phase, we used a Pearson correlation coefficient heat map and a histogram of random forest feature importance. We then assessed the accuracy of our deep-learning-based analysis of landslide susceptibility using a confusion matrix. Finally, a landslide susceptibility map was generated using the landslide susceptibility index derived from the proposed analysis. The analysis revealed that using all 23 factors resulted in low accuracy (55.90%), but using the 13 factors selected in one step of exploration improved the accuracy to 81.25%. This was further improved to 92.80% using only the nine conditioning factors selected during both steps of the data exploration. Exploratory data analysis therefore selected the conditioning factors most suitable for landslide susceptibility analysis and thereby improved the performance of the analysis.