• Title/Summary/Keyword: data selection


Grid Resource Selection System Using Decision Tree Method (의사결정 트리 기법을 이용한 그리드 자원선택 시스템)

  • Noh, Chang-Hyeon;Cho, Kyu-Cheol;Ma, Yong-Beom;Lee, Jong-Sik
    • Journal of the Korea Society of Computer and Information / v.13 no.1 / pp.1-10 / 2008
  • In order to achieve high-performance data processing, effective resource selection is needed, since grid resources are composed of heterogeneous networks and OS systems in the grid environment. In this paper, we classify grid resources with data properties and user requirements for resource selection using a decision tree method. Our resource selection method can provide a suitable resource selection methodology to grid users through classification with a decision tree. This paper evaluates our grid system performance in terms of throughput, utilization, job loss, and average turn-around time, and compares the experimental results of our resource selection model with those of existing resource selection models such as Condor-G and Nimrod-G. These experimental results show that our resource selection model provides a vision of an efficient grid resource selection methodology.

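A minimal sketch of the core step described in the abstract, classifying candidate grid resources with a decision tree; the feature set, data, and library (scikit-learn) are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch: classify candidate grid resources with a decision tree
# (assumed setup; feature names and data are illustrative, not from the paper).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical resource features: [CPU cores, bandwidth (Mbps), data size handled (GB)]
X_train = np.array([[8, 100, 50], [2, 10, 5], [16, 1000, 200], [4, 50, 20]])
y_train = np.array([1, 0, 1, 0])   # 1 = suitable for this job class, 0 = not suitable

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Rank new resources by the tree's predicted suitability.
candidates = np.array([[8, 200, 80], [2, 20, 10]])
print(tree.predict(candidates))   # e.g., [1 0]
```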

Landslide susceptibility assessment using feature selection-based machine learning models

  • Liu, Lei-Lei;Yang, Can;Wang, Xiao-Mi
    • Geomechanics and Engineering / v.25 no.1 / pp.1-16 / 2021
  • Machine learning models have been widely used for landslide susceptibility assessment (LSA) in recent years. The large number of inputs or conditioning factors for these models, however, can reduce the computation efficiency and increase the difficulty in collecting data. Feature selection is a good tool to address this problem by selecting the most important features among all factors to reduce the size of the input variables. However, two important questions need to be solved: (1) how do feature selection methods affect the performance of machine learning models? and (2) which feature selection method is the most suitable for a given machine learning model? This paper aims to address these two questions by comparing the predictive performance of 13 feature selection-based machine learning (FS-ML) models and 5 ordinary machine learning models on LSA. First, five commonly used machine learning models (i.e., logistic regression, support vector machine, artificial neural network, Gaussian process and random forest) and six typical feature selection methods in the literature are adopted to constitute the proposed models. Then, fifteen conditioning factors are chosen as input variables and 1,017 landslides are used as recorded data. Next, feature selection methods are used to obtain the importance of the conditioning factors to create feature subsets, based on which 13 FS-ML models are constructed. For each of the machine learning models, the best optimized FS-ML model is selected according to the area under the curve (AUC) value. Finally, five optimal FS-ML models are obtained and applied to the LSA of the studied area. The predictive abilities of the FS-ML models on LSA are verified and compared through the receiver operating characteristic curve and statistical indicators such as sensitivity, specificity and accuracy. The results showed that different feature selection methods have different effects on the performance of LSA machine learning models. FS-ML models generally outperform the ordinary machine learning models. The best FS-ML model is the recursive feature elimination (RFE) optimized random forest (RF), and RFE is an optimal method for feature selection.
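
To make the winning combination concrete, below is a minimal sketch of RFE-optimized random forest feature selection evaluated by AUC, assuming scikit-learn and synthetic data; the factor counts and split are illustrative, not the study's 15 conditioning factors and 1,017 landslides.

```python
# Minimal sketch of RFE-optimized random forest feature selection (assumed setup).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rfe = RFE(estimator=rf, n_features_to_select=8)      # keep the 8 most useful factors
rfe.fit(X_tr, y_tr)

# Evaluate the feature-selected model by AUC, as in the paper's comparison.
prob = rfe.predict_proba(X_te)[:, 1]
print("selected factors:", rfe.support_, "AUC:", roc_auc_score(y_te, prob))
```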

AutoFe-Sel: A Meta-learning based methodology for Recommending Feature Subset Selection Algorithms

  • Irfan Khan;Xianchao Zhang;Ramesh Kumar Ayyasam;Rahman Ali
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1773-1793 / 2023
  • Automated machine learning, often referred to as "AutoML," is the process of automating the time-consuming and iterative procedures that are associated with building machine learning models. There have been significant contributions in this area across a number of different stages of accomplishing a data-mining task, including model selection, hyper-parameter optimization, and preprocessing method selection. Among them, preprocessing method selection is a relatively new and fast-growing research area. The current work is focused on the recommendation of preprocessing methods, i.e., feature subset selection (FSS) algorithms. One limitation in the existing studies regarding FSS algorithm recommendation is the use of a single learner for meta-modeling, which restricts its meta-modeling capabilities. Moreover, the meta-modeling in the existing studies is typically based on a single group of data characterization measures (DCMs). Nonetheless, there are a number of complementary DCM groups, and combining them would leverage their diversity, resulting in improved meta-modeling. This study aims to address these limitations by proposing an architecture for preprocessing method selection that uses ensemble learning for meta-modeling, namely AutoFE-Sel. To evaluate the proposed method, we performed an extensive experimental evaluation involving 8 FSS algorithms, 3 groups of DCMs, and 125 datasets. Results show that the proposed method achieves better performance compared to three baseline methods. The proposed architecture can also be easily extended to other preprocessing method selection tasks, e.g., noise-filter selection and imbalance handling method selection.
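
A highly simplified sketch of the meta-learning idea, not the AutoFE-Sel architecture itself: describe each dataset with a few data characterization measures and train a meta-model to predict which FSS algorithm to recommend. The DCMs, the label encoding, and the use of a single random forest (rather than the paper's ensemble meta-model) are assumptions.

```python
# Minimal meta-learning sketch: DCMs per dataset -> recommended FSS algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simple_dcms(X, y):
    """A toy group of DCMs: size, dimensionality, class balance, mean feature variance."""
    n, d = X.shape
    counts = np.bincount(y)
    return [n, d, counts.min() / counts.max(), X.var(axis=0).mean()]

rng = np.random.default_rng(0)

# Meta-dataset: one row of DCMs per historical dataset, labelled with the FSS
# algorithm that performed best on it (labels and ranges are hypothetical).
meta_X = np.column_stack([
    rng.integers(100, 5000, size=40),    # number of samples
    rng.integers(5, 100, size=40),       # number of features
    rng.uniform(0.2, 1.0, size=40),      # class balance
    rng.uniform(0.5, 5.0, size=40),      # mean feature variance
])
meta_y = rng.integers(0, 3, size=40)     # 0=ReliefF, 1=chi-square, 2=RFE (assumed encoding)

meta_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(meta_X, meta_y)

# Recommend an FSS algorithm for a new dataset.
X_new = rng.random((500, 20)); y_new = rng.integers(0, 2, size=500)
print("recommended FSS id:", meta_model.predict([simple_dcms(X_new, y_new)])[0])
```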

Analyzing Factors Contributing to Research Performance using Backpropagation Neural Network and Support Vector Machine

  • Ermatita, Ermatita;Sanmorino, Ahmad;Samsuryadi, Samsuryadi;Rini, Dian Palupi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.153-172 / 2022
  • In this study, the authors intend to analyze factors contributing to research performance using a Backpropagation Neural Network and a Support Vector Machine. The analysis of factors contributing to lecturer research performance starts from defining the features. The next stage is to collect datasets based on the defined features. Then the raw dataset is transformed into data ready to be processed. After the data is transformed, the next stage is the selection of features. Before the selection of features, the target feature is determined, namely research performance. The selection of features consists of Chi-Square selection (U) and the Pearson correlation coefficient (CM). The selection of features produces eight factors contributing to lecturer research performance: Scientific Papers (U: 154.38, CM: 0.79), Number of Citation (U: 95.86, CM: 0.70), Conference (U: 68.67, CM: 0.57), Grade (U: 10.13, CM: 0.29), Grant (U: 35.40, CM: 0.36), IPR (U: 19.81, CM: 0.27), Qualification (U: 2.57, CM: 0.26), and Grant Awardee (U: 2.66, CM: 0.26). To analyze the factors, two data mining classifiers were involved: Backpropagation Neural Networks (BPNN) and Support Vector Machines (SVM). Evaluation of the data mining classifiers yields an accuracy score of 95 percent for BPNN and 92 percent for SVM. The essence of this analysis is not to find the highest accuracy score, but rather to determine whether the factors can pass the test phase with the expected results. The findings of this study reveal the factors that have a significant impact on research performance and vice versa.
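
The two feature-scoring statistics named in the abstract (chi-square U and Pearson correlation CM) can be computed as in the sketch below; the synthetic data and feature meanings are placeholders, not the study's lecturer dataset.

```python
# Minimal sketch of the chi-square (U) and Pearson correlation (CM) feature scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import chi2

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 8)).astype(float)   # e.g., counts of papers, citations, ...
y = (X[:, 0] + X[:, 1] + rng.normal(0, 2, 200) > 9).astype(int)  # research-performance label

u_scores, _ = chi2(X, y)                                               # chi-square scores (U)
cm_scores = [abs(pearsonr(X[:, j], y)[0]) for j in range(X.shape[1])]  # Pearson |r| (CM)

for j, (u, cm) in enumerate(zip(u_scores, cm_scores)):
    print(f"feature {j}: U={u:.2f}, CM={cm:.2f}")
```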

Feature selection for text data via topic modeling (토픽 모형을 이용한 텍스트 데이터의 단어 선택)

  • Woosol, Jang;Ye Eun, Kim;Won, Son
    • The Korean Journal of Applied Statistics / v.35 no.6 / pp.739-754 / 2022
  • Usually, text data consists of many variables, and some of them are closely correlated. Such multi-collinearity often results in inefficient or inaccurate statistical analysis. For supervised learning, one can select features by examining the relationship between the target variables and the explanatory variables. On the other hand, for unsupervised learning, since target variables are absent, one cannot use such a feature selection procedure as in supervised learning. In this study, we propose a word selection procedure that employs topic models to find latent topics. We substitute topics for the target variables and select terms which show high relevance for each topic. Applying the procedure to real data, we found that the proposed word selection procedure can give clearer topic interpretations by removing high-frequency words prevalent across various topics. In addition, we observed that, by applying the selected variables to classifiers such as naïve Bayes classifiers and support vector machines, the proposed feature selection procedure gives results comparable to those obtained by using class label information.
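
A minimal sketch of the general idea, assuming scikit-learn's LDA: fit a topic model and keep only terms strongly associated with some topic. The relevance score used here (per-topic word probability against an arbitrary threshold) is a simplification of the paper's procedure.

```python
# Minimal sketch: topic-model-based word selection (toy corpus, simplified relevance score).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["grid resource selection scheduling", "topic model text mining words",
        "support vector machine classification", "resource scheduling grid jobs"]
vec = CountVectorizer()
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()

# Keep words whose normalized weight in at least one topic exceeds a threshold.
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
selected = {vocab[j] for j in range(len(vocab)) if topic_word[:, j].max() > 0.08}
print(sorted(selected))
```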

Gait-Based Gender Classification Using a Correlation-Based Feature Selection Technique

  • Beom Kwon
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.55-66 / 2024
  • Gender classification techniques have received a lot of attention from researchers because they can be used in various fields such as forensics, surveillance systems, and demographic studies. As previous studies have shown that there are distinctive features between male and female gait, various techniques have been proposed to classify gender from three-dimensional (3-D) gait data. However, some of the gait features extracted from 3-D gait data using existing techniques are similar or redundant to each other, or do not help in gender classification. In this study, we propose a method to select features that are useful for gender classification using a correlation-based feature selection technique. To demonstrate the effectiveness of the proposed feature selection technique, we compare the performance of gender classification models before and after applying the proposed feature selection technique, using a 3-D gait dataset available on the Internet. Eight machine learning algorithms applicable to binary classification problems were utilized in the experiments. The experimental results show that the proposed feature selection technique can reduce the number of features by 22, from 82 to 60, while maintaining the gender classification performance.
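
A minimal sketch of a correlation-based filter in the same spirit: keep features correlated with the class label and drop features highly correlated with an already selected feature. The thresholds and synthetic data are illustrative, not the paper's 82 gait features.

```python
# Minimal sketch of a correlation-based feature filter (assumed thresholds and data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
X[:, 1] = X[:, 0] + rng.normal(0, 0.05, 300)          # a redundant copy of feature 0
y = (X[:, 0] + X[:, 3] > 0).astype(float)

class_corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
order = np.argsort(class_corr)[::-1]                  # most class-relevant first

selected = []
for j in order:
    if class_corr[j] < 0.1:                           # not relevant to the gender label
        continue
    redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > 0.9 for k in selected)
    if not redundant:
        selected.append(j)
print("kept features:", selected)
```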

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time-series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have been popularly applied to this research area because they have the advantages that they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors by heuristics in order to use SVM. For example, the selection of an appropriate kernel function and its parameters and proper feature subset selection are major design factors of SVM. Other than these factors, the proper selection of an instance subset may also improve the forecasting performance of SVM by eliminating irrelevant and distorting training instances. Nonetheless, there have been few studies that have applied instance selection to SVM, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data. It may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process together with parameter optimization simultaneously. We call the model ISVM (SVM with Instance Selection) in this study. Experiments on stock market data are implemented using ISVM. In this study, the GA searches for optimal or near-optimal values of kernel parameters and relevant instances for SVMs. This study encodes two sets of parameters in the GA chromosomes: the codes for kernel parameters and for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate is set at 0.7, and the mutation rate is 0.1. As the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI). The total number of samples is 2,218 trading days. We separate the whole data into three subsets: training, test, and hold-out data sets, containing 1,056, 581, and 581 samples respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% for the hold-out data. For ISVM, only 556 of the 1,056 original training data are used to produce the result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level. In addition, ISVM performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
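
An outline-level sketch of the ISVM idea, a joint GA search over kernel parameters and an instance-selection mask, assuming scikit-learn's SVC and synthetic data. The population size, rates, encoding, and fitness function below are simplified placeholders, not the paper's GA configuration (population 50, crossover 0.7, mutation 0.1, 50 generations).

```python
# Minimal GA sketch for joint instance selection and kernel-parameter search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)
n = len(X_tr)

def fitness(chrom):
    C, gamma, mask = chrom
    if mask.sum() < 10:                      # need a usable training subset
        return 0.0
    svm = SVC(C=C, gamma=gamma).fit(X_tr[mask], y_tr[mask])
    return svm.score(X_val, y_val)           # validation accuracy as fitness

# Initial population: (C, gamma, instance mask) triples.
pop = [(10 ** rng.uniform(-1, 2), 10 ** rng.uniform(-3, 0), rng.random(n) < 0.7)
       for _ in range(10)]

for generation in range(10):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]
    children = []
    for _ in range(5):                       # crossover + mutation (simplified)
        a, b = rng.choice(5, size=2, replace=False)
        cut = rng.integers(n)
        mask = np.concatenate([parents[a][2][:cut], parents[b][2][cut:]])
        mask = mask ^ (rng.random(n) < 0.01)                 # bit-flip mutation
        C = parents[a][0] * 10 ** rng.normal(0, 0.1)
        gamma = parents[b][1] * 10 ** rng.normal(0, 0.1)
        children.append((C, gamma, mask))
    pop = parents + children

best = max(pop, key=fitness)
print(f"best C={best[0]:.3f}, gamma={best[1]:.4f}, instances kept={int(best[2].sum())}/{n}")
```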

An Energy Efficient Intelligent Method for Sensor Node Selection to Improve the Data Reliability in Internet of Things Networks

  • Remesh Babu, KR;Preetha, KG;Saritha, S;Rinil, KR
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.9 / pp.3151-3168 / 2021
  • The Internet of Things (IoT) connects several objects with embedded sensors, and they are capable of exchanging information between devices to create a smart environment. IoT smart devices have limited resources, such as batteries, computing power, and bandwidth, while comprehensive sensing causes severe energy restrictions, lowering data quality. The main objective of the proposal is to build a hybrid protocol which provides high data quality and reduced energy consumption in an IoT sensor network. The hybrid protocol gives a flexible and complete solution to the sensor selection problem. It selects a subset of active sensor nodes in the network which will increase the data quality and optimize the energy consumption. Since the unused sensor nodes switch off during the sensing phase, the energy consumption is greatly reduced. The hybrid protocol uses Dijkstra's algorithm for determining the shortest path for sensing data and an ant-colony-inspired variable path selection algorithm for selecting active nodes in the network. The missing data due to inactive sensor nodes is reconstructed using an enhanced belief propagation algorithm. The proposed hybrid method is evaluated using real sensor data, and the demonstrated results show significant improvement in energy consumption, data utility, and data reconstruction rate compared to other existing methods.
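
Of the three components of the hybrid protocol, the shortest-path step is the most standard; a minimal Dijkstra sketch over a toy sensor graph follows. The graph, energy costs, and node names are assumptions, and the ant-colony selection and belief-propagation reconstruction steps are not shown.

```python
# Minimal sketch: Dijkstra's algorithm over a small sensor graph (toy link costs).
import heapq

graph = {                                     # node -> {neighbor: energy cost}
    "s1": {"s2": 2, "s3": 5},
    "s2": {"s3": 1, "sink": 7},
    "s3": {"sink": 2},
    "sink": {},
}

def dijkstra(src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra("s1"))   # cheapest energy cost from s1 to every reachable node
```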

A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

  • Ghasemi, Jahan B.;Zolfonoun, Ehsan
    • Bulletin of the Korean Chemical Society / v.33 no.5 / pp.1527-1535 / 2012
  • Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure-activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
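
A minimal sketch of an MIMRCV-style selection loop: rank descriptors by mutual information with the property and skip candidates that are nearly collinear with an already selected, higher-MI descriptor. The synthetic data and thresholds are placeholders, not the paper's QSPR datasets.

```python
# Minimal sketch: MI-ranked selection with collinear candidates replaced by the
# higher-MI variable already kept (assumed data and thresholds).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))                    # molecular descriptors (toy)
X[:, 4] = X[:, 2] + rng.normal(0, 0.05, 100)      # a collinear descriptor pair
y = X[:, 2] ** 2 + X[:, 7] + rng.normal(0, 0.1, 100)   # nonlinear property

mi = mutual_info_regression(X, y, random_state=0)
selected = []
for j in np.argsort(mi)[::-1]:                    # highest MI first
    collinear = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > 0.95 for k in selected)
    if not collinear:
        selected.append(j)
    if len(selected) == 5:
        break
print("selected descriptors:", selected)
```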

Priority Based Interface Selection for Overlaying Heterogeneous Networks

  • Chowdhury, Mostafa Zaman;Jang, Yeong-Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.7B / pp.1009-1017 / 2010
  • The different attractive opportunities offered by different wireless technologies are driving the convergence of heterogeneous networks for future wireless communication systems. To achieve seamless handover among heterogeneous networks, optimizing power consumption and optimally selecting the interface are challenging issues. Accessing multiple interfaces simultaneously reduces handover latency and data loss in heterogeneous handover. The mobile node (MN) maintains one interface connection while the other interface is used for the handover process; however, this consumes much battery power. In this paper, we propose an efficient interface selection scheme for overlaying networks, including interface selection algorithms and procedures that consider battery power consumption and user mobility together with other existing parameters. We also propose a priority-based network selection scheme according to the service type. The MN's battery power level, QoS/QoE provision, and our proposed priority parameters are considered the most important parameters for our interface selection algorithm. The performance of the proposed scheme is verified using numerical analysis.
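
A minimal sketch of a priority-weighted interface score in the spirit of the abstract, combining battery level, QoS, and a per-service priority; the weights, attribute names, and values are illustrative assumptions, not the paper's selection algorithm.

```python
# Minimal sketch: weighted-score interface selection (assumed weights and attributes).
def interface_score(battery_level, qos, service_priority,
                    w_battery=0.4, w_qos=0.4, w_priority=0.2):
    """All inputs normalized to [0, 1]; higher score = better interface for this service."""
    return w_battery * battery_level + w_qos * qos + w_priority * service_priority

interfaces = {
    "WLAN":     interface_score(battery_level=0.9, qos=0.7, service_priority=0.8),
    "Cellular": interface_score(battery_level=0.9, qos=0.9, service_priority=0.5),
}
print(max(interfaces, key=interfaces.get))   # interface selected for this service type
```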