• Title/Summary/Keyword: selection of features


The Effect of Selection Attributes of HMR Products on the Consumer Purchasing Intention of a Single Household - Centered on the Regulation Effect of Consumer Online Reviews - (HMR 상품의 선택속성이 1인 가구의 소비자 구매의도에 미치는 영향 - 소비자 온라인 리뷰의 조절효과 중심으로 -)

  • Kim, Hee-Yeon
    • Culinary Science and Hospitality Research
    • /
    • v.22 no.8
    • /
    • pp.109-121
    • /
    • 2016
  • This study analyzed the effect of five sub-attributes of HMR products (information, diversity, promptness, price, and convenience) on consumer purchasing intention. In addition, the moderating effect of positive and negative consumer online reviews on the relationship between the HMR selection attributes and purchasing intention was tested. The results are as follows. First, among the HMR selection attributes, the convenience feature (B=.577, p<.001) and diversity feature (B=.093, p<.01) had a positive (+) effect on purchasing intention. On the other hand, the promptness feature (B=.235, p<.001), price feature (B=.161, p<.001), and information feature (B=.288, p<.001) had no significant effect on purchasing intention. Second, in testing the moderating effect of positive online reviews between the HMR selection attributes and purchasing intention, the first-stage model, in which the HMR selection attributes were entered as independent variables, showed a significant positive (+) effect for all five features (convenience, diversity, promptness, price, and information). There was also a significant positive (+) main effect (B=.472, p<.001) in the second-stage model, in which consumers' positive reviews were entered as the moderating variable. Furthermore, in the third stage, which added the interaction between the HMR selection attributes and positive reviews, the price feature (B=.068, p<.05) had a significant positive (+) effect, whereas the information feature (B=-.063, p<.05) showed a negative (-) effect, and there was no interaction effect for the convenience, diversity, and promptness features.
Third, in testing the moderating effect of negative online reviews between the HMR selection attributes and purchasing intention, the first-stage model showed a positive (+) effect for all five features (convenience, diversity, promptness, price, and information). In the second-stage model, consumers' negative reviews (B=-.113, p<.001) had a negative (-) effect. In the third stage, the interaction between the HMR selection attributes and negative reviews had a positive (+) effect for the price feature (B=.113, p<.01), and there was no effect for the convenience, promptness, and information features.
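The staged moderated regression the abstract describes boils down to fitting an interaction term alongside the main effects. A minimal sketch with ordinary least squares follows; the data and coefficient values are synthetic illustrations, not the study's:

```python
import numpy as np

def moderated_ols(x, m, y):
    """Moderated regression: regress intention y on an attribute score x,
    a review score m, and their interaction x*m.
    Returns the coefficient vector [b0, b_x, b_m, b_xm]."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic illustration: a positive interaction coefficient (b_xm = 0.5)
# mimics the kind of moderation reported for price by positive reviews.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = rng.normal(size=500)
y = 1.0 + 0.3 * x + 0.4 * m + 0.5 * x * m + rng.normal(scale=0.1, size=500)
b0, bx, bm, bxm = moderated_ols(x, m, y)
```

A significant, nonzero `bxm` is what indicates moderation in the third-stage model.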

Feature Selection for Case-Based Reasoning using the Order of Selection and Elimination Effects of Individual Features (개별 속성의 선택 및 제거효과 순위를 이용한 사례기반 추론의 속성 선정)

  • 이재식;이혁희
    • Journal of Intelligence and Information Systems
    • /
    • v.8 no.2
    • /
    • pp.117-137
    • /
    • 2002
  • A CBR (Case-Based Reasoning) system solves new problems by adapting the solutions that were used to solve old problems. Past cases are retained in the case base, each in a specific form determined by its features. Features are selected to represent the case in the best way. Similar cases are retrieved by comparing feature values and calculating similarity scores. Therefore, the performance of CBR depends on the selected feature subset. In this research, we measured the Selection Effect and the Elimination Effect of each feature. The Selection Effect is measured by performing CBR with only one feature, and the Elimination Effect is measured by performing CBR with all features except that one. Based on these measurements, the feature subsets are selected. The resulting CBR showed better performance in terms of accuracy and efficiency than CBR with all features.
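A minimal sketch of the Selection and Elimination Effects, assuming 1-nearest-neighbour retrieval with leave-one-out accuracy as the CBR performance measure (the paper's exact retrieval settings are not given here):

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier,
    standing in for the retrieval step of a minimal CBR system."""
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the query case itself
        correct += y[int(np.argmin(d))] == y[i]
    return correct / n

def selection_and_elimination_effects(X, y):
    """Selection Effect: accuracy using only feature j.
    Elimination Effect: accuracy drop when feature j alone is removed."""
    base = loo_1nn_accuracy(X, y)
    sel = [loo_1nn_accuracy(X[:, [j]], y) for j in range(X.shape[1])]
    elim = [base - loo_1nn_accuracy(np.delete(X, j, axis=1), y)
            for j in range(X.shape[1])]
    return sel, elim

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + rng.normal(scale=0.2, size=100),
                     rng.normal(size=100)])
sel, elim = selection_and_elimination_effects(X, y)
```

Ranking features by these two effects, as the paper proposes, then yields candidate subsets.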


Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online marketing, companies are carrying out campaigns of many types at a scale that cannot be compared to the past. However, as fatigue from duplicate exposure increases, customers tend to perceive campaigns as spam. From a corporate standpoint, the effectiveness of campaigns is also decreasing while investment costs rise, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of campaigns by collecting and analyzing customer-related data and using it for targeting. In particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data contains many and varied features, selecting appropriate features is very important. If all of the input data are used to classify a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when the data contain many features, these methods suffer from poor classification prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the sequential SFFS method by using statistical characteristics of the data processed in the campaign system while searching for the feature subsets on which machine learning model performance depends. Features with a large influence on performance are derived first, features with a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), its campaign success prediction was higher. In addition, the improved feature selection algorithm was found to help in analyzing and interpreting prediction results by providing the importance of the derived features. These include features already known statistically to be important, such as age, customer rating, and sales.
Unexpectedly, features that campaign planners had rarely used to select targets, such as the combined product name, the average 3-month data consumption rate, and the last 3-month wireless data usage, were also selected as important for campaign response. This confirmed that base attributes can be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
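The SFS baseline the abstract names can be sketched as a greedy wrapper that repeatedly adds the best remaining feature. The scoring criterion below is a toy nearest-centroid accuracy, not the study's campaign model:

```python
import numpy as np

def sfs(X, y, score, k):
    """Sequential Forward Selection: greedily add the feature that most
    improves score(X_subset, y) until k features are selected."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_j = max(remaining,
                     key=lambda j: score(X[:, selected + [j]], y))
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

def centroid_score(Xs, y):
    """Toy criterion: accuracy of a nearest-class-centroid classifier."""
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Two informative features (0 and 2) among four; SFS should find them first.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 4))
X[:, 0] += 2 * y
X[:, 2] += 2 * y
chosen = sfs(X, y, centroid_score, k=2)
```

SFFS extends this by allowing backward "floating" removals after each addition, which the paper's improved algorithm builds on.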

Set Covering-based Feature Selection of Large-scale Omics Data (Set Covering 기반의 대용량 오믹스데이터 특징변수 추출기법)

  • Ma, Zhengyu;Yan, Kedong;Kim, Kwangsoo;Ryoo, Hong Seo
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.39 no.4
    • /
    • pp.75-84
    • /
    • 2014
  • In this paper, we deal with the feature selection problem for large-scale, high-dimensional biological data such as omics data. Most previous approaches use a simple score function to reduce the number of original variables and then select features from the small number of remaining variables. Methods that do not rely on such filtering either ignore the interactions between variables or generate approximate solutions to a simplified problem. Unlike them, by combining set covering and clustering techniques, we developed a new method that handles the total number of variables and considers the combinatorial effects of variables when selecting good features. To demonstrate the efficacy and effectiveness of the method, we downloaded gene expression datasets from TCGA (The Cancer Genome Atlas) and compared our method with other algorithms, including the feature selection algorithms embedded in WEKA. The experimental results show that our method selects higher-quality features for constructing more accurate classifiers than the other feature selection algorithms.
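A hedged sketch of the set-covering idea: treat each feature as "covering" the cross-class sample pairs it separates, and greedily pick features until every pair is covered. The paper additionally combines clustering and solves the problem at omics scale, both omitted here:

```python
import numpy as np
from itertools import product

def set_cover_features(X, y, gap=1.0):
    """Greedy set covering: each cross-class sample pair (i, j) must be
    'covered' by at least one chosen feature whose values differ by more
    than `gap` on that pair."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    pairs = list(product(pos, neg))
    covers = {f: {(i, j) for (i, j) in pairs
                  if abs(X[i, f] - X[j, f]) > gap}
              for f in range(X.shape[1])}
    uncovered, chosen = set(pairs), []
    while uncovered:
        f = max(covers, key=lambda g: len(covers[g] & uncovered))
        if not covers[f] & uncovered:
            break                  # remaining pairs cannot be separated
        chosen.append(f)
        uncovered -= covers[f]
    return chosen

# Feature 0 separates all cross-class pairs by a wide margin; the four
# low-variance noise features cover nothing.
rng = np.random.default_rng(3)
y = np.repeat([0, 1], 20)
X = rng.normal(scale=0.1, size=(40, 5))
X[:, 0] += 3 * y
chosen = set_cover_features(X, y)
```

The greedy choice approximates the NP-hard minimum set cover, which is why combinatorial effects between features enter the selection.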

A Decision Tree Induction using Genetic Programming with Sequentially Selected Features (순차적으로 선택된 특성과 유전 프로그래밍을 이용한 결정나무)

  • Kim Hyo-Jung;Park Chong-Sun
    • Korean Management Science Review
    • /
    • v.23 no.1
    • /
    • pp.63-74
    • /
    • 2006
  • Decision tree induction is one of the most widely used methods for classification problems. However, a tree algorithm that uses top-down search can be trapped in a local minimum with no reasonable means of escape. Further, if irrelevant or redundant features are included in the data set, tree algorithms produce trees that are less accurate than those built from the data set with only relevant features. We propose a hybrid algorithm that generates decision trees using genetic programming with sequentially selected features. The Correlation-based Feature Selection (CFS) method is adopted to find relevant features, which are fed to genetic programming sequentially to find optimal trees at each iteration. The proposed algorithm produces simpler and more understandable decision trees than other decision tree methods, and it is also effective in producing similar or better trees with a relatively smaller set of features in terms of cross-validation accuracy.

Gait-Based Gender Classification Using a Correlation-Based Feature Selection Technique

  • Beom Kwon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.3
    • /
    • pp.55-66
    • /
    • 2024
  • Gender classification techniques have received a lot of attention from researchers because they can be used in various fields such as forensics, surveillance systems, and demographic studies. As previous studies have shown that there are distinctive differences between male and female gait, various techniques have been proposed to classify gender from three-dimensional (3-D) gait data. However, some of the gait features extracted from 3-D gait data using existing techniques are similar or redundant to each other, or do not help in gender classification. In this study, we propose a method to select features that are useful for gender classification using a correlation-based feature selection technique. To demonstrate its effectiveness, we compare the performance of gender classification models before and after applying the proposed feature selection technique, using a 3-D gait dataset available on the Internet. Eight machine learning algorithms applicable to binary classification problems were utilized in the experiments. The experimental results show that the proposed feature selection technique can reduce the number of features by 22, from 82 to 60, while maintaining gender classification performance.
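Assuming the correlation-based criterion is Hall's CFS merit (a common choice; the paper's exact formulation is not stated here), it rewards subsets whose features correlate with the class but not with each other:

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset (Hall, 1999):
    k * r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean
    feature-class correlation and r_ff the mean inter-feature correlation."""
    k = len(subset)
    rcf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        rff = 0.0
    else:
        rff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                       for i, a in enumerate(subset)
                       for b in subset[i + 1:]])
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

# Toy gait-like data: one relevant feature, one near-duplicate of it,
# and one irrelevant noise feature.
rng = np.random.default_rng(4)
y = np.repeat([0.0, 1.0], 100)
f0 = y + rng.normal(scale=0.3, size=200)
f1 = f0 + rng.normal(scale=0.05, size=200)   # redundant copy of f0
f2 = rng.normal(size=200)                    # irrelevant noise
X = np.column_stack([f0, f1, f2])
m_relevant = cfs_merit(X, y, [0])
m_plus_noise = cfs_merit(X, y, [0, 2])
```

Adding the irrelevant feature lowers the merit, which is how redundant or unhelpful gait features get filtered out.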

Sparse and low-rank feature selection for multi-label learning

  • Lim, Hyunki
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.7
    • /
    • pp.1-7
    • /
    • 2021
  • In this paper, we propose a feature selection technique for multi-label classification. Many existing feature selection techniques select features by calculating the relation between features and labels with a measure such as mutual information. However, the mutual information measure requires a joint probability, which is difficult to estimate from an actual given feature set; as a result, only a few features can be evaluated and only local optimization is possible. To move away from this local optimization problem, we propose a feature selection technique that constructs a low-rank space within the entire given feature space and selects features with sparsity. To this end, we designed a regression-based objective function using the nuclear norm and proposed a gradient descent algorithm to solve its optimization problem. In multi-label classification experiments on four datasets with three performance measures, the proposed methodology showed better performance than existing feature selection techniques. The experimental results also showed that performance is insensitive to changes in the parameter values of the proposed objective function.
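A sketch of nuclear-norm-regularized regression solved by proximal gradient descent with singular value thresholding, a standard way to handle the nonsmooth nuclear norm; the paper's exact objective and update rule may differ:

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_regression(X, Y, lam=1.0, lr=None, iters=500):
    """Proximal gradient descent for min_W ||XW - Y||_F^2 + lam*||W||_*.
    Row norms of the learned W then rank the input features."""
    n, d = X.shape
    if lr is None:
        lr = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = 2 * X.T @ (X @ W - Y)
        W = svt(W - lr * grad, lr * lam)
    return W

# Two labels both driven by feature 0 only; its row of W should dominate.
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 6))
Y = np.column_stack([X[:, 0] > 0, X[:, 0] > 0.5]).astype(float)
W = low_rank_regression(X, Y, lam=0.5)
importance = np.linalg.norm(W, axis=1)
```

Selecting the features with the largest row norms of `W` yields the sparse, globally optimized subset the abstract describes.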

Data Mining-Aided Automatic Landslide Detection Using Airborne Laser Scanning Data in Densely Forested Tropical Areas

  • Mezaal, Mustafa Ridha;Pradhan, Biswajeet
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.1
    • /
    • pp.45-74
    • /
    • 2018
  • Landslide is a natural hazard that threatens lives and properties in many areas around the world. Landslides are difficult to recognize, particularly in rainforest regions. Thus, an accurate, detailed, and updated inventory map is required for landslide susceptibility, hazard, and risk analyses. The inconsistency in the results obtained using different feature selection techniques in the literature has highlighted the importance of evaluating these techniques. Thus, in this study, six feature selection techniques were evaluated. Very-high-resolution LiDAR point clouds and orthophotos were acquired simultaneously by airborne laser scanning (LiDAR) in a rainforest area of Cameron Highlands, Malaysia. A fuzzy-based segmentation parameter optimizer (FbSP optimizer) was used to optimize the segmentation parameters. Training samples were selected using a stratified random sampling method and set to 70% of the samples. Two machine-learning algorithms, namely Support Vector Machine (SVM) and Random Forest (RF), were used to evaluate the performance of each feature selection algorithm. The overall accuracies of the SVM and RF models revealed that three of the six algorithms ranked higher in landslide detection. Results indicated that the classification accuracies of the RF classifier were higher than those of the SVM classifier using either all features or only the optimal features. The proposed techniques performed well in detecting landslides in a rainforest area of Malaysia and can be easily extended to similar regions.

Compositional Feature Selection and Its Effects on Bandgap Prediction by Machine Learning (기계학습을 이용한 밴드갭 예측과 소재의 조성기반 특성인자의 효과)

  • Chunghee Nam
    • Korean Journal of Materials Research
    • /
    • v.33 no.4
    • /
    • pp.164-174
    • /
    • 2023
  • The bandgap of a semiconductor material is an important factor when utilizing the material for various applications. In this study, based on data provided by AFLOW (Automatic-FLOW for Materials Discovery), the bandgap of a semiconductor material was predicted using only the material's compositional features. The compositional features were generated using the Python modules 'Pymatgen' and 'Matminer'. Pearson's correlation coefficients (PCC) between the compositional features were calculated, and features with a correlation coefficient larger than 0.95 were removed in order to avoid overfitting. The bandgap prediction performance was compared using the R2 score and root-mean-squared error metrics. Predicting the bandgap with random forest and XGBoost as representatives of ensemble algorithms, XGBoost gave better results after cross-validation and hyper-parameter tuning. To investigate the effect of compositional feature selection on the bandgap prediction of the machine learning model, the prediction performance was studied according to the number of features chosen by feature importance methods; no significant changes in prediction performance were found beyond an appropriate number of features. Furthermore, artificial neural networks were employed to compare the prediction performance while adjusting the number of features guided by the PCC values, resulting in a best R2 score of 0.811. By comparing and analyzing the bandgap distribution and prediction performance according to the material groups containing specific elements (F, N, Yb, Eu, Zn, B, Si, Ge, Fe, Al), various information for material design was obtained.
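The PCC-based pruning step (dropping one feature of every pair with |PCC| > 0.95) can be sketched as follows; the feature names are hypothetical illustrations, not actual Pymatgen/Matminer outputs:

```python
import numpy as np

def drop_correlated(X, names, threshold=0.95):
    """Drop one feature of every pair whose |Pearson correlation| exceeds
    `threshold`, keeping the earlier feature, before model fitting."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Toy composition table: 'mean_Z' is an almost exact copy of 'sum_Z',
# so one of the pair gets dropped.
rng = np.random.default_rng(6)
sum_z = rng.normal(size=300)
X = np.column_stack([sum_z,
                     sum_z + rng.normal(scale=0.01, size=300),  # near-duplicate
                     rng.normal(size=300)])
Xr, kept = drop_correlated(X, ["sum_Z", "mean_Z", "electronegativity"])
```

Removing near-duplicate compositional descriptors this way reduces the correlation-driven overfitting the study guards against.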

A Feature Selection for the Recognition of Handwritten Characters based on Two-Dimensional Wavelet Packet (2차원 웨이브렛 패킷에 기반한 필기체 문자인식의 특징선택방법)

  • Kim, Min-Soo;Back, Jang-Sun;Lee, Guee-Sang;Kim, Soo-Hyung
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.8
    • /
    • pp.521-528
    • /
    • 2002
  • We propose a new approach to feature selection for the classification of handwritten characters using two-dimensional (2D) wavelet packet bases. To extract key features of image data for dimension reduction, Principal Component Analysis (PCA) has been used most frequently. However, because PCA relies on an eigenvalue system, it is not only sensitive to outliers and perturbations but also tends to select only global features. Since the important features of image data are often characterized by local information such as edges and spikes, PCA does not provide good solutions to such problems; solving an eigenvalue system also usually incurs high computational cost. In this paper, the original data are transformed with 2D wavelet packet bases, the best discriminant basis is searched for, and relevant features are selected from it. In contrast to PCA solutions, the fast selection of detailed features as well as global features is possible by virtue of the good properties of wavelets. Experimental results comparing the recognition rates of PCA and our approach demonstrate the performance of the proposed method.