Performance Evaluation of a Feature-Importance-based Feature Selection Method for Time Series Prediction

  • Hyun, Ahn (Division of Software Convergence, Hanshin University)
  • Received : 2022.10.18
  • Accepted : 2022.11.02
  • Published : 2023.03.31

Abstract

Various machine learning models can deliver high predictive power on massive time series data. However, these models are prone to high computational cost owing to the high dimensionality of the feature space and nonoptimized hyperparameter settings. Considering the risk that training a model on a high-dimensional feature set can be time-consuming, we evaluate a feature-importance-based feature selection method to derive a tradeoff between predictive power and computational cost for time series prediction. For the performance evaluation, we generated prediction models from a retail sales dataset using two machine learning techniques. First, we ranked the features using impurity-based and Local Interpretable Model-agnostic Explanations (LIME)-based feature importance measures in the prediction models. Then, the recursive feature elimination method was applied to remove unimportant features sequentially. Consequently, we obtained a subset of features that reduces model training time while preserving acceptable predictive performance.
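The following is a minimal sketch of the two-stage procedure summarized above, not the paper's exact pipeline: features are first ranked by impurity-based importance from a fitted LightGBM model, and recursive feature elimination then discards the weakest features. The synthetic dataset, model settings, and target subset size are placeholder assumptions; a LIME-based ranking could be built analogously by averaging absolute local explanation weights from lime's LimeTabularExplainer over a sample of instances.

```python
# Sketch only: impurity-based ranking + recursive feature elimination (RFE),
# with placeholder data standing in for the retail sales features.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_features = 2000, 20
X = rng.normal(size=(n_samples, n_features))  # stand-in for sales features
y = 3 * X[:, 0] + X[:, 1] - 2 * X[:, 2] + rng.normal(scale=0.1, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Stage 1: fit once and rank features by impurity-based importance
# (LightGBM exposes these via feature_importances_ after fitting).
full_model = LGBMRegressor(n_estimators=200, random_state=42)
full_model.fit(X_train, y_train)
ranking = np.argsort(full_model.feature_importances_)[::-1]
print("Impurity-based ranking (best first):", ranking)

# Stage 2: RFE drops the least important feature each round until the
# target subset size is reached, trading accuracy for training cost.
rfe = RFE(
    estimator=LGBMRegressor(n_estimators=200, random_state=42),
    n_features_to_select=5,  # illustrative target size, not the paper's setting
    step=1,
)
rfe.fit(X_train, y_train)
selected = np.flatnonzero(rfe.support_)
print("Selected feature subset:", selected)

# Retrain on the reduced feature set and compare test error with the full model.
reduced_model = LGBMRegressor(n_estimators=200, random_state=42)
reduced_model.fit(X_train[:, selected], y_train)
rmse_full = np.sqrt(mean_squared_error(y_test, full_model.predict(X_test)))
rmse_reduced = np.sqrt(
    mean_squared_error(y_test, reduced_model.predict(X_test[:, selected]))
)
print(f"RMSE full ({n_features} features): {rmse_full:.3f}")
print(f"RMSE reduced ({len(selected)} features): {rmse_reduced:.3f}")
```

On a reduced subset, each boosting iteration evaluates fewer candidate splits, which is the source of the training-time savings the abstract refers to; the reduced-model RMSE indicates how much predictive power the subset retains.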

Keywords

Acknowledgement

This work was supported by a Hanshin University Research Grant.
