Title/Summary/Keyword: Outlier handling

A Robust Energy Consumption Forecasting Model using ResNet-LSTM with Huber Loss

  • Albelwi, Saleh
    • International Journal of Computer Science & Network Security, v.22 no.7, pp.301-307, 2022
  • Energy consumption has grown alongside dramatic population increases. Statistics show that buildings in particular consume a significant amount of energy worldwide. Because of this, building energy prediction is crucial for optimizing utilities' energy plans and for providing consumers with a predictive model. To improve energy prediction performance, this paper proposes a ResNet-LSTM model that combines residual networks (ResNets) and long short-term memory (LSTM) for energy consumption prediction. ResNets are utilized to extract complex and rich features, while LSTM learns temporal correlations; a dense layer acts as the regression head that forecasts energy consumption. To make our model more robust, we employed Huber loss during optimization: it handles small errors quadratically for efficiency and takes the absolute error for large errors, which makes our model less sensitive to outliers. Our proposed system was trained on historical data to forecast energy consumption over different time series. To evaluate our proposed model, we compared its performance with several popular machine learning and deep learning methods such as linear regression, neural networks, decision trees, and convolutional neural networks. The results show that our proposed model predicted energy consumption the most accurately.
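
The following is a minimal NumPy sketch of the Huber loss behaviour described in the abstract: residuals up to a threshold delta are penalized quadratically, while larger residuals are penalized only linearly, so a single outlier cannot dominate the objective. The threshold delta=1.0 and the toy data are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def huber_loss(y_true, y_pred, delta=1.0):
        # delta is an illustrative default, not the value used in the paper.
        residual = y_true - y_pred
        abs_res = np.abs(residual)
        quadratic = 0.5 * residual ** 2            # small errors: squared penalty
        linear = delta * (abs_res - 0.5 * delta)   # large errors: linear penalty
        return np.mean(np.where(abs_res <= delta, quadratic, linear))

    # A single large outlier barely moves the Huber loss, whereas it
    # dominates the mean squared error.
    y_true = np.array([2.0, 3.0, 2.5, 40.0])   # last reading is an outlier
    y_pred = np.array([2.1, 2.9, 2.6, 3.0])
    print(huber_loss(y_true, y_pred))           # robust Huber loss
    print(np.mean((y_true - y_pred) ** 2))      # MSE inflated by the outlier

In a deep learning framework the same behaviour is usually available as a built-in Huber (or smooth-L1) loss, so the hand-written version above is only meant to make the two error regimes explicit.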

Relevancy contemplation in medical data analytics and ranking of feature selection algorithms

  • P. Antony Seba; J. V. Bibal Benifa
    • ETRI Journal, v.45 no.3, pp.448-461, 2023
  • This article performs detailed data scrutiny on a chronic kidney disease (CKD) dataset to select efficient instances and relevant features. Data relevancy is investigated using feature extraction, hybrid outlier detection, and handling of missing values. Data instances that do not influence the target are removed using data envelopment analysis to enable row reduction. Column reduction is achieved by ranking the attributes through feature selection methodologies, namely, the extra-trees classifier, recursive feature elimination, the chi-squared test, analysis of variance, and mutual information. These methodologies are then ranked via the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with weight optimization to identify the optimal features for model building from the CKD dataset, facilitating better prediction when diagnosing the severity of the disease. An efficient hybrid ensemble and novel similarity-based classifiers are built using the pruned dataset, and the results are then compared with random forest, AdaBoost, naive Bayes, k-nearest neighbors, and support vector machines. The hybrid ensemble classifier yields a higher prediction accuracy of 98.31% with the features selected by the extra-trees classifier (ETC), which TOPSIS ranks as the best method.
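
The TOPSIS step that ranks the competing feature-selection methods can be sketched in a few lines of NumPy. Below is a minimal illustration under assumed inputs: the three criteria, their weights, and the per-method scores are placeholders rather than figures from the article, and the article's weight-optimization step is replaced by fixed weights.

    import numpy as np

    def topsis(matrix, weights, benefit):
        # matrix : (alternatives x criteria) decision matrix
        # weights: criterion weights (fixed here; the article optimizes them)
        # benefit: True where larger values are better, False where smaller are better
        X = np.asarray(matrix, dtype=float)
        w = np.asarray(weights, dtype=float)
        benefit = np.asarray(benefit, dtype=bool)

        # Vector-normalize each criterion column, then apply the weights.
        V = w * (X / np.linalg.norm(X, axis=0))

        # Ideal-best and ideal-worst reference points per criterion.
        best = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

        # Relative closeness to the ideal solution: 1 = best, 0 = worst.
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        return d_worst / (d_best + d_worst)

    # Hypothetical scores for the five feature-selection methods on three
    # benefit criteria (e.g. accuracy, F1, stability) -- illustrative only.
    methods = ["extra-trees", "RFE", "chi-squared", "ANOVA", "mutual information"]
    scores = np.array([
        [0.98, 0.97, 0.95],
        [0.96, 0.95, 0.93],
        [0.94, 0.93, 0.90],
        [0.93, 0.92, 0.91],
        [0.95, 0.94, 0.92],
    ])
    closeness = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, True])
    for method, score in sorted(zip(methods, closeness), key=lambda t: -t[1]):
        print(f"{method}: {score:.3f}")

The method with the highest closeness score is the one chosen for model building; in the article this was the extra-trees classifier.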