(2) Ardi Pujiyanta
(3) Supriyanto Supriyanto
*corresponding author
Abstract
This paper systematically evaluates an LSTM baseline model together with four smoothing-based augmentation methods (Kalman, Laplace, Moving Average, and Savitzky-Golay) under two normalization strategies (Min-Max and Z-Score) for multivariate time-series forecasting. Experiments were conducted on six publicly available datasets (electricity consumption, energy consumption, sensor data, household energy, Indian electricity, and Brazilian temperature), and model performance was compared using three metrics: MAPE, RMSE, and R². Results indicate that Laplace smoothing achieved the best performance on five of the six datasets, effectively reducing errors while maintaining high fit quality and demonstrating its advantage on highly volatile, noisy time series. In some instances, however, Laplace smoothing, like the Moving Average and Savitzky-Golay methods, can produce an "over-smoothing" effect, causing forecasts to lose sensitivity to spike fluctuations. The choice of normalization strategy is equally critical: Min-Max is better suited to data with stable distributions, while Z-Score offers clear advantages for data with large numerical ranges and strong volatility. Notably, on the temperature dataset, which has a small sample size and high volatility, the smoothing methods actually degraded performance, making the baseline LSTM with Z-Score normalization the best choice there; across the remaining datasets, the LSTM-Laplace model with Min-Max normalization achieved the best overall performance. The study concludes that improving prediction performance relies not only on model architecture but also on matching the preprocessing strategy to the data's scale, distribution characteristics, and volatility.
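The preprocessing steps compared in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: it shows a simple trailing moving-average smoother plus the two normalization strategies (Min-Max and Z-Score); the Kalman, Laplace, and Savitzky-Golay smoothers are omitted for brevity, and the window size is an arbitrary choice, not a setting from the study.

```python
def moving_average(series, window=3):
    """Trailing moving average: each point is the mean of the last `window` values."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)       # shrink the window at the start of the series
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def min_max(series):
    """Min-Max normalization: rescale values into [0, 1]."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def z_score(series):
    """Z-Score normalization: zero mean, unit (population) standard deviation."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [(x - mean) / std for x in series]
```

In a pipeline like the one the paper describes, smoothing would be applied first to suppress noise, then normalization to bring the series onto a common scale before it is fed to the LSTM.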
Keywords: Data preprocessing; Hyperparameter tuning; Forecasting; Time series analysis
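The three evaluation metrics named in the abstract have standard definitions, sketched below in plain Python. This is a generic reference implementation, not code from the paper; note that MAPE assumes no zero values in the actual series.

```python
def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent. Undefined when `actual` contains zeros."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root Mean Squared Error."""
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)) ** 0.5

def r2(actual, pred):
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

Lower MAPE and RMSE indicate smaller errors, while R² closer to 1 indicates a better fit, which is why the paper reports all three together.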
DOI: https://doi.org/10.26555/ijain.v12i1.2321

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)