
(2) Issam Amellal

(3) Mohammed Rida Ech-charrat

(4) Hamid Seghiouer

*corresponding author
Abstract: Effective supply chain management is pivotal for enhancing customer satisfaction and driving competitiveness and profitability in the automotive service and spare parts distribution sector. Our research introduces an innovative approach that integrates game theory, BiLSTM-Attention deep learning, and Reinforcement Learning (RL) to refine supply and pricing strategies in this domain. Focusing on Moroccan automobile companies, we used Enterprise Resource Planning (ERP) system data to forecast customer behavior with a BiLSTM model enhanced by an attention mechanism. This predictive model achieved a Mean Squared Error (MSE) of 0.0525 and an R² of 0.896, indicating high accuracy and an ability to explain a substantial share of the variance in customer behavior. We then incorporated reinforcement learning, evaluating three algorithms: Q-learning, Deep Q-Networks (DQN), and SARSA. Our findings show that SARSA performs best in our context, owing to its ability to navigate the dynamic environment of the automotive supply chain. By combining the predictive power of the BiLSTM-Attention model with the strategic optimization capabilities of reinforcement learning, particularly SARSA, our study offers a comprehensive framework that helps automotive companies enhance their supply chain strategies, effectively balancing profitability and customer satisfaction in a rapidly evolving industry sector.
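To make the forecasting component of the abstract concrete, the sketch below shows a minimal BiLSTM forecaster with an attention layer of the kind described. It is not the authors' implementation: the look-back window, feature count, layer sizes, and the use of Keras' built-in self-attention are illustrative assumptions, and the placeholder arrays stand in for the ERP-derived series.

```python
# Minimal sketch (assumed, not from the paper) of a BiLSTM + attention forecaster
# for ERP-derived customer-behavior series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW, N_FEATURES = 12, 8   # hypothetical look-back window and ERP feature count

inputs = layers.Input(shape=(WINDOW, N_FEATURES))
# Bidirectional LSTM returns the full sequence so attention can weight each time step
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
# Self-attention over the time axis (Keras Attention expects [query, value])
context = layers.Attention()([h, h])
context = layers.GlobalAveragePooling1D()(context)
output = layers.Dense(1)(context)    # next-step customer-behavior target

model = Model(inputs, output)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Usage with placeholder data shaped like the ERP-derived series
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```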
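A second sketch illustrates the on-policy SARSA update that the abstract reports as the best-performing RL algorithm. The discretized state/action space, the toy environment, and the reward function are hypothetical stand-ins for the paper's supply-and-pricing decision problem.

```python
# Minimal tabular SARSA sketch for a toy pricing/replenishment environment (assumed setup).
import numpy as np

n_states, n_actions = 20, 5          # e.g. discretized inventory levels x price tiers (illustrative)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Placeholder environment: returns (next_state, reward). Replace with a real simulator."""
    next_state = np.random.randint(n_states)
    reward = -abs(action - 2) + np.random.randn() * 0.1   # toy reward favoring a mid-range action
    return next_state, reward

def epsilon_greedy(state):
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

for episode in range(500):
    s = np.random.randint(n_states)
    a = epsilon_greedy(s)
    for _ in range(50):
        s_next, r = step(s, a)
        a_next = epsilon_greedy(s_next)   # on-policy: next action chosen by the same policy
        # SARSA update: bootstrap on the action actually taken next, not the greedy one
        Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
        s, a = s_next, a_next
```

Because the update bootstraps on the action the behavior policy actually takes, SARSA tends to learn more conservative policies in stochastic environments than off-policy Q-learning, which is consistent with the abstract's motivation for preferring it in a dynamic supply chain setting.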
Keywords: Supply chain management; BiLSTM-Attention model; Reinforcement learning; Game theory; Decision making
DOI: https://doi.org/10.26555/ijain.v10i3.1351
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.