Exploration of hybrid deep learning algorithms for COVID-19 mRNA vaccine degradation prediction system

(1) Soon Hwai Ing (Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis, Malaysia)
(2) * Azian Azamimi Abdullah (Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis, Malaysia)
(3) Mohd Yusoff Mashor (Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis, Malaysia)
(4) Zeti-Azura Mohamed-Hussein (Centre for Bioinformatics Research & Department of Applied Physics, Universiti Kebangsaan Malaysia, Malaysia)
(5) Zeehaida Mohamed (School of Medical Sciences, Universiti Sains Malaysia, Malaysia)
(6) Wei Chern Ang (Clinical Research Centre & Department of Pharmacy, Hospital Tuanku Fauziah, Ministry of Health Malaysia, Malaysia)
*corresponding author

Abstract


The coronavirus has caused a global pandemic that has adversely affected public health, the economy, and nearly every aspect of life. To manage its spread, numerous measures have been introduced, and vaccination is considered one of the key precautionary steps in this blueprint. Among the available vaccines, messenger ribonucleic acid (mRNA) vaccines offer notable effectiveness with minimal side effects. However, mRNA degrades easily, which limits its application. Given the importance of predicting the degradation rate of mRNA vaccines, this prediction study is proposed. In addition, this study compares the hybridizing sequence of the hybrid models to identify its influence on prediction performance. Five models were created for exploration and prediction on the COVID-19 mRNA vaccine dataset provided by Stanford University and made available on the Kaggle community platform, employing two deep learning algorithms: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The Mean Columnwise Root Mean Square Error (MCRMSE) metric was used to assess each model's performance. The results demonstrate that both GRU and LSTM are suitable for predicting the degradation rate of COVID-19 mRNA vaccines, and that further performance improvement can be achieved through hybridization. Among Hybrid_1, Hybrid_2, and Hybrid_3, Hybrid_3 surpassed the other two models when trained with the Set_1 augmented data, achieving the lowest training error (0.1257) and validation error (0.1324); the same held when training with the Set_2 augmented data, with MCRMSE scores of 0.0164 (training) and 0.0175 (validation). The variance in the results obtained by the hybrid models indicates that the hybridizing sequence of the algorithms should be taken into account in hybrid modeling.
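For reference, the evaluation metric can be stated explicitly. The following is the standard MCRMSE formulation used in the Kaggle OpenVaccine competition, where $N_t$ denotes the number of scored target columns (the degradation-related measurements) and $n$ the number of scored positions; the notation here is an assumption, not taken from the paper itself:

$$\mathrm{MCRMSE} = \frac{1}{N_t}\sum_{j=1}^{N_t}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{ij} - \hat{y}_{ij}\right)^2}$$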
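The abstract does not detail the hybrid architectures, so the sketch below is only illustrative of the idea being compared: stacking GRU and LSTM layers in different orders (the "hybridizing sequence") while keeping everything else fixed. All layer sizes, dropout rates, and input dimensions are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of order-swapped GRU/LSTM hybrids (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, N_TARGETS = 107, 14, 5  # assumed dataset dimensions

def build_hybrid(order=("gru", "lstm"), units=128, dropout=0.3):
    """Stack bidirectional recurrent layers in the given order."""
    rnn = {"gru": layers.GRU, "lstm": layers.LSTM}
    inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
    x = inputs
    for name in order:
        x = layers.Bidirectional(
            rnn[name](units, return_sequences=True, dropout=dropout))(x)
    outputs = layers.Dense(N_TARGETS)(x)  # per-position degradation targets
    return models.Model(inputs, outputs)

def mcrmse(y_true, y_pred):
    """Mean column-wise RMSE across the target columns."""
    rmse_per_col = tf.sqrt(
        tf.reduce_mean(tf.square(y_true - y_pred), axis=(0, 1)))
    return tf.reduce_mean(rmse_per_col)

# Two hybrids that differ only in hybridizing sequence (GRU->LSTM vs LSTM->GRU).
model_a = build_hybrid(("gru", "lstm"))
model_b = build_hybrid(("lstm", "gru"))
model_a.compile(optimizer="adam", loss=mcrmse)
model_b.compile(optimizer="adam", loss=mcrmse)
```

Training both variants on the same augmented data and comparing their MCRMSE, as in the study, is what reveals whether the stacking order matters.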

   

DOI

https://doi.org/10.26555/ijain.v8i3.950
      


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
