Improving stroke diagnosis accuracy using hyperparameter-optimized deep learning

(1) * Tessy Badriyah (Politeknik Elektronika Negeri Surabaya (PENS), Indonesia)
(2) Dimas Bagus Santoso (Politeknik Elektronika Negeri Surabaya (PENS), Indonesia)
(3) Iwan Syarif (Politeknik Elektronika Negeri Surabaya (PENS), Indonesia)
(4) Daisy Rahmania Syarif (University of Cologne, Germany)
*corresponding author

Abstract


Stroke can cause death in anyone, including young people. One early stroke detection technique is the Computerized Tomography (CT) scan. This research aimed to optimize deep learning hyperparameters, using Random Search and Bayesian Optimization to determine the right hyperparameter values. The CT scan images were preprocessed by scaling, grayscale conversion, smoothing, thresholding, and morphological operations. Image features were then extracted with the Gray Level Co-occurrence Matrix (GLCM). Feature selection was performed to retain only relevant features and reduce computational cost, while a deep learning model configured with the optimized hyperparameters was used for classification. The experimental results showed that Random Search achieved the best accuracy, while Bayesian Optimization excelled in optimization time.
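The random-search strategy described in the abstract can be sketched in a few lines. The search space, trial count, and stand-in objective below are illustrative assumptions only; in the paper, the objective would train the deep learning classifier on the GLCM features and return its validation accuracy.

```python
import random

# Hypothetical search space mirroring typical deep learning hyperparameters;
# names and ranges are illustrative, not the paper's actual configuration.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_config(space, rng):
    """Draw one hyperparameter configuration uniformly at random."""
    return {name: rng.choice(values) for name, values in space.items()}

def random_search(objective, space, n_trials=20, seed=0):
    """Evaluate n_trials random configurations and keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(space, rng)
        score = objective(config)  # e.g., validation accuracy of the model
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Stand-in objective for demonstration; a real run would replace this with
# model training and evaluation on the extracted GLCM features.
def mock_accuracy(config):
    return 0.7 + 0.1 * (config["hidden_units"] == 64) - 0.05 * config["dropout"]

best, score = random_search(mock_accuracy, SEARCH_SPACE, n_trials=30)
print(best, score)
```

Because each trial is independent, random search parallelizes trivially, whereas Bayesian Optimization uses past trial results to choose the next configuration, which is why it can converge in fewer evaluations, consistent with the timing advantage reported in the abstract.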

Keywords


Feature Selection; Deep Learning; Hyperparameter Optimization

   

DOI

https://doi.org/10.26555/ijain.v5i3.427
      







Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by Informatics Department - Universitas Ahmad Dahlan,  UTM Big Data Centre - Universiti Teknologi Malaysia, and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org, andri.pranolo@tif.uad.ac.id (paper handling issues)
     ijain@uad.ac.id, andri.pranolo.id@ieee.org (publication issues)

