Improved point center algorithm for K-Means clustering to increase software defect prediction

(1) * Riski Annisa (Universitas Bina Sarana Informatika, Indonesia)
(2) Didi Rosiyadi (Research Center for Informatics, Indonesian Institute of Sciences (LIPI), Bandung, Indonesia; and Master Program of Computer Science, STMIK Nusa Mandiri, Jakarta, Indonesia)
(3) Dwiza Riana (STMIK Nusa Mandiri, Indonesia)
*corresponding author

Abstract


K-means is a widely used and easily implemented clustering algorithm. However, it is sensitive to its randomly chosen initial centroids, which can prevent it from producing optimal results. This research aimed to improve the performance of the k-means algorithm by applying a proposed algorithm called point center. The proposed algorithm replaces the random initial centroid selection in k-means and was then applied to predict errors in software defect modules. The point center algorithm determines the initial centroid values used to optimize the k-means algorithm; the selection of the X and Y variables then determines the members of each cluster center. Ten datasets were used for testing, nine of which were used for software defect prediction. The proposed point center algorithm yielded the lowest errors and improved the performance of k-means by an average of 12.82% fewer cluster errors compared with the randomly obtained centroid values of the simple k-means algorithm. These findings contribute to the development of clustering models that handle data, such as software defect module prediction, more accurately.
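The core idea above is to replace k-means' random initialization with a deterministic rule for the initial centroids. As an illustrative sketch only (the abstract does not specify the exact point-center computation, so the initializer below is a hypothetical deterministic heuristic: sort points by distance to the data mean and take evenly spaced ones), this shows how such a rule plugs into a plain k-means loop:

```python
import numpy as np

def deterministic_init(X, k):
    """Pick k initial centroids deterministically.

    Illustrative heuristic only (not the paper's point-center rule):
    sort all points by distance to the overall data mean and take
    k evenly spaced points from that ordering.
    """
    center = X.mean(axis=0)
    order = np.argsort(np.linalg.norm(X - center, axis=1))
    idx = order[np.linspace(0, len(X) - 1, k, dtype=int)]
    return X[idx].astype(float)

def kmeans(X, k, init=deterministic_init, max_iter=100, tol=1e-6):
    """Standard Lloyd's k-means; only the initialization is swapped."""
    centroids = init(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster empties
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.linalg.norm(new - centroids) < tol:
            centroids = new
            break
        centroids = new
    return labels, centroids
```

Because the initializer is deterministic, repeated runs on the same data give identical clusterings, which is the property a fixed initial-centroid rule buys over random seeding.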

Keywords


Algorithm, K-Means, Cluster, Centroid, Software defect

   

DOI

https://doi.org/10.26555/ijain.v6i3.484





This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by Informatics Department - Universitas Ahmad Dahlan, UTM Big Data Centre - Universiti Teknologi Malaysia, and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: ijain@uad.ac.id (paper handling issues)
    info@ijain.org, andri.pranolo.id@ieee.org (publication issues)

