(2) Thitipong Thitipong (Vincent Mary School of Science & Technology, Assumption University, Bangkok, Thailand)
*corresponding author
Abstract: The reliance on data and the high cost of data labelling are among the main problems facing deep learning today. Active learning aims to train the best possible model with as few labelled training samples as possible. Previous query strategies for active learning have mainly relied on uncertainty and diversity criteria, without considering the multi-granularity of the data distribution. To extract more useful information from the samples, we use three-way decisions to select uncertain samples and propose a multi-granularity active learning method (MGAL). The model divides the unlabeled samples into three parts: the positive region, the negative region and the boundary region. By iteratively and actively training on queried samples, delaying decisions on the boundary region reduces the decision cost. We validated the model on five UCI datasets and the CIFAR10 dataset. The experimental results show that the cost of three-way decisions is lower than that of two-way decisions, and that multi-granularity active learning achieves good classification results, which validates the model. Through this case study, the reader can learn how the ideas and methods of three-way decision theory are applied to deep learning.
Keywords: Three-way decision; Multi-grained features; Active learning; Unlabeled samples; Classification algorithm
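The three-way partition described in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the thresholds `alpha` and `beta` and the probability values are illustrative assumptions. Samples whose predicted probability falls between the two thresholds land in the boundary region, where the decision is deferred and the sample becomes a candidate for an active-learning label query.

```python
def three_way_partition(probs, alpha=0.7, beta=0.3):
    """Split sample indices into three regions by predicted probability.

    POS: accept (p >= alpha), NEG: reject (p <= beta),
    BND: defer the decision (beta < p < alpha) and query an oracle.
    """
    pos, neg, bnd = [], [], []
    for i, p in enumerate(probs):
        if p >= alpha:
            pos.append(i)       # confident positive: decide now
        elif p <= beta:
            neg.append(i)       # confident negative: decide now
        else:
            bnd.append(i)       # uncertain: delay, label actively
    return pos, neg, bnd

# Illustrative model outputs for six unlabeled samples
probs = [0.95, 0.10, 0.55, 0.40, 0.80, 0.25]
pos, neg, bnd = three_way_partition(probs)
print(pos, neg, bnd)  # → [0, 4] [1, 5] [2, 3]
```

In an active-learning loop, only the boundary-region samples would be sent for labelling, after which the model is retrained and the remaining pool is re-partitioned.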
DOI: https://doi.org/10.26555/ijain.v9i2.1036
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)