Self-supervised pre-training of CNNs for flatness defect classification in the steelworks industry

(1) Filippo Galli* (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
(2) Antonio Ritacco (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
(3) Giacomo Lanciano (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
(4) Marco Vannocci (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
(5) Valentina Colla (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
(6) Marco Vannucci (Scuola Superiore Sant’Anna, via Moruzzi 1, Italy)
* corresponding author

Abstract


Classification of surface defects plays a significant role in guaranteeing product quality in the steelworks industry. From an industrial point of view, shape defects in hot-rolled products are a serious concern, particularly those affecting strip flatness. Flatness defects are typically divided into four sub-classes according to which part of the strip is affected and the resulting shape. The primary objective of this research is to evaluate the improvements obtained by exploiting the self-supervised learning paradigm for defect classification, taking advantage of unlabelled, real steel strip flatness maps. Different pre-training methods and architectures are compared, building on well-established neural subnetworks such as Residual and Inception modules. A systematic evaluation of the different performances provides a formal verification of the self-supervised pre-training paradigms considered. In particular, pre-training neural networks with the EgoMotion meta-algorithm yields classification improvements over the AutoEncoder technique, which in turn outperforms a Glorot weight initialization.
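To make the pre-training versus plain-initialization comparison concrete, below is a minimal PyTorch sketch of the AutoEncoder-based self-supervised pre-training strategy followed by fine-tuning for the four flatness defect sub-classes. All layer sizes, map dimensions (1-channel 64x256 flatness maps), and module, function, and variable names are illustrative assumptions; they do not reproduce the authors' exact architecture, which relies on Residual and Inception modules.

import torch
import torch.nn as nn

# NOTE: shapes and names are illustrative assumptions (1-channel 64x256 flatness
# maps, 4 defect sub-classes), not the paper's exact Residual/Inception setup.

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ELU(),   # 64x256 -> 32x128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ELU(),  # -> 16x64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ELU(),  # -> 8x32
        )
        self.fc = nn.Linear(64 * 8 * 32, latent_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 32)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ELU(),  # 8x32 -> 16x64
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ELU(),  # -> 32x128
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),             # -> 64x256
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 8, 32))

def glorot_init(module):
    # Baseline weight initialization (Glorot/Xavier) used for comparison.
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

def pretrain_autoencoder(encoder, decoder, unlabelled_loader, epochs=10):
    # Self-supervised pre-training: learn to reconstruct unlabelled flatness maps.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        for maps in unlabelled_loader:           # maps: (batch, 1, 64, 256)
            loss = nn.functional.mse_loss(decoder(encoder(maps)), maps)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder                               # pre-trained weights reused downstream

def build_classifier(encoder, num_classes=4):
    # Fine-tuning stage: attach a small head for the four flatness defect sub-classes.
    return nn.Sequential(encoder, nn.ELU(), nn.Linear(128, num_classes))

A Glorot-initialized baseline would simply apply encoder.apply(glorot_init) and skip pretrain_autoencoder, while the EgoMotion variant would replace the reconstruction objective with a transformation-prediction pretext task in the spirit of Agrawal et al.; both are omitted here for brevity.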

Keywords


Self-supervision; Steelworks; Deep learning; CNN

   

DOI

https://doi.org/10.26555/ijain.v6i1.410
      


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
   andri.pranolo.id@ieee.org (publication issues)
