Medical Image Classification with Hand-Designed or Machine-Designed Texture Descriptors: A Performance Evaluation
Abstract
Accurate diagnosis and early detection of disease are key to improving quality of life in any community. Current frameworks for medical image classification rely heavily on advanced digital image processing and machine (deep) learning techniques. In this paper, the performance of traditional hand-designed texture descriptors within the image-based learning paradigm is compared with that of machine-designed descriptors extracted from pre-trained Convolutional Neural Networks. Performance is evaluated with respect to speed, accuracy and storage requirements on four popular medical image datasets. The experiments show that machine-designed descriptors achieve higher accuracy in most cases, though at a higher computational cost; the appropriate tradeoff therefore depends on the requirements of the target application.
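For context, the comparison can be sketched in MATLAB, the environment acknowledged below. The snippet contrasts a hand-designed descriptor (a uniform LBP histogram) with a machine-designed descriptor (activations of a late fully connected layer of a pre-trained CNN loaded through MatConvNet), each feeding a multi-class SVM. This is a minimal illustration, not the paper's exact pipeline: the variables `images` and `labels`, the model file `imagenet-vgg-f.mat`, and the layer index `res(end-2)` are assumptions that would need adjusting for a particular dataset and model.

```matlab
% Hedged sketch: hand-designed (LBP) vs. machine-designed (CNN) descriptors.
% Assumes a cell array of grayscale images 'images' and a label vector 'labels',
% plus MatConvNet, the Computer Vision Toolbox and the Statistics Toolbox.

net = load('imagenet-vgg-f.mat');   % illustrative pre-trained model file
net = vl_simplenn_tidy(net);        % normalise the model structure for vl_simplenn

lbpFeats = [];
cnnFeats = [];
for i = 1:numel(images)
    im = images{i};

    % Hand-designed descriptor: uniform LBP histogram.
    lbpFeats(i, :) = extractLBPFeatures(im);

    % Machine-designed descriptor: activations of a late fully connected layer.
    im_ = single(repmat(im, 1, 1, 3));   % replicate grayscale to 3 channels
    im_ = imresize(im_, net.meta.normalization.imageSize(1:2));
    im_ = bsxfun(@minus, im_, net.meta.normalization.averageImage);
    res = vl_simplenn(net, im_);
    feat = squeeze(res(end-2).x);        % layer index is model-dependent
    cnnFeats(i, :) = feat(:)';
end

% Train one multi-class SVM per descriptor type for comparison.
mdlLBP = fitcecoc(lbpFeats, labels);
mdlCNN = fitcecoc(cnnFeats, labels);
```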
Keywords
Medical image classification · Deep learning · Convolutional Neural Network · Texture descriptors
Acknowledgements
This research was sponsored under the Centre for Research Innovation and Development Research Grant of Covenant University. We are grateful to the authors who shared their MATLAB code and toolboxes for LBP, LTP, LPQ, CLBP, RICLBP and MatConvNet.