
A Benchmark Study of DeepLabV3+, U-Net++, and Attention U-Net for Blood Cell Segmentation

Authors: Clara Lavita Angelina 1,2 (ORCID: https://orcid.org/0000-0002-0375-3759), Ali Rospawan 1,3 (ORCID: https://orcid.org/0000-0001-5667-2269)
Affiliations:
1 Department of Electronics Engineering, Politeknik Manufaktur Negeri Bangka Belitung, Sungailiat, 33211, Indonesia
2 Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology, Yunlin, 64002, Taiwan
3 Department of Electrical Engineering, National Chung Hsing University, Taichung, 402202, Taiwan

Cell segmentation is a critical process in biomedical image analysis. This study evaluated the performance of three state-of-the-art deep learning models—DeepLabV3+, U-Net++, and Attention U-Net—using the Blood Cell Count and Detection (BCCD) dataset, which contains annotated images of blood cells. The models were rigorously analyzed through qualitative and quantitative evaluations, employing accuracy, precision, recall, and F1 score metrics. The results demonstrated that all three models achieved high segmentation performance, with U-Net++ excelling in accuracy (0.9740), precision (0.9511), and F1 score (0.9576), Attention U-Net achieving the highest recall (0.9692), and DeepLabV3+ providing a balanced performance across all metrics. Qualitative analyses revealed that U-Net++ delivered superior segmentation of complex and overlapping cell structures, while Attention U-Net exhibited exceptional sensitivity to dense cell clusters. Training and validation curves of the models confirmed their stability and generalizability, indicating efficient convergence without overfitting. By highlighting the unique strengths of each model, this study emphasized the importance of selecting architectures tailored to specific tasks. Future research will expand the application of these models to diverse biomedical datasets to further advance automated image analysis and its impact on healthcare outcomes.
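The four metrics reported above are standard pixel-wise measures for binary segmentation masks. As a minimal illustration (not the authors' code), they can be computed from the confusion-matrix counts of a predicted mask against a ground-truth mask; the function and array names below are assumptions for the sketch:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise accuracy, precision, recall, and F1 for binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()          # true positives
    fp = np.logical_and(pred, ~target).sum()         # false positives
    fn = np.logical_and(~pred, target).sum()         # false negatives
    tn = np.logical_and(~pred, ~target).sum()        # true negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Toy 2x2 example: one false-positive pixel
pred = [[1, 0], [1, 1]]
target = [[1, 0], [0, 1]]
print(segmentation_metrics(pred, target))
```

In practice these counts would be accumulated over every image in the BCCD test split before computing the ratios, so that small masks do not dominate the average.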

About this article

SUBMITTED: 21 February 2025
ACCEPTED: 24 March 2025
PUBLISHED: 29 March 2025
SUBMITTED to ACCEPTED: 31 days
DOI: https://doi.org/10.53623/gisa.v5i1.607

Cite this article
Angelina, C. L., & Rospawan, A. (2025). A Benchmark Study of DeepLabV3+, U-Net++, and Attention U-Net for Blood Cell Segmentation. Green Intelligent Systems and Applications, 5(1), 61–73. https://doi.org/10.53623/gisa.v5i1.607