Development and Evaluation of D-Attention Unet Model Using 3D and Continuous Visual Context for Needle Detection in Continuous Ultrasound Images

연속 초음파영상에서의 바늘 검출을 위한 3D와 연속 영상문맥을 활용한 D-Attention Unet 모델 개발 및 평가

  • Lee, So Hee (Dept. Biomedical Engineering, Konyang University) ;
  • Kim, Jong Un (Dept. Biomedical Engineering, Konyang University Graduate School) ;
  • Lee, Su Yeol (Advanced Medical Technology Laboratory, Healcerion Co.) ;
  • Ryu, Jeong Won (Advanced Medical Technology Laboratory, Healcerion Co.) ;
  • Choi, Dong Hyuk (Dept. Biomedical Engineering, Konyang University) ;
  • Tae, Ki Sik (Dept. Biomedical Engineering, Konyang University)
  • Received : 2020.09.03
  • Accepted : 2020.10.25
  • Published : 2020.10.31


Needle detection in ultrasound images is sometimes difficult due to obstruction by fatty tissue. Accurate needle detection in continuous ultrasound (CUS) images is a vital stage of treatment planning for tissue biopsy and brachytherapy. This study has two main goals. First, a new detection model, the D-Attention Unet, was developed by combining the context information of 3D medical data with that of CUS images. Second, the D-Attention Unet was compared with other models to verify its usefulness for needle detection in CUS images. Continuous needle images acquired with ultrasound were converted into still images to build a dataset for evaluating the performance of the D-Attention Unet; this dataset was used for both training and testing. Based on the results, the proposed D-Attention Unet showed better performance than the three other models (Unet, D-Unet, and Attention Unet), with a Dice Similarity Coefficient (DSC), recall, and precision of 71.9%, 70.6%, and 73.7%, respectively. In conclusion, the D-Attention Unet provides accurate needle detection for US-guided biopsy or brachytherapy, facilitating the clinical workflow. In particular, research on incorporating image-processing techniques into learning-based methods is being actively pursued; applied in this manner, the proposed method should be even more effective than before.
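The abstract reports DSC, recall, and precision but does not spell out their definitions. As a reference only, a minimal NumPy sketch of how these metrics are conventionally computed for binary needle-segmentation masks (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute DSC, recall, and precision for binary masks (1 = needle pixel)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # needle pixels correctly detected
    fp = np.logical_and(pred, ~target).sum()  # background flagged as needle
    fn = np.logical_and(~pred, target).sum()  # needle pixels missed
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dsc, recall, precision

# Toy 2x3 masks: tp = 2, fp = 1, fn = 1
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dsc, rec, prec = segmentation_metrics(pred, target)  # each equals 2/3 here
```

Note that DSC is the harmonic mean of precision and recall, which is consistent with the reported values (71.9% lies between 70.6% and 73.7%).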


