• Title/Summary/Keyword: Attention U-Net


Attention U-Net Based Palm Line Segmentation for Biometrics (생체인식을 위한 Attention U-Net 기반 손금 추출 기법)

  • Kim, InKi;Kim, Beomjun;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.01a
    • /
    • pp.89-91
    • /
    • 2022
  • In this paper, we propose a method for extracting palm lines based on Attention U-Net for biometric recognition using palm prints, one of the modalities of biometrics. Among the lines of the palm, the principal lines, namely the life line, head line, and heart line, have the property of hardly changing over time. Unlike existing palm-line extraction methods, which extract palm lines only from similar colors or against a restricted background, the proposed method can be applied to backgrounds that resemble skin color or vary widely. It can therefore be used in biometric methods that identify users. Through the characteristics of the Attention U-Net used in this paper, we confirmed that the palm-line segmentation region can be learned efficiently while the attention coefficients are updated.

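For readers unfamiliar with the attention coefficients mentioned in the abstract above, the following is a minimal PyTorch sketch of an additive attention gate in the style of Attention U-Net. It is an illustrative reconstruction, not the authors' code, and for simplicity it assumes the encoder feature map x and the gating signal g share the same spatial size and channel counts chosen arbitrarily.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a gating signal g (from the decoder) re-weights
    the skip-connection feature map x (from the encoder)."""
    def __init__(self, in_ch_x, in_ch_g, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(in_ch_x, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(in_ch_g, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # Additive attention: alpha = sigmoid(psi(relu(theta(x) + phi(g))))
        a = torch.relu(self.theta_x(x) + self.phi_g(g))
        alpha = torch.sigmoid(self.psi(a))   # attention coefficients in [0, 1]
        return x * alpha                     # suppress irrelevant regions

# Example with arbitrary shapes (assumed, not from the paper)
x = torch.randn(1, 64, 128, 128)   # encoder (skip) features
g = torch.randn(1, 64, 128, 128)   # decoder gating signal
gated = AttentionGate(64, 64, 32)(x, g)
print(gated.shape)                 # torch.Size([1, 64, 128, 128])
```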

Evaluation of U-Net Based Learning Models according to Equalization Algorithm in Thyroid Ultrasound Imaging (갑상선 초음파 영상의 평활화 알고리즘에 따른 U-Net 기반 학습 모델 평가)

  • Moo-Jin Jeong;Joo-Young Oh;Hoon-Hee Park;Joo-Young Lee
    • Journal of radiological science and technology
    • /
    • v.47 no.1
    • /
    • pp.29-37
    • /
    • 2024
  • This study evaluates how the performance of U-Net based learning models varies with the histogram equalization algorithm. The subjects were 17 radiology students of the college, and 1,727 thyroid ultrasound images with an annotated region of interest were used. The training set consisted of 1,383 images, and the validation and test sets of 172 images each. The equalization algorithms were Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), the latter divided into CLAHE8-1, CLAHE8-2, and CLAHE8-3 according to the clip limit. Training used image resizing, histogram equalization, Z-score normalization, and data augmentation. In the experiments, Attention U-Net showed its highest performance with CLAHE8-2 at 0.8355, while U-Net and BSU-Net showed their highest performance with CLAHE8-3 at 0.8303 and 0.8277, respectively. For mIoU, Attention U-Net reached 0.7175 with CLAHE8-2, and U-Net and BSU-Net reached 0.7098 and 0.7060 with CLAHE8-3. The study thus examined the effects of histogram equalization on U-Net, Attention U-Net, and BSU-Net for ultrasound images. Increasing the clip limit sharpens boundaries and improves the contrast of the thyroid area during training, which can be expected to increase the overlap between the region of interest and the predicted mask and consequently to improve performance.
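As background for the HE/CLAHE comparison above, here is a minimal OpenCV sketch of the two equalization steps plus the Z-score normalization mentioned in the abstract. The file name and the clip-limit values are placeholders, since the abstract does not state the exact values behind the CLAHE8-1/8-2/8-3 labels.

```python
import cv2
import numpy as np

img = cv2.imread("thyroid_us.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Global histogram equalization (HE)
he = cv2.equalizeHist(img)

# CLAHE with an 8x8 tile grid; the clip limit controls how strongly local
# contrast is boosted. The CLAHE8-1/8-2/8-3 variants above differ only in this
# value; the numbers below are placeholders, not the study's settings.
for clip in (1.0, 2.0, 3.0):
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    out = clahe.apply(img)

# Z-score normalization before feeding the network, as described above
z = (out.astype(np.float32) - out.mean()) / (out.std() + 1e-8)
```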

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • In image processing and computer vision, the problem of translating one image into another or generating a new image has drawn steady attention as hardware has advanced. However, computer-generated images often still look unnatural to the human eye. With the recent surge of deep learning research, image generation and enhancement using deep learning have also been actively studied, and among these approaches the Generative Adversarial Network (GAN) performs well at image generation. Many GAN variants have been proposed since the original GAN, enabling more natural images than earlier image-generation research. Among them, pix2pix is a conditional GAN model and a general-purpose network that shows good performance on various datasets. pix2pix is based on U-Net, but several U-Net variants show better performance than the plain U-Net. Therefore, in this study, images are generated by replacing the U-Net in pix2pix with various networks, and the results are compared and evaluated. The generated images confirm that pix2pix with the Attention, R2, and Attention-R2 networks outperforms the original pix2pix with a plain U-Net; examining the limitations of the strongest of these networks is suggested as future work.

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.1-16
    • /
    • 2022
  • Despite recent breakthroughs in deep learning and computer vision, pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks. Grid-based uniform control nodes are first set on both input images and binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. 512 × 512 patches are generated from the original images with a sliding window with an overlap of 256 pixels as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images. Ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm. The IoU gains from using the SASA and CRED modules individually add up to the gain of the full model, indicating that SASA and CRED contribute at different stages of the model and the data in the training process.
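To illustrate the idea behind grid-based random elastic deformation as described above (coarse control nodes, random offsets, bilinear interpolation, applied identically to image and label), here is a short NumPy/OpenCV sketch. It is not the paper's CRED implementation, and the grid size and maximum offset are assumed values.

```python
import cv2
import numpy as np

def grid_elastic_deform(image, label, grid=8, max_offset=10.0, seed=None):
    """Grid-based random elastic deformation applied consistently to an image
    and its binary label (illustrative only, not the paper's exact CRED code)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Random offsets at coarse control nodes
    dx = rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32)
    dy = rng.uniform(-max_offset, max_offset, (grid, grid)).astype(np.float32)
    # Bilinear interpolation of the offsets to a dense displacement field
    dx = cv2.resize(dx, (w, h), interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(dy, (w, h), interpolation=cv2.INTER_LINEAR)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = xs + dx, ys + dy
    # Warp image bilinearly; warp the binary label with nearest neighbour
    warped_img = cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    warped_lbl = cv2.remap(label, map_x, map_y, interpolation=cv2.INTER_NEAREST)
    return warped_img, warped_lbl
```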

A study on skip-connection with time-frequency self-attention for improving speech enhancement based on complex-valued spectrum (복소 스펙트럼 기반 음성 향상의 성능 향상을 위한 time-frequency self-attention 기반 skip-connection 기법 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.2
    • /
    • pp.94-101
    • /
    • 2023
  • A deep neural network composed of encoders and decoders, such as U-Net, used for speech enhancement concatenates encoder features to the decoder through skip-connections. Skip-connections help reconstruct the enhanced spectrum and restore lost information, but the encoder and decoder features joined by a skip-connection are not directly compatible with each other. In this paper, for complex-valued spectrum based speech enhancement, a Self-Attention (SA) method is applied to the skip-connection to transform the encoder features so that they become compatible with the decoder features. SA is a technique in which, when generating an output sequence in a sequence-to-sequence task, a weighted average of the input is used to attend to subsets of the input; applied to speech enhancement, it has been shown to remove noise effectively. Three models that use encoder and decoder features to apply SA to the skip-connection are studied. In experiments on the TIMIT database, the proposed methods show improvements in all evaluation metrics compared to the Deep Complex U-Net (DCUNET) with skip-connections only.

Development of Automatic Segmentation Algorithm of Intima-media Thickness of Carotid Artery in Portable Ultrasound Image Based on Deep Learning (딥러닝 모델을 이용한 휴대용 무선 초음파 영상에서의 경동맥 내중막 두께 자동 분할 알고리즘 개발)

  • Choi, Ja-Young;Kim, Young Jae;You, Kyung Min;Jang, Albert Youngwoo;Chung, Wook-Jin;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research
    • /
    • v.42 no.3
    • /
    • pp.100-106
    • /
    • 2021
  • Measuring intima-media thickness (IMT) with ultrasound images can help early detection of coronary artery disease, and numerous machine learning studies have therefore been conducted to measure IMT. However, most of these studies require several preprocessing steps to extract the boundary, and some require manual intervention, so they are not suitable for on-site use in urgent situations. In this paper, we propose using the deep learning networks U-Net, Attention U-Net, and Pretrained U-Net to automatically segment the intima-media complex. This study also applied HE, HS, and CLAHE preprocessing techniques to images from a wireless portable ultrasound diagnostic device. As a result, the average dice coefficient was 71% for the HE-applied models and 70% for the CLAHE-applied models, while the HS-applied models improved to 72%. Among them, the Pretrained U-Net showed the highest performance with an average of 74%. When compared with the mean IMT measured by conventional wired ultrasound equipment, the HS-applied Pretrained U-Net showed the highest correlation coefficient.

Application and Evaluation of the Attention U-Net Using UAV Imagery for Corn Cultivation Field Extraction (무인기 영상 기반 옥수수 재배필지 추출을 위한 Attention U-NET 적용 및 평가)

  • Shin, Hyoung Sub;Song, Seok Ho;Lee, Dong Ho;Park, Jong Hwa
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.4
    • /
    • pp.253-265
    • /
    • 2021
  • In this study, crop cultivation fields were extracted using Unmanned Aerial Vehicle (UAV) imagery and deep learning models to overcome the limitations of satellite imagery and to contribute to technology for understanding the status of crop cultivation fields. The study area was set around Yidam-li, Gammul-myeon, Goesan-gun, Chungbuk, and ortho-images of the area were produced from the UAV imagery. Training data for the deep learning model were built using a Farm Map modified through fieldwork. Attention U-Net was used as the deep learning model to extract features from the UAV imagery. After training, model performance for corn cultivation field extraction was evaluated on data not used for training. We report the model's performance using precision, recall, and F1-score; the metrics are 0.94, 0.96, and 0.92, respectively. This study showed that the method is effective for extracting corn cultivation fields and suggests its potential applicability to other crops.

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki;Kim, Beomjun;Woo, Sunghee;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.33-43
    • /
    • 2022
  • In this paper, we propose an ensemble model that uses multi-channel palm images with Attention U-Net models and pretrained convolutional neural networks (CNNs) to establish a contactless palm-based user identification system using conventional inexpensive camera sensors. Attention U-Net models are used to extract the areas of interest, namely the hand (with fingers), the palm (without fingers), and the palm lines, which are combined into three channels fed into the ensemble classifier. The proposed palm-information-based user identification system then predicts the class using a classifier ensemble of three well-performing pretrained CNN models. The proposed model achieves a classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, indicating that it is effective even with very cheap image sensors. We believe that under the COVID-19 pandemic circumstances, the proposed palm-based contactless user identification system can be a safe and reliable alternative to the currently dominant contact-based systems.
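A minimal sketch of the multi-channel input and soft-voting ensemble idea described above follows. The backbone choice (three ResNet-18 instances), the number of identities, and the input resolution are assumptions rather than the paper's configuration, and a recent torchvision (supporting the weights argument) is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stack the three Attention U-Net outputs (hand, palm, palm-line masks) into one
# 3-channel image; shapes here are placeholders.
hand, palm, lines = (torch.rand(1, 1, 224, 224) for _ in range(3))
x = torch.cat([hand, palm, lines], dim=1)            # (1, 3, 224, 224)

def make_classifier(backbone, num_classes):
    # Replace the final fully connected layer with one sized for our identities
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

num_users = 100                                       # hypothetical number of identities
cnns = [make_classifier(models.resnet18(weights=None), num_users) for _ in range(3)]

# Soft-voting ensemble: average the per-model class probabilities
with torch.no_grad():
    for m in cnns:
        m.eval()
    probs = torch.stack([m(x).softmax(dim=1) for m in cnns]).mean(dim=0)
predicted_user = probs.argmax(dim=1)
```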

Attention Aware Residual U-Net for Biometrics Segmentation (생체 인식 인식 시스템을 위한 주의 인식 잔차 분할)

  • Htet, Aung Si Min;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.300-302
    • /
    • 2022
  • Palm vein identification has attracted attention due to its distinct characteristics and excellent recognition accuracy. However, many contactless palm vein identification systems suffer from low-quality palm images, resulting in degraded recognition accuracy. This paper proposes the use of a U-Net architecture to correctly segment the blood vessels from palm images. An attention gate mechanism and residual blocks are also utilized to effectively learn the crucial features of the specific segmentation task. The experiments were conducted on the CASIA dataset. A Hessian-based Jerman filtering method is applied to label the palm vein patterns in the original images, and the network is then trained to segment the palm vein features from the background noise. The proposed method obtained an IoU of 96.24 and a Dice coefficient of 98.09.
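For reference, the two metrics reported above (IoU and Dice, typically quoted as percentages) are computed from binary masks as in this short sketch.

```python
import numpy as np

def iou_and_dice(pred, target, eps=1e-8):
    """IoU and Dice coefficient for binary masks (boolean or {0, 1} arrays)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice
```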

A study on speech enhancement using complex-valued spectrum employing Feature map Dependent attention gate (특징 맵 중요도 기반 어텐션을 적용한 복소 스펙트럼 기반 음성 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.544-551
    • /
    • 2023
  • Speech enhancement, used to improve the perceptual quality and intelligibility of noisy speech, has been studied with complex-valued spectrum methods that can improve both magnitude and phase, rather than with magnitude-spectrum methods alone. This paper studies how to apply an attention mechanism to complex-valued spectrum based speech enhancement systems to further improve the intelligibility and quality of noisy speech. The attention is based on additive attention and allows the attention weights to be computed with the complex-valued spectrum taken into account. In addition, global average pooling is used to consider the importance of each feature map. Complex-valued spectrum based speech enhancement was performed with the Deep Complex U-Net (DCUNET) model, and the additive attention of the proposed method was built on the Attention U-Net model. Experiments on noisy speech in a living-room environment show that the proposed method outperforms the baseline model on evaluation metrics such as Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), with consistent improvements across various background noise environments and low Signal-to-Noise Ratio (SNR) conditions. The proposed speech enhancement system thus demonstrates its effectiveness in improving the intelligibility and quality of noisy speech.
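As a rough illustration of weighting feature maps by global average pooling as described above, here is a squeeze-and-excitation-style channel gate sketch in PyTorch. It operates on real-valued feature maps with assumed shapes and is not the authors' complex-spectrum attention module.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Feature-map (channel) importance via global average pooling,
    in the spirit of the attention described above (illustrative only)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, C, F, T) feature maps
        w = x.mean(dim=(2, 3))             # global average pooling -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                       # re-weight each feature map

x = torch.randn(2, 64, 257, 100)           # e.g., spectrogram-shaped features (assumed)
y = ChannelGate(64)(x)
```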