• Title/Summary/Keyword: UNET

57 results

Assessing Techniques for Advancing Land Cover Classification Accuracy through CNN and Transformer Model Integration (CNN 모델과 Transformer 조합을 통한 토지피복 분류 정확도 개선방안 검토)

  • Woo-Dam SIM;Jung-Soo LEE
    • Journal of the Korean Association of Geographic Information Studies / v.27 no.1 / pp.115-127 / 2024
  • This research aimed to construct models with various structures based on the Transformer module and to perform land cover classification, thereby examining the applicability of the Transformer module. For land cover classification, the Unet model, which has a CNN structure, was selected as the base model, and a total of four deep learning models were constructed by combining its encoder and decoder parts with the Transformer module. During training, each model was trained 10 times under the same conditions to evaluate generalization performance. The evaluation of classification accuracy showed that Model D, which utilized the Transformer module in both the encoder and decoder structures, achieved the highest overall accuracy, averaging approximately 89.4%, with a Kappa coefficient averaging about 73.2%. In terms of training time, the CNN-based models were the most efficient; however, the Transformer-based models improved classification accuracy by an average of 0.5% in terms of the Kappa coefficient. It is considered necessary to refine the model by examining variables such as hyperparameter settings and image patch sizes during the integration with CNN models. A common issue identified in all models during land cover classification was the difficulty in detecting small-scale objects. To reduce this misclassification, it is deemed necessary to explore the use of high-resolution input data and to integrate multidimensional data that includes terrain and texture information.
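  The paper's code is not reproduced in this listing; the sketch below shows only one plausible way to splice a Transformer stage into a Unet-style CNN encoder, in the spirit of the CNN+Transformer combinations the abstract describes. Channel sizes, depths, and the placement of the attention stage are assumptions.

```python
# Hypothetical PyTorch sketch: CNN encoder block followed by a Transformer stage.
# All sizes and names are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class TransformerStage(nn.Module):
    """Flatten a feature map into tokens, apply self-attention, reshape back."""
    def __init__(self, channels, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.encoder(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class HybridEncoderStage(nn.Module):
    """One encoder stage: CNN features for the skip path, attention on the pooled map."""
    def __init__(self, c_in=3, c_feat=64):
        super().__init__()
        self.conv, self.pool, self.attn = ConvBlock(c_in, c_feat), nn.MaxPool2d(2), TransformerStage(c_feat)

    def forward(self, x):
        skip = self.conv(x)
        return self.attn(self.pool(skip)), skip

feat, skip = HybridEncoderStage()(torch.randn(1, 3, 128, 128))  # (1, 64, 64, 64), (1, 64, 128, 128)
```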

Reconstructing the cosmic density field based on the generative adversarial network.

  • Shi, Feng
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.50.1-50.1 / 2020
  • In this talk, I will introduce recent work on reconstructing the cosmic density field with a GAN. I will show the performance of the GAN compared to a traditional Unet architecture. I will also discuss a 3-channel 2D dataset used for training to recover the 3D density field. Finally, I will present performance tests on the test datasets.
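  The abstract only sketches the data design; the snippet below illustrates one assumed reading of the "3-channel 2D" idea, stacking three neighbouring slices of a simulated 3D density cube as image channels so a 2D network sees some information along the third axis.

```python
# Hypothetical NumPy sketch of building 3-channel 2D training samples from a 3D cube.
import numpy as np

def cube_to_3channel_slices(density, axis=2):
    """Turn an (N, N, N) density cube into N-2 images of shape (3, N, N)."""
    density = np.moveaxis(density, axis, 0)
    return np.stack(
        [density[i - 1:i + 2] for i in range(1, density.shape[0] - 1)], axis=0
    )

cube = np.random.rand(64, 64, 64)        # stand-in for a simulated density field
samples = cube_to_3channel_slices(cube)  # (62, 3, 64, 64) training inputs
```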


A New Hyper Parameter of Hounsfield Unit Range in Liver Segmentation

  • Kim, Kangjik;Chun, Junchul
    • Journal of Internet Computing and Services / v.21 no.3 / pp.103-111 / 2020
  • Liver cancer is one of the most fatal cancers worldwide. To diagnose liver cancer, a patient's condition is examined with radiation-based CT imaging. Diagnosing the liver on an abdominal CT scan requires segmentation, which radiologists had to perform manually at the cost of considerable time and with the risk of human error. To automate this, researchers attempted segmentation with classical computer-vision algorithms, but these remained time-consuming because they were interactive and required manual parameter settings. To save time and obtain more accurate segmentation, researchers have begun to segment the liver in CT images using CNNs, which show significant performance in various computer vision fields. The pixel value of a CT image is the Hounsfield Unit (HU) value, a relative representation of radiation transmittance that usually ranges from about -2000 to 2000. Deep learning researchers generally reduce or limit this range before training to remove noise and focus on the target organ. We observed that the HU range is limited in many studies but differs across liver segmentation studies, and assumed that performance could vary depending on the HU range. In this paper, we propose treating the HU value range as a hyperparameter. U-Net and ResUNet were used to compare preprocessing with different HU range limits on the CHAOS dataset under otherwise identical conditions. The results confirm that performance differs depending on the HU range. This shows that the HU range limit itself can act as a hyperparameter, meaning there are HU ranges that provide optimal performance for a given model.
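  As a concrete illustration of the preprocessing step the paper treats as a hyperparameter, the sketch below clips CT intensities to a candidate HU window and rescales them before training; the candidate windows listed are illustrative, not the paper's exact search space.

```python
# Minimal sketch: clip CT intensities to a candidate Hounsfield Unit window,
# rescale to [0, 1], and treat the window itself as a tunable hyperparameter.
import numpy as np

def clip_and_normalize(ct_volume, hu_min, hu_max):
    clipped = np.clip(ct_volume.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

candidate_windows = [(-100, 400), (-200, 200), (-1000, 1000)]  # assumed search space
volume = np.random.randint(-2000, 2000, size=(64, 256, 256))   # stand-in CT volume
for hu_min, hu_max in candidate_windows:
    x = clip_and_normalize(volume, hu_min, hu_max)
    # ...train U-Net / ResUNet on x and compare segmentation scores per window...
```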

Prerequisite Research for the Development of an End-to-End System for Automatic Tooth Segmentation: A Deep Learning-Based Reference Point Setting Algorithm (자동 치아 분할용 종단 간 시스템 개발을 위한 선결 연구: 딥러닝 기반 기준점 설정 알고리즘)

  • Kyungdeok Seo;Sena Lee;Yongkyu Jin;Sejung Yang
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.346-353 / 2023
  • In this paper, we propose a deep learning approach to finding optimal reference points for precise tooth segmentation in three-dimensional tooth point cloud data. A dataset of 350 aligned maxillary and mandibular point clouds was used as input, with the two end coordinates of each tooth serving as ground truth. A two-dimensional image was created by projecting the rendered point cloud data along the Z-axis, and images of individual teeth were then obtained with an object detection algorithm. The proposed algorithm adds several modules to the Unet model that allow effective learning over a narrow range, and it detects both end points of a tooth from the generated tooth image. In an evaluation using DSC, Euclidean distance, and MAE as metrics, it achieved superior performance compared to other Unet-based models. In future research, we will develop an algorithm that finds reference points in the point cloud by back-projecting the reference points detected in the image into three dimensions and, based on this, an algorithm that segments individual teeth in the point cloud through image processing techniques.
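  The projection step described above can be pictured with the short sketch below, which rasterises a point cloud along the Z-axis into a 2D depth image; the resolution and the keep-the-highest-point rule are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: project an aligned dental point cloud along Z into a 2D depth image
# that a 2D detector/segmenter can consume. Resolution and depth rule are assumed.
import numpy as np

def project_along_z(points, resolution=256):
    """points: (N, 3) array of x, y, z. Returns a (resolution, resolution) depth image."""
    xy = points[:, :2]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8)       # normalise to [0, 1]
    cols = np.minimum((xy * (resolution - 1)).astype(int), resolution - 1)
    image = np.zeros((resolution, resolution), dtype=np.float32)
    for (ix, iy), z in zip(cols, points[:, 2]):
        image[iy, ix] = max(image[iy, ix], z)                      # keep the highest surface
    return image

cloud = np.random.rand(10000, 3)      # stand-in for an aligned arch point cloud
depth_image = project_along_z(cloud)
```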

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M;J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.151-160 / 2023
  • Breast cancer is among the most dangerous and deadly forms of cancer, and it is the second most common cancer among Indian women in rural areas. Early detection can significantly improve treatment effectiveness: recognizing symptoms and signs early improves the odds of receiving specialist care sooner, and therefore has the potential to substantially raise survival odds by delaying or entirely eliminating the cancer. Mammography is a high-resolution radiography technique that plays an important role in preventing and diagnosing cancer at an early stage. Automatically segmenting the breast region in mammography images can reduce the area that must be searched for cancer while saving time and effort compared with manual segmentation. Previous studies used autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) to automatically segment the breast area in mammography images. In this paper, we present Automatic SegmenAN, an end-to-end adversarial neural network for medical image segmentation. Because image segmentation requires dense, pixel-level labelling, the single scalar real/fake output of a standard GAN discriminator can be inefficient at providing stable and informative gradient feedback to the networks. Rather than relying only on a fully convolutional segmentor, we propose an adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local features capturing long- and short-range spatial relations among pixels. We demonstrate that Automatic SegmenAN is more reliable for segmentation tasks than the state-of-the-art U-net segmentation technique.
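  The multi-scale L1 idea described in the abstract can be sketched as follows: a critic extracts features at several depths from the image masked by the predicted map and by the ground-truth map, and the L1 distances are averaged over scales. The critic architecture here is an assumption, not the paper's network.

```python
# Illustrative sketch of a SegAN-style multi-scale L1 critic loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    def __init__(self, c_in=1, widths=(32, 64, 128)):
        super().__init__()
        layers, prev = [], c_in
        for w in widths:
            layers.append(nn.Sequential(nn.Conv2d(prev, w, 4, stride=2, padding=1), nn.LeakyReLU(0.2)))
            prev = w
        self.stages = nn.ModuleList(layers)

    def features(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats

def multiscale_l1(critic, image, pred_mask, true_mask):
    f_pred = critic.features(image * pred_mask)
    f_true = critic.features(image * true_mask)
    return sum(F.l1_loss(a, b) for a, b in zip(f_pred, f_true)) / len(f_pred)

critic = Critic()
image = torch.rand(2, 1, 128, 128)
pred, gt = torch.rand(2, 1, 128, 128), (torch.rand(2, 1, 128, 128) > 0.5).float()
loss = multiscale_l1(critic, image, pred, gt)
```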

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • During fast neutron imaging, besides the dark-current and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These gamma rays produce high-density gamma white spots (GWS) in the fast neutron image. Owing to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit Gaussian noise. Existing denoising methods for neutron images struggle with this mixture of GWS and Gaussian noise. Here we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using a perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and sensors according to the mechanism of GWS generation. Experiments on both simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
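  The combined objective described above (perceptual loss plus MSE) can be sketched as follows; the VGG backbone, layer cut-off, and weighting are assumptions standing in for whatever feature extractor and weights the paper actually used.

```python
# Hedged sketch of a perceptual + MSE training loss for a denoising network.
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualPlusMSE(nn.Module):
    def __init__(self, feature_layers=16, lam=0.1):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:feature_layers]
        for p in vgg.parameters():
            p.requires_grad = False           # frozen feature extractor
        self.vgg, self.lam, self.mse = vgg.eval(), lam, nn.MSELoss()

    def forward(self, denoised, clean):
        # neutron images are single-channel; repeat to 3 channels for the VGG input
        d3, c3 = denoised.repeat(1, 3, 1, 1), clean.repeat(1, 3, 1, 1)
        return self.mse(denoised, clean) + self.lam * self.mse(self.vgg(d3), self.vgg(c3))
```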

Luma Noise Reduction using Deep Learning Network in Video Codec (Deep Learning Network를 이용한 Video Codec에서 휘도성분 노이즈 제거)

  • Kim, Yang-Woo;Lee, Yung-Lyul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.272-273 / 2019
  • VVC (Versatile Video Coding) partitions the luma and chroma components of a YUV input image into blocks, each with its own optimal partitioning, performs intra- or inter-picture prediction on each block, and compresses the difference between the predicted and original images through transform and quantization. In this process, the reconstructed image suffers from blocking, ringing, and blurring noise. This paper proposes a method in which the encoder transmits the MAE (Mean Absolute Error) of the residual signal between the original and reconstructed images as side information, and a deep-learning-based neural network then uses this side information together with the reconstructed image to improve image quality. To reduce the noise in the reconstructed image, the image is split into arbitrary 32×32 blocks, and the network is built as a DenseNet-based UNet structure.
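  A minimal sketch of the input construction described above follows: the reconstructed luma plane is split into 32×32 blocks and the per-block MAE side information is appended as a constant extra channel. The shapes and the way the MAE is attached are assumptions, not the paper's exact formulation.

```python
# Hypothetical NumPy sketch: build (luma block, MAE channel) pairs for a CNN denoiser.
import numpy as np

def make_blocks(luma, mae_map, block=32):
    """luma: (H, W) reconstructed plane; mae_map: per-block MAE from the encoder."""
    h, w = luma.shape
    samples = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = luma[by:by + block, bx:bx + block].astype(np.float32)
            mae = np.full_like(patch, mae_map[by // block, bx // block])
            samples.append(np.stack([patch, mae]))          # (2, 32, 32)
    return np.stack(samples)

luma = np.random.randint(0, 256, size=(128, 128)).astype(np.float32)
mae_map = np.random.rand(4, 4).astype(np.float32)
batch = make_blocks(luma, mae_map)    # (16, 2, 32, 32) inputs for a DenseNet-style UNet
```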


Neonatal Respiratory Distress Syndrome Diagnosis Method Based on X-ray Images Using Semantic Segmentation (의미론적 분할을 이용한 X-ray 영상 기반 신생아 호흡곤란 증후군 진단 기법)

  • Jang, Eojin;Cho, Hanyong;You, Sunkyoung;Gang, Mi Hyeon;Jang, Haneol
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.539-542 / 2022
  • Neonatal respiratory distress syndrome is a respiratory disease that mainly affects premature infants and is diagnosed from characteristic imaging findings together with other test results. This paper proposes a method that diagnoses neonatal respiratory distress syndrome by first segmenting the lung region, in order to minimize the influence of external elements such as medical devices. A UNet structure is used for segmentation and EfficientNet-B5 for diagnosis, ultimately achieving a diagnostic accuracy of 0.852 for neonatal respiratory distress syndrome.
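  The two-stage pipeline described above can be sketched as below, with `segmenter` and `classifier` standing in for the UNet and EfficientNet-B5 models; the thresholding and masking details are assumptions.

```python
# Hedged sketch: mask the lung region first, then classify only the masked image.
import torch
import torch.nn as nn

class TwoStageRDS(nn.Module):
    def __init__(self, segmenter: nn.Module, classifier: nn.Module):
        super().__init__()
        self.segmenter, self.classifier = segmenter, classifier

    def forward(self, xray):                          # xray: (B, 1, H, W)
        lung_mask = torch.sigmoid(self.segmenter(xray)) > 0.5
        masked = xray * lung_mask                     # suppress tubes/devices outside the lungs
        return self.classifier(masked)                # RDS vs. normal logits
```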

Wavelet Mix Module: Preserving High-Frequency in Network using Wavelet Transform (웨이블릿 혼합 모듈: 웨이블릿 변환을 이용한 네트워크 내 고주파 성분 보존)

  • Kim, Min Woo;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.231-234 / 2022
  • In this paper, to improve the quality of images generated by a wavelet-based network that performs sketch-to-RGB image translation, we propose the Wavelet Mix Module (WMM), which mitigates the network's bias toward learning low-frequency content. The WMM is applied to the skip connections of the UNet structure: it extracts detail coefficients from the encoder features with a wavelet transform and passes them to the decoder features, so that high-frequency components are preserved within the network. Experiments confirm that images generated by the network with the WMM improve both quantitatively and qualitatively.
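  One possible reading of the WMM idea, sketched under the assumption of a one-level Haar transform and simple concatenation, follows; the actual module may extract and fuse the detail bands differently.

```python
# Hedged sketch: pass Haar detail (high-frequency) bands from an encoder skip
# to the decoder feature map inside a UNet-style network.
import torch
import torch.nn as nn

def haar_details(x):
    """One-level Haar transform of (B, C, H, W); returns LH, HL, HH detail bands."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    lh = (a - b + c - d) / 4
    hl = (a + b - c - d) / 4
    hh = (a - b - c + d) / 4
    return torch.cat([lh, hl, hh], dim=1)             # (B, 3C, H/2, W/2)

class WaveletMixSkip(nn.Module):
    """Merge decoder features with high-frequency details taken from the encoder skip."""
    def __init__(self, enc_ch, dec_ch):
        super().__init__()
        self.fuse = nn.Conv2d(dec_ch + 3 * enc_ch, dec_ch, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        details = haar_details(enc_feat)               # assumed to match dec_feat's spatial size
        return self.fuse(torch.cat([dec_feat, details], dim=1))
```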


Matter Density Distribution Reconstruction of Local Universe with Deep Learning

  • Hong, Sungwook E.;Kim, Juhan;Jeong, Donghui;Hwang, Ho Seong
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.53.4-53.4 / 2019
  • We reconstruct the underlying dark matter (DM) density distribution of the local universe within a 20 Mpc/h cubic box using galaxy positions and peculiar velocities. About 1,000 subboxes from the Illustris-TNG cosmological simulation are used to train the relation between the DM density distribution and galaxy properties with a UNet-like convolutional neural network (CNN). The estimated DM density distributions agree well with their true values in terms of pixel-to-pixel correlation, the probability distribution of DM density, and the matter power spectrum. We apply the trained CNN architecture to galaxy properties from the Cosmicflows-3 catalogue to reconstruct the DM density distribution of the local universe. The reconstructed DM density distribution can be used to understand the evolution and fate of our local environment.
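  As a purely illustrative sketch of a "UNet-like CNN" mapping gridded galaxy fields to a DM density grid, one might write something like the following; the channel counts, depth, and choice of input fields are assumptions, not the authors' architecture.

```python
# Hypothetical tiny 3D encoder-decoder mapping gridded galaxy fields to DM density.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    def __init__(self, c_in=2, c_out=1, base=16):
        super().__init__()
        self.enc1, self.enc2 = block(c_in, base), block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec = block(base * 2 + base, base)
        self.head = nn.Conv3d(base, c_out, 1)

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        return self.head(self.dec(torch.cat([self.up(s2), s1], dim=1)))

fields = torch.randn(1, 2, 32, 32, 32)   # stand-in galaxy density + velocity grids
dm_density = TinyUNet3D()(fields)        # (1, 1, 32, 32, 32)
```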
