• Title/Abstract/Keyword: Paired dataset

Search results: 24 items (processing time: 0.02 s)

A Divide-Conquer U-Net Based High-Quality Ultrasound Image Reconstruction Using Paired Dataset

  • 유민하;안치영
    • Korean Society of Medical and Biological Engineering: Journal of Biomedical Engineering Research
    • /
    • Vol. 45, No. 3
    • /
    • pp.118-127
    • /
    • 2024
  • Deep learning methods for enhancing medical image quality commonly use unpaired datasets, because acquiring a paired dataset through a commercial imaging system is impractical. In this paper, we propose a supervised learning method to enhance the quality of ultrasound images. The U-net model incorporates a divide-and-conquer approach that splits an image into four parts and processes them separately, to overcome data shortage and shorten training time. The proposed model is trained on a paired dataset of 828 pairs of low-quality and high-quality 512x512-pixel images obtained by varying the number of channels for the same subject. Of the 828 pairs, 684 are used as the training dataset, while the remaining 144 serve as the test dataset. In the test results, the average Mean Squared Error (MSE) was reduced from 87.6884 in the low-quality images to 45.5108 in the restored images. Additionally, the average Peak Signal-to-Noise Ratio (PSNR) improved from 28.7550 to 31.8063, and the average Structural Similarity Index (SSIM) increased from 0.4755 to 0.8511, demonstrating significant enhancement in image quality.
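The MSE and PSNR figures reported above follow the standard definitions; a minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB, for images with peak value max_val."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

For 8-bit images, an MSE of 45.5 corresponds to a PSNR of about 10·log10(255²/45.5) ≈ 31.6 dB, on the same scale as the values reported in the abstract.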

Document Image Binarization by GAN with Unpaired Data Training

  • Dang, Quang-Vinh;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • Vol. 16, No. 2
    • /
    • pp.8-18
    • /
    • 2020
  • Data is critical in deep learning, but data scarcity often occurs in research, especially in the preparation of paired training data. In this paper, document image binarization with unpaired data is studied by introducing adversarial learning, removing the need for supervised or labeled datasets. However, a simple extension of previous unpaired training to binarization inevitably leads to poor performance compared to paired-data training. Thus, a new deep learning approach is proposed that introduces a multi-diversity of higher-quality generated images. A two-stage model is proposed that comprises a generative adversarial network (GAN) followed by a U-net network. In the first stage, the GAN uses the unpaired image data to create paired image data. In the second stage, the generated paired image data are passed through the U-net network for binarization, so the trained U-net becomes the binarization model at test time. The proposed model has been evaluated on the publicly available DIBCO dataset and outperforms other techniques on unpaired training data. The paper shows the potential of using unpaired data for binarization, for the first time in the literature, which can be further improved to replace paired-data training for binarization in the future.

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 4
    • /
    • pp.1486-1501
    • /
    • 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in the field of face recognition. However, despite its importance, a robust solution has not yet been provided. This paper proposes a network and dataset-construction methodology to effectively remove only the glasses from facial images. To obtain an image with the glasses removed from an image with glasses by supervised learning, a network that converts between them and a set of paired training data are required. To this end, we created a large number of synthetic images of glasses being worn using facial attribute transformation networks. We adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into an image without glasses and operates stably even when the faces are of diverse races and ages and wear different styles of glasses.

Canonical Correlation Biplot

  • Park, Mi-Ra;Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 3, No. 1
    • /
    • pp.11-19
    • /
    • 1996
  • Canonical correlation analysis is a multivariate technique for identifying and quantifying the statistical relationship between two sets of variables. Like most multivariate techniques, the main objective of canonical correlation analysis is to reduce the dimensionality of the dataset. It would be particularly useful if high dimensional data can be represented in a low dimensional space. In this study, we will construct statistical graphs for paired sets of multivariate data. Specifically, plots of the observations as well as the variables are proposed. We discuss the geometric interpretation and goodness-of-fit of the proposed plots. We also provide a numerical example.

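As a sketch of the core computation behind canonical correlation analysis (the correlations themselves, not the biplot construction the paper proposes), the canonical correlations of two paired data matrices can be obtained from a QR-then-SVD factorization; the function name is illustrative:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between paired data matrices X (n x p) and Y (n x q)."""
    # center each variable
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # orthonormal bases for the two centered column spaces
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # singular values of Qx^T Qy are the cosines of the principal angles,
    # i.e. the canonical correlations
    return np.clip(np.linalg.svd(Qx.T @ Qy, compute_uv=False), 0.0, 1.0)
```

When Y is an invertible linear transform of X, the two column spaces coincide and every canonical correlation equals 1, which is a convenient sanity check for the implementation.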

Noise2Atom: unsupervised denoising for scanning transmission electron microscopy images

  • Feng Wang;Trond R. Henninen;Debora Keller;Rolf Erni
    • Applied Microscopy
    • /
    • Vol. 50
    • /
    • pp.23.1-23.9
    • /
    • 2020
  • We propose an effective deep learning model to denoise scanning transmission electron microscopy (STEM) image series, named Noise2Atom, to map images from a source domain 𝓢 to a target domain 𝓒, where 𝓢 is for our noisy experimental dataset, and 𝓒 is for the desired clear atomic images. Noise2Atom uses two external networks to apply additional constraints from the domain knowledge. This model requires no signal prior, no noise model estimation, and no paired training images. The only assumption is that the inputs are acquired with identical experimental configurations. To evaluate the restoration performance of our model, as it is impossible to obtain ground truth for our experimental dataset, we propose consecutive structural similarity (CSS) for image quality assessment, based on the fact that the structures remain much the same as the previous frame(s) within small scan intervals. We demonstrate the superiority of our model by providing evaluation in terms of CSS and visual quality on different experimental datasets.

Raindrop Removal and Background Information Recovery in Coastal Wave Video Imagery using Generative Adversarial Networks

  • 허동;김재일;김진아
    • Journal of the Korea Computer Graphics Society
    • /
    • Vol. 25, No. 5
    • /
    • pp.1-9
    • /
    • 2019
  • In this paper, we propose an image enhancement method using generative adversarial networks to remove raindrops from coastal wave video imagery distorted by rainfall and to restore the background information in the removed regions. The Pix2Pix network, widely used for image-to-image translation, and Attentive GAN, which currently shows good performance for raindrop removal from single images, were implemented as candidate models; both were trained on a public raindrop-removal dataset and then evaluated on raindrop removal and background recovery for rain-distorted coastal wave imagery. To improve correction performance on coastal wave video, we additionally acquired a paired dataset of real coastal scenes with and without raindrops and used it for transfer learning of the pre-trained models, confirming improved correction of raindrop distortion. Model performance was evaluated by the peak signal-to-noise ratio and structural similarity of the wave information recovered from raindrop-distorted images; the Pix2Pix model fine-tuned by transfer learning showed the best restoration performance for raindrop distortion in coastal wave video imagery.

A Kolmogorov-Smirnov-Type Test for Independence of Bivariate Failure Time Data Under Independent Censoring

  • Kim, Jingeum
    • Journal of the Korean Statistical Society
    • /
    • Vol. 28, No. 4
    • /
    • pp.469-478
    • /
    • 1999
  • We propose a Kolmogorov-Smirnov-type test for independence of paired failure times in the presence of independent censoring times. This independent censoring mechanism is often assumed in case-control studies. To this end, we first introduce a process defined as the difference between the bivariate survival function estimator proposed by Wang and Wells (1997) and the product of the product-limit estimators (Kaplan and Meier (1958)) for the marginal survival functions. Then, we derive its asymptotic properties under the null hypothesis of independence. Finally, we assess the performance of the proposed test by simulations, and illustrate the proposed methodology with a dataset of remission times for 21 pairs of leukemia patients taken from Oakes (1982).

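In the uncensored case, a statistic of this Kolmogorov-Smirnov type reduces to the sup-distance between the empirical joint survival function and the product of the empirical marginal survival functions. A minimal sketch under that simplifying assumption (no censoring, so the Wang-Wells estimator is replaced by the plain empirical survival function; the function name is illustrative):

```python
import numpy as np

def ks_independence_stat(x, y):
    """Sup of |S_hat(s,t) - S1_hat(s) * S2_hat(t)| over the observed grid.

    x, y: paired failure-time samples of equal length, assumed uncensored.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    stat = 0.0
    for s in x:
        for t in y:
            joint = np.mean((x > s) & (y > t))       # empirical joint survival
            prod = np.mean(x > s) * np.mean(y > t)   # product of marginal survivals
            stat = max(stat, abs(joint - prod))
    return stat
```

Under independence the statistic is small (shrinking at roughly the usual root-n rate), while perfectly dependent pairs such as y = x push it toward 0.25 near the joint median.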

Performance Analysis of Automatic Target Recognition Using Simulated SAR Image

  • 이수미;이윤경;김상완
    • Korean Journal of Remote Sensing
    • /
    • Vol. 38, No. 3
    • /
    • pp.283-298
    • /
    • 2022
  • Synthetic Aperture Radar (SAR) imagery can be acquired regardless of weather or time of day, so it is highly applicable to Automatic Target Recognition (ATR) for surveillance, reconnaissance, and homeland security. However, building a large and diverse set of test images for developing a recognition system is limited in terms of cost and operation. Recently, interest has grown in developing target recognition systems based on SAR images simulated from target models. Target recognition was performed with two algorithms representative of the SAR-ATR field: scattering-center matching and template matching. For scattering-center matching, the points were reconstructed into World View Vectors (WVV) and Weighted Bipartite Graph Matching (WBGM) was performed; for template matching, the correlation coefficient between two images reconstructed from neighboring scattering centers was used. To test the recognition performance of the two algorithms, the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, target simulation imagery recently released by the U.S. Defense Advanced Research Projects Agency (DARPA), was used. Algorithm performance was analyzed under the standard condition and under partial and random occlusion of the target. The scattering-center matching algorithm generally outperformed template matching: over ten targets under the standard condition, the average recognition rate was 85.1% for scattering-center matching versus 74.4% for template matching, and the per-target variation was also smaller for scattering-center matching. Under partial occlusion, scattering-center matching performed about 10% better than template matching, and even with 60% random occlusion it maintained a relatively high recognition rate of about 73.4%.
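The correlation coefficient used in template matching is typically the zero-normalized cross-correlation between a template and an image patch of the same size; a minimal sketch (the function name is illustrative, and this omits the scattering-center reconstruction step described above):

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equal-sized images, in [-1, 1]."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    # a constant image has zero variance, so its correlation is undefined; return 0
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0
```

An identical template and patch score 1.0, a contrast-inverted patch scores -1.0, and recognition picks the target class whose template maximizes this score.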

Enhancing CT Image Quality Using Conditional Generative Adversarial Networks for Applying Post-mortem Computed Tomography in Forensic Pathology: A Phantom Study

  • 윤예빈;허진행;김예지;조혜진;윤용수
    • Korean Society of Radiological Science: Journal of Radiological Science and Technology
    • /
    • Vol. 46, No. 4
    • /
    • pp.315-323
    • /
    • 2023
  • Post-mortem computed tomography (PMCT) is commonly employed in forensic pathology. PMCT is mainly performed as a whole-body scan with a wide field of view (FOV), which leads to decreased spatial resolution due to the increased pixel size. This study evaluates the potential of a super-resolution model based on conditional generative adversarial networks (CGAN) to enhance CT image quality. 1,761 low-resolution images were obtained from a whole-body scan of a head phantom with a wide FOV, and 341 high-resolution images were obtained using the FOV appropriate for the head phantom. From the total dataset, 150 paired images were divided into a training set (96 pairs) and a validation set (54 pairs). Data augmentation with rotations and flips was performed to improve training effectiveness. To evaluate the performance of the proposed model, we used the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Deep Image Structure and Texture Similarity (DISTS), computed for the entire image and for the medial orbital wall, the zygomatic arch, and the temporal bone, where fractures often occur during head trauma. Compared to the low-resolution images, the proposed method improved PSNR by 13.14%, SSIM by 13.10%, and DISTS by 45.45%; the image quality of the three fracture-prone areas also improved over the low-resolution images.
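Rotation-and-flip augmentation of this kind is commonly implemented as the eight dihedral variants of each image (four 90-degree rotations, each optionally mirrored); a minimal sketch, assuming this standard scheme since the abstract does not state the exact angles used:

```python
import numpy as np

def dihedral_augment(img):
    """Return the 8 dihedral variants of an image: 4 rotations x optional flip."""
    out = []
    for k in range(4):
        r = np.rot90(img, k)   # rotate by k * 90 degrees
        out.append(r)
        out.append(np.fliplr(r))  # the same rotation, mirrored left-right
    return out
```

Applied to each training pair (with the same transform applied to the low- and high-resolution image), this multiplies the effective training set size by eight without acquiring new scans.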

Generation of Whole-Genome Sequencing Data for Comparing Primary and Castration-Resistant Prostate Cancer

  • Park, Jong-Lyul;Kim, Seon-Kyu;Kim, Jeong-Hwan;Yun, Seok Joong;Kim, Wun-Jae;Kim, Won Tae;Jeong, Pildu;Kang, Ho Won;Kim, Seon-Young
    • Genomics & Informatics
    • /
    • Vol. 16, No. 3
    • /
    • pp.71-74
    • /
    • 2018
  • Because castration-resistant prostate cancer (CRPC) does not respond to androgen deprivation therapy and has a very poor prognosis, it is critical to identify a prognostic indicator for predicting high-risk patients who will develop CRPC. Here, we report a dataset of whole genomes from four pairs of primary prostate cancer (PC) and CRPC samples. Analysis of the paired PC and CRPC samples in the whole-genome data showed that the average number of somatic mutations per patient in CRPC tissues, compared with primary PC tissues, was 7,927 (range, 1,691 to 21,705). Our whole-genome sequencing data of primary PC and CRPC may be useful for understanding the genomic changes and molecular mechanisms that occur during the progression from PC to CRPC.