• Title/Summary/Keyword: adversarial learning

Assessment and Analysis of Fidelity and Diversity for GAN-based Medical Image Generative Model (GAN 기반 의료영상 생성 모델에 대한 품질 및 다양성 평가 및 분석)

  • Jang, Yoojin;Yoo, Jaejun;Hong, Helen
    • Journal of the Korea Computer Graphics Society / v.28 no.2 / pp.11-19 / 2022
  • Recently, various studies on medical image generation have been proposed, and it has become crucial to accurately evaluate the quality and diversity of the generated medical images. For this purpose, an expert's visual Turing test, feature distribution visualization, and quantitative evaluation through IS and FID are commonly used. However, there are few methods for quantitatively evaluating medical images in terms of fidelity and diversity. In this paper, images are generated by training the DCGAN and PGGAN generative models on a chest CT dataset of non-small cell lung cancer patients, and the performance of the two generative models is evaluated in terms of fidelity and diversity. The performance is quantitatively evaluated through IS and FID, which are one-dimensional score-based evaluation methods, and through Precision and Recall and Improved Precision and Recall, which are two-dimensional score-based evaluation methods, and the characteristics and limitations of each evaluation method in medical imaging are also analyzed.
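
As a rough illustration of the metrics this abstract contrasts, the sketch below computes FID (a one-dimensional score) and an Improved-Precision-and-Recall-style estimate of fidelity and diversity (a two-dimensional score) from two feature sets; the random feature arrays stand in for embeddings of real and generated CT slices and are purely hypothetical, not the paper's pipeline.

```python
import numpy as np
from scipy import linalg
from scipy.spatial.distance import cdist

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two feature sets (rows = samples)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):        # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

def improved_precision_recall(feats_real, feats_fake, k=3):
    """Precision: share of fakes inside the k-NN manifold of real samples (fidelity).
    Recall: share of reals inside the k-NN manifold of fake samples (diversity)."""
    def radii(feats):
        d = cdist(feats, feats)
        d.sort(axis=1)
        return d[:, k]                  # distance to k-th nearest neighbour (index 0 is self)
    def covered(subject, reference, ref_radii):
        d = cdist(subject, reference)
        return float((d <= ref_radii[None, :]).any(axis=1).mean())
    return (covered(feats_fake, feats_real, radii(feats_real)),
            covered(feats_real, feats_fake, radii(feats_fake)))

# Hypothetical low-dimensional embeddings (dimensionality reduced here for speed).
feats_real = np.random.randn(500, 64)
feats_fake = np.random.randn(500, 64)
print("FID:", fid(feats_real, feats_fake))
print("precision, recall:", improved_precision_recall(feats_real, feats_fake))
```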

Boundary-enhanced SAR Water Segmentation using Adversarial Learning of Deep Neural Networks (적대적 학습 개념을 도입한 경계 강화 SAR 수체탐지 딥러닝 모델)

  • Hwisong Kim;Duk-jin Kim;Junwoo Kim;Seungwoo Lee
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.2-2 / 2023
  • As climate change accelerates, the frequency and intensity of water-related disasters have become harder to predict, and demand for real-time flood monitoring is increasing. Synthetic aperture radar (SAR) can acquire imagery regardless of illumination and weather, so images can be obtained even while a water-related disaster is occurring. Water body detection algorithms using SAR have been actively studied, and with the development of deep learning, CNNs have enabled water detection with high accuracy. However, even when CNN-based water detection models achieve high quantitative accuracy during training, qualitative evaluation after inference shows reduced detection accuracy along boundaries and for small streams. Because accuracy drops for boundaries and narrow streams, which are particularly important information for flood monitoring, real-world application is difficult. We therefore develop a boundary-enhanced water detection model based on adversarial learning to detect water bodies more precisely and accurately. Adversarial learning is a training scheme in which the two models of a generative adversarial network (GAN), a generator and a discriminator, interact so that higher accuracy can be achieved. Introducing this adversarial learning concept into a water detection model for the first time, the generator learns to detect water boundaries and small streams so that its output resembles the ground-truth labels, while the discriminator distinguishes the label data from the detection results based on a boundary distance transform map and the SAR image. To strengthen boundaries, a combination of loss functions that considers both area and boundary was constructed. To judge whether the proposed model accurately detects boundaries and small streams, the F1-score was used as a quantitative metric, and qualitative evaluation was also carried out through visual interpretation. The proposed boundary-enhanced adversarial water detection model was shown to capture fine details of water bodies in regions that the existing U-Net model failed to detect.
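
A minimal PyTorch-style sketch of the adversarial training scheme described here, assuming a toy segmenter and a discriminator that sees the boundary distance map, the SAR image, and a water mask; the network depths, loss weights, and channel layout are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn

# Hypothetical networks: `segmenter` maps a SAR image to a water mask,
# `discriminator` scores (boundary map, SAR, mask) stacks as real or fake.
segmenter = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(sar, label, boundary_map, adv_weight=0.01):
    # Discriminator: label masks are "real", predicted masks are "fake".
    pred = torch.sigmoid(segmenter(sar)).detach()
    d_real = discriminator(torch.cat([boundary_map, sar, label], dim=1))
    d_fake = discriminator(torch.cat([boundary_map, sar, pred], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: area (BCE) loss plus an adversarial term that rewards label-like boundaries.
    logits = segmenter(sar)
    d_fake = discriminator(torch.cat([boundary_map, sar, torch.sigmoid(logits)], dim=1))
    loss_g = bce(logits, label) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

sar = torch.randn(2, 1, 64, 64)                       # hypothetical SAR patches
label = (torch.rand(2, 1, 64, 64) > 0.5).float()      # hypothetical water labels
boundary_map = torch.rand(2, 1, 64, 64)               # distance-to-boundary map of the labels
print(train_step(sar, label, boundary_map))
```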

De-Identified Face Image Generation within Face Verification for Privacy Protection (프라이버시 보호를 위한 얼굴 인증이 가능한 비식별화 얼굴 이미지 생성 연구)

  • Jung-jae Lee;Hyun-sik Na;To-min Ok;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.201-210 / 2023
  • Deep learning-based face verification models show high performance and are used in many fields, but the user's face image may be leaked in the process of inputting it into the model. Although de-identification technology exists as a method for minimizing the exposure of face features, verification performance decreases when the existing techniques are applied. In this paper, after combining the face features of another person, a de-identified face image is created through StyleGAN. In addition, we propose a method of optimizing the combining ratio of features according to the face verification model using HopSkipJumpAttack. We visualize the images generated by the proposed method to check the de-identification performance, and evaluate the ability to maintain the performance of the face verification model through experiments. That is, face verification can be performed using the de-identified image generated through the proposed method, and leakage of personal facial information can be prevented.
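
A toy sketch of the latent-mixing idea, assuming hypothetical stand-ins for a StyleGAN-like decoder and a face-verification scorer: the subject's latent code is blended with another identity's code, and the largest blend ratio the verifier still accepts is kept. The paper's HopSkipJumpAttack-based optimization of the ratio is only caricatured here as a fixed grid of candidate ratios.

```python
import torch

# Hypothetical stand-ins: a StyleGAN-like decoder and a face-verification scorer.
generator = lambda w: torch.tanh(w.view(1, 1, 16, 32))   # toy "image" from a 512-d code
verifier = lambda img, ref: torch.cosine_similarity(img.flatten(), ref.flatten(), dim=0)

def de_identify(w_user, w_other, ref_img, ratios=(0.3, 0.5, 0.7)):
    """Blend two latent codes and keep the largest blend ratio whose decoded
    image the verifier still matches to the enrolled user (threshold 0.5)."""
    best = None
    for r in ratios:
        w_mix = (1.0 - r) * w_user + r * w_other   # linear blend in latent space
        img = generator(w_mix)
        if verifier(img, ref_img) > 0.5:           # still verifiable as the user
            best = (r, img)                        # prefer the largest passing ratio
    return best

w_user, w_other = torch.randn(512), torch.randn(512)
result = de_identify(w_user, w_other, ref_img=generator(w_user))
print(result[0] if result else None)
```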

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services / v.21 no.6 / pp.23-31 / 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data in order to improve defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce railroad maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles to acquire images of the track surface at regular intervals in order to build a database of various railway surface images. In this study, by contrast, in order to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies of the Generative Adversarial Network (GAN), aiming to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface while taking the ground truth of the railroad defects into account. The generated railroad surface images were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, only the original texture images were used as the training set for the defect detection model. In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other subsets. Each defect detection model was evaluated against the ground truth in terms of intersection over union (IoU) and the F1-score. As a result, the scores increased by about 10-15% when the generated images were used, compared to the case in which only the original images were used. This shows that defects can be detected using the existing data and a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
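
For reference, a small sketch of the intersection over union (IoU) and F1 measures used to compare the two training regimes; the random binary defect masks below are hypothetical placeholders for FCN predictions and ground truth.

```python
import numpy as np

def iou_f1(pred_mask, gt_mask):
    """IoU and F1 between binary defect masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    precision = inter / pred.sum() if pred.sum() else 1.0
    recall = inter / gt.sum() if gt.sum() else 1.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return iou, f1

# Hypothetical 64x64 defect masks: FCN prediction vs. ground truth.
pred = np.random.rand(64, 64) > 0.7
gt = np.random.rand(64, 64) > 0.7
print(iou_f1(pred, gt))
```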

Multi Cycle Consistent Adversarial Networks for Multi Attribute Image to Image Translation

  • Jo, Seok Hee;Cho, Kyu Cheol
    • Journal of the Korea Society of Computer and Information / v.25 no.9 / pp.63-69 / 2020
  • Image-to-image translation is a technology that creates a target image from input images, and has recently shown high performance in creating more realistic images by utilizing GANs, which have an unsupervised learning structure. Accordingly, there are various studies on image-to-image translation using GANs. Most image-to-image translation methods basically target the translation of a single attribute, but the data available in real life consist of a variety of features that are hard to describe with one attribute. Therefore, when the goal is to change multiple attributes, dividing the image generation process by attribute so that the various attributes can be exploited enables image-to-image translation to play a better role. In this paper, we propose Multi CycleGAN, a dual-attribute transformation structure, by utilizing CycleGAN, which showed high performance among GAN-based image-to-image translation structures. This structure implements a dual transformation in which three domains conduct two-way learning to learn the two attributes of an input domain. Experiments show that images produced through the new structure maintain the properties of the input domain and show high performance with the target attributes applied. Using this structure, it will be possible to create more diverse images in the future, so we can expect image generation to be utilized in more diverse areas.
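
A compact sketch of the cycle-consistency constraint that CycleGAN-style structures build on, with two toy convolutional generators for a single attribute direction; the layer choices and the loss weight are illustrative, and this is not the proposed Multi CycleGAN itself.

```python
import torch
import torch.nn as nn

# Toy generators for the two directions of one attribute: A -> B and B -> A.
G_ab = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
G_ba = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
l1 = nn.L1Loss()

def cycle_consistency_loss(real_a, real_b, lam=10.0):
    """CycleGAN-style cycle loss: translating to the other domain and back
    should reconstruct the original image."""
    rec_a = G_ba(G_ab(real_a))    # A -> B -> A
    rec_b = G_ab(G_ba(real_b))    # B -> A -> B
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

real_a = torch.randn(1, 3, 64, 64)
real_b = torch.randn(1, 3, 64, 64)
print(cycle_consistency_loss(real_a, real_b).item())
```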

A Study of CNN-based Super-Resolution Method for Remote Sensing Image (원격 탐사 영상을 활용한 CNN 기반의 초해상화 기법 연구)

  • Choi, Yeonju;Kim, Minsik;Kim, Yongwoo;Han, Sanghyuck
    • Korean Journal of Remote Sensing / v.36 no.3 / pp.449-460 / 2020
  • Super-resolution is a technique used to reconstruct a low-resolution image into a high-resolution one. Recently, deep-learning-based super-resolution has become the mainstream, and these methods are widely applied in the remote sensing field. In this paper, we propose a super-resolution method based on the deep back-projection network model to improve satellite image resolution by a factor of four. In the process, we customized the loss function with an edge loss to produce more detailed features along object boundaries, and improved the stability of model training using a generative adversarial network based on the Wasserstein distance loss. We also applied a detail-preserving image down-scaling method to enhance the naturalness of the training output. Finally, by including modified residual learning with a panchromatic feature in the final step of the training process, the proposed method is able to reconstruct fine features and high-frequency information. Comparing our results with those of other methods, we show that the proposed super-resolution method improves the sharpness and clarity of WorldView-3 and KOMPSAT-2 images.
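
A rough sketch of how a composite super-resolution generator loss of the kind described (a pixel term, an edge term from image gradients, and a Wasserstein-style adversarial term) could be assembled; the critic, weights, and tensors below are hypothetical placeholders, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def edge_map(img):
    """Simple gradient-magnitude edges via finite differences."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

def generator_loss(sr, hr, critic, w_edge=0.1, w_adv=1e-3):
    """Pixel L1 + edge L1 + Wasserstein-style adversarial term
    (the critic is assumed to output an unbounded realness score)."""
    pixel = F.l1_loss(sr, hr)
    edge = F.l1_loss(edge_map(sr), edge_map(hr))
    adv = -critic(sr).mean()            # WGAN-style generator objective
    return pixel + w_edge * edge + w_adv * adv

# Hypothetical super-resolved and reference patches, plus a toy critic.
sr = torch.randn(1, 3, 128, 128)
hr = torch.randn(1, 3, 128, 128)
critic = lambda x: x.mean(dim=(1, 2, 3))
print(generator_loss(sr, hr, critic).item())
```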

A Study on Webtoon Background Image Generation Using CartoonGAN Algorithm (CartoonGAN 알고리즘을 이용한 웹툰(Webtoon) 배경 이미지 생성에 관한 연구)

  • Saekyu Oh;Juyoung Kang
    • The Journal of Bigdata / v.7 no.1 / pp.173-185 / 2022
  • Nowadays, Korean webtoons are leading the global digital comic market. Webtoons are serviced in various languages around the world, dramas and movies produced from webtoon IP (intellectual property) have become big hits, and more and more webtoons are being adapted for the screen. However, alongside this success, the working environment of webtoon creators is emerging as an important issue. According to the 2021 Cartoon User Survey, webtoon creators spend an average of 10.5 hours a day on creative work. Creators have to produce a large number of drawings every week, and as competition among webtoons grows fiercer, the number of drawings required per episode is increasing. Therefore, this study proposes generating webtoon background images using deep learning algorithms and using them in webtoon production. The main character in a webtoon is an area that requires much of the creator's originality, but the background is relatively repetitive and does not require originality, so a model that can create background images similar to the creator's drawing style can be useful for webtoon production. Background generation uses CycleGAN, which shows good performance in image-to-image translation, and CartoonGAN, which is specialized in cartoon-style image generation. This deep-learning-based image generation is expected to shorten creators' working hours in an excessive work environment and to contribute to the convergence of webtoons and technology.

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Currently, research on deep learning models based on the fusion of camera and LiDAR sensors is being actively conducted. However, deep learning models are vulnerable to adversarial attacks through manipulation of the input data. Attacks on existing multi-sensor-based autonomous driving recognition systems have focused on preventing obstacle detection by lowering the confidence score of the object recognition model, but they have the limitation that the attack is possible only on the target model. For attacks on the sensor fusion stage, errors in the vision tasks performed after fusion can cascade, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR fusion (calibration) model. The proposed method performs a scaling attack on the input LiDAR points. In attack performance experiments with different scaling sizes, fusion errors of more than 77% on average were induced.
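
A toy illustration of a scaling perturbation applied to LiDAR input, assuming a hypothetical N x 4 point cloud of (x, y, z, intensity); the candidate scale factors are arbitrary and this is not the paper's attack implementation against LCCNet.

```python
import numpy as np

def scaling_attack(points, scale):
    """Uniformly rescale LiDAR point coordinates (x, y, z) by `scale`,
    leaving extra per-point attributes (e.g. intensity) untouched."""
    attacked = points.copy()
    attacked[:, :3] *= scale
    return attacked

# Hypothetical point cloud: N x 4 array of (x, y, z, intensity).
points = np.random.uniform(-50, 50, size=(2048, 4))
for scale in (0.9, 0.95, 1.05, 1.1):          # candidate attack magnitudes
    attacked = scaling_attack(points, scale)
    # `attacked` would then be fed to the calibration model to measure the induced error.
    print(scale, np.abs(attacked[:, :3] - points[:, :3]).mean())
```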

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.464-471 / 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using the StyleGAN Encoder. The facial expression image generation process first creates a face image using the StyleGAN Encoder and changes the expression by applying a learned boundary to the latent vector using an SVM. However, when the boundary of a smiling expression is learned, age distortion occurs along with the change in expression. The smile boundary created by SVM training for smiling expressions includes, as learning elements, the wrinkles caused by the change in expression, so it is judged that age characteristics were learned as well. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and uses it to adjust the age boundary at the smile boundary in proportion to the correlation coefficient. To confirm the effectiveness of the proposed method, experiments were conducted using the FFHQ dataset, a publicly available standard face dataset, and the FID score was measured, with the following results. For smile images, compared to the existing method, the FID score between the ground truth and the smile image generated by the proposed method improved by about 0.46, and the FID score between the image generated by the StyleGAN Encoder and the smile image generated by the proposed method improved by about 1.031. For non-smile images, compared to the existing method, the FID score between the ground truth and the non-smile image generated by the proposed method improved by about 2.25, and the FID score between the image generated by the StyleGAN Encoder and the non-smile image generated by the proposed method improved by about 1.908. Meanwhile, the age of each generated facial expression image was estimated and the MSE against the estimated age of the image generated with the StyleGAN Encoder was measured; compared to the existing method, the proposed method improved this measure by about 1.5 on average for smile images and about 1.63 for non-smile images, demonstrating the effectiveness of the proposed method.
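
A small sketch of the boundary-adjustment idea, assuming hypothetical 512-dimensional latent codes and SVM boundary normals: the latent code is moved along the smile direction while the age-correlated component is suppressed in proportion to the correlation of the two boundaries. The step size and vectors are illustrative, not the paper's learned boundaries.

```python
import numpy as np

def edit_expression(w, smile_dir, age_dir, step=3.0):
    """Shift a latent code along the smile boundary while removing the
    age-correlated component, scaled by the correlation of the two directions."""
    smile_dir = smile_dir / np.linalg.norm(smile_dir)
    age_dir = age_dir / np.linalg.norm(age_dir)
    corr = float(smile_dir @ age_dir)            # cosine correlation of the two boundaries
    adjusted = smile_dir - corr * age_dir        # suppress the age component proportionally
    adjusted /= np.linalg.norm(adjusted)
    return w + step * adjusted

# Hypothetical 512-d latent code and SVM boundary normals.
w = np.random.randn(512)
smile_dir, age_dir = np.random.randn(512), np.random.randn(512)
w_smile = edit_expression(w, smile_dir, age_dir)
print(w_smile.shape)
```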

Convergence of Artificial Intelligence Techniques and Domain Specific Knowledge for Generating Super-Resolution Meteorological Data (기상 자료 초해상화를 위한 인공지능 기술과 기상 전문 지식의 융합)

  • Ha, Ji-Hun;Park, Kun-Woo;Im, Hyo-Hyuk;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.63-70 / 2021
  • Generating super-resolution meteorological data with a deep neural network can support precise research and useful real-life services. We propose a new technique for generating improved training data for super-resolution deep neural networks. To generate high-resolution meteorological data with domain-specific knowledge, Lambert conformal conic projection and objective analysis were applied based on observation data and ERA5 reanalysis field data from specialized institutions. As a result, temperature and humidity analysis data based on domain-specific knowledge showed RMSE improvements of up to 42% and 46%, respectively. Next, a super-resolution generative adversarial network (SRGAN), one of the artificial intelligence techniques, was used to automate the manual data generation procedure based on the domain-specific techniques described above. Experiments were conducted to generate high-resolution data with 1 km resolution from global model data with 10 km resolution. Finally, the results generated with SRGAN have a higher resolution than the global model input data and show an analysis pattern similar to the manually generated high-resolution analysis data, but with smoother boundaries.
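
A minimal sketch of an SRGAN-style generator for gridded meteorological fields, assuming a single-channel (e.g. temperature) input and pixel-shuffle upsampling from a 10 km grid to a 1 km grid; the channel counts and depth are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn

class MeteoSRGenerator(nn.Module):
    """Toy generator: convolutional features followed by a x10 pixel shuffle."""
    def __init__(self, channels=1, factor=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels * factor * factor, 3, padding=1),
            nn.PixelShuffle(factor),                 # rearranges channels into a finer grid
        )

    def forward(self, coarse):
        return self.body(coarse)

coarse = torch.randn(1, 1, 32, 32)                   # hypothetical 10 km temperature patch
fine = MeteoSRGenerator()(coarse)
print(fine.shape)                                    # -> torch.Size([1, 1, 320, 320])
```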