• Title/Summary/Keyword: generative adversarial networks


Imbalanced sample fault diagnosis method for rotating machinery in nuclear power plants based on deep convolutional conditional generative adversarial network

  • Zhichao Wang;Hong Xia;Jiyu Zhang;Bo Yang;Wenzhe Yin
    • Nuclear Engineering and Technology
    • /
    • v.55 no.6
    • /
    • pp.2096-2106
    • /
    • 2023
  • Rotating machinery is widely used in important equipment of nuclear power plants (NPPs), such as pumps and valves. Research on intelligent fault diagnosis of rotating machinery is crucial to ensuring the safe operation of such equipment in NPPs. In practical applications, however, data-driven fault diagnosis faces the problem of small and imbalanced samples, resulting in low model training efficiency and poor generalization performance. Therefore, a deep convolutional conditional generative adversarial network (DCCGAN) is constructed to mitigate the impact of imbalanced samples on fault diagnosis. First, a conditional generative adversarial model is designed based on convolutional neural networks to effectively augment imbalanced samples. The model can extract the original sample features effectively thanks to the conditional generative adversarial strategy and an appropriate number of filters, and high-quality generated samples are ensured by visualizing the model training process and the generated sample features. Then, a deep convolutional neural network (DCNN) is designed to extract features from the mixed samples and perform intelligent fault diagnosis. Finally, the data augmentation and fault diagnosis performance of the DCCGAN model is verified on multi-fault experimental data from a motor and a bearing. The proposed method effectively alleviates the imbalanced-sample problem and shows its application value for intelligent fault diagnosis in actual NPPs.
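
The abstract above does not include code; as an illustration only, here is a minimal PyTorch-style sketch of a conditional GAN of the kind the paper describes, where both generator and discriminator are conditioned on a fault-class label. All names (e.g. CondGenerator), layer sizes, and the 1-D signal length are assumptions, and fully connected layers stand in for the paper's convolutional ones for brevity.

import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, SIGNAL_LEN = 5, 100, 1024  # assumed sizes

class CondGenerator(nn.Module):
    """Maps (noise, fault-class label) to a synthetic 1-D vibration signal."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, SIGNAL_LEN), nn.Tanh(),
        )
    def forward(self, z, labels):
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class CondDiscriminator(nn.Module):
    """Scores whether a (signal, label) pair looks real."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(SIGNAL_LEN + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x, labels):
        return self.net(torch.cat([x, self.label_emb(labels)], dim=1))

# One illustrative training step with the standard non-saturating GAN loss.
G, D = CondGenerator(), CondDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(32, SIGNAL_LEN)             # stand-in for real fault samples
labels = torch.randint(0, NUM_CLASSES, (32,))  # their fault-class labels
z = torch.randn(32, LATENT_DIM)

fake = G(z, labels)
d_loss = bce(D(real, labels), torch.ones(32, 1)) + \
         bce(D(fake.detach(), labels), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake, labels), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()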

Automatic Generation of Fashion Image Dataset by Using Progressive Growing GAN (PG-GAN을 이용한 패션이미지 데이터 자동 생성)

  • Kim, Yanghee;Lee, Chanhee;Whang, Taesun;Kim, Gyeongmin;Lim, Heuiseok
    • Journal of Internet of Things and Convergence
    • /
    • v.4 no.2
    • /
    • pp.1-6
    • /
    • 2018
  • Techniques for generating new sample data from high-dimensional data such as images have been used in various areas, including speech synthesis, image conversion, and image restoration. This paper adopts Progressive Growing of Generative Adversarial Networks (PG-GAN) as the implementation model to generate high-resolution images and to enhance the variation of the generated images, and applies it to fashion image data. PG-GAN allows the generator and discriminator to learn progressively at the same time, continuously adding new layers so that training proceeds from low-resolution to high-resolution images. We also proposed a mini-batch discrimination method to increase the diversity of the generated data, and proposed a Sliced Wasserstein Distance (SWD) evaluation method to evaluate the GAN model in place of the existing MS-SSIM.
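
For reference, the Sliced Wasserstein Distance mentioned above can be approximated by projecting two sample sets onto random directions, sorting the projections, and averaging the distances between the sorted values. The short NumPy sketch below illustrates that idea under assumed settings (number of projections, squared-L2 distance, equal sample sizes) rather than the exact evaluation protocol of the paper.

import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=128, seed=0):
    """Approximate SWD between two sample sets x, y of shape (n, d).

    Each 1-D projection reduces the problem to a closed-form
    Wasserstein distance between sorted values (assumes equal n).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions

    total = 0.0
    for w in dirs:
        px = np.sort(x @ w)               # project and sort both sets
        py = np.sort(y @ w)
        total += np.mean((px - py) ** 2)  # 1-D squared Wasserstein-2 estimate
    return np.sqrt(total / n_projections)

# Toy usage: distance between two Gaussian clouds.
a = np.random.randn(500, 64)
b = np.random.randn(500, 64) + 0.5
print(sliced_wasserstein_distance(a, b))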

Radar-based rainfall prediction using generative adversarial network (적대적 생성 신경망을 이용한 레이더 기반 초단시간 강우예측)

  • Yoon, Seongsim;Shin, Hongjoon;Heo, Jae-Yeong
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.8
    • /
    • pp.471-484
    • /
    • 2023
  • Deep learning models based on generative adversarial networks are specialized in generating new information from what they have learned. The deep generative model of radar (DGMR) developed by Google DeepMind is a generative adversarial network model that generates predictive radar images by learning complex patterns and relationships in large-scale radar image data. In this study, the DGMR model was trained on radar rainfall observation data from the Ministry of Environment, rainfall prediction was performed with this generative adversarial network for a heavy rainfall case in August 2021, and the accuracy was compared with existing prediction techniques. The DGMR generally resembled the observed rainfall distribution in the first 60 minutes, but tended to predict continuous development of rainfall in cases where strong rainfall occurred over the entire area. Statistical evaluation also showed that DGMR is an effective rainfall prediction method compared to other methods, with a critical success index of 0.57 to 0.79 and a mean absolute error of 0.57 to 1.36 mm for 1-hour-ahead prediction. However, the lack of diversity in the generated results sometimes reduces the prediction accuracy, so it is necessary to improve diversity and to supplement the method with rainfall predicted by a physics-based numerical forecast model in order to improve forecasts more than 2 hours ahead.
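
As background on the evaluation metrics quoted above, the critical success index (CSI) and mean absolute error (MAE) can be computed from forecast and observation fields as in the small NumPy sketch below; the 1 mm/h threshold is an assumed example, not necessarily the threshold used in the paper.

import numpy as np

def csi(forecast, observed, threshold=1.0):
    """Critical Success Index = hits / (hits + misses + false alarms)."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    return hits / (hits + misses + false_alarms)

def mae(forecast, observed):
    """Mean absolute error, in the same units as the rainfall fields (mm)."""
    return np.mean(np.abs(forecast - observed))

# Toy usage on random rain fields (mm/h).
obs = np.abs(np.random.randn(256, 256)) * 2
fcst = obs + np.random.randn(256, 256) * 0.5
print(csi(fcst, obs), mae(fcst, obs))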

An acoustic Doppler-based silent speech interface technology using generative adversarial networks (생성적 적대 신경망을 이용한 음향 도플러 기반 무 음성 대화기술)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.2
    • /
    • pp.161-168
    • /
    • 2021
  • In this paper, a Silent Speech Interface (SSI) technology is proposed in which the Doppler frequency shifts of the reflected signal are used to synthesize speech when a 40 kHz ultrasonic signal is incident on the speaker's mouth region. In SSI, mapping rules from the features of non-speech signals to those of audible speech are constructed, and speech signals are then synthesized from non-speech signals using these rules. In conventional SSI methods, the mapping rules are built by minimizing the overall error between the estimated and true speech parameters. In the present study, the mapping rules are instead constructed so that the distribution of the estimated parameters is similar to that of the true parameters, using Generative Adversarial Networks (GAN). Experimental results on 60 Korean words showed that, both objectively and subjectively, the proposed method outperformed conventional neural-network-based methods.
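
To make the difference between the two training criteria concrete, the sketch below contrasts a plain regression loss with an added adversarial term that pushes the distribution of estimated speech parameters toward that of the true parameters. The network shapes and feature dimensions are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

DOPPLER_DIM, SPEECH_DIM = 128, 40  # assumed feature sizes

mapper = nn.Sequential(nn.Linear(DOPPLER_DIM, 256), nn.ReLU(),
                       nn.Linear(256, SPEECH_DIM))            # generator G
critic = nn.Sequential(nn.Linear(SPEECH_DIM, 128), nn.LeakyReLU(0.2),
                       nn.Linear(128, 1), nn.Sigmoid())       # discriminator D
bce, mse = nn.BCELoss(), nn.MSELoss()
opt_g = torch.optim.Adam(mapper.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)

doppler = torch.randn(64, DOPPLER_DIM)   # stand-in Doppler features
speech = torch.randn(64, SPEECH_DIM)     # stand-in true speech parameters

# Conventional criterion: minimize the parameter error only.
est = mapper(doppler)
regression_loss = mse(est, speech)

# Adversarial criterion: D separates true vs. estimated parameters,
# and G is trained so the estimated distribution looks like the true one.
d_loss = bce(critic(speech), torch.ones(64, 1)) + \
         bce(critic(est.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = regression_loss + bce(critic(est), torch.ones(64, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()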

Perceptual Photo Enhancement with Generative Adversarial Networks (GAN 신경망을 통한 자각적 사진 향상)

  • Que, Yue;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.522-524
    • /
    • 2019
  • Despite the rapid development in the quality of built-in mobile cameras, some of their physical restrictions hinder them from achieving results comparable to digital single-lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method to translate ordinary mobile-camera images into DSLR-quality photos. The method is based on the generative adversarial network (GAN) framework with several improvements. First, we combine U-Net with DenseNet by connecting dense blocks (DB) within the U-Net structure; this Dense U-Net acts as the generator in our GAN model. Then, we improve the perceptual loss by using VGG features and pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery.
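
A perceptual loss of the kind mentioned above is commonly implemented by comparing feature maps from a frozen pretrained VGG network and adding a pixel-wise content term. The sketch below shows one common formulation; the choice of VGG-19 layer, the L1 distance, and the 0.01 weighting are assumptions, not the paper's settings (and the pretrained weights are downloaded on first use).

import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """VGG feature loss plus a pixel-wise content term."""
    def __init__(self, feature_layer=35, pixel_weight=0.01):
        super().__init__()
        vgg = vgg19(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)   # the VGG extractor stays frozen
        self.vgg = vgg
        self.pixel_weight = pixel_weight
        self.l1 = nn.L1Loss()

    def forward(self, enhanced, target):
        feat_loss = self.l1(self.vgg(enhanced), self.vgg(target))
        pixel_loss = self.l1(enhanced, target)
        return feat_loss + self.pixel_weight * pixel_loss

# Toy usage with random image batches in [0, 1].
loss_fn = PerceptualLoss()
x, y = torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224)
print(loss_fn(x, y).item())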

Face Recognition Research Based on Multi-Layers Residual Unit CNN Model

  • Zhang, Ruyang;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.11
    • /
    • pp.1582-1590
    • /
    • 2022
  • The widespread wearing of masks during the coronavirus pandemic has recently caused a shortage of face image data occluded by masks. To address this problem, this paper proposes a method for generating masked face images using a combination of generative adversarial networks and spatial transformer networks on top of a CNN model. The proposed system is based on a GAN, combined with multi-scale convolution kernels to extract features at different levels of detail in face images, and uses the Wasserstein divergence as the measure of distance between real and synthetic samples in order to optimize generator performance. Experiments show that the proposed method can effectively put masks on face images with high efficiency and fast response time, and that the synthesized face images are natural and realistic.
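
For context on the Wasserstein divergence mentioned above, a WGAN-div-style critic loss adds a gradient term evaluated at points interpolated between real and synthetic samples. The sketch below is a minimal illustration of that loss; the tiny critic network and the hyperparameters k=2, p=6 (the defaults from the WGAN-div paper) are assumptions, not the authors' setup.

import torch
import torch.nn as nn

def wasserstein_div_d_loss(critic, real, fake, k=2.0, p=6.0):
    """Critic loss with a Wasserstein-divergence gradient term.

    Interpolates between real and fake samples and penalizes the
    p-th power of the critic's gradient norm at those points.
    """
    d_real, d_fake = critic(real), critic(fake)
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return d_fake.mean() - d_real.mean() + k * grad_norm.pow(p).mean()

# Toy usage with a tiny convolutional critic on 64x64 images.
critic = nn.Sequential(
    nn.Conv2d(3, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, 2, 1), nn.Flatten(), nn.LazyLinear(1),
)
real = torch.randn(8, 3, 64, 64)
fake = torch.randn(8, 3, 64, 64)
print(wasserstein_div_d_loss(critic, real, fake).item())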

Spot The Difference Generation System Using Generative Adversarial Networks (생성적 적대 신경망을 활용한 다른 그림 찾기 생성 시스템)

  • Song, Seongheon;Moon, Mikyeong;Choi, Bongjun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.673-674
    • /
    • 2021
  • This paper proposes a system that makes it easy to generate "spot the difference" puzzles, a concentration-building game, from a background theme the user likes. To help with early prevention of attention deficit hyperactivity disorder (ADHD), which is mainly diagnosed in childhood and can persist into adulthood, the goal of this work is to take a portion of a selected picture, generate a new object with a generative adversarial network, and blend it naturally into the original picture. Creating a single spot-the-difference item normally requires an expert to spend a long time with a specialized tool such as Photoshop. The final goal of this study is to enable ordinary users to carry out this otherwise specialized work with ease.


Game Character Image Generation Using GAN (GAN을 이용한 게임 캐릭터 이미지 생성)

  • Jeoung-Gi Kim;Myoung-Jun Jung;Kyung-Ae Cha
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.5
    • /
    • pp.241-248
    • /
    • 2023
  • Generative Adversarial Networks (GAN) create highly sophisticated synthetic samples by learning from real images or text and inferring their common characteristics. They can therefore be useful in fields that require large-scale image or graphic creation. In this paper, we implement a GAN-based game character creation AI that can dramatically reduce illustration design costs by expanding and automating game character image creation. This is very efficient for game development, as it allows mass production of diverse character images at low cost.

Flaw Detection in LCD Manufacturing Using GAN-based Data Augmentation

  • Jingyi Li;Yan Li;Zuyu Zhang;Byeongseok Shin
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.124-125
    • /
    • 2023
  • Defect detection during liquid crystal display (LCD) manufacturing has always been a critical challenge. This study aims to address this issue by proposing a data augmentation method based on generative adversarial networks (GAN) to improve defect identification accuracy in LCD production. By leveraging synthetically generated image data from GAN, we effectively augment the original dataset to make it more representative and diverse. This data augmentation strategy enhances the model's generalization capability and robustness on real-world data. Compared to traditional data augmentation techniques, the synthetic data from GAN are more realistic, diverse and broadly distributed. Experimental results demonstrate that training models with GAN-generated data combined with the original dataset significantly improves the detection accuracy of critical defects in LCD manufacturing, compared to using the original dataset alone. This study provides an effective data augmentation approach for intelligent quality control in LCD production.
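
The augmentation strategy described above amounts to training the detector on the union of the real and GAN-generated defect images. A minimal PyTorch sketch of that data-mixing step is shown below; the folder layout and transforms are assumed for illustration, not taken from the paper.

import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Assumed folder layout: class-labelled defect images, one directory per class.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real_ds = datasets.ImageFolder("lcd_defects/real", transform=tfm)
gan_ds = datasets.ImageFolder("lcd_defects/gan_generated", transform=tfm)

# Train on the union of the original and the GAN-augmented samples.
train_ds = ConcatDataset([real_ds, gan_ds])
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

for images, labels in train_loader:
    # ...feed the mixed batch to the defect-detection model here...
    break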

Few-Shot Image Synthesis using Noise-Based Deep Conditional Generative Adversarial Nets

  • Msiska, Finlyson Mwadambo;Hassan, Ammar Ul;Choi, Jaeyoung;Yoo, Jaewon
    • Smart Media Journal
    • /
    • v.10 no.1
    • /
    • pp.79-87
    • /
    • 2021
  • In recent years, research on automatic font generation with machine learning has mainly focused on transformation-based methods; in comparison, generative model-based methods of font generation have received less attention. Transformation-based methods learn a mapping of transformations from an existing input to a target, which makes them ambiguous because in some cases a single input reference may correspond to multiple possible outputs. In this work, we focus on font generation using generative model-based methods, which learn to build up characters from noise to image. We propose a novel way to train a conditional generative deep neural model so that we can control font style in the generated font images. Our research demonstrates how to generate new font images conditioned on both character class labels and character style labels with generative model-based methods. We achieve this by introducing a modified generator network that takes noise, a character class label, and a character style label as inputs, which allows us to calculate losses separately for the character class labels and the character style labels. We show that adding the character style vector on top of the character class vector gives the model rich information about the font and enables us to explicitly specify not only the character class but also the character style that we want the model to generate.
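
As an illustration of conditioning on both labels, the sketch below shows a generator that concatenates noise with separate character-class and style embeddings. The embedding sizes, image resolution, and layer widths are assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

N_CHARS, N_STYLES, LATENT_DIM, IMG_SIZE = 52, 10, 100, 64  # assumed sizes

class ClassStyleGenerator(nn.Module):
    """Generates a glyph image conditioned on character class and font style."""
    def __init__(self):
        super().__init__()
        self.char_emb = nn.Embedding(N_CHARS, 32)
        self.style_emb = nn.Embedding(N_STYLES, 32)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 32 + 32, 512), nn.ReLU(),
            nn.Linear(512, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, z, char_ids, style_ids):
        cond = torch.cat([z, self.char_emb(char_ids), self.style_emb(style_ids)], dim=1)
        return self.net(cond).view(-1, 1, IMG_SIZE, IMG_SIZE)

# Toy usage: ask for character class 3 rendered in style 7.
G = ClassStyleGenerator()
z = torch.randn(4, LATENT_DIM)
imgs = G(z, torch.full((4,), 3), torch.full((4,), 7))
print(imgs.shape)  # torch.Size([4, 1, 64, 64])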