• Title/Summary/Keyword: Generative adversarial network (GAN)


Generative Adversarial Network based CNN model for artifact reduction on HEVC-encoded video (HEVC 비디오 영상 압축 왜곡 제거를 위한 Generative Adversarial Network 적용 기법)

  • Jeon, Jin;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.192-193 / 2017
  • In this paper, we propose a convolutional neural network (CNN) model that applies a Generative Adversarial Network (GAN) to remove video compression artifacts. Instead of noise, the generator of the GAN takes as input a video frame compressed with High Efficiency Video Coding (HEVC) and outputs a frame with the compression artifacts removed, while the discriminator takes original and compressed frames as input and classifies them as either original frames or compressed frames containing artifacts. The discriminator uses a five-layer convolutional neural network; for the generator, we compared experimental results from two models, one based on a five-layer SRCNN structure and one based on the VDSR structure. For the artifact-removal experiments, original videos were compressed with HEVC at 2 Mbps and 4 Mbps, and the output frames showed reduced artifacts compared with the compressed inputs.

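The generator/discriminator objectives described in the abstract above can be sketched as follows. This is a minimal illustration of the standard GAN losses with an added fidelity term, not the paper's implementation; the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # Binary cross-entropy: the discriminator should score original frames
    # near 1 and artifact-removed (generated) frames near 0.
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, restored, original, lam=0.01, eps=1e-8):
    # The fidelity (MSE) term keeps the restored frame close to the original;
    # the adversarial term, weighted by the hypothetical lam, rewards
    # fooling the discriminator.
    mse = float(np.mean((np.asarray(restored) - np.asarray(original)) ** 2))
    adv = float(-np.mean(np.log(np.asarray(d_fake) + eps)))
    return mse + lam * adv
```

A confident, correct discriminator (scoring real frames high and generated frames low) incurs a much smaller loss than a confidently wrong one.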

Anomaly detection in particulate matter sensor using hypothesis pruning generative adversarial network

  • Park, YeongHyeon;Park, Won Seok;Kim, Yeong Beom
    • ETRI Journal / v.43 no.3 / pp.511-523 / 2021
  • The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level poses a threat to human health. To manage the PM level, the PM value must first be measured. We use a PM sensor that collects the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta attenuation monitor-based sensor or a tapered element oscillating microbalance-based sensor. However, an LLS-based sensor has a higher probability of malfunctioning than the higher-cost sensors. In this paper, we regard all such malfunctions, including the collection of anomalous values or missing data, as anomalies, and we aim to detect them for the maintenance of PM measuring sensors. We propose a novel architecture for this purpose that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in the detection of anomalies in LLS-based PM measuring sensors. We conclude that our HP-GAN is a cutting-edge model for anomaly detection.
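The AUROC figure reported above can be computed directly from anomaly scores with the rank statistic, as in this small sketch (unrelated to the paper's code):

```python
import numpy as np

def auroc(scores, labels):
    # Rank-based AUROC: the probability that a randomly chosen anomaly
    # (label 1) receives a higher anomaly score than a randomly chosen
    # normal sample (label 0); ties count as half.
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)
```

Perfect separation of anomalies from normal samples yields an AUROC of 1.0; random scoring tends toward 0.5.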

3D Point Cloud Enhancement based on Generative Adversarial Network (생성적 적대 신경망 기반 3차원 포인트 클라우드 향상 기법)

  • Moon, HyungDo;Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1452-1455 / 2021
  • Recently, point clouds generated by capturing real spaces in 3D have been actively applied in services for performances, exhibitions, education, and training. Such point cloud data require post-correction to be usable in virtual environments because of errors caused by the sensors and cameras in the capture environment. In this paper, we propose an enhancement technique for 3D point cloud data that applies a generative adversarial network (GAN): noisy point clouds are fed to the GAN, which regenerates them. With the method presented in this paper, point clouds with heavy noise are reconstructed into the same shape as the real object and environment, enabling precise interaction with the reconstructed content.

Automatic Generation of Training Corpus for a Sentiment Analysis Using a Generative Adversarial Network (생성적 적대 네트워크를 이용한 감성인식 학습데이터 자동 생성)

  • Park, Cheon-Young;Choi, Yong-Seok;Lee, Kong Joo
    • Annual Conference on Human and Language Technology / 2018.10a / pp.389-393 / 2018
  • Advances in deep learning have driven major progress in natural language processing areas such as machine translation and dialogue systems. Improving the performance of deep learning models requires large amounts of data, but collecting such data takes considerable time and effort. In this study, we apply a generative adversarial network (GAN), which has shown strong performance as an image generation model, to sentence generation. We modify the SeqGAN model to automatically generate sentences conditioned on positive/negative polarity, and test whether a SeqGAN with a classifier can automatically generate positive/negative training data for sentiment recognition. The experiments show that training on a mixture of sentences generated by the classifier-augmented SeqGAN and real training data yields better accuracy than training on the real data alone.


Constrained adversarial loss for generative adversarial network-based faithful image restoration

  • Kim, Dong-Wook;Chung, Jae-Ryun;Kim, Jongho;Lee, Dae Yeol;Jeong, Se Yoon;Jung, Seung-Won
    • ETRI Journal / v.41 no.4 / pp.415-425 / 2019
  • Generative adversarial networks (GANs) have been successfully used in many image restoration tasks, including image denoising, super-resolution, and compression-artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than merely visually appealing image reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN training methods that use a loss function in the form of a weighted sum of fidelity and adversarial losses fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including the peak signal-to-noise ratio. Our approach alternates between the fidelity and adversarial losses such that minimizing the adversarial loss does not deteriorate the fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method achieves faithful and photorealistic image restoration.
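The alternation idea described above, replacing a fixed weighted sum with phase-wise objectives, can be sketched as follows. The even/odd schedule is a hypothetical simplification for illustration; the paper's actual constrained schedule may differ.

```python
import numpy as np

def alternating_step_loss(step, restored, original, d_fake, eps=1e-8):
    # Hypothetical schedule illustrating the idea: even steps update the
    # generator on the fidelity (MSE) loss only, odd steps on the
    # adversarial loss only, instead of a fixed weighted sum of both.
    if step % 2 == 0:
        return float(np.mean((np.asarray(restored) - np.asarray(original)) ** 2))
    return float(-np.mean(np.log(np.asarray(d_fake) + eps)))
```

Because the fidelity loss is optimized on its own in dedicated steps, the adversarial phases cannot silently trade away PSNR the way a single weighted-sum loss can.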

A Broken Image Screening Method based on Histogram Analysis to Improve GAN Algorithm (GAN 알고리즘 개선을 위한 히스토그램 분석 기반 파손 영상 선별 방법)

  • Cho, Jin-Hwan;Jang, Jongwook;Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.591-597 / 2022
  • Recently, many studies have examined data augmentation as a way to build datasets efficiently. A representative data augmentation technique uses a generative adversarial network (GAN), which generates data similar to real data by training a generator and a discriminator competitively. However, during GAN training, images with broken pixels can appear among the generated data, depending on the environment and the training progress; these images cannot be used in a dataset and increase training time. In this paper, we developed an algorithm that screens out such broken images by analyzing the histograms of the image data generated during GAN training. Compared with the images produced by the existing GAN, the ratio of broken images was reduced by a factor of 33.3 (3,330%).
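A histogram-based screen of the kind described above might look like this sketch. The single-peak rule and the `peak_ratio` threshold are hypothetical stand-ins for the paper's actual criterion:

```python
import numpy as np

def is_broken(img, bins=32, peak_ratio=0.5):
    # Hypothetical screening rule: if one intensity-histogram bin holds more
    # than peak_ratio of all pixels (e.g. a large flat broken region), reject
    # the generated image instead of adding it to the training dataset.
    hist, _ = np.histogram(np.asarray(img), bins=bins, range=(0, 255))
    return bool(hist.max() / hist.sum() > peak_ratio)
```

A completely flat image concentrates all its mass in one bin and is rejected, while an image with a spread-out intensity distribution passes.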

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng;Jin, Rize;Lu, Liangfu;Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2115-2127 / 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing are still lacking. In this work, we propose a novel cross-channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while editing the target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing the Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.
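The channel-weighting mechanism mentioned above can be illustrated with a generic channel self-attention sketch. This is a textbook-style construction, not the paper's exact CCA module:

```python
import numpy as np

def cross_channel_attention(feat):
    # Generic channel self-attention (an illustrative sketch): channels are
    # re-weighted by a softmax over their pairwise affinities, so strongly
    # related channels reinforce each other while unrelated ones are damped.
    c = feat.shape[0]
    flat = feat.reshape(c, -1)                      # (C, H*W)
    affinity = flat @ flat.T                        # (C, C) channel affinities
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # row-wise softmax
    return (w @ flat).reshape(feat.shape)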

A Study on the Complementary Method of Aerial Image Learning Dataset Using Cycle Generative Adversarial Network (CycleGAN을 활용한 항공영상 학습 데이터 셋 보완 기법에 관한 연구)

  • Choi, Hyeoung Wook;Lee, Seung Hyeon;Kim, Hyeong Hun;Suh, Yong Cheol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.499-509 / 2020
  • This study explores how to build object-classification training data based on artificial intelligence. Such data have recently attracted attention in image-classification research and have great potential for use. Recognizing and extracting objects relatively accurately with artificial intelligence requires a large amount of training data for the algorithms, but there are currently not enough shared, reusable datasets for object-recognition training, and generating data requires long hours of work, high expense, and labor. Therefore, in the present study, a small amount of initial aerial-image training data was fed to a GAN (Generative Adversarial Network)-based generator network to build additional image training data, and the experiment evaluated the quality of the generated data for use as an additional training dataset. Oversampling training data with a GAN can supplement the amount of training data, which has a crucial influence on deep learning, and is expected to be effective particularly when the initial dataset is insufficient.
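CycleGAN, named in the title above, is built around the cycle-consistency loss, which is what lets it learn from unpaired images. A minimal sketch of that loss (with the two mapping networks passed in as plain callables):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    # L1 cycle loss at the core of CycleGAN: mapping domain A -> B and back
    # B -> A should reconstruct the input, so unpaired aerial images can be
    # used as training data without pixel-aligned ground truth.
    x = np.asarray(x, float)
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))
```

With a pair of mappings that are exact inverses, the loss is zero; any reconstruction error is penalized proportionally.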

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.713-725 / 2022
  • Most artificial intelligence shows no definite change in emotion, which makes it hard to demonstrate empathy in communication with humans. If frequency modification is applied to a neutral emotion, or a different emotional frequency is added to it, artificial intelligence with emotions can be developed. This study proposes emotion conversion using voice-frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech data of twenty-four actors and actresses; that is, it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotion. It then generates a frequency with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to produce prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by employing amplitude scaling: using a spectral conversion on a logarithmic scale, the frequency is converted in consideration of human hearing characteristics. Accordingly, the proposed technique provides emotion conversion of speech so that artificially generated voices can express emotions.
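Log-scale frequency correction of the kind mentioned above is commonly done in voice conversion with the log-Gaussian normalized F0 transformation; the sketch below shows that standard technique as a hedged stand-in for the paper's amplitude-scaling step, not its actual procedure:

```python
import numpy as np

def convert_f0_log_gaussian(f0, mu_src, sd_src, mu_tgt, sd_tgt):
    # Log-Gaussian normalized F0 transformation: standardize the source
    # log-F0 contour by the source speaker's log-domain mean/std, then map
    # it onto the target's statistics. Working in log frequency matches
    # the roughly logarithmic pitch perception of human hearing.
    logf = np.log(np.asarray(f0, float))
    return np.exp((logf - mu_src) / sd_src * sd_tgt + mu_tgt)
```

A contour centered at the source mean is moved to the target mean while its relative (log-domain) excursions are rescaled by the ratio of standard deviations.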

GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk;Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • With the growing interest in deep learning, techniques to control the color of images are evolving in the image-processing field. However, there is no clear standard for color, and it is not easy to find a way to represent only the color itself, as a color palette does. In this paper, we propose a novel color-palette extraction system that fine-tunes chroma with reinforcement learning, helping to recognize the color combination that represents an input image. First, we create feature maps from RGBY images by transferring a backbone network with well-trained model weights verified on super-resolution convolutional neural networks. Second, the feature maps are fed to three fully connected layers trained for color-palette generation with a generative adversarial network (GAN). Third, we use a reinforcement learning method that changes only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color-palette extraction methods with an accuracy of 0.9140.
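The YCbCr adjustment described above relies on converting pixels between RGB and YCbCr and nudging one component. The sketch below uses the standard full-range BT.601 conversion as an illustration; the exact conversion and step size used in the paper are not specified here, so `delta` is a hypothetical parameter.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range BT.601 conversion; rgb values in [0, 255], shape (..., 3).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    # Inverse of the full-range BT.601 conversion above.
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def nudge_luma(rgb, delta):
    # Fine-tuning step as described in the abstract: shift the Y component
    # of each pixel slightly up or down and convert back to RGB.
    ycc = rgb_to_ycbcr(np.asarray(rgb, float))
    ycc[..., 0] = np.clip(ycc[..., 0] + delta, 0.0, 255.0)
    return np.clip(ycbcr_to_rgb(ycc), 0.0, 255.0)
```

Because the two conversions are exact inverses (up to rounding), a round trip reproduces the input pixel, and a `delta` shift moves Y by exactly that amount when no clipping occurs.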