• Title/Abstract/Keywords: Generative Adversarial Networks


생성적 적대 신경망을 이용한 음향 도플러 기반 무 음성 대화기술 (An acoustic Doppler-based silent speech interface technology using generative adversarial networks)

  • 이기승
    • 한국음향학회지, Vol. 40, No. 2, pp. 161-168, 2021
  • This paper proposes a silent speech interface technology that radiates a 40 kHz ultrasonic signal around the speaker's mouth and synthesizes speech by detecting the Doppler shift of the returning signal. A silent speech interface builds mapping rules between feature variables extracted from the non-speech signal and the parameters of the corresponding speech signal, and synthesizes speech using those rules. Conventional silent speech interfaces derive the mapping rules by minimizing the error between the estimated and the actual speech parameters. In this study, a generative adversarial network is introduced so that the mapping rules instead make the distribution of the estimated speech parameters resemble that of the actual ones. In experiments on 60 Korean utterances, the proposed method outperformed the conventional neural-network-based method on both objective and subjective measures.
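The shift this abstract describes, from minimizing per-sample parameter error to adversarially matching the distribution of speech parameters, can be sketched by placing the two objectives side by side. This is a minimal numpy illustration with toy stand-in values, not the authors' model:

```python
import numpy as np

# Conventional mapping: minimize the per-sample error between estimated
# and actual speech parameters.
def mse_loss(pred, target):
    return float(np.mean((pred - target) ** 2))

# GAN objective: a discriminator D scores real vs. estimated parameters,
# and the generator is trained so its outputs are scored as real.
# d_real / d_fake stand in for D's probability outputs.
def gan_losses(d_real, d_fake):
    eps = 1e-12  # numerical safety for log
    d_loss = -float(np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))
    g_loss = -float(np.mean(np.log(d_fake + eps)))
    return d_loss, g_loss

rng = np.random.default_rng(0)
real_params = rng.normal(size=(8, 4))  # actual speech parameters
est_params = real_params + 0.3         # parameters estimated from Doppler features
print(mse_loss(est_params, real_params))            # 0.09
print(gan_losses(np.full(8, 0.9), np.full(8, 0.1)))
```

The MSE term compares each estimated parameter to its target directly, while the adversarial term only asks whether the estimates are distinguishable from real parameters as a population, which is the distribution-matching idea the paper exploits.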

GAN 신경망을 통한 자각적 사진 향상 (Perceptual Photo Enhancement with Generative Adversarial Networks)

  • 궐월;이효종
    • 한국정보처리학회 학술대회논문집, 2019 Spring Conference, pp. 522-524, 2019
  • Despite rapid improvements in the quality of built-in mobile cameras, their physical restrictions prevent them from matching the results of digital single-lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method that translates ordinary mobile-camera images into DSLR-quality photos. The method is based on the generative adversarial network (GAN) framework with several improvements. First, we combine U-Net with DenseNet, connecting dense blocks (DBs) within the U-Net structure; this Dense U-Net acts as the generator in our GAN model. Second, we improve the perceptual loss by combining VGG features with pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery.
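The combined loss sketched in this abstract, a pixel-wise content term plus a feature-space term, can be illustrated as below. The 2x2-mean-pool "extractor" and the weights are assumptions for illustration; a real implementation would use activations from a pretrained VGG network:

```python
import numpy as np

# Pixel-wise content term (L1 here; the exact norm is an assumption).
def pixel_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

# Feature-space term: MSE between extracted feature maps.
def feature_loss(pred_feat, target_feat):
    return float(np.mean((pred_feat - target_feat) ** 2))

def perceptual_loss(pred, target, extract, w_pix=1.0, w_feat=0.01):
    # weighted sum of the content and feature terms; weights are illustrative
    return w_pix * pixel_loss(pred, target) + \
           w_feat * feature_loss(extract(pred), extract(target))

# Toy "feature extractor": 2x2 mean pooling, NOT real VGG features.
def toy_extract(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

enhanced = np.zeros((4, 4))  # stand-in for the generator's output
dslr = np.ones((4, 4))       # stand-in for the DSLR target photo
print(perceptual_loss(enhanced, dslr, toy_extract))  # 1.0 * 1.0 + 0.01 * 1.0 = 1.01
```

The pixel term anchors low-level content while the feature term compares images in a representation space, which is what gives perceptual losses their stronger supervision for texture.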

Face Recognition Research Based on Multi-Layers Residual Unit CNN Model

  • Zhang, Ruyang;Lee, Eung-Joo
    • 한국멀티미디어학회논문지, Vol. 25, No. 11, pp. 1582-1590, 2022
  • The widespread use of masks during the coronavirus pandemic has caused a shortage of mask-occluded face image data. To address this problem, this paper proposes a method for generating masked face images using a combination of generative adversarial networks and spatial transformer networks built on a CNN model. The proposed system is based on a GAN combined with multi-scale convolution kernels that extract features at different levels of detail in face images, and it uses the Wasserstein divergence as the distance measure between real and synthetic samples to optimize generator performance. Experiments show that the proposed method puts masks on face images effectively, with high efficiency and fast response time, and that the synthesized face images look natural and realistic.

생성적 적대 신경망을 활용한 다른 그림 찾기 생성 시스템 (Spot The Difference Generation System Using Generative Adversarial Networks)

  • 송성헌;문미경;최봉준
    • 한국컴퓨터정보학회 학술대회논문집, 64th Summer Conference 2021, Vol. 29, No. 2, pp. 673-674, 2021
  • This paper proposes a system that makes it easy to generate "spot the difference" puzzles, a concentration-training game, from images of a subject the user likes. Aiming at early prevention of attention deficit hyperactivity disorder (ADHD), which is usually diagnosed in childhood and can persist into adulthood, the goal is to use a generative adversarial network to generate a new object from part of a selected image and blend it naturally into the original. Creating a single spot-the-difference item normally requires an expert working for a long time with a professional tool such as Photoshop. The ultimate goal of this study is to let non-experts perform this previously specialized task with ease.


GAN을 이용한 게임 캐릭터 이미지 생성 (Game Character Image Generation Using GAN)

  • 김정기;정명준;차경애
    • 대한임베디드공학회논문지, Vol. 18, No. 5, pp. 241-248, 2023
  • Generative Adversarial Networks (GANs) create highly sophisticated synthetic data by learning from real images or text and inferring their common features, so they are useful in fields that require large-scale image or graphics generation. In this paper, we implement a GAN-based game-character-creation AI that can dramatically reduce illustration design costs by expanding and automating game character image creation. This is highly efficient for game development, as it allows mass production of diverse character images at low cost.

Flaw Detection in LCD Manufacturing Using GAN-based Data Augmentation

  • Jingyi Li;Yan Li;Zuyu Zhang;Byeongseok Shin
    • 한국정보처리학회 학술대회논문집, 2023 Fall Conference, pp. 124-125, 2023
  • Defect detection during liquid crystal display (LCD) manufacturing has always been a critical challenge. This study aims to address this issue by proposing a data augmentation method based on generative adversarial networks (GAN) to improve defect identification accuracy in LCD production. By leveraging synthetically generated image data from GAN, we effectively augment the original dataset to make it more representative and diverse. This data augmentation strategy enhances the model's generalization capability and robustness on real-world data. Compared to traditional data augmentation techniques, the synthetic data from GAN are more realistic, diverse and broadly distributed. Experimental results demonstrate that training models with GAN-generated data combined with the original dataset significantly improves the detection accuracy of critical defects in LCD manufacturing, compared to using the original dataset alone. This study provides an effective data augmentation approach for intelligent quality control in LCD production.

Few-Shot Image Synthesis using Noise-Based Deep Conditional Generative Adversarial Nets

  • Msiska, Finlyson Mwadambo;Hassan, Ammar Ul;Choi, Jaeyoung;Yoo, Jaewon
    • 스마트미디어저널, Vol. 10, No. 1, pp. 79-87, 2021
  • In recent years, research on automatic font generation with machine learning has focused mainly on transformation-based methods, while generative model-based methods have received less attention. Transformation-based methods learn a mapping from an existing input to a target, which makes them ambiguous: a single input reference may correspond to multiple possible outputs. In this work, we focus on font generation with generative model-based methods, which learn to build up characters from noise to image. We propose a novel way to train a conditional deep generative model that achieves font style control over the generated font images. Our research demonstrates how to generate new font images conditioned on both character class labels and character style labels. We achieve this with a modified generator network that takes noise, a character class, and a style as inputs, allowing losses to be computed separately for the character class and character style labels. We show that adding the character style vector on top of the character class vector gives the model rich information about the font and lets us explicitly specify not only the character class but also the character style to generate.
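The conditioning scheme this abstract describes, noise plus a class vector plus a separate style vector, can be sketched as follows. The dimensions and one-hot encoding are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

# Generator input: noise concatenated with a one-hot character class
# vector and a separate one-hot style vector, so class and style can be
# controlled (and penalized) independently.
def make_generator_input(noise, char_class, style, n_classes, n_styles):
    cls = np.eye(n_classes)[char_class]  # one-hot character class label
    sty = np.eye(n_styles)[style]        # one-hot character style label
    return np.concatenate([noise, cls, sty])

z = np.random.default_rng(0).normal(size=16)  # noise vector
x = make_generator_input(z, char_class=3, style=1, n_classes=26, n_styles=4)
print(x.shape)  # (46,) = 16 noise + 26 class + 4 style
```

Because the class and style segments occupy disjoint positions of the input vector, separate losses can supervise each label, which is the separation the authors exploit for explicit style control.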

SkelGAN: A Font Image Skeletonization Method

  • Ko, Debbie Honghee;Hassan, Ammar Ul;Majeed, Saima;Choi, Jaeyoung
    • Journal of Information Processing Systems, Vol. 17, No. 1, pp. 1-13, 2021
  • In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, in contrast to state-of-the-art methods that use mathematical algorithms. Several studies have addressed skeletonization, but few have utilized deep learning, and none has considered deep generative models for skeletonizing font characters, which are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters with our end-to-end deep adversarial network, SkelGAN. The proposed skeleton generator proves superior to well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the dominance of our method over a state-of-the-art supervised image-to-image translation method on the font character skeletonization task.

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong;Wu, Xiao-Jun;Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 11, pp. 5427-5445, 2019
  • While deep neural networks have achieved remarkable performance in representation learning, supervised deep models such as convolutional neural networks usually require a huge amount of labeled training data. In this paper, we propose a new representation learning method, generative adversarial network-based bagging deep convolutional autoencoders (GAN-BDCAE), which maps data to diverse hierarchical representations in an unsupervised fashion. Enlarging the training data, training deeper models, and aggregating diverse learners are three principal avenues for increasing the representation learning capability of neural networks, and we combine all three: we adopt a GAN for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate its superiority over traditional unsupervised learning methods.

사용자 인식을 위한 가상 심전도 신호 생성 기술에 관한 연구 (A Study on the Synthetic ECG Generation for User Recognition)

  • 김민구;김진수;반성범
    • 스마트미디어저널, Vol. 8, No. 4, pp. 33-37, 2019
  • An ECG signal is time-series data whose measurements vary with time and environment, so comparison data of the same length as the enrolled data must be acquired at every attempt. To solve this signal-length mismatch problem, this paper proposes an Auxiliary Classifier Generative Adversarial Network (ACGAN) model for generating synthetic biosignals. Cosine similarity and cross-correlation were used to verify the similarity of the generated signals. In the experiments, the average cosine similarity was 0.991, and the Euclidean-distance-based similarity computed via cross-correlation averaged 0.25. This confirms that generating synthetic biosignals resolves the signal-length mismatch problem even when the enrolled and test data differ in length.
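The two similarity measures used in this evaluation, cosine similarity and normalized cross-correlation, can be sketched on toy signals. The sinusoids below are stand-ins for ECG beats, not real data:

```python
import numpy as np

# Cosine similarity between an enrolled signal and a synthetic one.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Peak of the normalized cross-correlation, which tolerates time shifts.
def max_cross_correlation(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((np.correlate(a, b, mode="full") / len(a)).max())

t = np.linspace(0.0, 1.0, 100)
enrolled = np.sin(2 * np.pi * 5 * t)         # toy stand-in for an enrolled ECG beat
synthetic = np.sin(2 * np.pi * 5 * t + 0.1)  # slightly phase-shifted synthetic beat
print(cosine_similarity(enrolled, synthetic))
print(max_cross_correlation(enrolled, synthetic))
```

Cosine similarity compares the signals sample by sample, while the cross-correlation peak searches over lags, which is why the two measures behave differently when the enrolled and synthetic signals are not perfectly aligned.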