• Title/Summary/Keyword: Image generative model

Injection of Cultural-based Subjects into Stable Diffusion Image Generative Model

  • Amirah Alharbi;Reem Alluhibi;Maryam Saif;Nada Altalhi;Yara Alharthi
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.1-14 / 2024
  • While text-to-image models have made remarkable progress in image synthesis, certain models, particularly generative diffusion models, have exhibited a noticeable bias towards generating images related to the culture of some developing countries. This paper introduces an empirical investigation aimed at mitigating the bias of an image generative model. We achieve this by incorporating symbols representing Saudi culture into a Stable Diffusion model using the DreamBooth technique. The CLIP score metric is used to assess the outcomes of this study. The paper also explores the impact of varying parameters, such as the quantity of training images and the learning rate. The findings reveal a substantial reduction in bias-related concerns and propose an innovative metric for evaluating cultural relevance.
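
A minimal sketch of the CLIP score used for evaluation, computed as the cosine similarity between CLIP's image and text embeddings via the Hugging Face transformers library. The checkpoint name and the helper function are illustrative assumptions, not details taken from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; the paper does not specify which CLIP variant it used.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between the CLIP embeddings of an image and a prompt."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())
```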

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional, complex data distributions implicitly and to generate new samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: the deep convolutional generative adversarial network (DCGAN) for military image generation, and the cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of training neural networks on inexpensive synthetic images while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
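
For reference, a minimal PyTorch sketch of the DCGAN generator architecture this line of work builds on. The latent size, channel widths, and 64x64 output resolution are conventional DCGAN defaults, not values reported in the paper.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB image (standard DCGAN layout)."""
    def __init__(self, z_dim: int = 100, ngf: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),   # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),   # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),   # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                # 64x64, values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))
```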

Few-Shot Image Synthesis using Noise-Based Deep Conditional Generative Adversarial Nets

  • Msiska, Finlyson Mwadambo;Hassan, Ammar Ul;Choi, Jaeyoung;Yoo, Jaewon
    • Smart Media Journal / v.10 no.1 / pp.79-87 / 2021
  • In recent years, research on automatic font generation with machine learning has mainly focused on transformation-based methods; in comparison, generative model-based methods have received less attention. Transformation-based methods learn a mapping from an existing input to a target, which makes them ambiguous because a single input reference may correspond to multiple possible outputs. In this work, we focus on font generation using generative model-based methods, which learn to build up characters from noise to image. We propose a novel way to train a conditional generative deep neural model so that we can control the font style of the generated images. Our research demonstrates how to generate new font images conditioned on both character class labels and character style labels. We achieve this by introducing a modified generator network that takes noise, a character class, and a style as inputs, which allows us to compute losses separately for the character class labels and the character style labels. We show that adding the character style vector on top of the character class vector gives the model rich information about the font and enables us to explicitly specify not only the character class but also the character style that we want the model to generate.
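
A hypothetical sketch of the core idea: a generator that receives noise together with separate character-class and style embeddings. All layer sizes, the label counts, and the flat MLP decoder are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ClassStyleGenerator(nn.Module):
    """Generator conditioned on both a character class and a style label.

    The noise vector and the two label embeddings are concatenated and
    decoded into a 64x64 grayscale glyph. All sizes are illustrative.
    """
    def __init__(self, z_dim: int = 100, n_classes: int = 52,
                 n_styles: int = 10, emb_dim: int = 32):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, emb_dim)
        self.style_emb = nn.Embedding(n_styles, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2 * emb_dim, 256), nn.ReLU(True),
            nn.Linear(256, 512), nn.ReLU(True),
            nn.Linear(512, 64 * 64), nn.Tanh(),  # glyph pixels in [-1, 1]
        )

    def forward(self, z, char_class, style):
        h = torch.cat([z, self.class_emb(char_class), self.style_emb(style)], dim=1)
        return self.net(h).view(-1, 1, 64, 64)
```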

Transforming Text into Video: A Proposed Methodology for Video Production Using the VQGAN-CLIP Image Generative AI Model

  • SukChang Lee
    • International Journal of Advanced Culture Technology / v.11 no.3 / pp.225-230 / 2023
  • With the development of AI technology, there is a growing discussion about text-to-image generative AI. We presented a generative AI video production method and delineated a methodology for producing personalized AI-generated videos, with the objective of broadening the landscape of the video domain. We then examined the procedural steps involved in AI-driven video production and directly implemented a video creation approach using the VQGAN-CLIP model. The outcomes produced by the VQGAN-CLIP model exhibited relatively moderate resolution and frame rate and manifested predominantly as abstract images. These characteristics indicate potential applicability to OTT-based video content or the visual arts. We anticipate that AI-driven video production techniques will see heightened use in forthcoming endeavors.
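
As a small practical complement, the sketch below assembles individually generated frames into a video clip with OpenCV. The function name, codec, and frame rate are assumptions for illustration, not the paper's actual pipeline.

```python
import cv2
import numpy as np

def frames_to_video(frames: list[np.ndarray], path: str = "out.mp4",
                    fps: int = 12) -> None:
    """Write a list of HxWx3 RGB uint8 frames to an MP4 clip."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in frames:
        writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))  # OpenCV expects BGR order
    writer.release()
```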

Generative Adversarial Networks for single image with high quality image

  • Zhao, Liquan;Zhang, Yupeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4326-4344 / 2021
  • SinGAN is a generative adversarial network that can be trained on a single natural image. It has a poor ability to learn global features from the image, and it loses much local detail when generating samples of arbitrary size. To solve this problem, we first propose a non-linear function to control the downsampling ratio, i.e., the ratio between the size of the current image and the size of the next downsampled image, so that the ratio increases with the number of downsampling steps. This gives the low-resolution images obtained by downsampling a higher proportion among all downsampled images. Because low-resolution images usually contain much global information, this helps the model learn more global features from the downsampled images. Second, an attention mechanism is introduced into the generative network to increase the weight of effective image information, which makes the network learn more local details. Besides, to make the output image more natural, the TV loss function is added to the loss function of SinGAN to reduce the difference between adjacent pixels and the smearing in the output image. A large number of experimental results show that our proposed model performs better than other methods in generating random samples of fixed and arbitrary size, image harmonization, and image editing.
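
The total-variation (TV) loss term mentioned above has a standard form; a minimal PyTorch sketch for NCHW image batches follows (the L1 variant shown here is one common choice, assumed rather than confirmed by the abstract).

```python
import torch

def tv_loss(img: torch.Tensor) -> torch.Tensor:
    """Total-variation loss on an NCHW batch: penalizes differences between adjacent pixels."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()  # vertical neighbors
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()  # horizontal neighbors
    return dh + dw
```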

Development of a Shoe Recommendation Model for Matching Outfits Using Generative Artificial Intelligence (생성형 인공지능을 활용한 신발 추천 모델 개발)

  • Jun Woo CHOI
    • Journal of Korean Artificial Intelligence Association / v.1 no.1 / pp.7-10 / 2023
  • This study proposes an AI-based shoe recommendation model built on user clothing image data to address challenges in the global fashion industry, which are worsening due to factors such as the economic downturn. Shoes are an important part of modern fashion, and this research aims to improve user satisfaction and contribute to economic growth through a generative AI-based shoe recommendation service. By applying generative AI to the personalized consumer market, we show the feasibility and efficiency of the approach, as well as its improvements, through an accessible web-based implementation. In conclusion, this study provides insights that help fulfill consumer needs in the ever-changing fashion market by implementing a generative AI-based shoe recommendation model.

A Study on Super Resolution Image Reconstruction for Acquired Images from Naval Combat System using Generative Adversarial Networks (생성적 적대 신경망을 이용한 함정전투체계 획득 영상의 초고해상도 영상 복원 연구)

  • Kim, Dongyoung
    • Journal of Digital Contents Society / v.19 no.6 / pp.1197-1205 / 2018
  • In this paper, we perform single image super resolution (SISR) on images acquired from the EOTS or IRST of a naval combat system. To conduct super resolution, we use generative adversarial networks (GANs), which consist of a generative model that creates a super-resolution image from a given low-resolution image and a discriminative model that determines whether the generated super-resolution image qualifies as a high-resolution image. We adjust various learning parameters: the crop size of the input image, the depth of the sub-pixel layer, and the types of training images. For evaluation, we apply not only general image quality metrics but also feature descriptor methods. As a result, a larger crop size, a deeper sub-pixel layer, and high-resolution training images yield good performance.
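
A minimal sketch of a sub-pixel upsampling block whose depth can be varied, in the spirit of the "depth of the sub-pixel layer" parameter studied here. The channel count, kernel size, and activation are illustrative assumptions.

```python
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Stack of sub-pixel (PixelShuffle) stages; `depth` sets how many 2x upscales are applied."""
    def __init__(self, channels: int = 64, depth: int = 2):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [
                nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
                nn.PixelShuffle(2),  # rearranges 4x channels into a 2x larger feature map
                nn.PReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```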

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN) together with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator use the Wasserstein distance as the loss function in a minimax game. The DCGAN then restores the corrupted regions of the captured facial depth images through an additional learning procedure using the trained generator and a new loss function.
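
A minimal sketch of the Wasserstein losses for the critic (discriminator) and generator in the minimax game described above. Note that WGAN training also requires a Lipschitz constraint on the critic (weight clipping or a gradient penalty), which this sketch omits.

```python
import torch

def critic_loss(real_scores: torch.Tensor, fake_scores: torch.Tensor) -> torch.Tensor:
    # The critic maximizes E[D(real)] - E[D(fake)]; we minimize the negation.
    return fake_scores.mean() - real_scores.mean()

def generator_loss(fake_scores: torch.Tensor) -> torch.Tensor:
    # The generator tries to raise the critic's score on generated samples.
    return -fake_scores.mean()
```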

An Edge Detection Technique for Performance Improvement of eGAN (eGAN 모델의 성능개선을 위한 에지 검출 기법)

  • Lee, Cho Youn;Park, Ji Su;Shon, Jin Gon
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.109-114 / 2021
  • A GAN (generative adversarial network) is an image generation model composed of a generator network and a discriminator network, and it generates images similar to real ones. Since the images generated by a GAN should resemble real images, a loss function is used to minimize the error of the generated image. However, the GAN loss function can make the image-generation training unstable and thereby degrade image quality. To solve this problem, this paper analyzes GAN-related studies and proposes an edge GAN (eGAN) that uses edge detection. Experimental results show that the eGAN model improves on the performance of the existing GAN model.
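
A sketch of an edge-detection step that could feed such a model. The use of the Canny detector and its thresholds are assumptions, since the abstract does not say which edge-detection technique eGAN employs.

```python
import cv2

def edge_map(image_path: str):
    """Return a binary edge map that could condition or augment GAN training."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Canny(gray, threshold1=100, threshold2=200)  # thresholds are illustrative
```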

Deep Learning-based Single Image Generative Adversarial Network: Performance Comparison and Trends (딥러닝 기반 단일 이미지 생성적 적대 신경망 기법 비교 분석)

  • Jeong, Seong-Hun;Kong, Kyeongbo
    • Journal of Broadcast Engineering / v.27 no.3 / pp.437-450 / 2022
  • Generative adversarial networks (GANs) have demonstrated remarkable success in image synthesis. However, since GANs are unstable when trained on large datasets, they are difficult to apply to various application fields. Single image GANs are a line of work that generates diverse images by learning the internal distribution of a single image. In this paper, we investigate five single image GANs: SinGAN, ConSinGAN, InGAN, DeepSIM, and One-Shot GAN. We compare the performance of each model and analyze its pros and cons.
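
Several of the surveyed models (e.g., SinGAN and ConSinGAN) train coarse-to-fine over a pyramid of the single training image, with one generator per scale. A minimal sketch of building such a pyramid follows; the scale factor and number of scales are illustrative assumptions.

```python
from PIL import Image

def image_pyramid(path: str, num_scales: int = 5, ratio: float = 0.75):
    """Coarse-to-fine pyramid of one image, as used by pyramid-based single image GANs."""
    img = Image.open(path).convert("RGB")
    pyramid = []
    for i in range(num_scales):
        s = ratio ** (num_scales - 1 - i)  # coarsest scale first
        size = (max(1, round(img.width * s)), max(1, round(img.height * s)))
        pyramid.append(img.resize(size))
    return pyramid
```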