• Title/Summary/Keyword: 3D-GAN (3D Generative Adversarial Network)


Context-Sensitive Spelling Error Correction Techniques in Korean Documents using Generative Adversarial Network (생성적 적대 신경망(GAN)을 이용한 한국어 문서에서의 문맥의존 철자오류 교정)

  • Lee, Jung-Hun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.10
    • /
    • pp.1391-1402
    • /
    • 2021
  • This paper focuses on context-sensitive spelling error correction using a generative adversarial network (GAN). GANs [1] are attracting attention because they address the data generation problems that have been a challenge in the field of deep learning. In this paper, sentences are generated using word embedding information and reflected in the word distribution representation. We experiment with DCGAN [2], used to stabilize learning in image processing, and with D2GAN [3], which uses two discriminators. We examine how the composition of the generative adversarial network and changes to the training corpus influence context-sensitive spelling error correction. In the experiment, we perform correction with the generated word embedding information and compare its performance against that of the actual word embedding information.
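As a rough illustration of the adversarial objective underlying both DCGAN and D2GAN (the actual D2GAN objective pairs two discriminators with a different formulation; see [3]), the standard GAN losses can be sketched with hypothetical discriminator scores:

```python
import numpy as np

# Hypothetical discriminator outputs in (0, 1); in the paper these would come
# from networks operating on word-embedding representations of sentences.
def discriminator_loss(d_real, d_fake):
    # Standard GAN discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]
    return float(-(np.log(d_real).mean() + np.log(1.0 - d_fake).mean()))

def generator_loss(d_fake):
    # Non-saturating generator loss: -E[log D(G(z))]
    return float(-np.log(d_fake).mean())

# A perfectly fooled discriminator (D(G(z)) = 0.5) gives the generator
# a loss of log 2.
print(round(generator_loss(np.array([0.5, 0.5])), 4))
```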

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.23 no.5
    • /
    • pp.614-621
    • /
    • 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN), with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets used to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained in a minimax game with the Wasserstein distance as the loss function. The DCGAN then restores the corrupted regions of captured facial depth images through a further learning procedure that uses the trained generator and a new loss function.
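The Wasserstein minimax objective the abstract describes can be sketched minimally as follows (the scores are arbitrary stand-ins for critic outputs; the paper's networks and any weight constraints are not reproduced):

```python
import numpy as np

def critic_loss(scores_real, scores_fake):
    # The WGAN critic maximizes E[D(real)] - E[D(fake)];
    # written as a loss to minimize, the sign is flipped.
    return float(scores_fake.mean() - scores_real.mean())

def wgan_generator_loss(scores_fake):
    # The generator maximizes E[D(G(z))], i.e. minimizes -E[D(G(z))].
    return float(-scores_fake.mean())
```

Training alternates between the two: the critic widens the score gap between real and generated depth images, and the generator narrows it.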

Applications of Generative Adversarial Networks (Generative Adversarial Networks의 응용 현황)

  • Kim, Dong-Wook;Kim, Sesong;Jung, Seung-Won
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.11a
    • /
    • pp.807-809
    • /
    • 2017
  • We briefly describe generative adversarial networks (GAN) and aid understanding of the GAN architecture through a simple experiment on MNIST (a handwritten digit dataset). We then survey how GANs are being applied across a variety of papers. In this article, we classify GAN papers into image style transfer, 3D object estimation, corrupted image restoration, visualization of language, and others.

3D Point Cloud Enhancement based on Generative Adversarial Network (생성적 적대 신경망 기반 3차원 포인트 클라우드 향상 기법)

  • Moon, HyungDo;Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1452-1455
    • /
    • 2021
  • Recently, point clouds generated by capturing real spaces in 3D have been actively applied in services for performances, exhibitions, education, and training. Such point cloud data require post-capture correction before use in virtual environments because of errors introduced by the sensors and cameras in the capture environment. In this paper, we propose an enhancement technique for 3D point cloud data that applies a generative adversarial network (GAN): noisy point clouds are provided as input to the GAN, which regenerates them. With the method presented in this paper, point clouds with heavy noise are reshaped to match the real object and environment, enabling precise interaction with the reconstructed content.
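The abstract does not name a quality measure for the regenerated clouds; a common choice for comparing a noisy or enhanced point cloud against a reference is the symmetric Chamfer distance, sketched here as an illustration (not the paper's own metric):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    the mean nearest-neighbor distance in each direction, summed."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# An identical cloud scores 0; a slightly perturbed one, a small positive value.
cloud = np.random.default_rng(0).normal(size=(100, 3))
noisy = cloud + 0.01 * np.random.default_rng(1).normal(size=(100, 3))
```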

Extraction of Line Drawing From Cartoon Painting Using Generative Adversarial Network (Generative Adversarial Network를 이용한 카툰 원화의 라인 드로잉 추출)

  • Yu, Kyung Ho;Yang, Hee Deok
    • Smart Media Journal
    • /
    • v.10 no.2
    • /
    • pp.30-37
    • /
    • 2021
  • Recently, 3D content used in various fields has attracted attention owing to the development of virtual and augmented reality technology. Producing 3D content requires modeling objects as vertices, but high-quality modeling is time-consuming and costly. To convert a 2D character into a 3D model, it must first be expressed as line drawings through feature line extraction. Extracting consistent line drawings from 2D cartoons is difficult because styles and techniques differ across the designers who produce them. It is therefore necessary to extract line drawings that capture the geometric characteristics of 2D cartoon shapes in a variety of styles. This study proposes a method for automatically extracting line drawings. Pairs of 2D cartoon shading images and line drawings are learned with an adversarial network model, which then outputs line drawings for 2D cartoon artwork of various styles. Experimental results show that the proposed method yields line drawings that represent the geometric characteristics when a 2D cartoon painting is given as input.

Face Morphing Using Generative Adversarial Networks (Generative Adversarial Networks를 이용한 Face Morphing 기법 연구)

  • Han, Yoon;Kim, Hyoung Joong
    • Journal of Digital Contents Society
    • /
    • v.19 no.3
    • /
    • pp.435-443
    • /
    • 2018
  • Recently, with the explosive growth of computing power, various methods such as RNNs and CNNs have been proposed under the name of deep learning and have solved many problems in computer vision. The generative adversarial network, introduced in 2014, showed that computer vision problems can be addressed effectively with unsupervised learning, and that the generation domain can be studied using trained generators. GANs are being developed in various forms in combination with other models. Machine learning faces difficulties in data collection: if a dataset is too large, it is hard to refine an effective dataset by removing noise; if it is too small, small differences become large noise and learning is not easy. In this paper, we apply a deep CNN model that extracts the facial region from image frames as a preprocessing filter for a GAN model, and propose a method that produces composite images of various facial expressions by learning stably from a limited collection of data from two persons.

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu;Tae-Won Jung;GyoungHyun Kim;Soonchul Kwon;Kye-Dong Jung
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.142-146
    • /
    • 2023
  • We present a method for generating 3D structures and rendering objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). This approach focuses on generating and rendering higher-quality 3D models by using residual learning in the encoder. We stack the encoder layers deeply to reflect image features accurately, and apply residual blocks to address the problems that deep layers introduce, improving encoder performance. This mitigates the gradient vanishing and exploding that arise when constructing deep neural networks, so the encoder can learn a model with more detailed information and produce a 3D model of improved quality. The generated model has more detailed voxels for a more accurate representation; it is rendered with added materials and lighting and finally converted into a mesh model. The resulting 3D models have excellent visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse.
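The residual mechanism described above can be illustrated with a minimal numpy forward pass (the actual encoder uses convolutional layers; the weights here are hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Residual function F(x) = W2 @ relu(W1 @ x); output is relu(x + F(x)).
    # The identity skip connection lets gradients bypass the weight layers,
    # mitigating vanishing/exploding gradients in deeply stacked encoders.
    return relu(x + w2 @ relu(w1 @ x))

# With zero weights the block reduces to relu(x): the skip path alone
# carries the signal, so stacking such blocks cannot destroy it.
x = np.array([1.0, -2.0, 3.0])
zeros = np.zeros((3, 3))
print(residual_block(x, zeros, zeros))  # [1. 0. 3.]
```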

A study on evaluation method of NIDS datasets in closed military network (군 폐쇄망 환경에서의 모의 네트워크 데이터 셋 평가 방법 연구)

  • Park, Yong-bin;Shin, Sung-uk;Lee, In-sup
    • Journal of Internet Computing and Services
    • /
    • v.21 no.2
    • /
    • pp.121-130
    • /
    • 2020
  • This paper suggests evaluating closed military network data as images generated by a generative adversarial network (GAN), applying image evaluation methods such as the InceptionV3-based Inception Score (IS) and Frechet Inception Distance (FID). We employed well-known image classification models in place of InceptionV3, added layers to those models, and converted the network data to images in diverse ways. Experimental results show that the DenseNet121 model with one added dense layer achieves the best performance on data converted with the arctangent algorithm at an 8×8 image size.
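As a sketch of the FID metric named above, the Frechet distance between two Gaussians can be computed directly (in practice, the means and covariances come from classifier activations over real and generated images, which is not reproduced here):

```python
import numpy as np

def fid(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * (C1^(1/2) C2 C1^(1/2))^(1/2))."""
    def sqrtm_sym(m):
        # Matrix square root of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    s1 = sqrtm_sym(cov1)
    covmean = sqrtm_sym(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Identical distributions score 0; shifting the mean by d adds ||d||^2.
eye = np.eye(2)
print(round(fid(np.zeros(2), eye, np.full(2, 2.0), eye), 6))  # 8.0
```

Lower FID means the generated image statistics lie closer to the real ones, which is why it serves here as a proxy for how faithfully the converted network data are modeled.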

Reconstructing the Cosmic Density Field Based on the Generative Adversarial Network

  • Shi, Feng
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.45 no.1
    • /
    • pp.50.1-50.1
    • /
    • 2020
  • In this talk, I introduce recent work on reconstructing the cosmic density field using a GAN. I show the GAN's performance compared with the traditional U-Net architecture. I also discuss a 3-channel 2D dataset used in training to recover the 3D density field. Finally, I present performance tests on the test datasets.


Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method that takes photographs of users as input and converts them into 2D character animation sprites using a generative adversarial network-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the proposed 2D sprite generation process, a sequence of images is extracted from real-life footage captured by the user and combined with character images from the game. Our research leverages deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (Impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites with various motions from a single image. By utilizing these advancements, we aim to enhance productivity and creativity in the game and animation industry through improved efficiency and streamlined production processes.