• Title/Summary/Keyword: Generative adversarial neural networks


A research on the possibility of restoring cultural assets with artificial intelligence through the application of artificial neural networks to roof tiles (Wadang)

  • Kim, JunO;Lee, Byong-Kwon
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.19-26 / 2021
  • Cultural assets excavated in historical areas have characteristics shaped by the period in which they were made, and their patterns change gradually with history and with the regions to which the culture spread. Some excavated artifacts represent the culture of their time and survive intact, but most are damaged, lost, or broken into fragments, and many experts are mobilized to study their composition and repair the damaged parts. The purpose of this research is to learn the patterns and characteristics of past artifacts with artificial neural networks and to restore the lost parts of excavated cultural assets based on a Generative Adversarial Network (GAN) [1]. In this work, the damaged or lost portions are restored from the surviving portions of an excavated asset: the GAN is trained on 2D images of complete assets and then used to recover the missing regions of a damaged one. The research focuses on how well not only the damaged parts themselves but also their colors and materials are reproduced. Finally, the trained network is applied to a real damaged cultural asset, and the extent of the recovered area and the limitations of the approach are confirmed.
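
The restoration described above amounts to GAN-based image inpainting: a generator fills in the masked (damaged) region of a tile image and a discriminator judges whether the completed image looks like a real, intact artifact. A minimal PyTorch sketch of that setup follows; the architectures, 64x64 image size, mask handling, and the combined adversarial-plus-L1 loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of GAN-style inpainting for artifact images (assumptions noted above).
import torch
import torch.nn as nn

generator = nn.Sequential(           # input: masked RGB image + mask channel
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),   # restored RGB image
)
discriminator = nn.Sequential(       # judges whether a tile image looks real
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1),
)

bce = nn.BCEWithLogitsLoss()
complete = torch.rand(8, 3, 64, 64)                  # dummy intact tile images
mask = (torch.rand(8, 1, 64, 64) > 0.3).float()      # 1 = surviving pixels
damaged = complete * mask

restored = generator(torch.cat([damaged, mask], dim=1))
g_loss = bce(discriminator(restored), torch.ones(8, 1)) + \
         nn.functional.l1_loss(restored, complete)   # adversarial + reconstruction
print(restored.shape, float(g_loss))
```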

Hyperparameter Optimization and Data Augmentation of Artificial Neural Networks for Prediction of Ammonia Emission Amount from Field-applied Manure

  • Pyeong-Gon Jung;Young-Il Lim
    • Korean Chemical Engineering Research / v.61 no.1 / pp.123-141 / 2023
  • A sufficient amount of quality data is needed to train artificial neural networks (ANNs), but in engineering fields ANN models often have to be developed from small datasets. This paper presents an ANN model that improves prediction of the ammonia emission amount using 83 data points. The ammonia emission model has eleven inputs and two outputs (the maximum ammonia loss, Nmax, and the time to reach half of Nmax, Km). Categorical input variables were transformed into multi-dimensional equal-distance variables, and 13 samples generated with a generative adversarial network were added to the 66 training data. The hyperparameters of the ANN (number of layers, number of neurons, and activation function) were optimized using a Gaussian process. On 17 test data, the previous ANN model (Lim et al., 2007) showed mean absolute errors (MAE) of 0.0668 for Km and 0.1860 for Nmax. The present ANN outperformed the previous model, reducing the MAE by 38% and 56%, respectively.
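
The hyperparameter search described above can be illustrated with a Gaussian-process (Bayesian) optimization loop over the number of layers, neurons per layer, and activation function. The sketch below uses scikit-optimize's gp_minimize with a scikit-learn MLPRegressor and random placeholder data shaped like the paper's 66 training and 17 test samples (11 inputs, 2 outputs); none of it is the authors' code.

```python
# Hedged sketch: GP-based hyperparameter optimization of a small ANN.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Categorical
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)                       # placeholder data only
X_train, y_train = rng.normal(size=(66, 11)), rng.normal(size=(66, 2))
X_val, y_val = rng.normal(size=(17, 11)), rng.normal(size=(17, 2))

search_space = [
    Integer(1, 4, name="n_layers"),
    Integer(4, 64, name="n_neurons"),
    Categorical(["relu", "tanh", "logistic"], name="activation"),
]

def objective(params):
    n_layers, n_neurons, activation = int(params[0]), int(params[1]), params[2]
    model = MLPRegressor(hidden_layer_sizes=(n_neurons,) * n_layers,
                         activation=activation, max_iter=2000, random_state=0)
    model.fit(X_train, y_train)                      # y columns: [Km, Nmax]
    return mean_absolute_error(y_val, model.predict(X_val))

result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best (layers, neurons, activation):", result.x, "val MAE:", result.fun)
```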

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.465-472 / 2022
  • In this paper, a style-synthesis network is trained to generate style-synthesized video by combining StyleGAN training for style synthesis with a video-synthesis network. To address the problem that gaze and expression are not transferred stably, 3D face reconstruction is applied so that key attributes such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the discriminators of the Head2head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be created. Using the FaceForensics and MetFaces datasets, it was confirmed that performance improved when converting one video into another while maintaining consistent motion of the target face, and that natural results could be generated through video synthesis using the 3D face information of the source video.
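
One element of the pipeline above is the use of several discriminators (dynamics, mouth shape, image, gaze) trained alongside the generator. The short PyTorch sketch below only illustrates how such multiple adversarial signals might be combined into a single generator objective; the dummy discriminators and the loss weights are assumptions, not the Head2head implementation.

```python
# Hedged sketch: summing adversarial feedback from several discriminators.
import torch
import torch.nn as nn

def generator_loss(fake_frames, discriminators, weights):
    """Hinge-style generator loss summed over several named discriminators."""
    total = 0.0
    for name, disc in discriminators.items():
        score = disc(fake_frames)                 # each D scores the fake clip
        total = total + weights[name] * (-score.mean())
    return total

# illustrative usage with dummy discriminators and dummy generated frames
discs = {name: nn.Sequential(nn.Flatten(), nn.LazyLinear(1))
         for name in ("image", "dynamics", "mouth", "gaze")}
w = {"image": 1.0, "dynamics": 0.5, "mouth": 1.0, "gaze": 0.5}
fake = torch.randn(2, 3, 64, 64)
print(generator_loss(fake, discs, w))
```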

CNN-Based Fake Image Identification with Improved Generalization (일반화 능력이 향상된 CNN 기반 위조 영상 식별)

  • Lee, Jeonghan;Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1624-1631 / 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually distinguish processed (or tampered) images from real ones. As the risk of fake images being misused for crime increases, image forensics for identifying fake images is becoming more important. Various deep learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning relies strongly on the given training data, such identifiers are very vulnerable to data they have never seen. We therefore look for ways to improve the generalization ability of deep learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can classify real and fake images only for specific contents and fails for others. Next, color spaces other than RGB were exploited; that is, fake image identification was attempted in color spaces that were not considered when the fake images were created, such as HSV and YCbCr. Finally, dropout, which is commonly used to improve the generalization of neural networks, was applied. Experimental results confirmed that conversion to the HSV color space is the most effective single measure, and that combining it with the enlarged training dataset can greatly improve the accuracy and generalization ability of deep learning-based identifiers on fake images that have never been seen before.
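
The two measures highlighted above, converting inputs to a color space such as HSV and using dropout in the classifier, can be sketched as follows. OpenCV's cvtColor handles the conversion; the tiny CNN with a dropout layer is only an illustrative stand-in for the identifiers studied in the paper.

```python
# Hedged sketch: HSV preprocessing in front of a small CNN with dropout.
import cv2
import numpy as np
import torch
import torch.nn as nn

def to_hsv_tensor(rgb_image: np.ndarray) -> torch.Tensor:
    """Convert an H x W x 3 uint8 RGB image to a normalized HSV tensor."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    return torch.from_numpy(hsv).permute(2, 0, 1).float() / 255.0

classifier = nn.Sequential(               # small real/fake classifier with dropout
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Dropout(0.5), nn.Linear(32, 2),
)

dummy = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)  # stand-in image
logits = classifier(to_hsv_tensor(dummy).unsqueeze(0))
print(logits.shape)   # torch.Size([1, 2])
```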

Application of Deep Learning: A Review for Firefighting

  • Shaikh, Muhammad Khalid
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.73-78 / 2022
  • The aim of this paper is to investigate the prevalence of deep learning in the literature on the Fire & Rescue Service. It is found that deep learning techniques are only beginning to benefit firefighters. The popular areas where deep learning is making an impact are situational awareness, decision making, mental stress, injuries, firefighter well-being (such as a sudden fall, inability to move, or breathlessness), path planning by firefighters while getting to a fire scene, wayfinding, tracking firefighters, firefighter physical fitness, employment, prediction of firefighter intervention, firefighter operations such as object recognition in smoky areas, firefighter efficacy, smart firefighting using edge computing, firefighting in teams, and firefighter clothing and safety. The techniques found applied in firefighting were deep learning, traditional K-means clustering with engineered time- and frequency-domain features, convolutional autoencoders, Long Short-Term Memory (LSTM), deep neural networks, simulation, VR, ANN, deep Q-learning, deep learning based on conditional generative adversarial networks, decision trees, Kalman filters, computational models, partial least squares, logistic regression, random forest, edge computing, C5 decision trees, restricted Boltzmann machines, reinforcement learning, and recurrent LSTM. The literature review is centered on firefighters not involved in wildland fires, and the focus was not on the fire itself. It should also be noted that several deep learning techniques, such as CNNs, were mostly used for fire behavior, fire imaging, and identification; papers that deal with fire behavior were not part of this literature review.

Multidimensional data generation of water distribution systems using adversarially trained autoencoder

  • Kim, Sehyeong;Jun, Sanghoon;Jung, Donghwi
    • Journal of Korea Water Resources Association / v.56 no.7 / pp.439-449 / 2023
  • Recent advances in measurement technology have facilitated the installation of various sensors, such as pressure meters and flow meters, to assess the real-time condition of water distribution systems (WDSs). However, as cities expand, the factors that affect the reliability of measurements have become increasingly diverse. In particular, demand, one of the most significant hydraulic variables in a WDS, is difficult to measure directly and is prone to missing values, which makes accurate data generation models all the more important. This paper therefore proposes an adversarially trained autoencoder (ATAE) model, based on generative deep learning, to estimate demand data in WDSs. The proposed model uses two neural networks: a generative network and a discriminative network. The generative network generates demand data from the information in the measured pressure data, while the discriminative network evaluates the generated demand outputs and provides feedback to the generator so that it learns the distinctive features of the data. To validate its performance, the ATAE model is applied to a real distribution system in Austin, Texas, USA. The study analyzes the impact of data uncertainty by calculating the accuracy of the ATAE's predictions for varying levels of uncertainty in the demand and pressure time series data. In addition, the model's performance is evaluated by comparing results for different data collection periods (low, average, and high demand hours) to assess its ability to generate demand data across water consumption levels.
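
The ATAE described above pairs a generative network (pressure measurements in, demand estimates out) with a discriminative network that scores demand vectors. The PyTorch sketch below shows one adversarial training step of such a pair; the sensor and node counts, architectures, and random data are assumptions for illustration, not the authors' model of the Austin network.

```python
# Hedged sketch: adversarially trained pressure-to-demand generator.
import torch
import torch.nn as nn

N_PRESSURE, N_DEMAND = 20, 50        # assumed numbers of sensors / demand nodes

generator = nn.Sequential(           # pressure -> demand
    nn.Linear(N_PRESSURE, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_DEMAND),
)
discriminator = nn.Sequential(       # demand vector -> real/fake score
    nn.Linear(N_DEMAND, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

pressure = torch.randn(32, N_PRESSURE)        # dummy measured pressures
real_demand = torch.rand(32, N_DEMAND)        # dummy observed demands

# one adversarial training step (discriminator, then generator)
fake_demand = generator(pressure)
d_loss = bce(discriminator(real_demand), torch.ones(32, 1)) + \
         bce(discriminator(fake_demand.detach()), torch.zeros(32, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

g_loss = bce(discriminator(fake_demand), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
print(float(d_loss), float(g_loss))
```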

Waste Classification by Fine-Tuning Pre-trained CNN and GAN

  • Alsabei, Amani;Alsayed, Ashwaq;Alzahrani, Manar;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.65-70 / 2021
  • Waste accumulation is becoming a significant challenge in most urban areas and, if it continues unchecked, is poised to have severe repercussions on our environment and health. The massive industrialisation of our cities has been accompanied by commensurate waste creation that has become a bottleneck even for waste management systems. While recycling is a viable solution for waste management, accurately classifying waste material for recycling can be daunting. In this study, transfer learning models were proposed to automatically classify waste into six material classes (cardboard, glass, metal, paper, plastic, and trash). The tested pre-trained models were ResNet50, VGG16, InceptionV3, and Xception. Data augmentation was performed using a Generative Adversarial Network (GAN) with various percentages of generated images. Models based on Xception and VGG16 were found to be more robust, whereas models based on ResNet50 and InceptionV3 were sensitive to the added machine-generated images, with accuracy degrading significantly compared to training without artificial data.
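
Transfer learning of the kind tested here typically freezes a pre-trained backbone and trains a new classification head for the six waste classes. The Keras sketch below does this with Xception; the input size, dropout rate, and the commented-out training call on a GAN-augmented dataset are assumptions, not the study's exact configuration.

```python
# Hedged sketch: fine-tuning a pre-trained Xception backbone for six waste classes.
from tensorflow import keras

base = keras.applications.Xception(weights="imagenet", include_top=False,
                                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                      # freeze pre-trained features

model = keras.Sequential([
    base,
    keras.layers.Dropout(0.3),
    keras.layers.Dense(6, activation="softmax"),   # cardboard, glass, metal,
])                                                 # paper, plastic, trash

model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# train_ds would mix real photos with GAN-generated images at a chosen ratio.
```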

Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae;Park, Eunsu;Kim, Taeyoung;Lee, Harim;Shin, Gyungin;Kim, Kimoon;Shin, Seulki;Yi, Kangwoo
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.51.2-51.2 / 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are correlations among images from different sensors, it is not easy to translate from one to another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several types of image pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV and X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. It is very impressive that the AI-generated images are quite consistent with the actual ones. In addition, we apply convolutional neural networks to the forecasting of solar flares and find that our method is better than the conventional method. Our study also shows that forecasting solar proton flux profiles with a Long Short-Term Memory network is better than the autoregressive method. We will discuss several applications of these methodologies for scientific research.
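
Image-to-image translation with a cGAN of the pix2pix kind optimizes a conditional adversarial loss plus an L1 term against the target image. The sketch below spells that objective out with toy generator and discriminator networks and dummy magnetogram/EUV tensors; the architectures and the lambda weight are illustrative assumptions, not the authors' trained models.

```python
# Hedged sketch: pix2pix-style generator objective for conditional translation.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA_L1 = 100.0                                   # pix2pix's customary weight

def generator_objective(generator, discriminator, src, target):
    """Adversarial loss on (src, fake) pairs plus L1 distance to the target."""
    fake = generator(src)
    pred = discriminator(torch.cat([src, fake], dim=1))   # conditional D
    adv = bce(pred, torch.ones_like(pred))
    return adv + LAMBDA_L1 * l1(fake, target), fake

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))          # toy "translator"
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))          # toy patch discriminator

magnetogram = torch.randn(4, 1, 64, 64)    # dummy source images
euv_image = torch.randn(4, 1, 64, 64)      # dummy target images
loss, fake = generator_objective(G, D, magnetogram, euv_image)
print(float(loss), fake.shape)
```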


Traffic Data Generation Technique for Improving Network Attack Detection Using Deep Learning (네트워크 공격 탐지 성능향상을 위한 딥러닝을 이용한 트래픽 데이터 생성 연구)

  • Lee, Wooho;Hahm, Jaegyoon;Jung, Hyun Mi;Jeong, Kimoon
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.1-7 / 2019
  • Recently, various approaches to detecting network attacks using machine learning have been studied and are being applied to detect new attacks and to increase precision. However, machine learning methods depend on feature extraction, which is time-consuming and complex, and their performance is limited by imbalance in the training data. In this study, we propose a method to resolve the degradation of classification performance caused by imbalanced training data, one of the limitations of such detection systems. To do this, we generate data using Generative Adversarial Networks (GANs) and propose a classification method using Convolutional Neural Networks (CNNs). With this approach, we confirm that accuracy improves when applied to the NSL-KDD and UNSW-NB15 datasets.
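
The core idea, generating extra samples for under-represented attack classes with a GAN before training the classifier, can be sketched as below. The small generator, the 41-dimensional feature vector (chosen to mirror NSL-KDD's feature count), and the random placeholder data are assumptions; only the augmentation pattern itself is what the abstract describes.

```python
# Hedged sketch: rebalancing an intrusion-detection dataset with GAN samples.
import numpy as np
import torch
import torch.nn as nn

FEATURE_DIM, LATENT_DIM = 41, 32            # 41 mirrors NSL-KDD's feature count

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEATURE_DIM))
    def forward(self, z):
        return self.net(z)

def augment_minority(X, y, generator, minority_label, n_new):
    """Append n_new synthetic feature rows for one minority class."""
    with torch.no_grad():
        synth = generator(torch.randn(n_new, LATENT_DIM)).numpy()
    X_aug = np.vstack([X, synth])
    y_aug = np.concatenate([y, np.full(n_new, minority_label)])
    return X_aug, y_aug

X = np.random.rand(1000, FEATURE_DIM)        # dummy traffic features
y = np.random.choice([0, 1, 2], size=1000, p=[0.8, 0.15, 0.05])
X_bal, y_bal = augment_minority(X, y, Generator(), minority_label=2, n_new=700)
print(X_bal.shape, np.bincount(y_bal.astype(int)))
```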

Comparison and analysis of prediction performance of fine particulate matter (PM2.5) based on deep learning algorithm

  • Kim, Younghee;Chang, Kwanjong
    • Journal of Convergence for Information Technology / v.11 no.3 / pp.7-13 / 2021
  • This study develops an artificial intelligence prediction system for fine particulate matter (PM2.5) based on a GAN deep learning model. The experimental data are closely related to time-series changes in temperature, humidity, wind speed, and atmospheric pressure, and to the concentrations of air pollutants such as SO2, CO, O3, NO2, and PM10. Because the concentration at the current time is affected by the concentration at the previous time, a recursive supervised learning prediction model was applied. For comparison with the accuracy of the existing models, CNN and LSTM, the difference between observed and predicted values was analyzed and visualized. The performance analysis confirmed that the proposed GAN improved on LSTM by 15.8%, 10.9%, and 5.5% in the evaluation metrics RMSE, MAPE, and IOA, respectively.
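
The abstract compares models with RMSE, MAPE, and IOA. The snippet below gives the usual definitions of these three metrics as applied to air quality model evaluation; the exact formulas used by the authors are not stated in the abstract, so this is a hedged reconstruction.

```python
# Hedged sketch: standard RMSE, MAPE, and index of agreement (IOA) definitions.
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mape(obs, pred):
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

def ioa(obs, pred):
    """Willmott's index of agreement, between 0 and 1 (1 = perfect)."""
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

obs = np.array([20.0, 35.0, 50.0, 15.0])     # dummy PM2.5 observations
pred = np.array([22.0, 33.0, 48.0, 18.0])    # dummy model predictions
print(rmse(obs, pred), mape(obs, pred), ioa(obs, pred))
```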