• Title/Summary/Keyword: GAN (Generative Adversarial Network)

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has long been a concern. Image stacking, which averages or sums the pixel values of multiple exposures of a specific area, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all, it takes a long time. If a single-shot image could be processed well enough to achieve similar performance, those weaknesses could be overcome. Recent developments in deep learning have enabled things that were not possible with former algorithm-based programming, one of which is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). r-band camcol2 south data from SDSS Stripe 82 were used. From all fields, images stacked from only 22 individual exposures were used, paired with the single-pass data included in each stack. All fields were cut into $128{\times}128$ pixel patches, giving 17930 images in total; 14234 pairs were used to train the cGAN and 3696 pairs to verify the result. The RMS error of pixel values between the data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared with $1.24{\times}10^{-3}$ for the original input data. We also applied the model to a few test galaxy images; the generated images were qualitatively more similar to the stacked images than the results of other de-noising methods. In addition, in photometry, the number count of sources matched between the stacked and cGAN images is larger than that between the single-pass and stacked images, especially for fainter objects, and magnitude completeness also improved for fainter objects. With this work, it becomes possible to reliably observe objects about 1 magnitude fainter.

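As a rough illustration of the approach summarized in this entry (not the authors' code), a pix2pix-style conditional GAN that maps a noisy single-pass patch to a stacked-image target can be sketched in PyTorch as follows; the 128×128 patch size matches the abstract, while the network depths, L1 weight, and optimizer settings are assumptions.

```python
# Minimal conditional-GAN (pix2pix-style) sketch: map a 1x128x128 single-pass
# patch to a 1x128x128 "stacked" target patch. Illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder from a single-pass patch to a denoised patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-like critic that sees the (input, candidate-output) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

single_pass = torch.randn(4, 1, 128, 128)   # stand-in for single-pass patches
stacked = torch.randn(4, 1, 128, 128)       # stand-in for co-added targets

# Discriminator step: real (input, target) pairs vs. generated pairs.
fake = G(single_pass).detach()
pred_real = D(single_pass, stacked)
pred_fake = D(single_pass, fake)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target (L1).
fake = G(single_pass)
pred_fake = D(single_pass, fake)
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, stacked)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```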

Hyperparameter Optimization and Data Augmentation of Artificial Neural Networks for Prediction of Ammonia Emission Amount from Field-applied Manure (토양에 살포된 축산 분뇨로부터 암모니아 방출량 예측을 위한 인공신경망의 초매개변수 최적화와 데이터 증식)

  • Pyeong-Gon Jung;Young-Il Lim
    • Korean Chemical Engineering Research / v.61 no.1 / pp.123-141 / 2023
  • A sufficient amount of quality data is needed to train artificial neural networks (ANNs), but engineering problems often must be modeled with only a small amount of data. This paper presents an ANN model that improves prediction of the ammonia emission amount using 83 data points. The ammonia emission model has eleven inputs and two outputs (the maximum ammonia loss, Nmax, and the time to reach half of Nmax, Km). Categorical input variables were transformed into multi-dimensional equal-distance variables, and 13 data points were added to the 66 training data using a generative adversarial network. The ANN hyperparameters (number of layers, number of neurons, and activation function) were optimized using a Gaussian process. On 17 test data points, the previous ANN model (Lim et al., 2007) showed mean absolute errors (MAE) of 0.0668 and 0.1860 for Km and Nmax, respectively. The present ANN outperformed the previous model, reducing the MAE by 38% and 56%.
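
To illustrate the kind of Gaussian-process hyperparameter search described in this entry, a minimal sketch with scikit-optimize is shown below; the dataset, search ranges, and the MLPRegressor stand-in are assumptions for demonstration, not the study's actual model.

```python
# Sketch of Gaussian-process hyperparameter optimization for a small regression
# MLP, mirroring the hyperparameters named above (number of layers, number of
# neurons, activation function). Data and ranges are random stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Integer, Categorical

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 11))                        # placeholder: 83 samples x 11 inputs
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=83)   # placeholder single output

space = [Integer(1, 3, name="n_layers"),
         Integer(4, 64, name="n_neurons"),
         Categorical(["relu", "tanh", "logistic"], name="activation")]

def objective(params):
    n_layers, n_neurons, activation = int(params[0]), int(params[1]), params[2]
    model = MLPRegressor(hidden_layer_sizes=(n_neurons,) * n_layers,
                         activation=activation, max_iter=2000, random_state=0)
    # Cross-validated MAE; gp_minimize minimizes, so return it directly.
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_absolute_error").mean()

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best [n_layers, n_neurons, activation]:", result.x, "CV MAE:", round(result.fun, 4))
```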

A study on the prediction of aquatic ecosystem health grade in ungauged rivers through the machine learning model based on GAN data (GAN 데이터 기반의 머신러닝 모델을 통한 미계측 하천에서의 수생태계 건강성 등급 예측 방안 연구)

  • Lee, Seoro;Lee, Jimin;Lee, Gwanjae;Kim, Jonggun;Lim, Kyoung Jae
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.448-448 / 2021
  • Recent rapid climate change, urbanization, and industrialization have caused fluctuations in flow and water quality in tributary streams, strongly contributing to the decline in biodiversity and the deterioration of aquatic ecosystem health. Efficient aquatic ecosystem management requires accumulating data through continuous monitoring of flow, water quality, and aquatic ecology, together with careful correlation analysis to identify the causes of deteriorating ecosystem health. However, continuous monitoring of the many tributary streams is practically difficult, and because of the nature of aquatic ecosystems, a single influencing factor cannot accurately explain changes in ecosystem health. A technique is therefore needed that can efficiently predict aquatic ecosystem health by considering the spatio-temporal variability of flow and water quality in tributary streams together with various influencing factors. In this study, we built empirical data-driven machine learning models to predict the grades (A to E) of aquatic ecosystem health indices (BMI, TDI, FAI) in ungauged streams. The performance of a machine learning model depends strongly on the quantity and quality of the training dataset, and an imbalanced training distribution can lead to overfitting or underfitting. To compensate for this, additional datasets required for model training (flow, water quality, weather, and aquatic ecology grades) were generated from the actual monitoring-network dataset using a generative adversarial network (GAN). Model performance was evaluated through five-fold cross-validation, and the accuracy of the GAN-generated dataset was assessed by comparison with the normal distribution of the actual monitoring-network dataset. Finally, datasets for ungauged streams predicted with the SWAT (Soil and Water Assessment Tool) model were used as validation data for the machine learning models to evaluate the accuracy of ecosystem health grade prediction. The GAN-augmented machine learning models developed in this study can be used to select priority tributary streams that need water quality and aquatic ecosystem management and to evaluate the improvement in ecosystem health achieved by structural and non-structural best management practices. In addition, the predicted ecosystem health grades for ungauged streams are expected to serve as basic data for establishing integrated water management policies that organically link water quantity, water quality, and aquatic ecology.

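The workflow in this entry augments sparse monitoring data with GAN-generated records before training and cross-validating grade classifiers. A minimal sketch of that augmentation-plus-five-fold-cross-validation step is shown below; the arrays, feature counts, and the random-forest classifier are placeholders, not the study's models.

```python
# Sketch: append GAN-generated rows to a small monitoring dataset, then run
# five-fold cross-validation on a grade (A-E) classifier. Placeholder data and
# the RandomForest choice are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Stand-ins: real monitoring records (flow, water quality, weather features)
# with health-index grades encoded 0..4 (A..E).
X_real = rng.normal(size=(120, 8))
y_real = rng.integers(0, 5, size=120)

# Stand-ins: rows a trained GAN would generate to balance sparse grades.
X_gan = rng.normal(size=(60, 8))
y_gan = rng.integers(0, 5, size=60)

X_train = np.vstack([X_real, X_gan])
y_train = np.concatenate([y_real, y_gan])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_train, y_train, cv=5)   # five-fold CV
print("fold accuracies:", np.round(scores, 3), "mean:", round(scores.mean(), 3))
```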

Challenges of diet planning for children using artificial intelligence

  • Changhun, Lee;Soohyeok, Kim;Jayun, Kim;Chiehyeon, Lim;Minyoung, Jung
    • Nutrition Research and Practice / v.16 no.6 / pp.801-812 / 2022
  • BACKGROUND/OBJECTIVES: Diet planning in childcare centers is difficult because of the required knowledge of nutrition and development as well as the high design complexity associated with large numbers of food items. Artificial intelligence (AI) is expected to provide diet-planning solutions via automatic and effective application of professional knowledge, addressing the complexity of optimal diet design. This study presents the results of the evaluation of the utility of AI-generated diets for children and provides related implications. MATERIALS/METHODS: We developed 2 AI solutions for children aged 3-5 yrs using a generative adversarial network (GAN) model and a reinforcement learning (RL) framework. After training these solutions to produce daily diet plans, experts evaluated the human- and AI-generated diets in 2 steps. RESULTS: In the evaluation of adequacy of nutrition, where experts were provided only with nutrient information and no food names, the proportion of strong positive responses to RL-generated diets was higher than that of the human- and GAN-generated diets (P < 0.001). In contrast, in terms of diet composition, the experts' responses to human-designed diets were more positive when experts were provided with food name information (i.e., composition information). CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the development and evaluation of AI to support dietary planning for children. This study demonstrates the possibility of developing AI-assisted diet planning methods for children and highlights the importance of composition compliance in diet planning. Further integrative cooperation in the fields of nutrition, engineering, and medicine is needed to improve the suitability of our proposed AI solutions and benefit children's well-being by providing high-quality diet planning in terms of both compositional and nutritional criteria.

De-Identified Face Image Generation within Face Verification for Privacy Protection (프라이버시 보호를 위한 얼굴 인증이 가능한 비식별화 얼굴 이미지 생성 연구)

  • Jung-jae Lee;Hyun-sik Na;To-min Ok;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.2 / pp.201-210 / 2023
  • Deep learning-based face verification models show high performance and are used in many fields, but the user's face image may be leaked while it is being input to the model. Although de-identification technology exists as a method for minimizing the exposure of face features, verification performance decreases when the existing technology is applied. In this paper, a de-identified face image is created with StyleGAN after combining the user's face features with those of another person. In addition, we propose a method of optimizing the combining ratio of the features according to the face verification model using HopSkipJumpAttack. We visualize the images generated by the proposed method to check the de-identification performance, and we evaluate through experiments how well the performance of the face verification model is maintained. In other words, face verification can be performed using the de-identified image generated by the proposed method, and leakage of personal face information can be prevented.
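
A toy sketch of the feature-mixing idea in this entry is given below: blend a user's features with another person's and keep the strongest blend that a verification model still accepts. The embedding function, feature dimensions, and threshold are random stand-ins labeled as such, and the simple sweep is not the paper's HopSkipJumpAttack-based optimization.

```python
# Toy blend-ratio search: mix a user's feature vector with another person's and
# keep the largest mixing weight that still verifies as the user. The "embedding"
# here is a random linear projection, NOT a real face-verification model.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 128))            # stand-in projection, not a trained model

def embed(features):
    """Hypothetical face-verification embedding: project and L2-normalize."""
    v = features @ W
    return v / np.linalg.norm(v)

user = rng.normal(size=512)                # user's feature vector (placeholder)
other = rng.normal(size=512)               # another person's feature vector
reference = embed(user)                    # enrolled embedding of the user
THRESHOLD = 0.85                           # assumed verification threshold

best_alpha = 0.0
for alpha in np.linspace(0.0, 1.0, 21):    # fraction taken from the other identity
    mixed = (1.0 - alpha) * user + alpha * other
    similarity = float(embed(mixed) @ reference)   # cosine similarity (unit vectors)
    if similarity >= THRESHOLD:
        best_alpha = alpha                 # still verifies as the user

print("largest de-identifying mix that still verifies:", best_alpha)
```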

Deep survey using deep learning: generative adversarial network

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.78.1-78.1 / 2019
  • A huge number of faint objects have not been observed because large and deep surveys are lacking. In this study, we demonstrate that a deep learning approach can produce a better-quality deep image from single-pass imaging, so it could be an alternative to the conventional image-stacking technique or to expensive large and deep surveys. Using data from Sloan Digital Sky Survey (SDSS) Stripe 82, which provides repeatedly scanned imaging data, a training data set is constructed: g-, r-, and i-band single-pass images as the input and the r-band co-added image as the target. Out of 151 SDSS fields that have been repeatedly scanned 34 times, 120 fields were used for training and 31 fields for validation. The frame size selected for training is on a 1k by 1k pixel scale. To avoid possible problems caused by the small number of training sets, frames are randomly selected within each field at every training iteration. Every 5000 iterations, performance was evaluated with the RMSE, the peak signal-to-noise ratio (given on a logarithmic scale), the structural similarity index (SSIM), and the difference in SSIM, and training was continued until the GAN model with the best performance was found. We applied the best GAN model to NGC 0941, located in SDSS Stripe 82. By comparing the radial surface brightness and photometric errors of the images, we found that this technique can generate, from a single-pass image, a deep image with statistics close to those of the stacked image.

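This entry evaluates generated deep images with RMSE, PSNR, SSIM, and the difference in SSIM. A short sketch of computing these metrics with NumPy and scikit-image on placeholder arrays follows; the DSSIM definition used here (1 - SSIM) is an assumption.

```python
# Sketch of the image-quality metrics mentioned above (RMSE, PSNR, SSIM),
# computed on placeholder arrays standing in for a GAN-generated frame and its
# co-added (stacked) target.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
target = rng.random((1024, 1024)).astype(np.float32)       # stand-in stacked image
generated = target + rng.normal(scale=0.05, size=target.shape).astype(np.float32)

rmse = float(np.sqrt(np.mean((generated - target) ** 2)))
psnr = peak_signal_noise_ratio(target, generated, data_range=1.0)   # dB (log scale)
ssim = structural_similarity(target, generated, data_range=1.0)
dssim = 1.0 - ssim                                          # one common SSIM difference

print(f"RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}  dSSIM={dssim:.4f}")
```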

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.157-173 / 2021
  • With the development of information technology, the amount of available information increases daily, but having access to so much information makes it difficult for users to find what they seek. Users want a system that reduces information retrieval and learning time, saving them from personally reading and judging all available information. As a result, recommendation systems are an increasingly important technology that is essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are made based on similar users' interests and preferences. However, limitations exist. Sparsity, which occurs when user-item preference information is insufficient, is the main limitation of collaborative filtering: the evaluation values in the user-item matrix may be distorted depending on the popularity of a product, or there may be new users who have not yet provided any ratings. This lack of historical data for identifying consumer preferences is referred to as data sparsity, and various methods have been studied to address it. However, most attempts to solve the sparsity problem are not optimal because they apply only when additional data such as users' personal information, social networks, or item characteristics are available. Another problem is that real-world rating data are mostly biased toward high scores, resulting in severe imbalance. One cause of this imbalanced distribution is purchasing bias: mainly users who rate products highly go on to purchase them, so users with low ratings are less likely to purchase and therefore do not leave negative reviews. Because of this, unlike most users' actual preferences, reviews by purchasing users tend to be positive. The biased rating data therefore over-represents the high-frequency classes during learning and distorts the picture of the market. Applying collaborative filtering to such imbalanced data leads to poor recommendation performance due to excessive learning of the biased classes. Traditional oversampling techniques for this problem are likely to cause overfitting because they repeat the same data, which acts as noise in learning and reduces recommendation performance. In addition, most existing pre-processing methods for data imbalance are designed for binary classes. Binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model situations such as objects at cross-class boundaries or objects overlapping multiple classes. Research has therefore been conducted on converting multi-class problems into binary-class problems, but this simplification can cause potential classification errors when the results of classifiers learned from the sub-problems are combined, losing important information about relationships beyond the selected items. More effective methods for multi-class imbalance problems are therefore needed. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty user-item matrix. The conditional vector y identifies the distributions of minority classes and generates data reflecting their characteristics. Collaborative filtering then maximizes the performance of the recommendation system via hyperparameter tuning. This process improves the accuracy of the model by addressing the sparsity problem of collaborative filtering while mitigating the data imbalance found in real data. Our model shows superior recommendation performance over existing oversampling techniques on real-world data with data sparsity. Using SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and GAN as comparison models, ours achieves the highest prediction accuracy on the RMSE and MAE evaluation scales. Through this study, deep learning-based oversampling can further refine the performance of recommendation systems using actual data and can be used to build business recommendation systems.
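
To illustrate the CGAN idea in this entry, a minimal PyTorch sketch is shown below in which the condition vector y marks a rating class so the generator can synthesize user-item rows for minority classes; all dimensions and network shapes are assumptions, not the paper's architecture.

```python
# Sketch of a CGAN whose condition vector y encodes a (minority) rating class so
# the generator can synthesize user-item rating rows for under-represented
# classes. Dimensions and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

N_ITEMS, N_CLASSES, LATENT = 200, 5, 32       # items, rating classes, noise size

G = nn.Sequential(nn.Linear(LATENT + N_CLASSES, 128), nn.ReLU(),
                  nn.Linear(128, N_ITEMS), nn.Sigmoid())      # ratings scaled to (0, 1)
D = nn.Sequential(nn.Linear(N_ITEMS + N_CLASSES, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_rows = torch.rand(16, N_ITEMS)                            # stand-in rating rows
y = torch.eye(N_CLASSES)[torch.randint(0, N_CLASSES, (16,))]   # one-hot class condition

# Discriminator step: real (row, y) pairs vs. generated ones.
z = torch.randn(16, LATENT)
fake_rows = G(torch.cat([z, y], dim=1)).detach()
d_real = D(torch.cat([real_rows, y], dim=1))
d_fake = D(torch.cat([fake_rows, y], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make rows for class y that the discriminator accepts as real.
fake_rows = G(torch.cat([z, y], dim=1))
d_fake = D(torch.cat([fake_rows, y], dim=1))
g_loss = bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```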

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology / v.24 no.8 / pp.807-820 / 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability of deep learning-based automated quantification of interstitial lung disease (ILD). Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard- or low-radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (Group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on the original and converted CT images using deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the Dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of the fibrosis score, honeycombing, and reticulation increased significantly after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54; P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT. Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification on CT images obtained using different scan parameters and manufacturers.
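
This entry scores quantification accuracy with the Dice similarity coefficient and pixel-wise overlap metrics. A small NumPy sketch of those metrics on placeholder binary masks is given below; the random masks simply stand in for the automated and manual ILD segmentations.

```python
# Sketch of the overlap metrics used above: Dice similarity coefficient (DSC)
# and pixel-wise recall/precision between an automated mask and a manual
# reference mask. Random placeholder masks stand in for ILD segmentations.
import numpy as np

rng = np.random.default_rng(0)
manual = rng.random((512, 512)) > 0.7        # reference (radiologist) mask
automated = rng.random((512, 512)) > 0.7     # software mask on (converted) CT

tp = np.logical_and(automated, manual).sum()
dsc = 2.0 * tp / (automated.sum() + manual.sum())      # Dice similarity coefficient
recall = tp / manual.sum()                              # pixel-wise recall
precision = tp / automated.sum()                        # pixel-wise precision

print(f"DSC={dsc:.3f}  recall={recall:.3f}  precision={precision:.3f}")
```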

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.363-373 / 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images as well as precise 3D object modeling data and a DTM (Digital Terrain Model) to detect and recover occlusion areas, and it is a challenging task to automate this complicated process. In this paper, we proposed a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, the GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision: the generator tries to produce results similar to real images, while the discriminator judges whether images are fake or real, and this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures. However, if the quality of the input data is close to that of the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.
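
The two training approaches in this entry are compared with FID. A minimal sketch of computing FID with torchmetrics (assuming the torch-fidelity backend is installed) on placeholder uint8 batches follows; the real evaluation would feed the generated and reference orthoimages instead of random tensors.

```python
# Sketch: comparing generated images against reference images with FID, as in
# the evaluation above. Assumes torchmetrics with its image extras is installed;
# the uint8 batches below are random placeholders.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)     # small feature layer for the toy demo

real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)   # reference images
fake = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)   # GAN outputs

fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", float(fid.compute()))            # lower is better
```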

A COVID-19 Chest X-ray Reading Technique based on Deep Learning (딥 러닝 기반 코로나19 흉부 X선 판독 기법)

  • Ann, Kyung-Hee;Ohm, Seong-Yong
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.789-795 / 2020
  • Many deaths have been reported due to the worldwide COVID-19 pandemic. To prevent the further spread of COVID-19, it is necessary to quickly and accurately read images of suspected patients and take appropriate measures. To this end, this paper introduces a deep learning-based COVID-19 chest X-ray reading technique that can assist in image reading by telling medical staff whether a patient is infected. First, a sufficient dataset must be secured to train the reading model, but the currently available COVID-19 open datasets do not contain enough image data to ensure accurate learning. Therefore, we solved the image-count imbalance problem that degrades learning performance by using a Stacked Generative Adversarial Network (StackGAN++). Next, a DenseNet-based classification model was trained on the augmented dataset to develop the reading model. This model performs binary classification of normal and COVID-19 chest X-rays, and its performance was evaluated using part of the actual image data as test data. Finally, the reliability of the model was supported by presenting the basis for judging the presence or absence of disease in the input image using Grad-CAM, an explainable artificial intelligence (XAI) technique.
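
This entry trains a DenseNet-based binary classifier on a StackGAN++-augmented dataset. A short torchvision sketch of such a binary classification head and one training step is shown below; the data, weights, and hyperparameters are placeholders, not the paper's configuration.

```python
# Sketch: a DenseNet-based binary classifier for normal vs. COVID-19 chest
# X-rays using torchvision. The tensors below are placeholders; the augmented
# dataset from the entry above would feed a real data loader instead.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)                         # or ImageNet weights
model.classifier = nn.Linear(model.classifier.in_features, 2)    # 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)         # placeholder X-ray batch
labels = torch.randint(0, 2, (8,))           # 0 = normal, 1 = COVID-19

logits = model(images)                       # one training step
loss = criterion(logits, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("batch loss:", float(loss))
```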