• Title/Summary/Keyword: adversarial network

Comparison and analysis of prediction performance of fine particulate matter (PM2.5) based on deep learning algorithm (딥러닝 알고리즘 기반의 초미세먼지(PM2.5) 예측 성능 비교 분석)

  • Kim, Younghee;Chang, Kwanjong
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.3
    • /
    • pp.7-13
    • /
    • 2021
  • This study develops an artificial intelligence prediction system for fine particulate matter (PM2.5) based on the GAN deep learning model. The experimental data are closely related to the changes in temperature, humidity, wind speed, and atmospheric pressure along the time-series axis and to the concentrations of air pollutants such as SO2, CO, O3, NO2, and PM10. Because of the characteristics of the data, where the concentration at the current time is affected by the concentration at the previous time, a predictive model with recursive supervised learning was applied. For a comparative analysis of accuracy against the existing CNN and LSTM models, the differences between observed and predicted values were analyzed and visualized. The performance analysis confirmed that the proposed GAN improved by 15.8%, 10.9%, and 5.5% over LSTM on the evaluation metrics RMSE, MAPE, and IOA, respectively.
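
For reference, the evaluation metrics named in this abstract (RMSE, MAPE, and the index of agreement, IOA) can be computed as in the minimal NumPy sketch below. This is not code from the paper; it assumes equal-length arrays of observed and predicted PM2.5 concentrations, with nonzero observations for MAPE.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((pred - obs) ** 2))

def mape(obs, pred):
    """Mean absolute percentage error (%); assumes no zero observations."""
    return np.mean(np.abs((obs - pred) / obs)) * 100.0

def ioa(obs, pred):
    """Index of agreement (Willmott's d), ranging from 0 to 1."""
    mean_obs = np.mean(obs)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - mean_obs) + np.abs(obs - mean_obs)) ** 2)
    return 1.0 - num / den

# Example with dummy hourly PM2.5 values (ug/m^3)
obs = np.array([12.0, 35.0, 48.0, 22.0, 15.0])
pred = np.array([14.0, 30.0, 45.0, 25.0, 13.0])
print(rmse(obs, pred), mape(obs, pred), ioa(obs, pred))
```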

A Study on the Image Preprocessing model linkage method for usability of Pix2Pix (Pix2Pix의 활용성을 위한 학습이미지 전처리 모델연계방안 연구)

  • Kim, Hyo-Kwan;Hwang, Won-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.5
    • /
    • pp.380-386
    • /
    • 2022
  • This paper proposes a method for structuring the preprocessing of training images when color is applied using Pix2Pix, one of the generative adversarial network techniques. The paper focuses on how the prediction result can be degraded depending on the degree of light reflection in the training image. Therefore, image preprocessing and parameters for model optimization were configured before applying the model. Because increasing the image resolution of the training and prediction results requires modifying the model, this part is designed to be tuned with parameters. In addition, the paper configures logic that processes only the regions where the prediction result is damaged by light reflection, together with preprocessing logic that does not distort the prediction result. To improve usability, accuracy was improved through experiments on applying the light-reflection tuning filter to the Pix2Pix training images and on the parameter configuration.
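
As a hypothetical illustration of the kind of light-reflection preprocessing described above (not the paper's actual filter), the sketch below blends near-saturated pixels toward a mean color; the threshold and blend strength stand in for the tunable parameters the abstract mentions.

```python
import numpy as np

def suppress_highlights(img, threshold=240, blend=0.6):
    """Hypothetical light-reflection filter: blend near-saturated pixels
    toward a mean color so specular highlights do not dominate training.
    img: uint8 RGB array of shape (H, W, 3); threshold and blend are tunable."""
    out = img.astype(np.float32)
    gray = out.mean(axis=2)
    mask = gray >= threshold             # pixels likely caused by light reflection
    mean_color = out.mean(axis=(0, 1))   # global mean color as a crude reference
    out[mask] = blend * mean_color + (1.0 - blend) * out[mask]
    return np.clip(out, 0, 255).astype(np.uint8)
```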

Challenges of diet planning for children using artificial intelligence

  • Changhun, Lee;Soohyeok, Kim;Jayun, Kim;Chiehyeon, Lim;Minyoung, Jung
    • Nutrition Research and Practice
    • /
    • v.16 no.6
    • /
    • pp.801-812
    • /
    • 2022
  • BACKGROUND/OBJECTIVES: Diet planning in childcare centers is difficult because of the required knowledge of nutrition and development as well as the high design complexity associated with large numbers of food items. Artificial intelligence (AI) is expected to provide diet-planning solutions via automatic and effective application of professional knowledge, addressing the complexity of optimal diet design. This study presents the results of the evaluation of the utility of AI-generated diets for children and provides related implications. MATERIALS/METHODS: We developed 2 AI solutions for children aged 3-5 yrs using a generative adversarial network (GAN) model and a reinforcement learning (RL) framework. After training these solutions to produce daily diet plans, experts evaluated the human- and AI-generated diets in 2 steps. RESULTS: In the evaluation of adequacy of nutrition, where experts were provided only with nutrient information and no food names, the proportion of strong positive responses to RL-generated diets was higher than that of the human- and GAN-generated diets (P < 0.001). In contrast, in terms of diet composition, the experts' responses to human-designed diets were more positive when experts were provided with food name information (i.e., composition information). CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the development and evaluation of AI to support dietary planning for children. This study demonstrates the possibility of developing AI-assisted diet planning methods for children and highlights the importance of composition compliance in diet planning. Further integrative cooperation in the fields of nutrition, engineering, and medicine is needed to improve the suitability of our proposed AI solutions and benefit children's well-being by providing high-quality diet planning in terms of both compositional and nutritional criteria.

Assessment and Analysis of Fidelity and Diversity for GAN-based Medical Image Generative Model (GAN 기반 의료영상 생성 모델에 대한 품질 및 다양성 평가 및 분석)

  • Jang, Yoojin;Yoo, Jaejun;Hong, Helen
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.2
    • /
    • pp.11-19
    • /
    • 2022
  • Recently, various studies on medical image generation have been proposed, and it has become crucial to accurately evaluate the quality and diversity of the generated medical images. For this purpose, experts' visual Turing tests, feature distribution visualization, and quantitative evaluation through IS and FID have been used. However, there are few methods for quantitatively evaluating medical images in terms of fidelity and diversity. In this paper, images are generated by training the DCGAN and PGGAN generative models on a chest CT dataset of non-small cell lung cancer patients, and the performance of the two generative models is evaluated in terms of fidelity and diversity. The performance is quantitatively evaluated through IS and FID, which are one-dimensional score-based evaluation methods, and through Precision and Recall and Improved Precision and Recall, which are two-dimensional score-based evaluation methods, and the characteristics and limitations of each evaluation method in medical imaging are also analyzed.
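
For context, FID, one of the one-dimensional scores mentioned here, compares the mean and covariance of feature embeddings of real and generated images. The sketch below is a generic NumPy/SciPy implementation, not the paper's code, and assumes feature vectors (e.g., Inception activations) have already been extracted elsewhere.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors,
    each of shape (N, D): ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*sqrt(S1*S2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```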

De-Identified Face Image Generation within Face Verification for Privacy Protection (프라이버시 보호를 위한 얼굴 인증이 가능한 비식별화 얼굴 이미지 생성 연구)

  • Jung-jae Lee;Hyun-sik Na;To-min Ok;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.2
    • /
    • pp.201-210
    • /
    • 2023
  • Deep learning-based face verification models show high performance and are used in many fields, but there is a possibility that the user's face image may be leaked in the process of inputting it to the model. Although de-identification technology exists as a method for minimizing the exposure of facial features, verification performance decreases when the existing technology is applied. In this paper, a de-identified face image is created through StyleGAN after combining the facial features of another person. In addition, we propose a method of optimizing the combining ratio of features for a given face verification model using HopSkipJumpAttack. We visualize the images generated by the proposed method to check de-identification performance, and we evaluate through experiments the ability to maintain the performance of the face verification model. That is, face verification can be performed using the de-identified images generated through the proposed method, and leakage of personal facial information can be prevented.
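
The core de-identification step described above is a weighted combination of the user's features with another person's features in StyleGAN's latent space. The sketch below is a simplified, hypothetical version; the paper's actual procedure, including optimizing the ratio per verification model with HopSkipJumpAttack, is not reproduced here.

```python
import numpy as np

def mix_latents(w_user, w_other, alpha):
    """Hypothetical latent-space mix: combine the user's StyleGAN latent code
    with another person's code at ratio alpha (0 = fully other, 1 = fully user).
    In the paper, such a ratio is tuned per face-verification model; that
    search (e.g., guided by HopSkipJumpAttack queries) is omitted here."""
    return alpha * w_user + (1.0 - alpha) * w_other

# Example: 512-dimensional latent codes, 70/30 mix
w_user = np.random.randn(512)
w_other = np.random.randn(512)
w_mixed = mix_latents(w_user, w_other, alpha=0.7)
```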

Deep survey using deep learning: generative adversarial network

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.78.1-78.1
    • /
    • 2019
  • There are a huge number of faint objects that have not been observed due to the lack of large and deep surveys. In this study, we demonstrate that a deep learning approach can produce a better-quality deep image from single-pass imaging, so it could be an alternative to the conventional image stacking technique or to expensive large and deep surveys. Using data from the Sloan Digital Sky Survey (SDSS) Stripe 82, which provides repeatedly scanned imaging data, a training data set is constructed: g-, r-, and i-band images of single-pass data as the input and the r-band co-added image as the target. Out of 151 SDSS fields that have been repeatedly scanned 34 times, 120 fields were used for training and 31 fields for validation. The size of a frame selected for training is 1k by 1k pixels. To avoid possible problems caused by the small number of training sets, frames are randomly selected within each field at every iteration of training. Every 5000 iterations of training, the performance was evaluated with RMSE, peak signal-to-noise ratio (given on a logarithmic scale), the structural similarity index (SSIM), and the difference in SSIM. We continued the training until a GAN model with the best performance was found. We apply the best GAN model to NGC0941, located in SDSS Stripe 82. By comparing the radial surface brightness and photometry errors of the images, we found that this technique could potentially generate, from a single-pass image, a deep image with statistics close to those of the stacked image.
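
The image-quality metrics used every 5000 iterations (RMSE, PSNR, SSIM) can be reproduced generically as below. This is a minimal sketch assuming 2D float arrays for the single-pass, generated, and co-added images; it is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_to_coadd(single_pass, generated, coadd):
    """Compare a generated 'deep' image and the original single-pass image
    against the co-added target using the metrics named in the abstract."""
    data_range = coadd.max() - coadd.min()
    results = {}
    for name, img in [("single_pass", single_pass), ("generated", generated)]:
        results[name] = {
            "rmse": float(np.sqrt(np.mean((img - coadd) ** 2))),
            "psnr": peak_signal_noise_ratio(coadd, img, data_range=data_range),
            "ssim": structural_similarity(coadd, img, data_range=data_range),
        }
    return results
```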

Efforts against Cybersecurity Attack of Space Systems

  • Jin-Keun Hong
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.4
    • /
    • pp.437-445
    • /
    • 2023
  • A space system refers to a network of sensors, ground systems, and spacecraft operating in space. The security of space systems relies on the information systems and networks that support the design, launch, and operation of space missions. Characteristics of space operations, including command and control (C2) between spacecraft (including satellites) and ground communication, also depend on wireless frequencies and communication channels. Attackers can potentially engage in malicious activities such as the destruction, disruption, and degradation of systems, networks, communication channels, and space operations. These malicious cyber activities include sensor spoofing, system damage, denial-of-service attacks, jamming of unauthorized commands, and injection of malicious code. Such activities ultimately reduce the lifespan and functionality of space systems and may result in damage to spacecraft and loss of control. The Cybersecurity Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) matrix proposed by MITRE consists of the following stages: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Command & Control, Exfiltration, and Impact. This paper identifies cybersecurity activities in space systems and satellite navigation systems through the National Institute of Standards and Technology (NIST)'s standard documents and former U.S. President Trump's executive orders, and presents risk management activities. It also explores cybersecurity attack tactics and techniques in the context of space systems (spacecraft) by referencing the SPARTA ATT&CK matrix. Security threats in space systems are analyzed, focusing on the cybersecurity attack tactics, techniques, and countermeasures for spacecraft presented by Space Attack Research and Tactic Analysis (SPARTA). Through this study, the cybersecurity attack tactics, techniques, and countermeasures that exist for spacecraft are identified, and an understanding of how to apply them in the design and implementation of secure small satellites is provided.

Data Augmentation Techniques for Deep Learning-Based Medical Image Analyses (딥러닝 기반 의료영상 분석을 위한 데이터 증강 기법)

  • Mingyu Kim;Hyun-Jin Bae
    • Journal of the Korean Society of Radiology
    • /
    • v.81 no.6
    • /
    • pp.1290-1304
    • /
    • 2020
  • Medical image analyses have been widely used to differentiate normal and abnormal cases, detect lesions, segment organs, etc. Recently, owing to many breakthroughs in artificial intelligence techniques, medical image analyses based on deep learning have been actively studied. However, sufficient medical data are difficult to obtain, and data imbalance between classes hinders the improvement of deep learning performance. To resolve these issues, various studies have been performed, and data augmentation has been found to be a solution. In this review, we introduce data augmentation techniques, including image-processing methods such as rotation, shift, and intensity variation, generative adversarial network-based methods, and image property mixing methods. Subsequently, we examine various deep learning studies based on data augmentation techniques. Finally, we discuss the necessity and future directions of data augmentation.
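
As a rough illustration of the image-processing augmentations listed in this review (rotation, shift, and intensity variation), the sketch below builds an on-the-fly augmentation pipeline with torchvision; the specific transform values are illustrative assumptions, not settings from the review.

```python
import torchvision.transforms as T

# Typical image-processing augmentations of the kind listed in the review:
# rotation, shift (translation), and intensity (brightness/contrast) variation.
augment = T.Compose([
    T.RandomRotation(degrees=10),                       # small random rotation
    T.RandomAffine(degrees=0, translate=(0.05, 0.05)),  # random shift
    T.ColorJitter(brightness=0.2, contrast=0.2),        # intensity variation
    T.ToTensor(),
])

# augmented = augment(pil_image)  # applied on-the-fly during training
```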

Analysis of methods for the model extraction without training data (학습 데이터가 없는 모델 탈취 방법에 대한 분석)

  • Hyun Kwon;Yonggi Kim;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.57-64
    • /
    • 2023
  • In this study, we analyzed how to steal a target model without training data. Input data are generated using a generative model, and a similar model is created by defining a loss function so that the predicted values of the target model and the similar model are close to each other. In this process, the similar model is trained by gradient descent, using the logit values of each class that the target model produces for the input data, so that it becomes similar to the target model. The TensorFlow machine learning library was used as the experimental environment, and CIFAR10 and SVHN were used as datasets. A similar model was created using a ResNet model as the target model. The experiments showed that the model stealing method generated a similar model with an accuracy of 86.18% for CIFAR10 and 96.02% for SVHN, producing predicted values similar to those of the target model. In addition, considerations on the model stealing method, its military use, and its limitations were analyzed.
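
The extraction loop described above can be sketched roughly as follows. The paper used TensorFlow; this is a simplified, generic PyTorch sketch under assumed names (generator, student, target) and omits the generator's own update and other details of the analyzed methods.

```python
import torch
import torch.nn.functional as F

def extraction_step(generator, student, target, opt_student, z_dim, batch_size):
    """One hypothetical step of data-free model extraction: the generator
    synthesizes inputs, and the student ("similar model") is trained so that
    its logits match the black-box target's logits on those inputs."""
    z = torch.randn(batch_size, z_dim)
    x = generator(z)
    with torch.no_grad():
        target_logits = target(x)                     # queries to the target model
    student_logits = student(x.detach())
    loss = F.l1_loss(student_logits, target_logits)   # bring student close to target
    opt_student.zero_grad()
    loss.backward()
    opt_student.step()
    # (In data-free extraction the generator is usually also updated, e.g. to
    #  maximize student-target disagreement; that step is omitted here.)
    return loss.item()
```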

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.4
    • /
    • pp.61-70
    • /
    • 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground-truth values. This method has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction or extract specific features before performing clustering. However, with the advancement of deep learning models, research has emerged on deep clustering techniques that represent input data as latent vectors using models such as autoencoders and generative adversarial networks. In this study, we propose a deep clustering technique based on deep learning. In this approach, we use an autoencoder to transform the input data into latent vectors, then construct a vector space according to the cluster structure and perform k-means clustering. We conducted experiments using the MNIST and Fashion-MNIST datasets, with the PyTorch machine learning library as the experimental environment. The model used is a convolutional neural network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
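
The clustering stage described above (encode with a trained autoencoder, then run k-means on the latent vectors) can be sketched as below. This is a minimal illustration assuming a trained `encoder` module and a standard data loader; it is not the paper's implementation.

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def cluster_latents(encoder, data_loader, n_clusters=10):
    """Encode all images into latent vectors with a trained (convolutional)
    autoencoder's encoder, then run k-means on the latent space."""
    feats = []
    for images, _ in data_loader:        # labels are ignored (unsupervised)
        z = encoder(images)              # latent vectors, shape (B, latent_dim)
        feats.append(z.reshape(z.size(0), -1).cpu())
    feats = torch.cat(feats).numpy()
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```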