• Title/Summary/Keyword: generative models


Generating Synthetic Raman Spectra of DMMP and 2-CEES by Mathematical Transforms and Deep Generative Models (수학적 변환과 심층 생성 모델을 활용한 DMMP와 2-CEES의 모의 라만 분광 생성)

  • Sungwon Park;Boseong Jeong;Hongjoong Kim
    • Journal of the Korea Institute of Military Science and Technology / v.26 no.5 / pp.422-430 / 2023
  • To build an automated system that detects toxic chemicals from Raman spectra, we have to obtain sufficient data on toxic chemicals. However, gathering Raman spectra of toxic chemicals in diverse situations is usually costly. To tackle this problem, we develop methods to generate synthetic Raman spectra of DMMP and 2-CEES without actual experiments. First, we propose mathematical transforms to augment the few original Raman spectra. Then, we train deep generative models to generate more realistic and diverse data. Analyzing the synthetic Raman spectra generated by our methods through visualization, we qualitatively verify that the data are sufficiently similar to the original data and diverse. In conclusion, we obtain a synthetic dataset of DMMP and 2-CEES with the proposed algorithm.
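The augmentation step described above can be illustrated with simple spectral transforms. This is a minimal sketch using generic shift, intensity-scaling, and additive-noise perturbations of a 1-D spectrum as stand-ins; the paper's exact transforms are not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_spectrum(spectrum, max_shift=2, scale_sd=0.05, noise_sd=0.01):
    """Perturb a 1-D spectrum: small wavenumber shift, global intensity
    scaling, and additive Gaussian noise (illustrative stand-ins only)."""
    s = np.roll(spectrum, rng.integers(-max_shift, max_shift + 1))
    s = s * rng.normal(1.0, scale_sd)   # multiplicative intensity jitter
    s = s + rng.normal(0.0, noise_sd, size=s.shape)
    return s

# Toy "Raman spectrum": a single Gaussian peak on a 200-point axis.
axis = np.linspace(0, 100, 200)
base = np.exp(-0.5 * ((axis - 50) / 3) ** 2)
aug = augment_spectrum(base)
```

Repeating the call with different random draws yields many variants of each original spectrum, which can then seed training of a deep generative model.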

Entity Embeddings for Enhancing Feasible and Diverse Population Synthesis in a Deep Generative Models (심층 생성모델 기반 합성인구 생성 성능 향상을 위한 개체 임베딩 분석연구)

  • Donghyun Kwon;Taeho Oh;Seungmo Yoo;Heechan Kang
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.6 / pp.17-31 / 2023
  • An activity-based model requires detailed population information to model individual travel behavior in a disaggregated manner. A recent innovative approach developed deep generative models with novel regularization terms that improve fidelity and diversity in population synthesis. Since the method relies on measuring the distance between the distribution boundaries of the sample data and the generated samples, it is crucial to obtain a well-defined continuous representation from the discretized dataset. Therefore, we propose improved entity embedding models to enhance the performance of the regularization terms, which indirectly supports the synthesis of feasible and diverse populations. Our results show a 28.87% improvement in the F1 score compared to the baseline method.
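Entity embeddings map each discrete attribute code to a learned dense vector, providing the continuous representation the regularization terms need. A minimal sketch with hypothetical population attributes (in practice the lookup tables are trained jointly with the model, not left random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical categorical attributes of a synthetic-population record
# (names and cardinalities are illustrative, not from the paper).
categories = {"household_size": 5, "income_bracket": 8, "occupation": 12}
embed_dim = 4

# One lookup table per attribute: rows are categories, columns are dimensions.
tables = {name: rng.normal(size=(n, embed_dim)) for name, n in categories.items()}

def embed_record(record):
    """Map a record of discrete codes to one continuous vector by
    concatenating each attribute's embedding lookup."""
    return np.concatenate([tables[k][v] for k, v in record.items()])

vec = embed_record({"household_size": 2, "income_bracket": 5, "occupation": 7})
```

Distances between such vectors are well defined, unlike distances between raw one-hot codes, which is what makes boundary-based regularization workable.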

Understanding of Generative Artificial Intelligence Based on Textual Data and Discussion for Its Application in Science Education (텍스트 기반 생성형 인공지능의 이해와 과학교육에서의 활용에 대한 논의)

  • Hunkoog Jho
    • Journal of The Korean Association For Science Education / v.43 no.3 / pp.307-319 / 2023
  • This study aims to explain the key concepts and principles of text-based generative artificial intelligence (AI), which has been receiving increasing interest and use, focusing on its application in science education. It also highlights the potential and limitations of generative AI in science education, providing insights for its implementation and related research. Recent advancements in generative AI, predominantly based on transformer models consisting of encoders and decoders, have shown remarkable progress through reinforcement learning and reward models optimized with human feedback, as well as improved understanding of context. In particular, generative AI can perform various functions such as writing, summarizing, keyword extraction, evaluation, and feedback, based on its ability to understand diverse user questions and intents. It also offers practical utility in diagnosing learners and structuring educational content from examples provided by educators. However, the limitations of generative AI must be examined, including the potential to convey inaccurate facts or knowledge, bias resulting from overconfidence, and uncertainty about its impact on user attitudes or emotions. Moreover, the responses provided by generative AI are probabilistic, derived from response data from many individuals, which raises concerns about limiting the insightful and innovative thinking that different perspectives or ideas may offer. In light of these considerations, this study provides practical suggestions for the positive utilization of AI in science education.

Radar-based rainfall prediction using generative adversarial network (적대적 생성 신경망을 이용한 레이더 기반 초단시간 강우예측)

  • Yoon, Seongsim;Shin, Hongjoon;Heo, Jae-Yeong
    • Journal of Korea Water Resources Association / v.56 no.8 / pp.471-484 / 2023
  • Deep learning models based on generative adversarial networks are specialized in generating new information from learned information. DGMR (deep generative models of radar), developed by Google DeepMind, is a generative adversarial network model that generates predictive radar images by learning complex patterns and relationships in large-scale radar image data. In this study, the DGMR model was trained using radar rainfall observation data from the Ministry of Environment, rainfall prediction was performed for a heavy rainfall case in August 2021, and the accuracy was compared with existing prediction techniques. DGMR generally resembled the observed rainfall distribution in the first 60 minutes, but tended to predict continuous development of rainfall in cases where strong rainfall occurred over the entire area. Statistical evaluation also showed that DGMR is an effective rainfall prediction method compared to other methods, with a critical success index of 0.57 to 0.79 and a mean absolute error of 0.57 to 1.36 mm for 1-hour-ahead prediction. However, the lack of diversity in the generated results sometimes reduces prediction accuracy, so the diversity needs to be improved, and the results should be supplemented with rainfall predicted by a physics-based numerical forecast model to improve forecasts more than 2 hours ahead.
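The two metrics reported above are standard in rainfall verification. A short sketch of both; the threshold and sample values here are illustrative, not from the study:

```python
import numpy as np

def csi(obs, pred, threshold=1.0):
    """Critical Success Index: hits / (hits + misses + false alarms),
    after thresholding both fields into rain / no-rain events."""
    o = obs >= threshold
    p = pred >= threshold
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    return hits / (hits + misses + false_alarms)

def mae(obs, pred):
    """Mean absolute error in the same units as the fields (mm)."""
    return np.mean(np.abs(obs - pred))

obs = np.array([0.0, 2.0, 5.0, 0.5])    # toy observed rainfall (mm)
pred = np.array([0.0, 1.5, 4.0, 1.2])   # toy predicted rainfall (mm)
```

CSI runs from 0 (no skill) to 1 (perfect), so the reported 0.57 to 0.79 at one hour indicates useful skill that degrades with lead time.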

Enhancing Gene Expression Classification of Support Vector Machines with Generative Adversarial Networks

  • Huynh, Phuoc-Hai;Nguyen, Van Hoa;Do, Thanh-Nghi
    • Journal of information and communication convergence engineering / v.17 no.1 / pp.14-20 / 2019
  • Currently, microarray gene expression data enable the accurate classification of cancers, which addresses problems relating to cancer causes and treatment regimens. However, the sample size of gene expression datasets is often restricted, because applying microarray technology to studies in humans is expensive. We propose enhancing the gene expression classification of support vector machines with generative adversarial networks (GAN-SVMs). A GAN that generates new data from the original training datasets was implemented. The GAN was used in conjunction with nonlinear SVMs that efficiently classify gene expression data. Numerical test results on 20 low-sample-size, very high-dimensional microarray gene expression datasets from the Kent Ridge Biomedical and Array Expression repositories indicate that the model is more accurate than state-of-the-art classification models.

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng;Jin, Rize;Lu, Liangfu;Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2115-2127 / 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing are still lacking. In this work, we propose a novel cross-channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, to reduce the impact on irrelevant attributes while editing target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.

Single Image Dehazing: An Analysis on Generative Adversarial Network

  • Amina Khatun;Mohammad Reduanul Haque;Rabeya Basri;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.136-142 / 2024
  • Haze is a very common phenomenon that degrades visibility. It causes problems wherever high-quality images are required, such as traffic and security monitoring, so haze removal receives great attention for clear vision. Due to its huge impact, significant advances have been achieved, but the task remains challenging. Recently, different types of deep generative adversarial networks (GANs) have been applied to suppress noise and improve dehazing performance. However, it is unclear how these algorithms would perform on hazy images acquired "in the wild" and how we could gauge progress in the field. This paper aims to bridge this gap. We present a comprehensive study and experimental evaluation of diverse GAN models for single image dehazing on benchmark datasets.

Assessment and Analysis of Fidelity and Diversity for GAN-based Medical Image Generative Model (GAN 기반 의료영상 생성 모델에 대한 품질 및 다양성 평가 및 분석)

  • Jang, Yoojin;Yoo, Jaejun;Hong, Helen
    • Journal of the Korea Computer Graphics Society / v.28 no.2 / pp.11-19 / 2022
  • Recently, various studies on medical image generation have been proposed, and it has become crucial to accurately evaluate the quality and diversity of the generated medical images. For this purpose, expert visual Turing tests, feature distribution visualization, and quantitative evaluation through IS and FID are used. However, there are few methods for quantitatively evaluating medical images in terms of fidelity and diversity. In this paper, images are generated by training DCGAN and PGGAN generative models on a chest CT dataset of non-small cell lung cancer patients, and the performance of the two generative models is evaluated in terms of fidelity and diversity. The performance is quantitatively evaluated through IS and FID, which are one-dimensional score-based evaluation methods, and through Precision and Recall and Improved Precision and Recall, which are two-dimensional score-based evaluation methods; the characteristics and limitations of each evaluation method in medical imaging are also analyzed.
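The FID mentioned above is the Fréchet distance between two Gaussians fitted to feature activations of real and generated images. A minimal sketch of the core formula, with the Inception feature extraction omitted (the closed-form test values below are illustrative only):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
    For positive-semidefinite S1, S2 the eigenvalues of S1 @ S2 are real
    and non-negative, so Tr((S1 S2)^{1/2}) equals the sum of their
    square roots."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * tr_sqrt)
```

A lower FID means the generated feature distribution sits closer to the real one; in a full pipeline `mu` and `sigma` are the mean and covariance of Inception features over each image set.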

Research on AI Painting Generation Technology Based on the [Stable Diffusion]

  • Chenghao Wang;Jeanhun Chung
    • International journal of advanced smart convergence / v.12 no.2 / pp.90-95 / 2023
  • With the rapid development of deep learning and artificial intelligence, generative models have achieved remarkable success in the field of image generation. By combining the stable diffusion method with Web UI technology, a novel solution is provided for AI painting generation. The application prospects of this technology are very broad, spanning fields such as digital art, concept design, and game development. Furthermore, the Web UI-based platform simplifies user operations, making the technology easier to apply in practical scenarios. This paper introduces the basic principles of Stable Diffusion Web UI technology. This technique exploits the stability of diffusion processes to improve the output quality of generative models: noise is gradually added to images during training, and the model learns to reverse this process, allowing it to generate smooth and coherent images. Additionally, the analysis of different model types and applications within Stable Diffusion Web UI provides creators with a more comprehensive understanding, offering valuable insights for fields such as artistic creation and design.
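The gradual-noising principle underlying diffusion models can be sketched as a standard forward-diffusion step; the schedule values below are illustrative, not those used by Stable Diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear noise schedule over T steps.
T = 10
betas = np.linspace(1e-4, 0.2, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

# Closed-form forward step: noise a clean sample x0 directly to step t.
x0 = np.ones(4)                        # toy "image"
t = 5
noise = rng.normal(size=x0.shape)
xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise
# Generation runs this process in reverse, denoising step by step
# with a learned model.
```

As t grows, `alphas_bar[t]` shrinks, so the sample drifts from pure signal toward pure noise; the trained model inverts this trajectory to produce images.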

Morpho-GAN: Unsupervised Learning of Data with High Morphology using Generative Adversarial Networks (Morpho-GAN: Generative Adversarial Networks를 사용하여 높은 형태론 데이터에 대한 비지도학습)

  • Abduazimov, Azamat;Jo, GeunSik
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.11-14 / 2020
  • Data are highly important in the development of deep learning. Data with high morphological features are usually found in domains where careful lens calibration by a human is needed to capture them. Synthesizing high-morphology data for such domains can be a great asset for improving the classification accuracy of systems in the field, and unsupervised learning can be employed for this task. Generating photo-realistic objects of interest has been studied extensively since the Generative Adversarial Network (GAN) was introduced. In this paper, we propose Morpho-GAN, a method that unifies several GAN techniques to generate quality data of high morphology. Our method introduces a new training objective in the discriminator of the GAN to synthesize images that follow the distribution of the original dataset. The results demonstrate that the proposed method can generate plausible data as well as other modern baseline models while being less complex to train.
