• Title/Summary/Keyword: Variational model generation (Variational 모델 생성)


PCA-based Variational Model Composition Method for Robust Speech Recognition with Time-Varying Background Noise (시변 잡음에 강인한 음성 인식을 위한 PCA 기반의 Variational 모델 생성 기법)

  • Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2793-2799
    • /
    • 2013
  • This paper proposes an effective feature compensation method to improve speech recognition performance under time-varying background noise. The proposed method employs principal component analysis to improve the variational model composition method and is used to generate multiple environmental models for the PCGMM-based feature compensation scheme. Experimental results show that the proposed scheme is more effective at improving speech recognition accuracy across various SNR conditions of background music than conventional front-end methods, with an average relative improvement of 12.14% in WER over the previous variational model composition method.
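
As a rough illustration of the composition idea (not the paper's exact PCGMM pipeline), the sketch below runs PCA over a set of noise mean vectors and composes additional environmental models by shifting the means along the leading components; the function name, perturbation scales, and the 13-dimensional MFCC assumption are all illustrative.

```python
# Hedged sketch: composing variant noise models by perturbing along principal
# components, in the spirit of PCA-based variational model composition.
# The +/- 1-sigma perturbation scheme is an assumption, not the paper's procedure.
import numpy as np
from sklearn.decomposition import PCA

def compose_variant_noise_means(noise_means, n_components=2, scales=(-1.0, 1.0)):
    """noise_means: (N, D) array of noise mean vectors estimated from different
    background-noise segments. Returns the original means plus variants shifted
    along the top principal components."""
    pca = PCA(n_components=n_components)
    pca.fit(noise_means)
    variants = [noise_means]
    for k in range(n_components):
        direction = pca.components_[k]                  # unit-length component
        sigma = np.sqrt(pca.explained_variance_[k])
        for s in scales:
            variants.append(noise_means + s * sigma * direction)
    return np.vstack(variants)   # multiple environmental models for compensation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = rng.normal(size=(10, 13))                   # e.g. 13-dim MFCC noise means
    print(compose_variant_noise_means(means).shape)     # (50, 13)
```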

Application of Improved Variational Recurrent Auto-Encoder for Korean Sentence Generation (한국어 문장 생성을 위한 Variational Recurrent Auto-Encoder 개선 및 활용)

  • Hahn, Sangchul;Hong, Seokjin;Choi, Heeyoul
    • Journal of KIISE
    • /
    • v.45 no.2
    • /
    • pp.157-164
    • /
    • 2018
  • Due to revolutionary advances in deep learning, the performance of pattern recognition has increased significantly in many applications such as speech recognition and image recognition, and some systems outperform human-level intelligence in specific domains. Unlike pattern recognition, in this paper we focus on generating Korean sentences from a few given Korean sentences. We apply a variational recurrent auto-encoder (VRAE) and modify the model to account for characteristics of Korean sentences. To reduce the number of words the model must handle, we apply a word spacing model. Also, many Korean sentences have the same meaning but different word order, even without subjects or objects; therefore we change the unidirectional encoder of the VRAE into a bidirectional encoder. In addition, we apply an interpolation method to the encoded vectors of the given sentences, so that we can generate new sentences similar to them. In experiments, we confirm that the proposed method generates better sentences that are semantically more similar to the given sentences.
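
A minimal PyTorch sketch of the described setup, assuming a toy vocabulary and hidden sizes: a bidirectional GRU encoder produces the latent distribution, a GRU decoder reconstructs the sentence, and two encoded sentences are interpolated in latent space. It illustrates the idea, not the authors' implementation.

```python
# Hedged sketch of a VRAE with a bidirectional GRU encoder and latent interpolation.
import torch
import torch.nn as nn

class BiVRAE(nn.Module):
    def __init__(self, vocab_size=8000, emb=128, hid=256, z_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * hid, z_dim)
        self.to_logvar = nn.Linear(2 * hid, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def encode(self, tokens):
        _, h = self.encoder(self.emb(tokens))           # h: (2, B, hid)
        h = torch.cat([h[0], h[1]], dim=-1)             # concatenate both directions
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, tokens):
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)    # (1, B, hid)
        out, _ = self.decoder(self.emb(tokens), h0)
        return self.out(out)                            # logits over vocabulary

    def forward(self, tokens):
        mu, logvar = self.encode(tokens)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        return self.decode(z, tokens), mu, logvar

# Generating a sentence "between" two given sentences by interpolating their
# encoded latent vectors (the token-by-token decoding loop is omitted).
model = BiVRAE()
a = torch.randint(0, 8000, (1, 12))
b = torch.randint(0, 8000, (1, 12))
mu_a, _ = model.encode(a)
mu_b, _ = model.encode(b)
z_mix = 0.5 * mu_a + 0.5 * mu_b                         # interpolated latent code
```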

Variational Auto Encoder Distributed Restrictions for Image Generation (이미지 생성을 위한 변동 자동 인코더 분산 제약)

  • Yong-Gil Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.91-97
    • /
    • 2023
  • Recent research shows that latent directions can be used to manipulate images toward certain attributes. However, controlling the generation process of a generative model is very difficult. Although latent directions are used to edit images for certain attributes, many restrictions are required so that the attributes encoded in the latent vectors are enhanced according to given text prompts while other attributes remain largely unaffected. This study presents a generative model that imposes such restrictions on the latent vectors for image generation and manipulation. The suggested method requires only a few minutes per manipulation, and simulation results with a TensorFlow variational auto-encoder demonstrate the effectiveness of the suggested approach through extensive experiments.
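
The abstract does not spell out the exact restriction, so the snippet below is only a hedged illustration of the general idea: a standard VAE objective with an extra penalty constraining the latent (variational) distribution. The variance-target term and its weight are assumptions (the paper itself uses a TensorFlow VAE).

```python
# Hedged sketch: VAE loss plus an illustrative restriction on the latent distribution.
import torch
import torch.nn.functional as F

def vae_loss_with_restriction(recon, target, mu, logvar, var_target=1.0, weight=0.1):
    recon_loss = F.mse_loss(recon, target, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # standard KL term
    # Extra restriction (assumed form): keep per-dimension latent variance near var_target.
    restriction = torch.mean((logvar.exp() - var_target) ** 2)
    return recon_loss + kl + weight * restriction
```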

Conditional Variational Autoencoder-based Generative Model for Gene Expression Data Augmentation (유전자 발현량 데이터 증대를 위한 Conditional VAE 기반 생성 모델)

  • Hyunsu Bong;Minsik Oh
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.275-284
    • /
    • 2023
  • Gene expression data can be utilized in various studies, including the prediction of disease prognosis. However, collecting enough data is challenging due to cost constraints. In this paper, we propose a gene expression data generation model based on a Conditional Variational Autoencoder. Our results demonstrate that the proposed model generates synthetic data of superior quality compared to other state-of-the-art generation models, namely a model based on the Wasserstein Generative Adversarial Network with Gradient Penalty and the structured data generation models CTGAN and TVAE.
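
A minimal PyTorch sketch of a Conditional VAE for tabular gene-expression vectors, where the condition (e.g., a one-hot class label) is concatenated to both the encoder input and the latent code; the gene count, label size, and layer widths are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a Conditional VAE for gene-expression data augmentation.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, n_genes=2000, n_cond=4, z_dim=32, hid=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes + n_cond, hid), nn.ReLU())
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + n_cond, hid), nn.ReLU(), nn.Linear(hid, n_genes))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))          # condition joins the input
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample(self, c):
        # Generate synthetic expression profiles for a chosen condition.
        z = torch.randn(c.size(0), self.to_mu.out_features)
        return self.dec(torch.cat([z, c], dim=-1))
```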

A study on the application of residual vector quantization for vector quantized-variational autoencoder-based foley sound generation model (벡터 양자화 변분 오토인코더 기반의 폴리 음향 생성 모델을 위한 잔여 벡터 양자화 적용 연구)

  • Seokjin Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.243-252
    • /
    • 2024
  • Among the Foley sound generation models that have recently begun to be studied, sound generation techniques that combine the Vector Quantized-Variational AutoEncoder (VQ-VAE) structure with a generation model such as Pixelsnail are an important research subject. Meanwhile, in the field of deep learning-based acoustic signal compression, residual vector quantization is reported to be better suited than the conventional VQ-VAE structure. Therefore, this paper studies whether residual vector quantization can be effectively applied to Foley sound generation. To this end, the residual vector quantization technique is applied to the conventional VQ-VAE-based Foley sound generation model; in particular, we derive a model that remains compatible with existing models such as Pixelsnail and does not increase computational resource consumption. To evaluate the model, an experiment was conducted using the DCASE2023 Task 7 data. The results show that the proposed model improves the Fréchet audio distance by about 0.3. Unfortunately, the performance improvement was limited, which is believed to be due to the reduced time-frequency resolution adopted to avoid increasing computational resource consumption.
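
For readers unfamiliar with residual vector quantization, the short sketch below shows the core idea the paper builds on: each stage quantizes the residual left by the previous stage, so several small codebooks stand in for one large codebook. Codebook sizes and dimensions are arbitrary; this is not the paper's VQ-VAE model.

```python
# Hedged sketch of residual vector quantization (RVQ) with fixed codebooks.
import numpy as np

def rvq_encode(x, codebooks):
    """x: (D,) vector; codebooks: list of (K, D) arrays.
    Returns the chosen indices and the cumulative quantized approximation."""
    quantized = np.zeros_like(x)
    residual = x.copy()
    indices = []
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(idx)
        quantized += cb[idx]
        residual = x - quantized           # next stage quantizes what is left
    return indices, quantized

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]   # 3 stages, 16 codes each
x = rng.normal(size=8)
idx, xq = rvq_encode(x, codebooks)
print(idx, np.linalg.norm(x - xq))         # error shrinks as stages are added
```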

De Novo Drug Design Using Self-Attention Based Variational Autoencoder (Self-Attention 기반의 변분 오토인코더를 활용한 신약 디자인)

  • Piao, Shengmin;Choi, Jonghwan;Seo, Sangmin;Kim, Kyeonghun;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.11-18
    • /
    • 2022
  • De novo drug design is the process of developing new drugs that can interact with biological targets such as protein receptors. The traditional process of de novo drug design consists of drug candidate discovery and drug development, but developing a new drug requires more than 10 years. Deep learning-based methods are being studied to shorten this period and efficiently find chemical compounds for new drug candidates. Many existing deep learning-based drug design models use recurrent neural networks to generate chemical entities represented as SMILES strings, but due to the disadvantages of recurrent networks, such as slow training and poor handling of complex molecular grammar, there is room for improvement. To overcome these shortcomings, we propose a deep learning model for SMILES string generation using a variational autoencoder with a self-attention mechanism. Our proposed model reduced training time to 1/26 of that of the latest drug design model and also generated valid SMILES more effectively.
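
A hedged PyTorch sketch of a SMILES VAE whose encoder uses self-attention (a Transformer encoder) in place of an RNN; the vocabulary size, model dimensions, mean-pooled latent, and the simple GRU decoder kept here for brevity are all assumptions rather than the authors' exact architecture.

```python
# Hedged sketch: self-attention encoder + latent bottleneck for SMILES generation.
import torch
import torch.nn as nn

class SmilesAttnVAE(nn.Module):
    def __init__(self, vocab=64, d_model=128, z_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # self-attention encoder
        self.to_mu = nn.Linear(d_model, z_dim)
        self.to_logvar = nn.Linear(d_model, z_dim)
        self.dec_in = nn.Linear(z_dim, d_model)
        self.dec_rnn = nn.GRU(d_model, d_model, batch_first=True)  # simple decoder for brevity
        self.out = nn.Linear(d_model, vocab)

    def forward(self, tokens):
        h = self.encoder(self.emb(tokens)).mean(dim=1)    # mean-pool over token positions
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.dec_in(z)).unsqueeze(0)
        dec, _ = self.dec_rnn(self.emb(tokens), h0)       # teacher forcing
        return self.out(dec), mu, logvar

logits, mu, logvar = SmilesAttnVAE()(torch.randint(0, 64, (2, 30)))
```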

A Study on Classification of Variant Malware Family Based on ResNet-Variational AutoEncoder (ResNet-Variational AutoEncoder기반 변종 악성코드 패밀리 분류 연구)

  • Lee, Young-jeon;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.1-9
    • /
    • 2021
  • Traditionally, most malicious code has been analyzed using feature information extracted by domain experts. However, this feature-based analysis method depends on the analyst's capabilities and has limitations in detecting variant malicious codes that modify existing malicious codes. In this study, we propose a ResNet-Variational AutoEncoder-based variant malware classification method that can classify a family of variant malware without domain expert intervention. A Variational AutoEncoder network generates new data within a normal distribution and learns the characteristics of the data well while training on the data provided as input. In this study, important features of malicious code were extracted as the latent variables learned by the Variational AutoEncoder. In addition, transfer learning was performed to better learn the characteristics of the training data and increase learning efficiency: the parameters of a ResNet-152 model pre-trained on the ImageNet dataset were transferred to the parameters of the encoder network. The ResNet-Variational AutoEncoder with transfer learning showed higher performance than the plain Variational AutoEncoder and improved learning efficiency. Meanwhile, an ensemble model, a Stacking Classifier, was used to classify the variant malicious codes. Training the Stacking Classifier on the feature data of variant malware extracted by the encoder network of the ResNet-VAE model yielded an accuracy of 98.66% and an F1-Score of 98.68.
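
A schematic Python sketch of the pipeline described above, assuming torchvision and scikit-learn: pretrained ResNet-152 weights are transferred into a VAE encoder, and latent features extracted by that encoder would then be fed to a Stacking Classifier. Layer sizes, the choice of base estimators, and the malware-image format are illustrative assumptions.

```python
# Hedged sketch: ResNet-152 transfer into a VAE encoder + stacking classifier.
import torch
import torch.nn as nn
import torchvision
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

class ResNetVAEEncoder(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet152(weights="DEFAULT")      # ImageNet transfer
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc head
        self.to_mu = nn.Linear(2048, z_dim)
        self.to_logvar = nn.Linear(2048, z_dim)

    def forward(self, x):                       # x: (B, 3, H, W) malware images
        h = self.features(x).flatten(1)
        return self.to_mu(h), self.to_logvar(h)

# After VAE training, latent means serve as features for an ensemble classifier.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
# stack.fit(latent_features, family_labels)     # fit on the extracted latent features
```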

Comparative Analysis of Image Generation Models for Waste Recognition Improvement (폐기물 분류 개선을 위한 이미지 생성 모델 비교 분석)

  • Jun Hyeok Go;Jeong Hyeon Park;Siung Kim;Nammee Moon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.639-641
    • /
    • 2023
  • In image-based waste processing systems, data imbalance caused by the differing collection difficulty of each item makes it hard to train a classification model. This paper therefore explores a suitable image generation model by comparing the performance of waste classification models. To resolve the data imbalance, images are generated using a VAE (Variational Auto-Encoder), a GAN (Generative Adversarial Network), and a Diffusion Model. The images generated by each method are then merged with the training data and object classification is performed. In accuracy, the VAE reached 84.41%, a 3.3% improvement, and in F1-score the Diffusion Model reached 91.94%, a 6.14% improvement. This confirms that the data imbalance arising during data collection can be resolved to build a system suited to real usage environments.
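
As a rough sketch of the augmentation step being compared (not the authors' code), the function below fills under-represented waste classes with images from a previously trained generator, whichever model (VAE, GAN, or diffusion) is being evaluated; generate_images is a hypothetical stand-in for that generator.

```python
# Hedged sketch: balancing an imbalanced image dataset with generated samples.
import torch
from collections import Counter
from torch.utils.data import TensorDataset, ConcatDataset

def balance_with_generator(images, labels, generate_images, target_per_class=None):
    counts = Counter(labels.tolist())
    target = target_per_class or max(counts.values())
    synth_x, synth_y = [], []
    for cls, n in counts.items():
        if n < target:
            x = generate_images(cls, target - n)   # hypothetical generator: (target-n, C, H, W)
            synth_x.append(x)
            synth_y.append(torch.full((target - n,), cls, dtype=labels.dtype))
    real = TensorDataset(images, labels)
    if not synth_x:
        return real
    synth = TensorDataset(torch.cat(synth_x), torch.cat(synth_y))
    return ConcatDataset([real, synth])            # merged set used to train the classifier
```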

Self-Attention-based SMILES Generation for De Novo Drug Design (신약 디자인을 위한 Self-Attention 기반의 SMILES 생성자)

  • Piao, Shengmin;Choi, Jonghwan;Kim, Kyeonghun;Park, Sanghyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.343-346
    • /
    • 2021
  • Drug design is the process of developing new drugs that can act on biological targets such as proteins. The traditional process consists of discovery and development stages, but because developing a single new drug takes more than 10 years, AI-based drug design methods are being developed to shorten this period. However, many deep learning-based drug design models use RNNs, and since RNNs have the disadvantage of slow training, there is room for improvement. To overcome this drawback, this study proposes a SMILES generation model using self-attention and a variational autoencoder. The proposed model reduces training time to 1/36 of that of the latest drug design model and also generates more valid SMILES.
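
The abstract reports the share of valid SMILES; a common way to compute such a validity rate is to check whether each generated string parses into a molecule, for example with RDKit, as in the hedged sketch below (the abstract does not specify the tool, so RDKit here is an assumption).

```python
# Hedged sketch of a SMILES validity metric using RDKit (assumed, not specified by the paper).
from rdkit import Chem

def validity_rate(smiles_list):
    valid = sum(1 for s in smiles_list if Chem.MolFromSmiles(s) is not None)
    return valid / max(len(smiles_list), 1)

print(validity_rate(["CCO", "c1ccccc1", "not-a-smiles"]))   # -> 0.666...
```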

Motion Style Transfer using Variational Autoencoder (변형 자동 인코더를 활용한 모션 스타일 이전)

  • Ahn, Jewon;Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.5
    • /
    • pp.33-43
    • /
    • 2021
  • In this paper, we propose a framework that transfers the information of style motions to content motions based on a variational autoencoder network combined with a style encoding in the latent space. Because we transfer a style to a content motion sampled from the variational autoencoder, we can increase the diversity of existing motion data. In addition, we mitigate the unnatural motions caused by decoding a new latent variable after style transfer; this improvement was achieved by additionally using motion velocity information when generating the next frames.
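
A minimal PyTorch sketch of the overall idea, under assumed pose and style dimensions: a content pose is encoded into a latent distribution, the sampled latent is combined with a style code, and decoding is additionally conditioned on velocity information; this is an illustration, not the authors' network.

```python
# Hedged sketch: VAE-based motion style transfer with velocity conditioning.
import torch
import torch.nn as nn

class MotionStyleVAE(nn.Module):
    def __init__(self, pose_dim=63, style_dim=16, z_dim=32, hid=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, hid), nn.ReLU())
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        # Decoder sees the latent, a style code, and the previous frame's velocity.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + style_dim + pose_dim, hid), nn.ReLU(),
            nn.Linear(hid, pose_dim))

    def forward(self, content_pose, style_code, prev_velocity):
        h = self.enc(content_pose)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # sample the latent
        next_pose = self.dec(torch.cat([z, style_code, prev_velocity], dim=-1))
        return next_pose, mu, logvar
```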