• Title/Summary/Keyword: convolutional autoencoder

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal / v.23 no.4 / pp.61-70 / 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground-truth values. This method has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction methods or extract specific features before clustering. However, with the advancement of deep learning models, research has emerged on deep clustering techniques that use models such as autoencoders and generative adversarial networks to represent input data as latent vectors. In this study, we propose a deep clustering technique based on deep learning. In this approach, we use an autoencoder to transform the input data into latent vectors, construct a vector space according to the cluster structure, and perform k-means clustering. We conducted experiments on the MNIST and Fashion-MNIST datasets, using the PyTorch machine learning library as the experimental environment. The model used is a convolutional-neural-network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
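
A minimal sketch of the pipeline described above (convolutional autoencoder → latent vectors → k-means with k = 10), assuming PyTorch and scikit-learn's KMeans; layer sizes and training details are illustrative, not the authors' exact configuration.

```python
# Sketch: convolutional autoencoder + k-means on the latent vectors (MNIST-style 28x28 inputs).
# Architecture and hyperparameters are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def cluster_latents(model, images, k=10):
    """Encode images (N, 1, 28, 28) and run k-means on the latent vectors."""
    model.eval()
    with torch.no_grad():
        _, z = model(images)
    return KMeans(n_clusters=k, n_init=10).fit_predict(z.cpu().numpy())
```

The autoencoder is first trained with a reconstruction loss (e.g., MSE between input and decoder output); clustering accuracy is then typically evaluated by matching cluster assignments to the ground-truth labels.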

A Noise-Tolerant Hierarchical Image Classification System based on Autoencoder Models (오토인코더 기반의 잡음에 강인한 계층적 이미지 분류 시스템)

  • Lee, Jong-kwan
    • Journal of Internet Computing and Services / v.22 no.1 / pp.23-30 / 2021
  • This paper proposes a noise-tolerant image classification system using multiple autoencoders. The development of deep learning technology has dramatically improved the performance of image classifiers. However, if the images are contaminated by noise, the performance degrades rapidly. Noise added to an image is inevitably generated in the process of acquiring and transmitting it, so to use a classifier in a real environment the noise has to be dealt with. On the other hand, an autoencoder is an artificial neural network model that is trained to produce output values similar to its input values. If the input data is similar to the training data, the error between the autoencoder's input and output will be small; if not, the error will be large. The proposed system exploits this relationship between the input and output of the autoencoder and classifies images in two phases. In the first phase, the classes with the highest likelihood of being correct are selected, and these are subjected to the procedure again in the second phase. For the performance analysis of the proposed system, classification accuracy was tested on a Gaussian-noise-contaminated MNIST dataset. The experiment confirmed that the proposed system achieves higher accuracy in noisy environments than a CNN-based classification technique.
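
A hedged sketch of the core mechanism, assuming one trained autoencoder per class (each returning its reconstruction of the input) and classification by smallest reconstruction error; the paper's two-phase procedure is approximated here by re-scoring only the top-k candidate classes in a second stage.

```python
# Sketch: classify by reconstruction error across per-class autoencoders.
# The second phase is approximated by re-scoring the top-k candidates with a second
# set of autoencoders; the paper's exact two-phase procedure is not reproduced.
import torch

def reconstruction_errors(autoencoders, x):
    """autoencoders: dict {class_label: autoencoder returning a reconstruction of x}."""
    errors = {}
    with torch.no_grad():
        for label, ae in autoencoders.items():
            errors[label] = torch.mean((ae(x) - x) ** 2).item()
    return errors

def classify(autoencoders_phase1, autoencoders_phase2, x, k=3):
    # Phase 1: keep the k classes whose autoencoders reconstruct x best (smallest error).
    err1 = reconstruction_errors(autoencoders_phase1, x)
    candidates = sorted(err1, key=err1.get)[:k]
    # Phase 2: re-score only the candidate classes and pick the minimum.
    err2 = reconstruction_errors({c: autoencoders_phase2[c] for c in candidates}, x)
    return min(err2, key=err2.get)
```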

Chart-based Stock Price Prediction by Combining Variational Autoencoder and Attention Mechanisms (변이형 오토인코더와 어텐션 메커니즘을 결합한 차트기반 주가 예측)

  • Sanghyun Bae;Byounggu Choi
    • Information Systems Review / v.23 no.1 / pp.23-43 / 2021
  • Recently, many studies have been conducted to increase the accuracy of stock price prediction by analyzing candlestick charts using artificial intelligence techniques. However, these studies have failed to consider the time-series characteristics of candlestick charts and the emotional state of market participants when learning data for stock price prediction. To overcome these limitations, this study produced input data by combining a volatility index with candlestick charts to reflect the emotional state of market participants, and used these data as input to a new method based on combining a variational autoencoder (VAE) with attention mechanisms to account for the time-series characteristics of candlestick charts. Fifty firms were randomly selected from the S&P 500 index and their stock prices were predicted to evaluate the performance of the method against existing ones such as the convolutional neural network (CNN) and long short-term memory (LSTM). The results indicated that the proposed method showed superior performance compared to the existing ones. This study implies that the accuracy of stock price prediction can be improved by considering the emotional state of market participants and the time-series characteristics of candlestick charts.
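
The abstract does not detail the architecture, so the sketch below only illustrates the combination it describes: a convolutional VAE encoder applied to each chart image (assumed here to be a single-channel image that already merges the candlestick chart and volatility index) and attention pooling over the resulting latent sequence. Layer sizes are assumptions.

```python
# Sketch: per-image convolutional VAE encoder + attention pooling over the time dimension,
# followed by a prediction head. Sizes and the single-channel input are assumptions.
import torch
import torch.nn as nn

class ChartVAEAttention(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.attn_score = nn.Linear(latent_dim, 1)
        self.head = nn.Linear(latent_dim, 1)            # predicted price movement

    def forward(self, charts):                          # charts: (batch, time, 1, H, W)
        b, t = charts.shape[:2]
        h = self.conv(charts.flatten(0, 1))             # (b*t, 32)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        z = z.view(b, t, -1)
        weights = torch.softmax(self.attn_score(z), dim=1)        # attention over time
        context = (weights * z).sum(dim=1)
        return self.head(context), mu, logvar
```

During training, the usual VAE decoder, reconstruction loss, and KL term (from mu and logvar) would be added; only the forward path used for prediction is shown.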

Application of deep convolutional neural network for short-term precipitation forecasting using weather radar-based images

  • Le, Xuan-Hien;Jung, Sungho;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.136-136 / 2021
  • In this study, a deep convolutional neural network (DCNN) model is proposed for short-term precipitation forecasting using weather radar-based images. The DCNN model combines convolutional neural networks, autoencoder neural networks, and the U-Net architecture. The weather radar-based image data used here are from a rainfall forecasting competition in Korea (AI Contest for Rainfall Prediction of Hydroelectric Dam Using Public Data), organized by Dacon under the sponsorship of the Korean Water Resources Association in October 2020. The data were collected from rainfall events during the rainy season (April-October) from 2010 to 2017. The images underwent a preprocessing step that converted the weather radar data into grayscale images before being provided for the competition. Each grayscale image covers a spatial dimension of 120×120 pixels with a temporal resolution of 10 minutes, where each pixel corresponds to a 4 km × 4 km grid cell. The DCNN model in this study is designed to produce predictive images 10 minutes in advance; precipitation information can then be obtained from these forecast images through empirical conversion formulas. Model performance is assessed using the Score index, defined as the ratio of MAE (mean absolute error) to CSI (critical success index). The competition results demonstrated the strong performance of the DCNN model, which achieved a Score of 0.530 compared to the competition's best value of 0.500, ranking 16th out of 463 participating teams. These findings demonstrate the potential of applying the DCNN model to short-term rainfall prediction using weather radar-based images, and the model can be applied to other areas with different spatiotemporal resolutions.
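
The Score index mentioned above can be computed as in the short sketch below, assuming a fixed rain/no-rain threshold for the hit, miss, and false-alarm counts; the competition's exact threshold and conversion formulas are not reproduced here.

```python
# Sketch: Score = MAE / CSI for a predicted vs. observed rainfall image (lower is better).
# The 0.1 rain/no-rain threshold is an assumption; the competition's definition may differ.
import numpy as np

def score(pred, obs, threshold=0.1):
    mae = np.mean(np.abs(pred - obs))                    # mean absolute error
    hits = np.sum((pred >= threshold) & (obs >= threshold))
    misses = np.sum((pred < threshold) & (obs >= threshold))
    false_alarms = np.sum((pred >= threshold) & (obs < threshold))
    csi = hits / (hits + misses + false_alarms + 1e-9)   # critical success index
    return mae / (csi + 1e-9)
```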

Pyramidal Deep Neural Networks for the Accurate Segmentation and Counting of Cells in Microscopy Data

  • Vununu, Caleb;Kang, Kyung-Won;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.335-348 / 2019
  • Cell segmentation and counting are among the most important tasks required to provide an exhaustive understanding of biological images. Conventional features suffer from a lack of spatial consistency, causing neighboring cells to merge and thus complicating the cell counting task. In this work, we propose a cascade of networks that take as inputs different versions of the original image. After constructing a Gaussian pyramid representation of the microscopy data, the inputs of different sizes and spatial resolutions are given to a cascade of deep convolutional autoencoders whose task is to reconstruct the segmentation mask. The coarse masks obtained from the different networks are summed to produce the final mask. The main contribution of this work is a novel method for cell counting. Unlike the majority of methods, which use the obtained segmentation mask as the prior information for counting, we propose to use the hidden latent representations, often called high-level features, as the inputs of a neural-network-based regressor. While the segmentation part of our method performs as well as conventional deep learning methods, the proposed cell counting approach outperforms the state-of-the-art methods.
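
A hedged sketch of the data flow described above: a Gaussian pyramid of the input is fed to a cascade of convolutional autoencoders, the coarse masks are upsampled and summed, and the latent codes feed a counting regressor. The blur kernel, pyramid depth, and network interfaces are assumptions.

```python
# Sketch: Gaussian-pyramid inputs to a cascade of convolutional autoencoders; coarse masks
# are upsampled and summed, and the latent codes feed a counting regressor.
# Each autoencoder is assumed to return (coarse mask, latent code).
import torch
import torch.nn.functional as F

def gaussian_pyramid(img, levels=3):
    """img: (1, 1, H, W); blur with a 5x5 binomial kernel, then downsample by 2."""
    k = torch.tensor([[1., 4., 6., 4., 1.]])
    kernel = (k.t() @ k / 256.).view(1, 1, 5, 5)
    pyr = [img]
    for _ in range(levels - 1):
        blurred = F.conv2d(pyr[-1], kernel, padding=2)
        pyr.append(blurred[:, :, ::2, ::2])
    return pyr

def segment_and_count(autoencoders, regressor, img):
    pyr = gaussian_pyramid(img, levels=len(autoencoders))
    masks, latents = [], []
    for ae, level in zip(autoencoders, pyr):
        mask, z = ae(level)
        masks.append(F.interpolate(mask, size=img.shape[-2:],
                                   mode='bilinear', align_corners=False))
        latents.append(z.flatten(1))
    final_mask = torch.stack(masks).sum(dim=0)     # summed coarse masks
    count = regressor(torch.cat(latents, dim=1))   # count from latent (high-level) features
    return final_mask, count
```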

Pixel level prediction of dynamic pressure distribution on hull surface based on convolutional neural network (합성곱 신경망 기반 선체 표면 압력 분포의 픽셀 수준 예측)

  • Kim, Dayeon;Seo, Jeongbeom;Lee, Inwon
    • Journal of the Korean Society of Visualization / v.20 no.2 / pp.78-85 / 2022
  • Recently, rapid developments in prediction technology using artificial intelligence have been applied in a variety of engineering fields. In particular, dimensionality reduction technologies such as autoencoders and convolutional neural networks have enabled the classification and regression of high-dimensional data. Pixel-level prediction technology, in turn, enables semantic segmentation (fine-grained classification) or per-pixel prediction of physical values, such as depth or surface normal estimation. In this study, the pressure distribution on the ship's surface was estimated at the pixel level using an artificial neural network. First, a potential flow analysis was performed on hull form data generated by transforming a baseline hull form, constructing 429 datasets for learning. Thereafter, a neural network with a U-shaped structure was configured to learn the pressure values at the node positions of the preprocessed hull forms. As a result, for hull forms included in the training set, it was confirmed that the neural network can predict the pressure distribution well. However, for a container ship, which was not included in the training set and has different characteristics, the network could not produce a reasonable result.
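
A minimal U-shaped encoder-decoder for pixel-level regression of a scalar field, of the kind described above; depth, channel counts, and input size are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: small U-shaped network for per-pixel regression (e.g., surface pressure).
# Depth and channel counts are assumptions; the input height/width must be even.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 skip channels + 16 upsampled channels
        self.out = nn.Conv2d(16, out_ch, 1)     # linear output for regression

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)                     # per-pixel predicted value

# Training would minimize, e.g., nn.functional.mse_loss(MiniUNet()(x), pressure_target).
```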

A Model for Machine Fault Diagnosis based on Mutual Exclusion Theory and Out-of-Distribution Detection

  • Cui, Peng;Luo, Xuan;Liu, Jing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2927-2941 / 2022
  • The primary task of machine fault diagnosis is to judge whether the current state is normal or damaged, so it is a typical binary classification problem with mutual exclusion. Mutually exclusive events and out-of-distribution detection have one thing in common: there are two types of data with no intersection. We propose a fusion model, based on the mutual exclusivity of events and this commonality with out-of-distribution detection, to improve the accuracy of machine fault diagnosis, and finally generalize it to all binary classification problems. It has been reported that the performance of a convolutional neural network (CNN) decreases as the number of recognition types increases, so the variational autoencoder (VAE) is used as the primary model. Two VAE models are trained on the machine's normal and fault sound data, respectively, and two reconstruction probabilities are obtained during testing. The smaller value is transformed into a correction value for the other according to the mutual-exclusion property, and the classification result is then obtained according to the fusion algorithm. We also propose filtering normal-data features out of the fault-data features, which suppresses interference and makes the fault features more prominent. We confirm that good performance improvements are achieved on the machine fault detection dataset, with results better than most mainstream models.
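
The abstract does not spell out the correction/fusion rule, so the sketch below only shows the surrounding structure: two VAEs, one trained on normal and one on fault data, scored on the same sample, with a plain comparison of reconstruction errors standing in for the paper's fusion step.

```python
# Sketch: score a test sample with two VAEs (normal / fault) and decide by comparing
# reconstruction errors. A simple comparison replaces the paper's fusion algorithm.
import torch

def vae_reconstruction_error(vae, x, n_samples=8):
    """Monte-Carlo estimate; the VAE is assumed to return (reconstruction, mu, logvar)."""
    errors = []
    with torch.no_grad():
        for _ in range(n_samples):
            recon, _, _ = vae(x)
            errors.append(torch.mean((recon - x) ** 2))
    return torch.stack(errors).mean().item()

def diagnose(vae_normal, vae_fault, x):
    e_normal = vae_reconstruction_error(vae_normal, x)
    e_fault = vae_reconstruction_error(vae_fault, x)
    # Mutual exclusion: the sample belongs to exactly one state, so the model that
    # reconstructs it better (smaller error) determines the class.
    return "normal" if e_normal < e_fault else "fault"
```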

Vibration-based structural health monitoring using CAE-aided unsupervised deep learning

  • Minte, Zhang;Tong, Guo;Ruizhao, Zhu;Yueran, Zong;Zhihong, Pan
    • Smart Structures and Systems / v.30 no.6 / pp.557-569 / 2022
  • Vibration-based structural health monitoring (SHM) is crucial for the dynamic maintenance of civil building structures, protecting property and public safety. Analyzing these vibrations with modern artificial intelligence and deep learning (DL) methods is a new trend. This paper proposes an unsupervised deep learning method based on a convolutional autoencoder (CAE), which can overcome the limitations of conventional supervised deep learning. With convolutional kernels applied in the DL network, the method can extract features self-adaptively and efficiently. The effectiveness of the method in detecting damage is first tested on a benchmark model. Thereafter, the method is used to detect damage and sudden disaster events in a rubber-bearing-isolated gymnasium structure. The results indicate that the method enables the CAE network to learn the intact vibrations and thereby distinguish between different damage states of the benchmark model, and the outcome is consistent with the high-dimensional data distribution visualized by the t-SNE method. Moreover, the CAE-based network trained on daily vibrations of the isolation layer in the gymnasium can precisely reconstruct newly collected vibrations and detect the occurrence of ground motion. The proposed method is effective at identifying nonlinear variations in the dynamic responses and has the potential to be used for structural condition assessment and safety warning.
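
The unsupervised detection logic can be sketched as follows: train the CAE on intact (daily) vibration segments only, then flag newly collected segments whose reconstruction error exceeds a threshold derived from the training residuals. The segment format and the 3-sigma threshold are assumptions, not the paper's settings.

```python
# Sketch: anomaly detection with a CAE trained only on vibrations of the intact structure.
# The CAE is assumed to map a segment batch (N, 1, L) to reconstructions of the same shape.
import torch

def fit_threshold(cae, healthy_segments):
    """healthy_segments: (N, 1, L) windows of vibration from the undamaged structure."""
    with torch.no_grad():
        errors = ((cae(healthy_segments) - healthy_segments) ** 2).mean(dim=(1, 2))
    return (errors.mean() + 3 * errors.std()).item()     # 3-sigma rule of thumb

def detect_event(cae, segment, threshold):
    with torch.no_grad():
        err = ((cae(segment) - segment) ** 2).mean().item()
    return err > threshold                               # True -> possible damage / ground motion
```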

Bias-correction of Dual Polarization Radar rainfall using Convolutional Autoencoder

  • Jung, Sungho;Le, Xuan Hien;Oh, Sungryul;Kim, Jeongyup;Lee, GiHa
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.166-166 / 2020
  • Recently, as the frequency of localized heavy rainfall has increased, the use of high-resolution radar data has also increased. However, radar rainfall estimates still show spatial and temporal gaps compared with gauge-observed rainfall, and many studies have applied various statistical techniques to correct them. In this study, precipitation correction of the S-band dual-polarization radar used in flood forecasting was performed using ConvAE, a convolutional-neural-network-based algorithm. The ConvAE model was trained on datasets with a 10-minute temporal resolution: radar rainfall data and gauge rainfall data covering 790 minutes (the July 2017 Cheongju flood event). Validation showed that the corrected radar rainfall had reduced gaps compared with the gauge rainfall, and the spatial bias was also corrected. Therefore, the radar rainfall corrected using ConvAE is expected to increase the reliability of the gridded rainfall data used in various physically based distributed hydrodynamic models.
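
A minimal sketch of the correction setup described above, assuming the ConvAE takes a gridded radar rainfall field as input and is trained to output a field closer to the gauge-based reference; grid size, layer widths, and the loss are illustrative assumptions.

```python
# Sketch: convolutional autoencoder mapping radar rainfall fields toward gauge-based
# reference fields (bias correction). Assumes H and W are divisible by 4.
import torch
import torch.nn as nn

class RainConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
        )

    def forward(self, x):              # x: (batch, 1, H, W) radar rainfall field
        return self.net(x)             # corrected field (ReLU keeps rainfall non-negative)

def train_step(model, optimizer, radar, gauge):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(radar), gauge)
    loss.backward()
    optimizer.step()
    return loss.item()
```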

2D Game Image Color Synthesis System Using Convolutional Neural Network (컨볼루션 인공신경망을 이용한 2차원 게임 이미지 색상 합성 시스템)

  • Hong, Seung Jin;Kang, Shin Jin;Cho, Sung Hyun
    • Journal of Korea Game Society / v.18 no.2 / pp.89-98 / 2018
  • Recent neural network techniques have shown good performance in content generation, such as image generation, in addition to conventional classification and clustering problems. In this study, we propose an image generation method using an artificial neural network as a next-generation content creation technique. The proposed model receives two images and combines them into a new image by taking the color from one image and the shape from the other. The model is a convolutional neural network with two encoders, one for extracting color and one for extracting shape from the images, and a decoder that takes the outputs of both encoders and generates a combined image. The results of this work can be applied to various 2D image generation and modification tasks in the game development process at low cost.
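
A hedged sketch of the two-encoder/one-decoder arrangement described above: one encoder sees the color-source image, the other the shape-source image, and their feature maps are concatenated before decoding into the combined image. Layer sizes and fusion by concatenation are assumptions.

```python
# Sketch: two convolutional encoders (color source, shape source) whose feature maps are
# concatenated channel-wise and decoded into the combined RGB image.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # H -> H/2
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # H/2 -> H/4
    )

class ColorShapeSynth(nn.Module):
    def __init__(self):
        super().__init__()
        self.color_enc = make_encoder()
        self.shape_enc = make_encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, color_img, shape_img):
        feats = torch.cat([self.color_enc(color_img),
                           self.shape_enc(shape_img)], dim=1)   # channel-wise fusion
        return self.decoder(feats)                              # combined image
```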