• Title/Summary/Keyword: MNIST image dataset

Search results: 19

Beginner's guide to neural networks for the MNIST dataset using MATLAB

  • Kim, Bitna; Park, Young Ho
    • Korean Journal of Mathematics / v.26 no.2 / pp.337-348 / 2018
  • The MNIST dataset is a database of images of handwritten digits, each labeled with an integer from 0 to 9. It is used to benchmark the performance of machine learning algorithms. Neural networks for MNIST are regarded as the starting point for studying machine learning algorithms; however, it is not easy to start the actual programming. In this expository article, we give step-by-step instructions for building neural networks for the MNIST dataset using MATLAB.
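The article works in MATLAB; as a rough illustration of the same kind of small network it walks through, here is a minimal sketch in Python with Keras (the layer sizes and optimizer are assumptions, not the article's configuration).

```python
# Minimal sketch (not the article's MATLAB code): a small fully connected
# network for MNIST, assuming TensorFlow/Keras is installed.
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test images of 28x28 pixels,
# each labeled with an integer from 0 to 9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784-dimensional input
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```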

Fashion Clothing Image Classification Deep Learning (패션 의류 영상 분류 딥러닝)

  • Shin, Seong-Yoon; Wang, Guangxing; Shin, Kwang-Seong; Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.676-677 / 2022
  • In this paper, we propose a new method based on a deep learning model with an optimized dynamic decay learning rate and an improved model structure to achieve fast and accurate classification of fashion clothing images. Experiments are performed with the proposed model on the Fashion-MNIST dataset and compared with CNN, LeNet, LSTM, and BiLSTM methods.
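The abstract does not specify the exact decay schedule, so the sketch below shows one common form of a dynamically decaying learning rate (exponential decay) applied to a Fashion-MNIST classifier in Keras; the architecture and hyperparameters are illustrative assumptions.

```python
# Sketch only: an exponentially decaying learning rate on Fashion-MNIST.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel axis, scale to [0, 1]

# Learning rate shrinks by a factor of 0.9 every 1000 optimizer steps.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
```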

Introduction to convolutional neural network using Keras; an understanding from a statistician

  • Lee, Hagyeong; Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.26 no.6 / pp.591-610 / 2019
  • Deep learning is one of the machine learning methods that finds features in huge datasets using non-linear transformations. It is now commonly used for supervised learning in many fields. In particular, the Convolutional Neural Network (CNN) has been the best technique for image classification since 2012. For users who consider deep learning models for real-world applications, Keras is a popular API for neural networks written in Python that can also be used in R. We examine the parameter estimation procedures of deep neural networks and the structure of CNN models from basics to advanced techniques. We also try to identify crucial steps in a CNN that can improve image classification performance on the CIFAR10 dataset using Keras. We found that several stacks of convolutional layers and batch normalization could improve prediction performance. We also compared image classification performance with other machine learning methods, including K-Nearest Neighbors (K-NN), Random Forest, and XGBoost, on both the MNIST and CIFAR10 datasets.
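As a sketch of the kind of block the authors found helpful (stacked convolutional layers followed by batch normalization), the Keras snippet below builds such a block for CIFAR-10; the exact depth and filter counts are assumptions rather than the paper's architecture.

```python
# Sketch: stacked 3x3 convolutions with batch normalization, in Keras.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two stacked convolutions, each followed by batch normalization and ReLU.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return layers.MaxPooling2D()(x)

inputs = layers.Input(shape=(32, 32, 3))   # CIFAR-10 images
x = conv_block(inputs, 32)
x = conv_block(x, 64)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```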

Security Vulnerability Verification for Open Deep Learning Libraries (공개 딥러닝 라이브러리에 대한 보안 취약성 검증)

  • Jeong, JaeHan; Shon, Taeshik
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.1 / pp.117-125 / 2019
  • Deep learning, which has recently been used in various fields, is threatened by adversarial attacks. In this paper, we experimentally verify that classification accuracy is lowered by adversarial samples generated by malicious attackers against image classification models. We used the MNIST dataset and measured detection accuracy by injecting adversarial samples into an autoencoder classification model and a CNN (Convolutional Neural Network) classification model, built with the TensorFlow and PyTorch libraries. Adversarial samples were generated by transforming the MNIST test dataset with JSMA (Jacobian-based Saliency Map Attack) and FGSM (Fast Gradient Sign Method). When these samples were injected into the classification models, detection accuracy decreased by at least 21.82% and by up to 39.08%.
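FGSM, the simpler of the two attacks named above, perturbs each pixel in the direction of the sign of the loss gradient. The sketch below implements that idea directly in TensorFlow; it is not the paper's code, and it assumes the model outputs softmax probabilities and that labels are integer class indices.

```python
# Sketch of FGSM (Fast Gradient Sign Method) against a Keras classifier.
import tensorflow as tf

def fgsm(model, images, labels, epsilon=0.1):
    """Perturb images by epsilon in the direction of the sign of the loss gradient."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(images)                       # track gradients w.r.t. the input
        loss = loss_fn(labels, model(images))    # model outputs probabilities
    grad = tape.gradient(loss, images)
    adversarial = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep a valid pixel range
```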

Comparison of Spatial and Frequency Images for Character Recognition (문자인식을 위한 공간 및 주파수 도메인 영상의 비교)

  • Abdurakhmon, Abduraimjonov; Choi, Hyeon-yeong; Ko, Jaepil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.439-441 / 2019
  • Deep learning has become a powerful and robust approach in artificial intelligence. One of the most impressive deep learning tools is the Convolutional Neural Network (CNN), a state-of-the-art solution for object recognition. For instance, when we apply a CNN to the MNIST handwritten digit dataset, the results are mostly good because all digits in MNIST are centered. Unfortunately, the real world is different from our imagination: if digits are shifted away from the center, it becomes a big issue for the CNN to recognize them as it did before. To solve this issue, we create frequency images from spatial images using the Fast Fourier Transform (FFT).
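The property being exploited is that the magnitude spectrum of the 2-D FFT does not change when the digit is translated (only the phase changes). The sketch below computes such a frequency image with NumPy; the function name and the toy image are illustrative, not the authors' pipeline.

```python
# Sketch: spatial image -> frequency image via the 2-D FFT.
import numpy as np

def frequency_image(img):
    """Log-magnitude spectrum of a 2-D image, zero frequency shifted to the center."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

# A circular shift changes only the phase of the FFT, not its magnitude:
img = np.zeros((28, 28)); img[10:18, 10:18] = 1.0      # toy "digit"
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))       # shifted copy
print(np.allclose(frequency_image(img), frequency_image(shifted)))  # True
```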

Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae; Kim, Bo-min; Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.871-879 / 2020
  • Perceptual Ad-Blocking is a new advertising blocking technique that detects online advertising with an artificial intelligence based advertising image classification model. A recent study has shown that these Perceptual Ad-Blocking models are vulnerable to adversarial attacks that add noise to images so that they are misclassified. In this paper, we show that the existing Perceptual Ad-Blocking technique is weak against several adversarial examples, and that Defense-GAN and MagNet, which performed well on the MNIST and CIFAR-10 datasets, also work well on an advertising dataset. Using the Defense-GAN and MagNet techniques, we then present a new advertising image classification model that is robust against adversarial attacks. Experiments with various existing adversarial attack techniques show that the proposed techniques preserved accuracy and performance through robust image classification, and furthermore defended to a certain level against white-box attacks by attackers who knew the details of the defense techniques.
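MagNet's "reformer" component, referenced above, passes inputs through an autoencoder so that small adversarial perturbations are smoothed away before classification. The Keras sketch below illustrates only that general idea; the architecture, layer sizes, and function names are assumptions, not the paper's implementation.

```python
# Sketch of a MagNet-style reformer: denoise inputs with an autoencoder
# before handing them to the classifier.
import tensorflow as tf
from tensorflow.keras import layers

def build_reformer():
    # Small convolutional autoencoder trained to reconstruct clean images.
    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

def defended_predict(reformer, classifier, images):
    # Reform (denoise) the possibly adversarial input, then classify it.
    return classifier(reformer(images))
```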

Anomaly Detection of Generative Adversarial Networks considering Quality and Distortion of Images (이미지의 질과 왜곡을 고려한 적대적 생성 신경망과 이를 이용한 비정상 검출)

  • Seo, Tae-Moon; Kang, Min-Guk; Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.3 / pp.171-179 / 2020
  • Recently, studies have shown that convolutional neural networks achieve the best performance in image classification, object detection, and image generation. Vision-based defect inspection, which is more economical than other inspection methods, is very important for factory automation. Although supervised anomaly detection algorithms have far exceeded the performance of traditional machine learning based methods, they are inefficient for real industrial fields because of the tedious annotation work they require. In this paper, we propose ADGAN, an unsupervised anomaly detection architecture using a variational autoencoder and a generative adversarial network, which give great results in image generation tasks, and we demonstrate whether the proposed architecture identifies anomalous images well on the MNIST benchmark dataset as well as on our own welding defect dataset.
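Reconstruction-based detectors of this kind typically score an image by how poorly the trained generator or decoder reconstructs it. The sketch below shows only that scoring step; the model, the threshold, and the assumption of (N, H, W, C) image batches are illustrative, not ADGAN's actual procedure.

```python
# Sketch: reconstruction error as an anomaly score.
import numpy as np

def anomaly_scores(model, images):
    """Per-image mean squared reconstruction error for a batch shaped (N, H, W, C)."""
    reconstructions = model.predict(images)
    return np.mean((images - reconstructions) ** 2, axis=(1, 2, 3))

def is_anomalous(model, images, threshold):
    # Images whose reconstruction error exceeds the threshold are flagged.
    return anomaly_scores(model, images) > threshold
```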

Comparative Analysis of CNN Techniques designed for Rotated Object Classification (회전된 객체 분류를 위한 CNN 기법들의 성능 비교 분석)

  • Hee-Il Hahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.181-187 / 2024
  • There are two well-known kinds of CNN methods, the group equivariant CNN and the CNN using steerable filters, which show excellent classification performance for randomly rotated objects in image space. This paper describes their mathematical structures and introduces implementation methods. We implement them, along with a conventional CNN having the same number of filters, then compare and analyze their performance on randomly rotated MNIST. According to the experimental results, the steerable CNN, which shows a classification improvement over the others, has a relatively small number of parameters to learn, so its performance degrades relatively little even when the size of the training dataset is reduced.
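The "randomly rotated MNIST" setting used for the comparison simply rotates each digit by a random angle before it is fed to the networks. A minimal sketch of that data preparation step is below (NumPy/SciPy, with an assumed uniform angle range of [0, 360) degrees); the equivariant and steerable networks themselves are not sketched here.

```python
# Sketch: build a randomly rotated version of an MNIST image batch.
import numpy as np
from scipy.ndimage import rotate

def randomly_rotate(images, rng=None):
    """Rotate each 28x28 image by an independent uniform angle in [0, 360) degrees."""
    rng = rng or np.random.default_rng(0)
    angles = rng.uniform(0.0, 360.0, size=len(images))
    return np.stack([rotate(img, angle, reshape=False, order=1)
                     for img, angle in zip(images, angles)])
```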

Centralized Machine Learning Versus Federated Averaging: A Comparison using MNIST Dataset

  • Peng, Sony; Yang, Yixuan; Mao, Makara; Park, Doo-Soon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.742-756 / 2022
  • A flood of information has occurred with the rise of the internet and digital devices in the fourth industrial revolution era. Every millisecond, massive amounts of structured and unstructured data are generated; smartphones, wearable devices, sensors, and self-driving cars are just a few examples of devices that currently generate massive amounts of data in our daily lives. Machine learning has been used to recognize patterns in data and to support many sectors, including healthcare, government, banking, the military, and more. However, the conventional machine learning model requires data owners to upload their information to one central location to perform model training. This classical model causes data owners to worry about the risks of transferring private information, because traditional machine learning requires pushing their data to the cloud for training. Furthermore, training machine learning and deep learning models requires massive computing resources. Thus, many researchers have turned to a new model known as "Federated Learning". Federated learning trains artificial intelligence models over distributed clients and preserves the privacy of the data owners' information. Hence, this paper implements Federated Averaging with a deep neural network to classify handwritten digit images while protecting sensitive data. Moreover, we compare the centralized machine learning model with federated averaging. The result shows that the centralized machine learning model outperforms federated learning in terms of accuracy, but the classical model poses another risk, namely privacy concerns, because the data are stored in a central data center. The MNIST dataset was used in this experiment.
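The core server-side step of Federated Averaging is a weighted average of the clients' model weights, weighted by each client's number of training samples. The sketch below shows that aggregation step only; the surrounding training loop, variable names, and the Keras-style weight lists are assumptions, not the paper's implementation.

```python
# Sketch: the FedAvg aggregation step on the server.
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client weight lists (one list of arrays per client)."""
    total = sum(client_sizes)
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Hypothetical usage with Keras models, one per client:
# new_global = federated_average([m.get_weights() for m in client_models],
#                                [len(d) for d in client_datasets])
```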

Implementation of handwritten digit recognition CNN structure using GPGPU and Combined Layer (GPGPU와 Combined Layer를 이용한 필기체 숫자인식 CNN구조 구현)

  • Lee, Sangil; Nam, Kihun; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.165-169 / 2017
  • The CNN (Convolutional Neural Network) is one of the machine learning algorithms that show superior performance in image recognition and classification. A CNN is simple, but it requires a large amount of computation and takes a lot of time. Consequently, in this paper we parallelize the convolution layer, pooling layer, and fully connected layer, which consume most of the processing time in a CNN, using the SIMT (Single Instruction Multiple Threads) structure of a GPGPU (General-Purpose computing on Graphics Processing Units). We also expect to improve performance by reducing the number of memory accesses, using the output of the convolution layer directly in the pooling layer instead of storing it. We use the MNIST dataset to verify this experiment and confirm that the proposed CNN structure is 12.38% better than the existing structure.
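The "combined layer" idea, as described, is to compute each pooled output directly from the convolution inputs instead of materializing the full convolution output map first. The NumPy sketch below illustrates that data flow for a single channel with a 3x3 kernel and 2x2 max pooling; it is a plain-Python illustration under those assumptions, not the paper's GPGPU kernel.

```python
# Sketch: convolution fused with max pooling, never storing the conv output map.
import numpy as np

def conv_pool_combined(image, kernel, pool=2):
    """Valid cross-correlation fused with pool x pool max pooling, single channel."""
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh + 1) // pool
    out_w = (image.shape[1] - kw + 1) // pool
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Evaluate only the pool x pool convolution results feeding this cell.
            vals = [np.sum(image[i*pool+di:i*pool+di+kh, j*pool+dj:j*pool+dj+kw] * kernel)
                    for di in range(pool) for dj in range(pool)]
            out[i, j] = max(vals)
    return out
```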