• Title/Summary/Keyword: CIFAR10

Search Results: 53

Initialization by using truncated distributions in artificial neural network (절단된 분포를 이용한 인공신경망에서의 초기값 설정방법)

  • Kim, MinJong; Cho, Sungchul; Jeong, Hyerin; Lee, YungSeop; Lim, Changwon
    • The Korean Journal of Applied Statistics, v.32 no.5, pp.693-702, 2019
  • Deep learning has gained popularity for classification and prediction tasks, and neural networks become deeper as more data become available. Saturation is the phenomenon in which the gradient of an activation function approaches 0; it can occur when weight values are too large, and it has received increasing attention because it limits the ability of the weights to learn. To resolve this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) claimed that efficient neural network training is possible when data flow well between layers, and proposed an initialization method that makes the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method that adopts the truncated normal and truncated Cauchy distributions. Adapting the initialization method of Glorot and Bengio (2010), we decide where to truncate each distribution: the output and input variances are equalized by setting them equal to the variance of the truncated distribution. This manipulates the distribution so that the initial weight values neither grow too large nor collapse toward zero. To compare the performance of the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data using a DNN and a CNN. The proposed method outperformed existing methods in terms of accuracy.
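
For reference, a minimal sketch (not the paper's code) of Glorot-style initialization drawn from a truncated normal distribution, as the abstract describes; the cutoff at two standard deviations is an assumption made for illustration, and the truncated Cauchy variant is omitted:

```python
# Glorot/Xavier initialization from a truncated normal, rescaled so the
# *truncated* draw has the target variance 2 / (fan_in + fan_out).
import math
import torch
import torch.nn as nn

def glorot_trunc_normal_(weight: torch.Tensor, cutoff: float = 2.0) -> None:
    fan_out, fan_in = weight.shape[0], weight.shape[1]   # Linear-layer weight
    target_var = 2.0 / (fan_in + fan_out)                # Glorot/Xavier target
    # Variance of a standard normal truncated to [-c, c]:
    #   1 - 2*c*phi(c) / (2*Phi(c) - 1), with phi/Phi the N(0,1) pdf/cdf.
    c = cutoff
    phi = math.exp(-0.5 * c * c) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))
    trunc_var = 1.0 - 2.0 * c * phi / (2.0 * Phi - 1.0)
    # Rescale the base std so the truncated draw matches the target variance.
    std = math.sqrt(target_var / trunc_var)
    nn.init.trunc_normal_(weight, mean=0.0, std=std, a=-c * std, b=c * std)

layer = nn.Linear(784, 256)
glorot_trunc_normal_(layer.weight)
```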

An Adversarial Attack Type Classification Method Using Linear Discriminant Analysis and k-means Algorithm (선형 판별 분석 및 k-means 알고리즘을 이용한 적대적 공격 유형 분류 방안)

  • Choi, Seok-Hwan; Kim, Hyeong-Geon; Choi, Yoon-Ho
    • Journal of the Korea Institute of Information Security & Cryptology, v.31 no.6, pp.1215-1225, 2021
  • Although Artificial Intelligence (AI) techniques show impressive performance in various fields, they are vulnerable to adversarial examples, which induce misclassification by adding human-imperceptible perturbations to the input. Previous studies on defending against adversarial examples can be classified into three categories: (1) model retraining methods; (2) input transformation methods; and (3) adversarial example detection methods. However, even though defense methods against adversarial examples have constantly been proposed, there has been no research on classifying the type of adversarial attack. In this paper, we propose an adversarial attack family classification method based on dimensionality reduction and clustering. Specifically, after extracting the adversarial perturbation from an adversarial example, we perform Linear Discriminant Analysis (LDA) to reduce the dimensionality of the perturbation and apply the k-means algorithm to classify the adversarial attack family. Experimental results using the MNIST and CIFAR-10 datasets show that the proposed method can efficiently classify five types of adversarial attacks (FGSM, BIM, PGD, DeepFool, and C&W). We also show that the proposed method provides good classification performance even when the legitimate input underlying the adversarial example is unknown.
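
A minimal sketch of the described pipeline, assuming perturbations are available as flattened vectors with attack labels for fitting LDA (scikit-learn, not the authors' code):

```python
# LDA dimensionality reduction followed by k-means clustering of
# adversarial perturbations into attack families.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def classify_attack_families(x_adv, x_clean, attack_labels, n_families=5):
    # Extract the adversarial perturbation and flatten it to a vector.
    perturbation = (x_adv - x_clean).reshape(len(x_adv), -1)
    # Reduce dimensionality with LDA (at most n_classes - 1 components).
    lda = LinearDiscriminantAnalysis(n_components=n_families - 1)
    z = lda.fit_transform(perturbation, attack_labels)
    # Cluster the low-dimensional perturbations into attack families.
    kmeans = KMeans(n_clusters=n_families, n_init=10, random_state=0)
    return kmeans.fit_predict(z)
```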

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In; Jeon Young Jin; Lee Sang Bum; Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering, v.12 no.12, pp.519-524, 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original image, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deephashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are built from fully connected, convolutional-neural-network, and transformer modules, respectively. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function over the activation values of each neuron and its surrounding neurons. The proposed method is a representation-learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised-learning-based deephashing models. The proposed model can be used in application systems that require low-dimensional representations of images, such as image search or copyright-image determination.
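
For context, a minimal PyTorch sketch of a SimAM-style attention module as commonly implemented (Yang et al., 2021); the lambda default is a standard choice, not a value taken from this paper:

```python
# SimAM: parameter-free attention from an energy function over each neuron
# and its channel-wise neighbours.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        n = x.shape[2] * x.shape[3] - 1
        # Squared deviation of each activation from its channel mean.
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # Per-channel variance, matching the paper's energy function.
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # Inverse energy: low energy = neuron stands out from its neighbours.
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)
```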

FAST-ADAM in Semi-Supervised Generative Adversarial Networks

  • Kun, Li; Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication, v.11 no.4, pp.31-36, 2019
  • Unsupervised neural networks did not attract much attention until the Generative Adversarial Network (GAN) was proposed. Using both a generator and a discriminator network, a GAN can extract the main characteristics of the original dataset and produce new data with similar latent statistics. However, it is well understood that training a GAN is not easy because of its unstable dynamics. The discriminator usually performs too well while helping the generator learn the statistics of the training dataset, so the generated data are not compelling. Much research has focused on improving the stability and classification accuracy of GANs, but few studies address training efficiency and training time. In this paper, we propose a novel optimizer, named FAST-ADAM, which integrates Lookahead into the ADAM optimizer to train the generator of a semi-supervised generative adversarial network (SSGAN). We experimentally assess the feasibility and performance of our optimizer on the Canadian Institute For Advanced Research - 10 (CIFAR-10) benchmark dataset. The experimental results show that FAST-ADAM helps the generator reach convergence faster than the original ADAM while maintaining comparable training accuracy.
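
A minimal sketch of wrapping Adam with Lookahead (Zhang et al., 2019), the combination the abstract names FAST-ADAM; the hyperparameters and update details here are assumptions, not the paper's exact settings:

```python
# Lookahead keeps "slow" weights that periodically pull the fast (Adam)
# weights back toward a more stable trajectory.
import torch

class Lookahead:
    def __init__(self, base_optimizer, k: int = 5, alpha: float = 0.5):
        self.base = base_optimizer          # e.g. torch.optim.Adam
        self.k, self.alpha, self.step_count = k, alpha, 0
        # Slow weights: a copy of every parameter the base optimizer updates.
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def step(self):
        self.base.step()                    # fast (inner) Adam update
        self.step_count += 1
        if self.step_count % self.k == 0:   # every k steps, sync slow weights
            for group, slow_group in zip(self.base.param_groups, self.slow):
                for p, q in zip(group["params"], slow_group):
                    q += self.alpha * (p.detach() - q)  # slow += a*(fast-slow)
                    p.data.copy_(q)                     # reset fast to slow

    def zero_grad(self):
        self.base.zero_grad()

# Usage on a generator network G (hypothetical):
# opt = Lookahead(torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999)))
```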

Animal Fur Recognition Algorithm Based on Feature Fusion Network

  • Liu, Peng; Lei, Tao; Xiang, Qian; Wang, Zexuan; Wang, Jiwei
    • Journal of Multimedia Information System, v.9 no.1, pp.1-10, 2022
  • China is a major country in the animal fur industry, and the total production and consumption of fur are increasing year by year. However, fur recognition in the production process still relies mainly on the visual identification of skilled workers, so the stability and consistency of products cannot be guaranteed. To address this problem, this paper proposes a feature-fusion-based animal fur recognition network built on a typical convolutional neural network structure, relying on rapidly developing deep learning techniques. The network superimposes the texture feature, the most prominent feature of a fur image, onto the channel dimension of the input image. The output feature map of the first convolutional layer is inverted, the inverted feature map is concatenated with the original output feature map, and Leaky ReLU is then applied for activation, making full use of the texture information of the fur image and the inverted feature information. Experimental results show that the algorithm improves recognition accuracy by 9.08% on the Fur_Recognition dataset and by 6.41% on the CIFAR-10 dataset. The algorithm can replace the current practice of classifying fur by manual visual inspection and lays a foundation for improving the efficiency of fur production.
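
A minimal PyTorch sketch of the inverted-feature fusion step described above; treating inversion as simple negation and the channel counts are assumptions:

```python
# First-layer convolution whose output is concatenated with its inversion
# along the channel axis, then activated with Leaky ReLU.
import torch
import torch.nn as nn

class InvertedFusionStem(nn.Module):
    def __init__(self, in_channels: int = 3, out_channels: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.conv(x)
        # Downstream layers see both the response and its complement.
        fused = torch.cat([f, -f], dim=1)    # (B, 2*out_channels, H, W)
        return self.act(fused)
```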

Parameter-Efficient Neural Networks Using Template Reuse (템플릿 재사용을 통한 패러미터 효율적 신경망 네트워크)

  • Kim, Daeyeon; Kang, Woochul
    • KIPS Transactions on Software and Data Engineering, v.9 no.5, pp.169-176, 2020
  • Recently, deep neural networks (DNNs) have brought revolutions to many mobile and embedded devices by providing human-level machine intelligence for various applications. However, the high inference accuracy of such DNNs comes at high computational cost, and hence there have been significant efforts to reduce the computational overhead of DNNs, either by compressing off-the-shelf models or by designing new small-footprint DNN architectures tailored to resource-constrained devices. One notable recent paradigm in designing small-footprint DNN models is sharing parameters across several layers. However, in previous approaches, parameter-sharing techniques have been applied to large deep networks, such as ResNet, that are known to have high redundancy. In this paper, we propose a parameter-sharing method for already parameter-efficient small networks such as ShuffleNetV2. In our approach, small templates are combined with small layer-specific parameters to generate weights. Our experimental results on the ImageNet and CIFAR100 datasets show that our approach can reduce the parameter size of ShuffleNetV2 by 15%-35% while achieving smaller drops in accuracy than previous parameter-sharing and pruning approaches. We further show that the proposed approach is efficient in terms of latency and energy consumption on modern embedded devices.
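
A minimal sketch of template-based weight generation in the spirit of the abstract: shared templates mixed by small layer-specific coefficients; the shapes and the linear-combination scheme are assumptions, not the paper's exact design:

```python
# Each layer stores only a few mixing coefficients; the bulky templates
# are shared (reused) across all such layers.
import torch
import torch.nn as nn

class TemplateConv2d(nn.Module):
    """3x3 conv whose weight is a linear combination of shared templates."""
    def __init__(self, templates: nn.Parameter):
        super().__init__()
        self.templates = templates  # (T, C_out, C_in, 3, 3), shared
        t = templates.shape[0]
        # Layer-specific parameters: just T mixing coefficients.
        self.coeff = nn.Parameter(torch.randn(t) / t)

    def forward(self, x):
        weight = torch.einsum("t,toihw->oihw", self.coeff, self.templates)
        return nn.functional.conv2d(x, weight, padding=1)

# Shared templates reused by every such layer in the network:
shared = nn.Parameter(torch.randn(4, 64, 64, 3, 3) * 0.01)
layer1, layer2 = TemplateConv2d(shared), TemplateConv2d(shared)
y = layer2(layer1(torch.randn(1, 64, 8, 8)))
```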

CNN Applied Modified Residual Block Structure (변형된 잔차블록을 적용한 CNN)

  • Kwak, Nae-Joung; Shin, Hyeon-Jun; Yang, Jong-Seop; Song, Teuk-Seob
    • Journal of Korea Multimedia Society, v.23 no.7, pp.803-811, 2020
  • This paper proposes an image classification algorithm that modifies the number of convolutional layers in the residual block of ResNet, a representative CNN architecture. The proposed method modifies the structure of the 34-layer and 50-layer ResNet. First, we analyzed performance with fewer and with more convolutional layers in the structure consisting only of shortcuts and 3 × 3 convolutional layers, for both the 34- and 50-layer networks; we then performed the same analysis for the 50-layer bottleneck structure. Applying these results, the best-performing residual block configuration was used to construct a 34-layer simple-structure and a 50-layer bottleneck image classification model. To evaluate the performance of the proposed models, the results were analyzed on the CIFAR-10 dataset. The proposed 34-layer simple-structure and 50-layer bottleneck models showed improved performance over the ResNet-110 and DenseNet-40 models.
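
For context, a minimal PyTorch sketch of the basic (shortcut plus 3 × 3 convolutions) residual block that the paper modifies; the modified layer counts themselves are not given in the abstract:

```python
# A standard ResNet basic block: two 3x3 convolutions plus an identity
# shortcut; the paper varies the number of convolutions inside this block.
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # shortcut connection
```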

Removing Out-Of-Distribution Samples on Classification Task

  • Dang, Thanh-Vu; Vo, Hoang-Trong; Yu, Gwang-Hyun; Lee, Ju-Hwan; Nguyen, Huy-Toan; Kim, Jin-Young
    • Smart Media Journal, v.9 no.3, pp.80-89, 2020
  • Out-of-distribution (OOD) samples are frequently encountered when deploying a classification model in many real-world machine-learning applications. Such samples typically lie far from the training distribution, yet many classifiers still assign them high confidence of belonging to one of the training categories. In this study, we address the problem of removing OOD examples by estimating the marginal density with a variational autoencoder (VAE). We also investigate other suitable methods, such as temperature scaling, Gaussian discriminant analysis, and label smoothing. We use the Chonnam National University (CNU) weeds dataset as the in-distribution dataset and CIFAR-10 and Caltech as the OOD datasets. Quantitative results show that the proposed framework can reject the OOD test samples with a suitable threshold.
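
A minimal sketch of density-based OOD rejection of the kind described: score each input with a trained VAE's ELBO and reject below a threshold; the `vae` interface (with `encode`/`decode` methods) and the threshold are assumptions, not the authors' code:

```python
# Higher ELBO = more plausible under the training distribution; samples
# scoring below the threshold are flagged as out-of-distribution.
import torch
import torch.nn.functional as F

def elbo_score(vae, x):
    mu, logvar = vae.encode(x)                      # q(z|x) parameters
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = vae.decode(z)
    # Gaussian reconstruction log-likelihood (up to constants) via MSE.
    recon_ll = -F.mse_loss(recon, x, reduction="none").flatten(1).sum(1)
    # KL divergence of the diagonal Gaussian posterior from N(0, I).
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).flatten(1).sum(1)
    return recon_ll - kl

def reject_ood(vae, x, threshold):
    return elbo_score(vae, x) < threshold           # True = flagged as OOD
```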

Discriminative Manifold Learning Network using Adversarial Examples for Image Classification

  • Zhang, Yuan; Shi, Biming
    • Journal of Electrical Engineering and Technology, v.13 no.5, pp.2099-2106, 2018
  • This study presents a novel approach that builds discriminative feature vectors via manifold learning with a nonlinear dimension reduction (DR) technique to improve the loss function, combined with adversarial examples to regularize the objective function for image classification. Traditional convolutional neural networks (CNNs) with many new regularization approaches have been used successfully for image classification tasks and achieve good results, but at a considerable cost in computational space and time. Distinct from traditional CNNs, we discriminate the feature vectors of objects without empirically tuned parameters. These discriminative features are intended to preserve, after projecting the image feature vectors from high to low dimension, the lower-dimensional relationships corresponding to the high-dimensional manifold, and we optimize manifold-based constraints that preserve local features, pulling mapped features of the same class together and pushing different classes apart. Using adversarial examples, the improved loss function with an additional regularization term is intended to boost the robustness and generalization of the neural network. Experimental results indicate that the approach based on discriminative features from manifold learning is not only valid but also more efficient in image classification tasks. Furthermore, the proposed approach achieves competitive classification performance on three benchmark datasets: MNIST, CIFAR-10, and SVHN.
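
A minimal sketch of the adversarial-example regularization term the abstract mentions, using FGSM perturbations; the manifold-based constraint is omitted, and epsilon and the mixing weight are assumptions:

```python
# Combined loss: clean cross-entropy plus a regularization term computed on
# FGSM adversarial examples, to improve robustness and generalization.
import torch
import torch.nn.functional as F

def adversarially_regularized_loss(model, x, y, epsilon=0.03, weight=0.5):
    x = x.clone().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x), y)
    # Gradient of the clean loss w.r.t. the input (keep graph for backward).
    grad, = torch.autograd.grad(clean_loss, x, retain_graph=True)
    x_adv = (x + epsilon * grad.sign()).detach()   # FGSM adversarial example
    adv_loss = F.cross_entropy(model(x_adv), y)    # regularization term
    return (1 - weight) * clean_loss + weight * adv_loss
```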

Filter Contribution Recycle: Boosting Model Pruning with Small Norm Filters

  • Chen, Zehong; Xie, Zhonghua; Wang, Zhen; Xu, Tao; Zhang, Zhengrui
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.11, pp.3507-3522, 2022
  • Model pruning methods have attracted considerable attention owing to the increasing demand for deploying models on low-resource devices. Most existing methods use the weight norm of filters to represent their importance and directly discard those with small values to reach the pruning target, which ignores the contribution of the small-norm filters. This not only wastes the contribution of those filters, but also gives performance comparable to training with randomly initialized weights [1]. In this paper, we point out that small-norm filters can greatly harm the performance of the pruned model if they are discarded directly. We therefore propose a novel filter contribution recycle (FCR) method for structured model pruning to resolve the aforementioned problem. FCR collects and reassembles the contribution of the small-norm filters into a mixed contribution collector, and then assigns the reassembled contribution to other filters that have a higher probability of being preserved. To achieve the target FLOPs, FCR also adopts a weight-decay strategy for the small-norm filters. To explore the effectiveness of our approach, extensive experiments are conducted on the ImageNet2012 and CIFAR-10 datasets, and superior results are reported in comparison with other methods under the same or even greater FLOPs reduction. In addition, our method can be flexibly combined with other pruning criteria.
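
A minimal sketch of the recycle idea described above: fold the weight of small-norm filters into preserved filters rather than discarding it outright; the keep ratio and the even-spread mixing rule are assumptions, not the paper's exact method:

```python
# Structured pruning that recycles discarded filters' contribution into the
# preserved filters instead of throwing it away.
import torch

def recycle_small_filters(weight: torch.Tensor, keep_ratio: float = 0.7):
    """weight: (out_channels, in_channels, k, k) conv weight."""
    norms = weight.flatten(1).norm(dim=1)                 # per-filter L2 norm
    n_keep = max(1, int(keep_ratio * weight.shape[0]))
    keep_idx = norms.topk(n_keep).indices                 # filters to preserve
    mask = torch.ones(weight.shape[0], dtype=torch.bool)
    mask[keep_idx] = False                                # small-norm filters
    # Collect the discarded filters' weight into a mixed collector...
    collector = weight[mask].sum(dim=0)
    # ...and reassign it, spread evenly across the preserved filters.
    pruned = weight[keep_idx] + collector / n_keep
    return pruned, keep_idx
```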