• Title/Summary/Keyword: CIFAR10

Efficient Path Selection in Continuous Learning Environment (지속적 학습 환경에서 효율적 경로 선택)

  • Park, Seong-Hyeon; Kang, Seok-Hoon
    • Journal of IKEEE / v.25 no.3 / pp.412-419 / 2021
  • In this paper, we propose a performance improvement of the LwF method through efficient path selection in a continuous learning environment, and compare its performance and structure with conventional LwF. For the comparison, we evaluate performance on the MNIST, EMNIST, Fashion MNIST, and CIFAR10 datasets under configurations of differing complexity. Experiments show up to a 20% improvement in accuracy on each task, mitigating the catastrophic forgetting phenomenon in continuous learning environments.
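
For context, LwF (Learning without Forgetting), which this paper builds on, preserves earlier tasks by distilling the previous model's softened outputs while the new task is trained. A minimal PyTorch sketch of the combined loss follows; the temperature `T` and weight `lam` are illustrative choices, not values from the paper:

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_head_logits, old_head_logits, old_model_logits,
             targets, T=2.0, lam=1.0):
    """LwF-style loss: cross-entropy on the new task plus a distillation
    term that keeps the updated model's old-task outputs close to the
    frozen old model's outputs on the same inputs."""
    ce = F.cross_entropy(new_head_logits, targets)
    # Soften both distributions with temperature T before distilling.
    distill = F.kl_div(
        F.log_softmax(old_head_logits / T, dim=1),
        F.softmax(old_model_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + lam * distill
```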

Effects of Hyper-parameters and Dataset on CNN Training

  • Nguyen, Huu Nhan; Lee, Chanho
    • Journal of IKEEE / v.22 no.1 / pp.14-20 / 2018
  • The purpose of training a convolutional neural network (CNN) is to obtain weight factors that give high classification accuracy. The initial values of the hyper-parameters affect the training results, so it is important to train a CNN with a suitable hyper-parameter set: the learning rate, the batch size, the initialization of the weight factors, and the optimizer. We investigate the effect of a single hyper-parameter while the others are fixed, in order to obtain a hyper-parameter set that gives higher classification accuracy and requires shorter training time, using a proposed VGG-like CNN since the VGG architecture is widely used. The CNN is trained on four datasets: CIFAR10, CIFAR100, GTSRB, and DSDL-DB. The effects of normalization and data transformation on the datasets are also investigated, and a training scheme using merged datasets is proposed.
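
The one-factor-at-a-time protocol the abstract describes can be sketched as below: one hyper-parameter (here the learning rate) is swept while the batch size, weight initialization, and optimizer stay fixed. The tiny CNN and all values are illustrative stand-ins, not the paper's VGG-like network or settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVGG(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)  # fixed init across runs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

x = torch.randn(128, 3, 32, 32)           # CIFAR10-sized dummy batch
y = torch.randint(0, 10, (128,))

for lr in [0.1, 0.01, 0.001]:             # only the learning rate varies
    model = TinyVGG()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"lr={lr}: first-step loss {loss.item():.3f}")
```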

Inter-Layer Kernel Prediction: Weight Sharing and Model Compression of Convolutional Neural Networks Motivated by Inter-frame Prediction (Inter-Layer Kernel Prediction: 프레임 간 Prediction에 기반한 컨볼루션 신경망 가중치 공유 및 모델 압축 방법)

  • Lee, Kang-Ho; Bae, Sung-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.136-139 / 2020
  • In this paper, in the context of the recently active research on deep neural network compression, we propose Inter-Layer Kernel Prediction, a model compression method based on weight sharing. The proposed method applies the inter-frame prediction technique used in video compression to weight sharing and model compression of convolutional neural networks. We observe that similar kernels exist across layers and, based on this observation, use Inter-Layer Kernel Prediction to represent and store the existing model weights with fewer bits. The proposed method achieves a compression ratio of about 4.1x on ResNet trained on CIFAR10/100, and on ResNet110 trained on CIFAR10 it even improves accuracy by 0.04% over the baseline model.
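
A rough sketch of the inter-layer prediction idea: rather than storing a later layer's kernel directly, store a low-bit quantized residual against a similar reference kernel from another layer, mirroring inter-frame prediction in video coding. The uniform quantizer below is an assumption, not the paper's coding scheme:

```python
import torch

def encode_residual(kernel, reference, bits=4):
    """Store only a low-bit residual between a kernel and its reference."""
    residual = kernel - reference
    scale = residual.abs().max() / (2 ** (bits - 1) - 1)
    q = torch.round(residual / scale).to(torch.int8)  # low-bit residual
    return q, scale

def decode_kernel(q, scale, reference):
    return reference + q.float() * scale

ref = torch.randn(64, 64, 3, 3)            # kernel of an earlier layer
ker = ref + 0.05 * torch.randn_like(ref)   # a similar later-layer kernel
q, s = encode_residual(ker, ref)
err = (decode_kernel(q, s, ref) - ker).abs().max().item()
print(f"max reconstruction error: {err:.4f}")
```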

Object Classification with Angular Margin Loss Function (각도 마진 손실 함수를 적용한 객체 분류)

  • Park, Seonji; Cho, Namik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.224-227 / 2022
  • Object classification is the task of determining the category of the object contained in an input image. Representative deep learning-based models such as Faster R-CNN [2] and YOLO [3] have been developed, but there is still room for performance improvement. In this work, we apply an angular margin loss function to several existing classification models to improve their performance. The angular margin loss was proposed in the face recognition model SphereFace [4] for classification problems on single-domain datasets such as face recognition; it improves inter-class discrimination by adding a margin to the class decision boundary of the conventional softmax function. We apply the angular margin loss to classification on the CIFAR10 and CIFAR100 datasets and, experimenting with backbone networks such as ResNet, EfficientNet, and MobileNet, confirm that mAP improves on average.
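
A minimal sketch of an angular-margin classification head in PyTorch. For brevity it uses an additive margin cos(θ + m) rather than SphereFace's multiplicative margin cos(mθ); the scale `s` and margin `m` are illustrative values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMarginHead(nn.Module):
    """Softmax head with an angular margin on the decision boundary."""
    def __init__(self, in_features, num_classes, s=30.0, m=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))
        self.s, self.m = s, m

    def forward(self, x, labels):
        # Cosine similarity between L2-normalized features and class weights.
        cos = F.linear(F.normalize(x), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the margin only to the target class's angle.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits          # scaled logits for cross-entropy

head = AngularMarginHead(512, 10)
feats = torch.randn(8, 512)             # backbone features (e.g., ResNet)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(head(feats, labels), labels)
```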

Improved Adapting a Single Network to Multiple Tasks By Bit Plane Slicing and Dithering (향상된 비트 평면 분할을 통한 다중 학습 통합 신경망 구축)

  • Bae, Joon-ki; Bae, Sung-ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.643-646 / 2020
  • In this paper, we analyze the limitations of our previous work, which built a single unified network for multiple tasks via bit plane slicing and dithering, and present an improved method. Approaches to building a unified network attempted so far either share the weights or layers of the network across tasks or partition them per task. Along the same lines, this work allocates a smaller unit, the bit planes of the weights, to individual tasks, building a more efficient unified network. Experiments were conducted on image classification: applied to the popular ResNet18 architecture, the method achieved a theoretical compression ratio of 50% on the CIFAR10 and CIFAR100 datasets with almost no observed performance degradation.
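
One plausible reading of the bit-plane allocation, sketched below: quantize a shared weight tensor to 8 bits and give each task its own subset of bit planes. The 4/4 split between two tasks is an illustrative assumption; the paper's actual allocation and dithering step are not reproduced here:

```python
import torch

def to_uint8(w):
    """Affine-quantize a float tensor to 8 bits."""
    lo, hi = w.min(), w.max()
    q = torch.round((w - lo) / (hi - lo) * 255).to(torch.uint8)
    return q, lo, hi

def task_view(q, planes):
    """Keep only the bit planes assigned to one task."""
    mask = sum(1 << p for p in planes)
    return q & mask

w = torch.randn(64, 64)                       # shared weight tensor
q, lo, hi = to_uint8(w)
w_task0 = task_view(q, planes=range(4, 8))    # high planes: coarse values
w_task1 = task_view(q, planes=range(0, 4))    # low planes
print(w_task0.unique().numel(), w_task1.unique().numel())
```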

A new lightweight network based on MobileNetV3

  • Zhao, Liquan; Wang, Leilei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.1-15 / 2022
  • MobileNetV3 is specially designed for mobile devices with limited memory and computing power. To reduce the number of network parameters and improve inference speed, a new lightweight network is proposed based on MobileNetV3. First, to reduce the computation of the residual blocks, a partial residual structure is designed by dividing the input feature maps into two parts; this partial residual structure replaces the residual block in MobileNetV3. Second, a dual-path feature extraction structure is designed to further reduce the computation of MobileNetV3: different convolution kernel sizes are used in the two paths to extract feature maps of different sizes, and a transition layer fuses the features to reduce the influence of the new structure on accuracy. The CIFAR-100 and ImageNet datasets are used to test the proposed partial residual structure: a ResNet built on it has fewer parameters and FLOPs than the original ResNet. The improved MobileNetV3 is evaluated on the CIFAR-10, CIFAR-100, and ImageNet image classification datasets; compared with MobileNetV3, GhostNet, and MobileNetV2, it has fewer parameters and FLOPs. The improved MobileNetV3 is also tested on a CPU and a Raspberry Pi, where it is faster than the other networks.
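
A sketch of one plausible form of the partial residual block: split the input feature maps into two parts, convolve only one part, and concatenate. The exact split ratio and fusion used in the paper may differ:

```python
import torch
import torch.nn as nn

class PartialResidualBlock(nn.Module):
    """Split channels in two; convolve one half, pass the other through.
    Roughly halves the block's computation versus a full residual block."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.conv = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, bias=False),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)          # divide feature maps into two parts
        return torch.cat([self.conv(a) + a, b], dim=1)  # residual on one part

block = PartialResidualBlock(64)
out = block(torch.randn(1, 64, 32, 32))   # shape preserved: (1, 64, 32, 32)
```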

StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park; Gwonsang Ryu; Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.449-458 / 2023
  • Artificial intelligence is providing convenience in various fields using big data and deep learning technologies. However, deep learning technology is highly vulnerable to adversarial examples, which can cause classification models to misclassify. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added categorical entropy loss on adversarial examples generated by various attack methods, enabling the discriminator to detect adversarial examples and the generator to purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11%, derived from the restoration and detection performance.
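
An inference-time sketch of the detect-then-purify pipeline: the trained discriminator flags suspected adversarial inputs, and flagged inputs are routed through the generator. The tiny `G` and `D` modules and the 0.5 threshold are placeholders, not the paper's trained StarGAN:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())   # purifier
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))              # detector

def defend(x, threshold=0.5):
    score = torch.sigmoid(D(x))                  # P(adversarial) per image
    is_adv = (score > threshold).view(-1, 1, 1, 1)
    return torch.where(is_adv, G(x), x)          # purify only flagged inputs

x = torch.rand(4, 3, 32, 32)                     # CIFAR-10-sized batch
cleaned = defend(x)
```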

Layer-wise hint-based training for knowledge transfer in a teacher-student framework

  • Bae, Ji-Hoon; Yim, Junho; Kim, Nae-Soo; Pyo, Cheol-Sig; Kim, Junmo
    • ETRI Journal / v.41 no.2 / pp.242-253 / 2019
  • We devise a layer-wise hint training method to improve the existing hint-based knowledge distillation (KD) training approach, which is employed for knowledge transfer in a teacher-student framework using a residual network (ResNet). To achieve this objective, the proposed method first iteratively trains the student ResNet and incrementally employs hint-based information extracted from the pretrained teacher ResNet containing several hint and guided layers. Next, typical softening factor-based KD training is performed using the previously estimated hint-based information. We compare the recognition accuracy of the proposed approach with that of KD training without hints, hint-based KD training, and ResNet-based layer-wise pretraining using reliable datasets, including CIFAR-10, CIFAR-100, and MNIST. When using the selected multiple hint-based information items and their layer-wise transfer in the proposed method, the trained student ResNet more accurately reflects the pretrained teacher ResNet's rich information than the baseline training methods, for all the benchmark datasets we consider in this study.
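
The two ingredients the abstract combines, FitNets-style hint losses and softening-factor KD, can be sketched as follows; the shapes, the 1x1 regressor, and the temperature are illustrative, and the paper applies the hint stages layer-wise and iteratively rather than in a single step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hint stage: an L2 loss pulls a student "guided" layer's feature map
# toward a teacher "hint" layer's, with a 1x1 regressor matching channels.
teacher_feat = torch.randn(8, 256, 8, 8)   # from a pretrained teacher ResNet
student_feat = torch.randn(8, 128, 8, 8)   # from the student being trained

regressor = nn.Conv2d(128, 256, kernel_size=1)   # match channel dimensions
hint_loss = F.mse_loss(regressor(student_feat), teacher_feat)

# Final stage: standard KD with a softening temperature T.
T = 4.0
t_logits = torch.randn(8, 10)
s_logits = torch.randn(8, 10, requires_grad=True)
kd_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                   F.softmax(t_logits / T, dim=1),
                   reduction="batchmean") * T * T
```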

Training-Free Hardware-Aware Neural Architecture Search with Reinforcement Learning

  • Tran, Linh Tam; Bae, Sung-Ho
    • Journal of Broadcast Engineering / v.26 no.7 / pp.855-861 / 2021
  • Neural Architecture Search (NAS) is a cutting-edge technology in the machine learning community. NAS Without Training (NASWOT) has recently been proposed to tackle the high computational demands of NAS by leveraging indicators that predict the performance of an architecture before training. The advantage of these indicators is that they require no training, so NASWOT reduces search time and computational cost significantly. However, NASWOT considers only predicted performance, which does not guarantee fast inference on hardware devices. In this paper, we propose a multi-objective reward function that considers both a network's latency and its predicted performance, and incorporate it into a reinforcement learning approach to search for the best networks with low latency. Unlike other methods, which use FLOPs as a latency measure that does not reflect actual latency, we obtain each network's latency from the hardware NAS benchmark. We conduct extensive experiments on NAS-Bench-201 using the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets, and show that the proposed method can find the best network under a latency constraint without training subnetworks.
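
A toy sketch of a multi-objective reward that trades a training-free performance score off against measured latency. The functional form, target latency, and weight `alpha` are assumptions, not the paper's formulation; in the paper the latency comes from a hardware NAS benchmark:

```python
def reward(pred_score, latency_ms, target_ms=5.0, alpha=0.5):
    """Combine a training-free performance score with measured latency,
    penalizing architectures whose latency exceeds the target."""
    latency_term = min(1.0, target_ms / latency_ms)
    return pred_score * (latency_term ** alpha)

# An RL controller would sample an architecture, look up its NASWOT-style
# score and benchmark latency, and use this reward to update its policy.
candidates = [("archA", 0.92, 3.1), ("archB", 0.95, 9.4)]
best = max(candidates, key=lambda c: reward(c[1], c[2]))
print(best[0])  # archA: slightly lower score, but it meets the latency target
```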

Application and Performance Analysis of Double Pruning Method for Deep Neural Networks (심층신경망의 더블 프루닝 기법의 적용 및 성능 분석에 관한 연구)

  • Lee, Seon-Woo; Yang, Ho-Jun; Oh, Seung-Yeon; Lee, Mun-Hyung; Kwon, Jang-Woo
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.23-34 / 2020
  • Recently, commercialization in the deep learning field has been difficult because of the high computing power required and the cost of computing resources. In this paper, we apply a double pruning technique and evaluate its performance on deep neural networks across various datasets. Double pruning combines basic network slimming with parameter pruning. The proposed technique has the advantage of removing parameters that are unimportant to learning and improving speed without compromising accuracy. After training on various datasets, the pruning ratio was increased to reduce the size of the model. NetScore analysis confirmed that MobileNet-V3 showed the highest performance. On the CIFAR-10 dataset, post-pruning performance was highest for MobileNet-V3, which is built from depthwise separable convolutions, and traditional convolutional networks such as VGGNet and ResNet also improved significantly.
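
A sketch of the two stages of double pruning, assuming network slimming ranks channels by the magnitude of their BatchNorm scale factors and the second stage removes the smallest individual weights (via torch.nn.utils.prune); the ratios are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
)
for m in model.modules():                        # random gammas so the demo
    if isinstance(m, nn.BatchNorm2d):            # has something to rank
        nn.init.uniform_(m.weight, 0.0, 1.0)

# Stage 1: find channels whose BN gamma falls below a global threshold
# (network slimming would then rebuild the model without those channels).
gammas = torch.cat([m.weight.abs().detach()
                    for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
threshold = torch.quantile(gammas, 0.3)          # prune ~30% of channels
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        kept = (m.weight.abs() >= threshold).sum().item()
        print(f"BN layer keeps {kept}/{m.weight.numel()} channels")

# Stage 2: unstructured magnitude pruning of the remaining conv weights.
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        prune.l1_unstructured(m, name="weight", amount=0.5)
```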