• Title/Summary/Keyword: Neural Network Pruning


Pattern Analysis of Organizational Leader Using Fuzzy TAM Network (퍼지TAM 네트워크를 이용한 조직리더의 패턴분석)

  • Park, Soo-Jeom;Hwang, Seung-Gook
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.2 / pp.238-243 / 2007
  • The TAM (Topographic Attentive Mapping) neural network model is especially effective for pattern analysis. It is composed of an input layer, a category layer, and an output layer, and fuzzy rules for the input and output data are acquired from it. The TAM network with three pruning rules for reducing links and nodes at each layer is called the fuzzy TAM network. In this paper, we apply the fuzzy TAM network to pattern analysis of leadership types of organizational leaders and show its usefulness. Here, the criteria of the input layer and the target values of the output layer are the value variables of the Egogram and the leadership-related personality type variables of the Enneagram, respectively.
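
The abstract mentions three pruning rules for removing links and nodes but does not spell them out. As a purely illustrative sketch (the magnitude threshold below is an assumption, not the TAM network's actual pruning rules), removing weak links and then dropping nodes that lose all their links can be expressed in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))          # links from 8 input nodes to 5 category nodes

# Illustrative rule only: drop links whose magnitude falls below a threshold...
link_mask = np.abs(W) >= 0.5
W_pruned = W * link_mask

# ...then drop any category node left with no incoming links.
alive_nodes = link_mask.any(axis=0)
W_pruned = W_pruned[:, alive_nodes]

print(f"links kept: {link_mask.sum()} / {W.size}, nodes kept: {alive_nodes.sum()} / {W.shape[1]}")
```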

Pattern Analysis of Core Competency Model for Subcontractors of Construction Companies Using Fuzzy TAM Network (퍼지 TAM 네트워크를 이용한 건설협력업체 핵심역량모델의 패턴분석)

  • Kim, Sung-Eun;Hwang, Seung-Gook
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.1 / pp.86-93 / 2006
  • The TAM (Topographic Attentive Mapping) network, based on a biologically-motivated neural network model, is especially effective for pattern analysis. It is composed of an input layer, a category layer, and an output layer, and fuzzy rules for the input and output data are acquired from it. The TAM network with three pruning rules for reducing links and nodes at each layer is called the fuzzy TAM network. In this paper, we apply the fuzzy TAM network to pattern analysis of a core competency model for subcontractors of construction companies and show its usefulness.

An Optimization Method of Neural Networks using Adaptive Regularization, Pruning, and BIC (적응적 정규화, 프루닝 및 BIC를 이용한 신경망 최적화 방법)

  • 이현진;박혜영
    • Journal of Korea Multimedia Society / v.6 no.1 / pp.136-147 / 2003
  • To achieve optimal performance on a given problem, we need an integrative process of parameter optimization via learning and structure optimization via model selection. In this paper, we propose an efficient optimization method that improves generalization performance by considering the properties of each sub-method and combining them on the basis of common theoretical properties. First, the weight parameters are optimized by natural gradient learning with adaptive regularization, which uses a diverse error function. Second, the network structure is optimized by eliminating unnecessary parameters with natural pruning. By iterating these processes, candidate models are constructed and evaluated based on the Bayesian Information Criterion (BIC), so that an optimal one is finally selected. Through computational experiments on benchmark problems, we confirm the weight parameter and structure optimization performance of the proposed method.
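
The selection step scores candidate models with the BIC. As a minimal sketch, assuming a Gaussian-error form of the criterion and made-up error/parameter counts (the paper's natural-gradient learning and natural-pruning steps are not reproduced here), selecting among candidates produced by successive pruning could look like:

```python
import numpy as np

def bic(residual_sum_squares, n_samples, n_params):
    # Gaussian-error form of the Bayesian Information Criterion; lower is better.
    return n_samples * np.log(residual_sum_squares / n_samples) + n_params * np.log(n_samples)

# Hypothetical candidates from successive pruning steps:
# (training residual sum of squares, number of remaining weights)
candidates = [(12.4, 120), (12.9, 80), (14.1, 45), (19.8, 20)]
n = 200  # assumed number of training samples

scores = [bic(rss, n, k) for rss, k in candidates]
best = int(np.argmin(scores))
print(f"selected candidate {best} with {candidates[best][1]} parameters (BIC={scores[best]:.1f})")
```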

Multi-classification Sensitive Image Detection Method Based on Lightweight Convolutional Neural Network

  • Yueheng Mao;Bin Song;Zhiyong Zhang;Wenhou Yang;Yu Lan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1433-1449 / 2023
  • In recent years, the rapid development of social networks has led to a rapid increase in the amount of information available on the Internet, which contains a large amount of sensitive content related to pornography, politics, and terrorism. For sensitive image detection, existing machine learning algorithms suffer from large model size, long training time, and slow detection speed when used for auditing and supervision. In order to detect sensitive images more accurately and quickly, this paper proposes a multi-classification sensitive image detection method based on a lightweight Convolutional Neural Network. Building on the EfficientNet model, the method adopts the Ghost Module idea of the GhostNet model and adds the SE channel attention mechanism in the Ghost Module for feature extraction. Experimental results on the sensitive image dataset constructed in this paper show that the accuracy of the proposed method in sensitive information detection reaches 94.46%, higher than that of similar methods. The model is then pruned through an ablation experiment and the activation function is replaced by Hard-Swish, which reduces the parameters of the original model by 54.67%. While maintaining accuracy, the detection time for a single image is reduced from 8.88 ms to 6.37 ms. The experiments demonstrate that the proposed method improves the precision of multi-class sensitive image detection and significantly decreases the number of model parameters, achieving higher accuracy than comparable algorithms with a more lightweight design.
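
For readers unfamiliar with the building blocks named above, the following PyTorch sketch shows a Ghost module with an SE channel-attention block and Hard-Swish activations; the channel sizes, reduction ratio, and kernel choices are assumptions, not the paper's exact configuration on top of EfficientNet:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Standard SE channel-attention block (the reduction ratio is an assumed choice)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class GhostModuleSE(nn.Module):
    """Ghost module (primary conv + cheap depthwise 'ghost' features) followed by SE,
    in the spirit of the abstract; layer sizes and activation are assumptions."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary),
            nn.Hardswish(),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, out_ch - primary, 3, padding=1, groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary),
            nn.Hardswish(),
        )
        self.se = SqueezeExcite(out_ch)

    def forward(self, x):
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return self.se(out)

print(GhostModuleSE(16, 32)(torch.randn(1, 16, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```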

Elimination of Redundant Input Information and Parameters during Neural Network Training (신경망 학습 과정중 불필요한 입력 정보 및 파라미터들의 제거)

  • Won, Yong-Gwan;Park, Gwang-Gyu
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.439-448 / 1996
  • Extraction and selection of informative features play a central role in pattern recognition. This paper describes a modified back-propagation algorithm that selects the informative features and trains a neural network simultaneously. The algorithm is mainly composed of three repetitive steps: training, connection pruning, and input unit elimination. After initial training, the connections that have small magnitude are first pruned. Any input unit that has a small number of connections to the hidden units is then deleted, which is equivalent to excluding the feature corresponding to that unit. If the error increases, the network is retrained, again followed by connection pruning and input unit elimination. As a result, the algorithm selects the most important features in the measurement space without a transformation to another space. The selected features are also the most informative ones for classification, because feature selection is tightly coupled with classification performance. This algorithm helps avoid measurement of redundant or less informative features, which may be expensive. Furthermore, the final network does not include redundant parameters, i.e., weights and biases, that may cause degradation of classification performance. In applications, the algorithm preserves the most informative features and significantly reduces the dimension of the feature vectors without performance degradation.
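
A minimal NumPy sketch of the two pruning steps described above (the magnitude threshold and the minimum-link count are illustrative assumptions, and the retraining loop is omitted):

```python
import numpy as np

def prune_and_select_inputs(W_in_hidden, weight_thresh=0.05, min_links=2):
    """Illustrative version of the pruning steps (thresholds are assumptions):
    drop small-magnitude input-to-hidden connections, then drop any input unit that
    retains fewer than `min_links` connections to hidden units, i.e. discard that feature."""
    mask = np.abs(W_in_hidden) >= weight_thresh
    W_pruned = W_in_hidden * mask
    keep_inputs = mask.sum(axis=1) >= min_links     # rows = input units, cols = hidden units
    return W_pruned[keep_inputs], keep_inputs

rng = np.random.default_rng(1)
W = rng.normal(scale=0.2, size=(10, 6))             # 10 input features, 6 hidden units
W_new, kept = prune_and_select_inputs(W)
print("selected features:", np.flatnonzero(kept))
```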

A Study on Developing Computer Models of Neuropsychiatric Diseases (신경정신질환의 컴퓨터모델 개발에 관한 연구)

  • Koh, In-Song;Park, Jeong-Wook
    • Korean Journal of Biological Psychiatry / v.6 no.1 / pp.12-20 / 1999
  • In order to understand the pathogenesis and progression of some neuropsychiatric diseases related to synaptic loss, we attempted to develop a computer model in this study. We built a simple autoassociative memory network that remembers numbers, transformed it into a disease model by pruning synapses, and measured its memory performance as a function of synaptic deletion. Performance declined as the amount of synaptic loss increased, and the decline was sudden or gradual depending on the mode of synaptic pruning. Through a series of computer simulations, the developed model demonstrated how synaptic loss could cause memory impairment, and suggested a new direction of research in neuropsychiatry.
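
The abstract does not detail the network used; the sketch below uses a standard Hopfield-style autoassociative memory with random synaptic deletion, purely to illustrate how recall can be measured as a function of synaptic loss (all sizes and deletion fractions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns in a Hopfield-style autoassociative memory.
n, patterns = 100, rng.choice([-1, 1], size=(5, 100))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(W, probe, steps=20):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

for frac in [0.0, 0.3, 0.6, 0.9]:                           # fraction of synapses deleted
    Wp = W * (rng.random(W.shape) >= frac)                   # random synaptic pruning
    noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=n)  # probe with ~25% flipped bits
    overlap = (recall(Wp, noisy) @ patterns[0]) / n
    print(f"deleted {frac:.0%} of synapses -> recall overlap {overlap:+.2f}")
```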

Dynamic Filter Pruning for Compression of Deep Neural Network (동적 필터 프루닝 기법을 이용한 심층 신경망 압축)

  • Cho, InCheon;Bae, SungHo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.675-679 / 2020
  • Recently, models with deeper layers and wider channels have been proposed to improve image classification performance, but such high-accuracy models demand excessive computing power and computation time. For deep neural network models used in image classification, this paper presents a dynamic filter pruning method that removes relatively unnecessary weights while minimizing the drop in classification accuracy. Unlike one-shot pruning and static filter pruning, the method gives pruned weights an opportunity to be revived, which yields better performance. In addition, because retraining is not required, it guarantees fast computation and low computing power. Experiments with ResNet20 on the CIFAR10 dataset showed a classification accuracy of 88.74% even at a compression ratio of about 50%.
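
As a rough sketch of the dynamic filter-pruning idea described above, one hypothetical way to recompute a filter mask from current L1 norms at every step, so that previously pruned filters can come back, is shown below (the keep ratio and masking details are assumptions):

```python
import torch

def dynamic_filter_mask(conv_weight, keep_ratio=0.5):
    """Recompute the filter mask from current L1 norms; a filter masked at one step
    can revive later if its norm grows. keep_ratio=0.5 mirrors the ~50% compression
    in the abstract; everything else is an assumption."""
    norms = conv_weight.abs().sum(dim=(1, 2, 3))          # one L1 norm per output filter
    k = max(1, int(keep_ratio * norms.numel()))
    threshold = norms.topk(k).values.min()
    return (norms >= threshold).float().view(-1, 1, 1, 1)

w = torch.randn(64, 32, 3, 3)                             # one conv layer of a ResNet20-like net
for step in range(3):                                      # during training, re-mask every step
    mask = dynamic_filter_mask(w)
    w_used = w * mask                                      # the forward pass would use the masked weights
    w = w + 0.01 * torch.randn_like(w)                     # stand-in for a gradient update on the dense weights
print("filters kept at last step:", int(mask.sum()))
```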

Compression of CNN Using Local Nonlinear Quantization in MPEG-NNR (MPEG-NNR 의 지역 비선형 양자화를 이용한 CNN 압축)

  • Lee, Jeong-Yeon;Moon, Hyeon-Cheol;Kim, Sue-Jeong;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.662-663 / 2020
  • Recently, MPEG has been standardizing NNR (Compression of Neural Network for Multimedia Content Description and Analysis), which compresses neural network models into a representation that is interoperable across various deep learning frameworks. This paper presents a Local Non-linear Quantization (LNQ) technique for compressing CNN models in MPEG-NNR. The proposed LNQ applies an additional non-linear quantization to each weight-matrix block of every layer of a uniformly quantized CNN model. For pruned models, LNQ transmits the zero-valued weights in a block as they are and applies binary clustering only to the non-zero weights. In compression experiments on a CNN model for audio classification (DCASE task), the proposed technique achieved about 1.78 times higher compression than conventional uniform quantization at the same classification performance.
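
A toy illustration of the binary-clustering step described above, applied to a single weight block of a pruned layer (plain 1-D two-means; everything beyond "keep the zeros, snap the non-zeros to two values" is an assumption):

```python
import numpy as np

def lnq_block(block, iters=10):
    """Rough sketch for one block of a pruned, uniformly quantized layer:
    zeros are kept as zeros, and the non-zero weights are replaced by one of
    two cluster centroids found by simple 1-D 2-means."""
    out = block.copy()
    nz = block[block != 0]
    if nz.size < 2:
        return out
    c = np.array([nz.min(), nz.max()], dtype=float)         # initial centroids
    for _ in range(iters):
        assign = np.abs(nz[:, None] - c[None, :]).argmin(axis=1)
        for j in range(2):
            if np.any(assign == j):
                c[j] = nz[assign == j].mean()
    out[block != 0] = c[assign]
    return out

block = np.array([0, 3, 0, 5, -4, 0, 4, -5, 0, -3], dtype=float)
print(lnq_block(block))   # zeros preserved; non-zeros snapped to two centroids
```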

Parameter-Efficient Neural Networks Using Template Reuse (템플릿 재사용을 통한 패러미터 효율적 신경망 네트워크)

  • Kim, Daeyeon;Kang, Woochul
    • KIPS Transactions on Software and Data Engineering / v.9 no.5 / pp.169-176 / 2020
  • Recently, deep neural networks (DNNs) have brought revolutions to many mobile and embedded devices by providing human-level machine intelligence for various applications. However, the high inference accuracy of such DNNs comes at high computational cost, and hence there have been significant efforts to reduce the computational overhead of DNNs, either by compressing off-the-shelf models or by designing new small-footprint DNN architectures tailored to resource-constrained devices. One notable recent paradigm in designing small-footprint DNN models is sharing parameters across several layers. However, previous parameter-sharing techniques have been applied to large deep networks, such as ResNet, that are known to have high redundancy. In this paper, we propose a parameter-sharing method for already parameter-efficient small networks such as ShuffleNetV2. In our approach, small templates are combined with small layer-specific parameters to generate weights. Our experimental results on the ImageNet and CIFAR100 datasets show that our approach can reduce the parameter size of ShuffleNetV2 by 15%-35% while incurring smaller drops in accuracy than previous parameter-sharing and pruning approaches. We further show that the proposed approach is efficient in terms of latency and energy consumption on modern embedded devices.
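
The core idea above, generating layer weights from shared templates plus small layer-specific parameters, can be sketched in PyTorch as follows; the template bank size, mixing scheme, and layer shapes are assumptions rather than the paper's actual design:

```python
import torch
import torch.nn as nn

class TemplateConv2d(nn.Module):
    """Toy take on template reuse (not the paper's exact scheme): a bank of shared
    3x3 templates is reused across layers, and each layer stores only a small set of
    mixing coefficients that combine the templates into its own filters."""
    def __init__(self, templates, in_ch, out_ch):
        super().__init__()
        self.templates = templates                                   # shared across layers: (T, 3, 3)
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, templates.shape[0]) * 0.1)

    def forward(self, x):
        # weight[o, i] = sum_t coeff[o, i, t] * template[t]
        weight = torch.einsum('oit,thw->oihw', self.coeff, self.templates)
        return nn.functional.conv2d(x, weight, padding=1)

shared = nn.Parameter(torch.randn(4, 3, 3) * 0.1)                    # 4 templates shared by all layers
layer1 = TemplateConv2d(shared, in_ch=3, out_ch=16)
layer2 = TemplateConv2d(shared, in_ch=16, out_ch=16)
print(layer2(layer1(torch.randn(1, 3, 32, 32))).shape)               # torch.Size([1, 16, 32, 32])
```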

Compression of DNN Integer Weight using Video Encoder (비디오 인코더를 통한 딥러닝 모델의 정수 가중치 압축)

  • Kim, Seunghwan;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.778-789 / 2021
  • Recently, various lightweight methods for using Convolutional Neural Network (CNN) models on mobile devices have emerged. Weight quantization, which lowers the bit precision of weights, is a lightweight method that enables a model to run with integer computation in mobile environments where GPU acceleration is unavailable. Weight quantization has already been used in various models to reduce computational complexity and model size at a small loss of accuracy. Considering memory size and computing speed, as well as the storage capacity of the device and limited network environments, this paper proposes a method of compressing the integer weights after quantization using a video codec. To verify the performance of the proposed method, experiments were conducted on VGG16, ResNet50, and ResNet18 models trained on the ImageNet and Places365 datasets. As a result, an accuracy loss of less than 2% and high compression efficiency were achieved for various models. In addition, comparison with similar compression methods verified that the compression efficiency was more than doubled.
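
Only the packing step is sketched below: quantizing float weights to 8-bit integers and tiling them into grayscale "frames" that an external video encoder could then compress. The frame size, the symmetric 8-bit quantizer, and the ffmpeg command in the comment are assumptions, not the paper's actual pipeline:

```python
import numpy as np

def weights_to_frames(weights, frame_hw=(64, 64)):
    """Quantize float weights to int8 and tile them into fixed-size grayscale frames."""
    h, w = frame_hw
    flat = weights.ravel()
    scale = np.abs(flat).max() / 127.0
    q = np.clip(np.round(flat / scale), -128, 127).astype(np.int8)
    pixels = (q.astype(np.int16) + 128).astype(np.uint8)          # shift to 0..255 for a grayscale frame
    pad = (-pixels.size) % (h * w)
    pixels = np.pad(pixels, (0, pad))
    return pixels.reshape(-1, h, w), scale

frames, scale = weights_to_frames(np.random.randn(3, 3, 64, 64).astype(np.float32))
print(frames.shape, frames.dtype, f"scale={scale:.4f}")
# frames.tofile("weights_gray.raw") could then be handed to an external encoder, e.g.
#   ffmpeg -f rawvideo -pix_fmt gray -s 64x64 -i weights_gray.raw weights.mp4   (assumed pipeline)
```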