• Title/Summary/Keyword: layer pruning

Performance Analysis of Layer Pruning on Sphere Decoding in MIMO Systems

  • Karthikeyan, Madurakavi;Saraswady, D.
    • ETRI Journal, v.36 no.4, pp.564-571, 2014
  • Sphere decoding (SD) for multiple-input multiple-output (MIMO) systems is a well-recognized approach for achieving near-maximum-likelihood performance with reduced complexity. SD is a tree search process in which a large number of nodes may be visited in the effort to find an estimate of the transmitted symbol vector. In this paper, a simple and generalized approach called layer pruning is proposed to reduce the complexity of SD. Pruning a layer from the search process reduces the total number of nodes in the sphere search. The symbols corresponding to the pruned layer are obtained with a QRM-MLD receiver. Simulation results show that the proposed method reduces the number of nodes searched to decode the transmitted symbols while incurring negligible performance loss. The proposed technique reduces complexity by 35% to 42% in the low- and medium-SNR regimes. To demonstrate the potential of our method, we compare the results with another well-known method, namely probabilistic tree pruning SD.
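
As a rough illustration of the idea, the sketch below implements a depth-first sphere decoder in which the bottom layers of the search tree are pruned and their symbols are supplied up front by a cheaper detector (a stand-in for the paper's QRM-MLD stage). The QPSK constellation, the depth-first strategy, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def sphere_decode_layer_pruned(H, y, fixed_syms):
    """Depth-first SD that skips (prunes) the bottom len(fixed_syms) layers;
    those symbols come from a simpler detector (QRM-MLD in the paper)."""
    n = H.shape[1]
    k = len(fixed_syms)                 # number of pruned layers
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    best_cost, best_x = np.inf, None
    x = np.zeros(n, dtype=complex)
    x[:k] = fixed_syms                  # pruned layers: never branched on

    def dfs(i, cost):
        nonlocal best_cost, best_x
        if cost >= best_cost:           # outside the current sphere radius
            return
        if i < k:                       # reached the pruned layers
            c = cost
            for j in range(k - 1, -1, -1):
                c += abs(z[j] - R[j, j:] @ x[j:]) ** 2
                if c >= best_cost:
                    return
            best_cost, best_x = c, x.copy()
            return
        for s in QPSK:                  # branch on layer i
            x[i] = s
            dfs(i - 1, cost + abs(z[i] - R[i, i:] @ x[i:]) ** 2)

    dfs(n - 1, 0.0)
    return best_x, best_cost

# usage sketch: fix the bottom layer's symbol from a cheaper detector, e.g.
# x_hat, cost = sphere_decode_layer_pruned(H, y, fixed_syms=[s0_estimate])
```

Each pruned layer removes a factor of up to four (the QPSK alphabet size) from the worst-case node count, which is where this kind of complexity saving comes from.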

Optimized Network Pruning Method for Li-ion Batteries State-of-charge Estimation on Robot Embedded System (로봇 임베디드 시스템에서 리튬이온 배터리 잔량 추정을 위한 신경망 프루닝 최적화 기법)

  • Dong Hyun Park;Hee-deok Jang;Dong Eui Chang
    • The Journal of Korea Robotics Society, v.18 no.1, pp.88-92, 2023
  • Lithium-ion batteries are widely used in industrial settings such as field robots, drones, and electric vehicles because of their high energy efficiency, light weight, long life span, and low self-discharge rate. When using a lithium-ion battery in the field, it is important to accurately estimate the state of charge (SoC) of the battery to prevent damage. In recent years, SoC estimation using data-driven artificial neural networks has drawn attention, but deployment on embedded boards in the field has been difficult because the computation is heavy and complex. To solve this problem, network lightweighting techniques such as pruning have recently attracted interest. When pruning a neural network, the resulting performance depends on which layers are pruned and by how much. In this paper, we introduce an optimized pruning technique that improves on an existing pruning method and perform comparative experiments to analyze the results.
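
For a concrete picture of layer-wise pruning on a small SoC estimator, here is a hedged sketch using PyTorch's pruning utilities. The network shape, the input features, and the per-layer ratios are assumptions for illustration, not the paper's configuration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

class SoCNet(nn.Module):
    """Small MLP mapping battery measurements to a state-of-charge estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),    # assumed inputs: V, I, temperature
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid()  # SoC in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = SoCNet()

# Which layer is pruned, and by how much, drives the accuracy/size trade-off;
# the ratios below are placeholders that would be tuned per layer.
for idx, amount in [(0, 0.3), (2, 0.5)]:    # Linear layers inside `net`
    prune.l1_unstructured(model.net[idx], name="weight", amount=amount)
    prune.remove(model.net[idx], "weight")  # bake the mask into the weights
```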

Hierarchical ART2 Classification Model Combined with an Adaptive Searching Strategy (적응적 탐색 전략을 갖춘 계층적 ART2 분류 모델)

  • 김도현;차의영
    • Journal of KIISE: Software and Applications, v.30 no.7_8, pp.649-658, 2003
  • We propose a hierarchical ART2 network architecture for performance improvement and a fast pattern classification model using fitness-based selection. The hierarchical network creates coarse clusters in the first ART2 layer by unsupervised learning, then creates fine clusters under each first-layer cluster in the second layer by supervised learning. First, the input pattern is compared with the clusters of the first layer, and candidate clusters are selected by a fitness measure. We design an optimized fitness function for pruning clusters that measures the relative distance ratio between the input pattern and the clusters, which improves both speed and accuracy. Next, the input pattern is compared with the fine clusters connected to the selected candidates to find the winner cluster. Finally, the pattern is classified by the label of the winner cluster. Our experiments show that the proposed method is more accurate and faster than other approaches.
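
The two-stage search can be pictured with a short sketch. The fitness function below, a relative distance ratio, is an assumption standing in for the authors' optimized measure, and all names are illustrative.

```python
import numpy as np

def fitness(x, centers):
    """Relative-distance-ratio fitness: 1.0 for the nearest coarse cluster,
    smaller for clusters farther from the input pattern (assumed form)."""
    d = np.linalg.norm(centers - x, axis=1)
    return d.min() / np.maximum(d, 1e-12)

def classify(x, coarse_centers, fine_clusters, labels, tau=0.8):
    # Stage 1: keep only coarse clusters whose fitness clears the threshold
    candidates = np.flatnonzero(fitness(x, coarse_centers) >= tau)
    # Stage 2: search only the fine clusters attached to the survivors
    best_label, best_d = None, np.inf
    for c in candidates:
        for center, label in zip(fine_clusters[c], labels[c]):
            d = np.linalg.norm(center - x)
            if d < best_d:
                best_label, best_d = label, d
    return best_label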

Neural Network Model Compression Algorithms for Image Classification in Embedded Systems (임베디드 시스템에서의 객체 분류를 위한 인공 신경망 경량화 연구)

  • Shin, Heejung;Oh, Hyondong
    • The Journal of Korea Robotics Society, v.17 no.2, pp.133-141, 2022
  • This paper introduces model compression algorithms that make a deep neural network smaller and faster for embedded systems. Model compression algorithms can be broadly categorized into pruning, quantization, and knowledge distillation. In this study, gradual pruning, quantization-aware training, and a knowledge distillation method that learns the activation boundaries of the teacher network's hidden layers are integrated. As a large deep neural network is compressed and accelerated by these algorithms, embedded computing boards can run it much faster with less memory while preserving reasonable accuracy. To evaluate the compressed networks, we measure the size, latency, and accuracy of DenseNet201 on image classification with the CIFAR-10 dataset on an NVIDIA Jetson Xavier.
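
Two of the three combined ingredients are easy to sketch: the polynomial schedule commonly used for gradual pruning, and a soft-label distillation loss. Note the paper distills activation boundaries; the standard KL distillation shown here is a simpler stand-in, and the temperature and sparsity targets are assumptions.

```python
import torch.nn.functional as F

def gradual_sparsity(step, s_init=0.0, s_final=0.9, t0=0, n=10000):
    """Polynomial sparsity schedule s(t) from Zhu & Gupta (2017):
    ramps sparsity from s_init to s_final over n steps starting at t0."""
    t = min(max(step - t0, 0), n)
    return s_final + (s_init - s_final) * (1 - t / n) ** 3

def distill_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend soft teacher targets (KL at temperature T) with the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```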

Optimization and Performance Analysis via GAN Model Layer Pruning (레이어 프루닝을 이용한 생성적 적대 신경망 모델 경량화 및 성능 분석 연구)

  • Kim, Dong-hwi;Park, Sang-hyo;Bae, Byeong-jun;Cho, Suk-hee
    • Proceedings of the Korean Society of Broadcast Engineers Conference, fall, pp.80-81, 2021
  • Because the hardware resources available to ordinary users of deep learning models are limited, pruning methods that lighten an existing model make it possible to use those limited resources effectively. As such a method, we apply network pruning to the GAN architecture, known to have a relatively large number of parameters among deep learning models, and present a way to train this comparatively heavy model with fewer parameters. In addition, we reduce the number of residual blocks from the 16 that the original SRGAN paper presented as most effective, and describe how the results differ from those reported in that paper.
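
A hedged sketch of what pruning residual blocks means in practice: an SRGAN-style generator whose block count is a constructor argument, so the original 16 blocks can be cut down. Channel counts follow common SRGAN conventions; the pixel-shuffle upsampling stage is omitted for brevity, and none of this is the authors' exact code.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)            # local skip connection

class Generator(nn.Module):
    """SRGAN-style generator; upsampling (pixel shuffle) omitted for brevity."""
    def __init__(self, n_blocks=16, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 9, padding=4), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 3, 9, padding=4)

    def forward(self, x):
        h = self.head(x)
        return self.tail(self.blocks(h) + h)  # global skip over all blocks

pruned = Generator(n_blocks=8)   # layer-pruned variant of the 16-block model
```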

A Learning Algorithm to Find the Optimized Network Structure in an Incremental Model (점증적 모델에서 최적의 네트워크 구조를 구하기 위한 학습 알고리즘)

  • Lee Jong-Chan;Cho Sang-Yeop
    • Journal of Internet Computing and Services, v.4 no.5, pp.69-76, 2003
  • In this paper, we present a new learning algorithm for pattern classification. It addresses a problem of incremental learning algorithms: the network structure can become overly complex because of noise patterns in the training data. Our approach uses a pruning method that terminates the learning process according to a predefined criterion. In this process, an iterative model with a three-layer feedforward structure is derived from the incremental model by appropriate manipulations; note that this structure is not fully connected between the upper and lower layers. To verify the effectiveness of the pruning method, the network is retrained by error back-propagation (EBP). The results show that the proposed algorithm is effective in terms of both system performance and the number of nodes in the network structure.
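
A minimal sketch of the pruning step, assuming the per-node contribution is measured by outgoing-weight magnitude; the abstract does not spell out the paper's predefined criterion, so both the score and the threshold here are placeholders.

```python
import numpy as np

def keep_mask(W_hidden_out, criterion=0.05):
    """Flag hidden nodes whose outgoing-weight contribution clears the
    predefined criterion; the rest are pruned before retraining."""
    contrib = np.abs(W_hidden_out).sum(axis=1)   # one score per hidden node
    return contrib >= criterion * contrib.max()

# mask = keep_mask(W2)   # W2: (n_hidden, n_outputs) weight matrix
# retrain the network restricted to mask == True using standard EBP
```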

Apply Locally Weight Parameter Elimination for CNN Model Compression (지역적 가중치 파라미터 제거를 적용한 CNN 모델 압축)

  • Lim, Su-chang;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.9, pp.1165-1171, 2018
  • A CNN requires a large amount of computation and memory to extract object features. Moreover, it is trained on a network structure configured by the user, and because that structure is fixed, it cannot be modified during training, which also makes it difficult to deploy on mobile devices with low computing power. To solve these problems, we apply a pruning method to the pre-trained weight file to reduce the computation and memory requirements. The method consists of three steps. First, the weights of the pre-trained network are retrieved layer by layer. Second, the absolute values of each layer's weights are averaged; with this average as a threshold, weights below the threshold are removed. Finally, the pruned network is retrained. We experimented with LeNet-5 and AlexNet, achieving 31x compression on LeNet-5 and 12x on AlexNet.
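
The three-step procedure is concrete enough to sketch directly. Assuming a PyTorch model, the threshold rule below follows the abstract (the per-layer mean of absolute weights); everything else is illustrative.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_by_layer_mean(model: nn.Module):
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight
            threshold = w.abs().mean()   # step 2: per-layer mean of |w|
            mask = w.abs() >= threshold
            w.mul_(mask)                 # zero out weights below the threshold
    # step 3 (not shown): retrain the pruned model to recover accuracy
    return model
```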

A Pruning Algorithm for Network Structure Optimization in the Forecasting Climate System Using Neural Network (신경망을 이용한 기상예측시스템에서 망구조 최적화를 위한 Pruning 알고리즘)

  • Lee, Kee-Jun;Kang, Myung-A;Jung, Chai-Yeoung
    • The Transactions of the Korea Information Processing Society, v.7 no.2, pp.385-391, 2000
  • Recently, research on neural networks that forecast future behavior from time-series data, in contrast to traditional statistical analysis methods, has been progressing. In this paper, we propose a pruning algorithm for fast and accurate weather forecasting that removes hidden nodes from an initially over-designed neural network. Weather forecasting experiments using 22,080 weather records gathered from 1987 to 1996 demonstrate the efficiency of the proposed algorithm. Through the experiments, the initially designed $26{\times}50{\times}1$ network was reduced to the most suitable $26{\times}2{\times}1$ structure by the proposed pruning algorithm. With the optimal $26{\times}2{\times}1$ network, the average hit rate was 33.55% for a temperature error of ${\pm}0.5^{\circ}C$ and 61.57% for ${\pm}1^{\circ}C$, superior to the averages of 29.31% and 54.47% obtained with the initially designed structure. Moreover, the number of computations is reduced by a factor of up to 25 compared with the initial network.
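
A sketch of the structural reduction, assuming hidden-node saliency is scored by weight magnitude (the abstract does not give the exact criterion); the shapes follow the paper's $26{\times}50{\times}1$ starting network.

```python
import numpy as np

def remove_weakest_hidden_nodes(W1, b1, W2, keep):
    """W1: (50, 26) input weights, b1: (50,) biases, W2: (1, 50) output
    weights. Keep only the `keep` most salient hidden nodes."""
    saliency = np.abs(W2).ravel() * np.linalg.norm(W1, axis=1)
    idx = np.argsort(saliency)[-keep:]       # indices of nodes to keep
    return W1[idx], b1[idx], W2[:, idx]

# e.g. shrink from the initial 50 hidden nodes toward 2, then retrain:
# W1, b1, W2 = remove_weakest_hidden_nodes(W1, b1, W2, keep=2)
```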

A self-organizing algorithm for multi-layer neural networks (다층 신경회로망을 위한 자기 구성 알고리즘)

  • 이종석;김재영;정승범;박철훈
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.3, pp.55-65, 2004
  • When a neural network is used to solve a given problem, it is necessary to match the complexity of the network to that of the problem, because the network's complexity significantly affects its learning capability and generalization performance. Thus, it is desirable to have an algorithm that can find an appropriate network structure in a self-organizing way. This paper proposes algorithms that automatically organize feedforward multi-layer neural networks with sigmoid hidden neurons for given problems. Using both constructive procedures and pruning procedures, the proposed algorithms try to find a near-optimal network that is compact and generalizes well. The performance of the proposed algorithms is tested on four function regression problems. The results demonstrate that our algorithms successfully generate near-optimal networks in comparison with a previous method and with neural networks of fixed topology.
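
The grow-then-prune loop can be sketched abstractly. Here `train` and `val_error` are assumed callables (train a network with a given hidden-layer size; return its validation error), and the plateau threshold is an arbitrary placeholder, not the paper's procedure.

```python
def self_organize(train, val_error, max_hidden=50, eps=1e-3):
    """Grow hidden units while validation error improves, then shrink back
    while it does not degrade (a crude stand-in for saliency-based pruning)."""
    n, model = 1, train(1)
    err = val_error(model)
    while n < max_hidden:                    # constructive phase
        bigger = train(n + 1)
        e = val_error(bigger)
        if err - e < eps:                    # no meaningful improvement: stop
            break
        model, err, n = bigger, e, n + 1
    while n > 1:                             # pruning phase
        smaller = train(n - 1)
        e = val_error(smaller)
        if e > err + eps:                    # accuracy degraded: stop
            break
        model, err, n = smaller, e, n - 1
    return model, n
```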

Performance Analysis of BPN with an Optimal Network Structure Based on the Feature Values of Hidden Nodes (은닉노드의 특징 값을 기반으로 한 최적신경망 구조의 BPN성능분석)

  • 강경아;이기준;정채영
    • Journal of the Korea Society of Computer and Information, v.5 no.2, pp.30-36, 2000
  • Hidden nodes act as functional units that classify the features of the input pattern for a given problem. Therefore, choosing a suitable number of hidden nodes has an important effect on the results, yet deciding that number under the back-propagation learning algorithm remains a problem. If too few hidden nodes are assigned, learning cannot complete because the input patterns cannot be classified sufficiently. Conversely, if too many are assigned, unnecessary computation and wasted memory lead to overfitting, so the recognition rate drops and generalization suffers. Therefore, this paper proposes a method that decides the number of hidden nodes using feature information derived from the parameters of the learning algorithm. The node with the maximum feature value is excluded from the pruning targets; the feature value of each remaining hidden node is compared with the average of the rest, and hidden nodes whose feature value falls below that average are pruned. This determines the optimal structure of the multi-layer neural network and improves its learning speed.
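
The pruning rule in the last sentences maps directly to a few lines of code. The feature value itself is whatever the paper computes from the learning parameters, so the example input is a placeholder.

```python
import numpy as np

def select_hidden_nodes(feature_values):
    """Exempt the node with the maximum feature value, then prune every node
    whose feature value falls below the average of the remaining nodes."""
    f = np.asarray(feature_values, dtype=float)
    exempt = f.argmax()                  # never prune the strongest node
    rest = np.delete(f, exempt)
    keep = f >= rest.mean()              # prune below-average nodes
    keep[exempt] = True
    return np.flatnonzero(keep)          # indices of surviving hidden nodes

# e.g. feature values per hidden node (assumed measured during training):
print(select_hidden_nodes([0.9, 0.2, 0.5, 0.1, 0.4]))  # -> [0 2 4]
```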
