• Title/Summary/Keyword: network pruning

82 search results

Anomaly detection in particulate matter sensor using hypothesis pruning generative adversarial network

  • Park, YeongHyeon;Park, Won Seok;Kim, Yeong Beom
    • ETRI Journal / v.43 no.3 / pp.511-523 / 2021
  • The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level represents a threat to human health. Managing the PM level first requires a procedure for measuring it. We use a PM sensor that collects the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta attenuation monitor-based or tapered element oscillating microbalance-based sensor. However, an LLS-based sensor has a higher probability of malfunctioning than the higher-cost sensors. In this paper, we regard any malfunction, including the collection of strange values or missing data, as an anomaly, and we aim to detect such anomalies for the maintenance of PM measuring sensors. To this end, we propose a novel architecture that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in detecting anomalies in LLS-based PM measuring sensors. We conclude that HP-GAN is a cutting-edge model for anomaly detection.
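
The core idea behind reconstruction-based detectors such as HP-GAN is that a model trained only on normal sensor data reconstructs normal windows well, so a large reconstruction error flags an anomaly. A minimal sketch of that scoring step, with a hypothetical stand-in for the trained generator (the actual HP-GAN architecture is not reproduced here):

```python
def anomaly_score(window, reconstruct):
    """Mean squared reconstruction error over one sensor window:
    the higher the error, the more likely the window is anomalous."""
    recon = reconstruct(window)
    return sum((x - r) ** 2 for x, r in zip(window, recon)) / len(window)

# Hypothetical stand-in for a generator trained on normal PM readings:
# it maps every window to a typical normal level (here, 25 ug/m3).
reconstruct = lambda window: [25.0] * len(window)

normal_window = [24.0, 26.0, 25.5, 24.5]      # plausible PM readings
anomalous_window = [24.0, 999.0, 0.0, 25.0]   # strange-value malfunction
is_anomaly = anomaly_score(anomalous_window, reconstruct) > \
             10 * anomaly_score(normal_window, reconstruct)
```

In practice the score threshold is tuned on held-out data, which is exactly what AUROC/AUPRC evaluate across all possible thresholds.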

Hierarchical ART2 Classification Model Combined with the Adaptive Searching Strategy (적응적 탐색 전략을 갖춘 계층적 ART2 분류 모델)

  • 김도현;차의영
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.649-658 / 2003
  • We propose a hierarchical ART2 network architecture for performance improvement and fast pattern classification using fitness-based selection. The hierarchical network first creates coarse clusters as the first ART2 layer by unsupervised learning, then creates fine clusters within each first-layer cluster as the second layer by supervised learning. Classification proceeds as follows. First, the input pattern is compared with the clusters of the first layer, and candidate clusters are selected by a fitness measure. We design an optimized fitness function that prunes clusters by measuring the relative distance ratio between the input pattern and the clusters, which improves both speed and accuracy. Next, the input pattern is compared with the second-layer clusters connected to the selected candidates to find the winner cluster. Finally, the pattern is classified by the label of the winner cluster. Our experimental results show that the proposed method is more accurate and faster than other approaches.
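
The candidate-selection step can be sketched with a relative-distance fitness: first-layer clusters whose distance to the input is within some ratio of the nearest cluster's distance survive, and the rest are pruned before the finer second-layer search. This is an illustrative reconstruction; the exact fitness function and ratio used in the paper are assumptions here.

```python
def select_candidates(pattern, clusters, ratio=1.5):
    """Keep coarse-layer clusters whose Euclidean distance to the input
    is within `ratio` times the best (nearest) distance; prune the rest."""
    def dist(center):
        return sum((p - c) ** 2 for p, c in zip(pattern, center)) ** 0.5
    dists = [(dist(c), i) for i, c in enumerate(clusters)]
    best = min(d for d, _ in dists)
    return [i for d, i in dists if d <= ratio * best]

centers = [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]]
survivors = select_candidates([0.1, 0.1], centers, ratio=1.5)
```

A tight ratio prunes aggressively (faster search); a loose ratio keeps more candidates (higher accuracy), which is the speed/accuracy trade-off the abstract refers to.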

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.326-338 / 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise and are rarely sufficient in number, so the empirical distribution differs from the true probability distribution. This difference makes the learning parameters over-fit to the training data and deviate from the true distribution, which is called the overfitting phenomenon. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data, and the phenomenon becomes more severe as the complexity of the network increases. In this paper, taking a statistical viewpoint, we propose an integrative process for neural network design and model selection in order to improve generalization performance. First, using natural gradient learning with adaptive regularization, we obtain optimal parameters that are not overfitted to the training data, with fast convergence. By applying natural pruning to the obtained parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure optimization performance of the proposed integrative process of learning and model selection.
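
The final selection step compares pruned candidates with the Bayesian Information Criterion, which trades goodness of fit against model size. A minimal sketch under Gaussian-error assumptions (the candidate numbers below are hypothetical, not the paper's results):

```python
import math

def bic(rss, n, k):
    """BIC for a regression model with Gaussian errors:
    n*ln(RSS/n) + k*ln(n). Lower is better: the first term rewards
    fit, the second penalizes the number of parameters k."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical pruned candidates: (residual sum of squares, #parameters)
candidates = {"full": (0.90, 50), "pruned-a": (1.00, 20), "pruned-b": (2.50, 5)}
n = 100  # number of training samples
best = min(candidates, key=lambda m: bic(candidates[m][0], n, candidates[m][1]))
```

Note how the mid-sized candidate wins: the full network fits slightly better but pays a heavy parameter penalty, while the smallest network fits too poorly.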

Adaptive Structure of Modular Wavelet Neural Network Using Growing and Pruning Algorithm (성장과 소거 알고리즘을 이용한 모듈화된 웨이블렛 신경망의 적응구조 설계)

  • Seo, Jae-Yong;Kim, Yong-Taek;Jo, Hyeon-Chan;Jeon, Hong-Tae
    • Journal of the Institute of Electronics Engineers of Korea SC / v.39 no.1 / pp.16-23 / 2002
  • In this paper, we propose a growing and pruning algorithm to design the optimal structure of a modular wavelet neural network (MWNN) with F-projection and a geometric growing criterion. The geometric growing criterion consists of an estimated-error criterion, which considers local error, and an angle criterion, which attempts to assign a wavelet function that is nearly orthogonal to all existing wavelet functions. These criteria provide a methodology with which a network designer can construct an MWNN according to his or her intention. The proposed growing algorithm increases the number of modules or the size of each module of the MWNN, while the pruning algorithm eliminates unnecessary nodes or entire modules from the constructed MWNN to overcome the problems caused by the localized characteristics of the wavelet neural networks used as modules. We apply the proposed algorithm for constructing the optimal MWNN structure to the approximation of 1-D and 2-D functions and evaluate its effectiveness.
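
The angle criterion can be illustrated by representing each wavelet by its sampled response vector and growing only when the candidate is nearly orthogonal to every existing basis function. The cosine threshold below is an assumed value, not the paper's:

```python
def should_grow(candidate, existing, max_cos=0.3):
    """Angle criterion sketch: admit the candidate wavelet only if its
    response vector is nearly orthogonal (|cos| small) to every existing
    wavelet's response vector."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = sum(a * a for a in u) ** 0.5
        norm_v = sum(b * b for b in v) ** 0.5
        return dot / (norm_u * norm_v)
    return all(abs(cosine(candidate, e)) <= max_cos for e in existing)
```

A nearly orthogonal new wavelet adds representational capacity the existing basis lacks; a nearly parallel one would be redundant and is better handled by the pruning pass.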

Apply Locally Weight Parameter Elimination for CNN Model Compression (지역적 가중치 파라미터 제거를 적용한 CNN 모델 압축)

  • Lim, Su-chang;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.9 / pp.1165-1171 / 2018
  • A CNN requires a large amount of computation and memory to extract object features. Moreover, it is trained on a network that the user has configured, and because the structure of the network is fixed, it cannot be modified during training; it is also difficult to deploy on mobile devices with low computing power. To address these problems, we apply a pruning method to the pre-trained weight file to reduce computation and memory requirements. The method consists of three steps. First, all the weights of the pre-trained network file are retrieved layer by layer. Second, the absolute values of the weights in each layer are averaged; this average is used as a threshold, and weights below it are removed. Finally, the pruned network is re-trained. In experiments with LeNet-5 and AlexNet, we achieved 31x compression on LeNet-5 and 12x on AlexNet.
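
The per-layer threshold rule is concrete enough to sketch directly: take the mean absolute weight of a layer as its threshold and zero out everything below it (the re-training step is omitted here):

```python
def prune_layer(weights):
    """Per-layer magnitude pruning as described: the threshold is the
    mean absolute weight of the layer; weights below it are zeroed."""
    thresh = sum(abs(w) for w in weights) / len(weights)
    return [w if abs(w) >= thresh else 0.0 for w in weights]

# A few large weights pull the mean above the many small ones,
# so the small weights are removed.
pruned = prune_layer([0.5, -0.4, 0.01, -0.02])
```

Zeroed weights can then be stored in sparse form, which is where the reported 31x/12x reductions come from.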

Review on Genetic Algorithms for Pattern Recognition (패턴 인식을 위한 유전 알고리즘의 개관)

  • Oh, Il-Seok
    • The Journal of the Korea Contents Association / v.7 no.1 / pp.58-64 / 2007
  • In the pattern recognition field, there are many optimization problems with exponential search spaces. To solve them, sequential search algorithms seeking sub-optimal solutions have been used, but such algorithms have the limitation of stopping at local optima. Recently, many studies have attempted to solve these problems using genetic algorithms. This paper explains the huge search spaces of typical problems such as feature selection, classifier ensemble selection, neural network pruning, and clustering, and reviews the genetic algorithms for solving them. Additionally, we present several subjects worth noting as future research.
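
For a problem like feature selection, the usual GA encoding is a bitmask over features. A minimal, hypothetical sketch with truncation selection, one-point crossover, and bit-flip mutation (real systems use a classifier's validation accuracy as fitness rather than the toy fitness here):

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, gens=30, seed=0):
    """Tiny GA sketch: chromosomes are bitmasks over features.
    Each generation keeps the top half (truncation selection with
    elitism) and fills the rest with mutated crossover children."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: reward matching a hypothetical ideal feature subset {0, 2}.
target = [1, 0, 1, 0, 0, 0]
best = ga_feature_select(6, lambda c: -sum(a != b for a, b in zip(c, target)))
```

The same bitmask encoding carries over to classifier ensemble selection and neuron-level network pruning, which is why the survey treats these problems together.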

A Formulation of Fuzzy TAM Network with Gabor Type Receptive Fields

  • Hayashi, Isao;Maeda, Hiromasa
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.620-623 / 2003
  • The TAM (Topographic Attentive Mapping) network is a biologically motivated neural network, and fuzzy rules can be acquired from it by a pruning algorithm. In this paper, we formulate a new input layer for the TAM network using Gabor functions to realize the receptive fields of the human visual cortex.

Large Vocabulary Continuous Speech Recognition Based on Language Model Network (언어 모델 네트워크에 기반한 대어휘 연속 음성 인식)

  • 안동훈;정민화
    • The Journal of the Acoustical Society of Korea / v.21 no.6 / pp.543-551 / 2002
  • In this paper, we present an efficient decoding method that performs in real time on a 20k-word continuous speech recognition task. The basic search method is a one-pass Viterbi decoder operating on the search space constructed from the novel language model (LM) network. With the consistent search-space representation that the LM network derives from various language models, we incorporate basic pruning strategies, whereby the surviving tokens constitute a dynamic search space. To facilitate post-processing, the decoder subsequently produces a word graph and an N-best list. The decoder is tested on a 20k-word database and evaluated with respect to accuracy and real-time factor (RTF).
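
A standard pruning strategy for keeping a dynamic set of live tokens is beam pruning: a token survives a frame only if its log score is within a fixed beam of the current best. A minimal sketch (the beam width and token representation below are assumptions, not the paper's settings):

```python
def prune_tokens(tokens, beam=10.0):
    """Beam pruning sketch: `tokens` is a list of (state, log_score)
    pairs for one time frame. Keep only tokens within `beam` of the
    best score; the survivors form the dynamic search space."""
    best = max(score for _, score in tokens)
    return [(state, score) for state, score in tokens if score >= best - beam]

alive = prune_tokens([("a", -5.0), ("b", -12.0), ("c", -30.0)], beam=10.0)
```

A narrower beam lowers the real-time factor at the risk of pruning away the correct hypothesis, which is the accuracy/RTF trade-off the evaluation measures.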

Contribution-Level-Based Opportunistic Flooding for Wireless Multihop Networks (무선 다중 홉 환경을 위한 기여도 기반의 기회적 플러딩 기법)

  • Byeon, Seung-gyu;Seo, Hyeong-yun;Kim, Jong-deok
    • Journal of KIISE / v.42 no.6 / pp.791-800 / 2015
  • In this paper, we propose contribution-level-based opportunistic flooding for wireless multihop networks, which achieves outstanding transmission efficiency and reliability. Because of the inherent instability of wireless networks, a predetermined relay node may fail to receive broadcast packets; our proposed flooding increases network reliability by applying the concept of opportunistic routing, whereby relay-node selection depends on the transmission result. Additionally, based on each node's contribution level to the entire network, the proposed technique enhances transmission efficiency through priority adjustment and the removal of needless relay nodes. We use the NS-3 simulator to compare the proposed scheme with dominant pruning. The results show improved performance in both respects: transmission efficiency improves by 35% compared with blind flooding, and reliability improves by 20~70% compared with dominant pruning.
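
Contribution-level relay selection can be sketched greedily: repeatedly keep the candidate that newly covers the most uncovered neighbors, and drop candidates whose contribution falls to zero (the "needless relay nodes" above). This is an illustrative reconstruction, not the paper's exact algorithm:

```python
def select_relays(neighbor_sets, already_covered):
    """Greedy contribution-level sketch: `neighbor_sets` maps each
    candidate relay to the set of nodes it can reach. Pick relays by
    how many still-uncovered nodes they add; stop when no candidate
    contributes anything new."""
    covered = set(already_covered)
    chosen = []
    remaining = dict(neighbor_sets)
    while remaining:
        node, neigh = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        new = neigh - covered
        if not new:
            break  # every remaining candidate is a needless relay
        chosen.append(node)
        covered |= neigh
        del remaining[node]
    return chosen, covered

chosen, covered = select_relays({"r1": {1, 2, 3}, "r2": {3, 4}, "r3": {1, 2}}, {1})
```

Here r3 is pruned as needless because r1 already covers its neighbors; the opportunistic part of the scheme would re-run this selection based on which receptions actually succeeded.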

Structure Optimization of Neural Networks using Rough Set Theory (러프셋 이론을 이용한 신경망의 구조 최적화)

  • 정영준;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.03a / pp.49-52 / 1998
  • Neural networks perform well in pattern classification, control, and many other fields thanks to their learning ability. However, there is no effective rule or systematic approach for determining the optimal structure. In this paper, we propose a new method to find the optimal structure of a feed-forward multi-layer neural network, as a kind of pruning method that eliminates redundant elements of the network. To find the redundant elements, we analyze the error and weight changes with rough set theory while executing the back-propagation learning algorithm.
