• Title/Summary/Keyword: Set-Pruning (셋-프루닝)

Search results: 6

A Smart Set-Pruning Trie for Packet Classification (패킷 분류를 위한 스마트 셋-프루닝 트라이)

  • Min, Seh-Won; Lee, Na-Ra; Lim, Hye-Sook
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.11B / pp.1285-1296 / 2011
  • Packet classification is one of the basic and important functions of Internet routers, and it has become even more important with emerging applications that require real-time transmission. Since packet classification must be performed at line speed over multiple header fields of every incoming packet, it is one of the main challenges in designing Internet routers. Various algorithms have been proposed to provide high-speed packet classification. The hierarchical approach achieves effective packet classification performance by significantly narrowing down the search space whenever a field lookup is completed. However, the hierarchical approach suffers from a back-tracking problem. The set-pruning trie and grid-of-tries algorithms were proposed to solve this problem, but they cause either excessive node duplication or heavy pre-computation, respectively. In this paper, we propose a smart set-pruning trie which reduces node duplication in the set-pruning trie by simply merging the lower-level tries. Simulation results show that the proposed trie reduces the number of copied nodes by 2-8% compared with the set-pruning trie.
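The set-pruning idea summarized in this abstract can be sketched in a few lines: in a two-field (source, destination) hierarchy, each source node receives copies of the rules of all its ancestor source prefixes, so a single downward traversal suffices and no back-tracking is needed. This is a toy illustration under assumed rule and function names, not the paper's implementation:

```python
# Minimal sketch of set-pruning for two-field (source, destination) rules.
# Prefixes are bit strings; names are illustrative, not from the paper.

def build_set_pruned(rules):
    """Group rules by source prefix; set-pruning copies each rule into
    every more-specific source node so no back-tracking is needed."""
    src_prefixes = sorted({src for src, _ in rules})
    pruned = {}
    for node in src_prefixes:
        # A node holds its own rules plus those of every ancestor
        # source prefix (any prefix of its own bit string).
        pruned[node] = [(s, d) for (s, d) in rules if node.startswith(s)]
    return pruned

rules = [("0", "10"), ("01", "1"), ("011", "00")]
table = build_set_pruned(rules)
# The most specific node "011" now holds copies of all ancestor rules.
```

The copied rules are exactly the node duplication the smart set-pruning trie aims to reduce.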

Application and Performance Analysis of Double Pruning Method for Deep Neural Networks (심층신경망의 더블 프루닝 기법의 적용 및 성능 분석에 관한 연구)

  • Lee, Seon-Woo; Yang, Ho-Jun; Oh, Seung-Yeon; Lee, Mun-Hyung; Kwon, Jang-Woo
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.23-34 / 2020
  • Recently, the artificial intelligence deep learning field has been hard to commercialize due to the high computing power required and the cost of computing resources. In this paper, we apply a double pruning technique and evaluate its performance on deep neural networks with various datasets. Double pruning combines basic network slimming and parameter pruning. The proposed technique has the advantage of removing parameters that are unimportant to learning and improving speed without compromising accuracy. After training on various datasets, the pruning ratio was increased to reduce the model size. NetScore performance analysis confirmed that MobileNet-V3 showed the highest performance. On the CIFAR-10 dataset, the performance after pruning was highest for MobileNet-V3, which consists of depthwise separable convolutions, and it also increased significantly for the traditional convolutional networks VGGNet and ResNet.
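The two stages of double pruning described above can be illustrated on toy data: first drop whole channels whose (batch-norm) scaling factors are small, then zero out individual small-magnitude weights in the surviving channels. The data layout and function name below are invented for illustration:

```python
def double_prune(channels, channel_ratio=0.5, weight_ratio=0.5):
    """channels: {name: (bn_scale, [weights])}. Toy sketch of the two
    stages of double pruning, not the paper's implementation."""
    # Stage 1 (network slimming): keep channels whose scaling factors
    # have the largest magnitude.
    keep_n = max(1, round(len(channels) * (1 - channel_ratio)))
    kept = sorted(channels.items(), key=lambda kv: abs(kv[1][0]),
                  reverse=True)[:keep_n]
    # Stage 2 (parameter pruning): zero the smallest weights per channel.
    pruned = {}
    for name, (scale, ws) in kept:
        k = int(len(ws) * weight_ratio)
        smallest = sorted(range(len(ws)), key=lambda i: abs(ws[i]))[:k]
        pruned[name] = [0.0 if i in smallest else w
                        for i, w in enumerate(ws)]
    return pruned
```

Stage 1 shrinks the architecture itself, while stage 2 only sparsifies the remaining weights; combining both is what distinguishes double pruning from either method alone.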

A Hierarchical Packet Classification Algorithm Using Set-Pruning Binary Search Tree (셋-프루닝 이진 검색 트리를 이용한 계층적 패킷 분류 알고리즘)

  • Lee, Soo-Hyun; Lim, Hye-Sook
    • Journal of KIISE:Information Networking / v.35 no.6 / pp.482-496 / 2008
  • Packet classification in Internet routers requires a multi-dimensional search over multiple header fields for every incoming packet at wire speed; hence, packet classification is one of the most important challenges in router design. Hierarchical packet classification is one of the most effective solutions since the search space is remarkably reduced every time a field search is completed. However, hierarchical structures have two intrinsic issues: back-tracking and empty internal nodes. In this paper, we propose a new hierarchical packet classification algorithm which solves both problems. Back-tracking is avoided by using set-pruning, and empty internal nodes are avoided by applying a binary search tree. Simulation results show that the proposed algorithm provides a significant improvement in search speed without increasing memory requirements. We also propose an optimization technique applying controlled rule copy in set-pruning.
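The binary-search idea in this abstract, storing only prefixes that actually carry rules so no empty internal nodes exist, can be sketched with Python's `bisect` standing in for the tree. This is a rough illustration (the left-scan fallback is not the paper's search procedure):

```python
import bisect

def longest_match(sorted_prefixes, key):
    """Return the longest stored prefix of `key`, or None.
    sorted_prefixes: sorted list of bit-string prefixes, each of which
    carries a rule -- there are no empty entries to traverse."""
    i = bisect.bisect_right(sorted_prefixes, key)
    best = None
    # Candidates lie at or before the insertion point; scan left for
    # the longest one that is actually a prefix of the key.
    for p in reversed(sorted_prefixes[:i]):
        if key.startswith(p) and (best is None or len(p) > len(best)):
            best = p
    return best

prefixes = sorted(["0", "01", "0110", "10"])
```

A trie over the same four prefixes would contain internal nodes with no rule attached; the sorted array holds only rule-bearing entries, which is the space saving the abstract refers to.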

Dynamic Filter Pruning for Compression of Deep Neural Network (동적 필터 프루닝 기법을 이용한 심층 신경망 압축)

  • Cho, InCheon; Bae, SungHo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.675-679 / 2020
  • Recently, models with deeper layers and wider channels have been proposed to improve image classification performance. Achieving high classification accuracy with such models demands excessive computing power and computation time. In this paper, for deep neural network models used in image classification, we present a dynamic filter pruning method that removes relatively unimportant weights while minimizing the drop in classification accuracy. Unlike one-shot pruning and static filter pruning, it gives pruned weights a chance to be revived, which yields better performance. Moreover, since no retraining is required, it guarantees fast computation and low computing power. Experiments with ResNet20 on the CIFAR10 dataset showed a classification accuracy of 88.74% even at a compression ratio of about 50%.
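The "revival" property described above can be shown with a small sketch: the pruning mask is recomputed from the current filter norms at every step, so a filter zeroed earlier recovers its slot if its importance grows back. Function and variable names are invented for illustration:

```python
def dynamic_mask(filter_norms, prune_ratio):
    """Return a 0/1 mask keeping the filters with the largest norms.
    Recomputing this each training step is what allows revival."""
    n_prune = int(len(filter_norms) * prune_ratio)
    order = sorted(range(len(filter_norms)), key=lambda i: filter_norms[i])
    dropped = set(order[:n_prune])
    return [0 if i in dropped else 1 for i in range(len(filter_norms))]

# Step t: filter 1 is weakest and gets masked out...
print(dynamic_mask([0.9, 0.1, 0.5, 0.7], 0.25))   # [1, 0, 1, 1]
# ...but if its norm recovers after an update, the next mask restores it.
print(dynamic_mask([0.9, 0.6, 0.05, 0.7], 0.25))  # [1, 1, 0, 1]
```

A static (one-shot) scheme would freeze the first mask permanently, which is the contrast the abstract draws.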


Compression and Acceleration of Face Detector using L1 Loss and Channel Pruning (L1 목적 함수와 채널 프루닝을 이용한 얼굴 검출기 경량화)

  • Lee, Seok Hee; Jang, Young Kyun; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.40-42 / 2020
  • In this paper, we compress the convolution-based face detector Dual Shot Face Detector (DSFD) using feature-map sparsification and a channel-pruning objective function. An L1 objective was used to sparsify the feature maps, and a channel-pruning objective that minimizes the sum of the channels with the lowest channel-maximum values was applied. The baseline network had a feature-map sparsity of 45%, which rose to 69.67% when both objectives were applied. Face detection performance was evaluated on the Wider Face dataset, which consists of images containing faces under various illuminations, scales, environments, poses, and expressions; average precision decreased, to 0.9257 on the easy validation set and 0.8363 on the hard validation set.
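The channel-selection criterion in this abstract, ranking channels by their maximum activation and removing the lowest-ranked ones, can be sketched as follows (a toy illustration with invented names, not the DSFD training objective itself):

```python
def select_prune_channels(feature_map, n_prune):
    """feature_map: list of channels, each a list of activations.
    Returns the indices of the n_prune channels with the smallest
    channel-maximum, i.e. the ones the objective drives toward zero."""
    channel_max = [max(abs(a) for a in ch) for ch in feature_map]
    order = sorted(range(len(channel_max)), key=lambda i: channel_max[i])
    return sorted(order[:n_prune])

fmap = [[0.1, 0.02], [2.0, 0.5], [0.0, 0.01], [0.9, 1.1]]
print(select_prune_channels(fmap, 2))  # channels 0 and 2 have the smallest maxima
```

In the paper's setting this selection is not a post-hoc step: the training objective penalizes the sum of the lowest channel maxima so those channels become prunable.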


Dynamic Adjustment of the Pruning Threshold in Deep Compression (Deep Compression의 프루닝 문턱값 동적 조정)

  • Lee, Yeojin; Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.3 / pp.99-103 / 2021
  • Recently, convolutional neural networks (CNNs) have been widely utilized due to their outstanding performance in various computer vision fields. However, due to their computationally intensive and high memory requirements, it is difficult to deploy CNNs on hardware platforms with limited resources, such as mobile and IoT devices. To address these limitations, research on neural network compression is underway to reduce the size of neural networks while maintaining their performance. This paper proposes a CNN compression technique that dynamically adjusts the thresholds of pruning, one of the neural network compression techniques. Unlike conventional pruning, which experimentally or heuristically sets the thresholds that determine the weights to be pruned, the proposed technique can dynamically find the optimal thresholds that prevent accuracy degradation and output the lightweight neural network in less time. To validate the performance of the proposed technique, LeNet was trained on the MNIST dataset, and a lightweight LeNet could be obtained automatically, 1.3 to 3 times faster, without loss of accuracy.
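One simple way to realize the dynamic threshold search described above is to binary-search the largest magnitude threshold whose pruned model still meets an accuracy floor. This is a hedged sketch, not the paper's algorithm; `evaluate` is a stand-in for validation accuracy and all names are invented:

```python
def find_threshold(weights, evaluate, floor, steps=20):
    """Binary-search the largest pruning threshold `t` such that the
    model with all |w| <= t zeroed still scores >= floor."""
    lo, hi = 0.0, max(abs(w) for w in weights)
    best = 0.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        pruned = [0.0 if abs(w) <= mid else w for w in weights]
        if evaluate(pruned) >= floor:
            best, lo = mid, mid      # safe to prune more aggressively
        else:
            hi = mid                 # too aggressive, back off
    return best
```

Compared with sweeping thresholds by hand, the search converges to the boundary automatically, which matches the abstract's claim of obtaining the compressed model in less time without accuracy loss.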