(Figure 1) Comparison of a conventional plain network and a residual network
(Figure 2) Dense connectivity structure of DenseNet
(Figure 3) Fire module of SqueezeNet
(Figure 4) Factorized convolution structure of MobileNet
(Figure 5) Channel shuffle structure of ShuffleNet
(Figure 6) Neural network search flow of NetAdapt
(Figure 7) Neural network search flow of MNasNet
(Figure 8) Example of weight/channel pruning
(Figure 9) Example of convolution with binarization
(Figure 10) Example of training results of teacher and student models
(Figure 11) Example of automatically searching model compression/acceleration techniques via reinforcement learning
(Figure 12) Example of object recognition on a smartphone
(Figure 13) Example of image classification experiments with lightweight deep learning models on smartphones (iOS/Android)
<Table 1> Research trends in lightweight deep learning
References
- K. He et al., "Deep Residual Learning for Image Recognition," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, Las Vegas, NV, USA, June 2016, pp. 770-778.
- K. He et al., "Identity Mappings in Deep Residual Networks," in European Conference on Computer Vision, Springer, 2016, pp. 630-645.
- G. Huang et al., "Densely Connected Convolutional Networks," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, Honolulu, HI, USA, July 2017, pp. 2265-2269.
- F.N. Iandola et al., "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and < 0.5MB model size," arXiv:1602.07360, 2016.
- A.G. Howard et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv:1704.04861, 2017.
- M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," arXiv:1801.04381, 2018.
- X. Zhang et al., "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices," arXiv:1707.01083, 2017.
- N. Ma et al., "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design," arXiv:1807.11164, 2018.
- T.J. Yang et al., "NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications," arXiv:1804.03230, 2018.
- M. Tan et al., "MnasNet: Platform-Aware Neural Architecture Search for Mobile," arXiv:1807.11626, 2018.
- S. Han, H. Mao, and W.J. Dally, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," arXiv:1510.00149, 2015.
- M. Rastegari et al., "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks," arXiv:1603.05279, 2016.
- K. Ullrich, E. Meeds, and M. Welling, "Soft Weight-Sharing for Neural Network Compression," arXiv:1702.04008, 2017.
- G. Hinton, O. Vinyals, and J. Dean, "Distilling the Knowledge in a Neural Network," arXiv: 1503.02531, 2015.
- T. Chen, I. Goodfellow, and J. Shlens, "Net2Net: Accelerating Learning via Knowledge Transfer," in Int. Conf. Learning Representation (ICLR), May 2016.
- J. Wu, J. Hou, and W. Liu, "PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks," in Proc. Neural Inf. Process. Syst. (NIPS), Montreal, Canada, Dec. 2018.
- Y. He et al., "AMC: AutoML for Model Compression and Acceleration on Mobile Devices," in Proc. Eur. Conf. Comput. Vision (ECCV), Munich, Germany, Sept. 2018, pp. 784-800.
- https://www.xnor.ai/
- https://hyperconnect.com/