• Title/Abstract/Keyword: a Deep neural network


Implementation of Low-cost Autonomous Car for Lane Recognition and Keeping based on Deep Neural Network model

  • Song, Mi-Hwa
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 13, No. 1
    • /
    • pp.210-218
    • /
    • 2021
  • A CNN (Convolutional Neural Network), a type of deep learning algorithm, is the class of deep artificial neural network most commonly used to analyze visual images. Accordingly, an AI autonomous driving model was constructed through real-time image processing, with a crosswalk image on the road used as an obstacle. In this paper, we propose a low-cost model that can actually implement autonomous driving based on a CNN. The best-known deep neural network techniques for autonomous driving are investigated and an end-to-end model is applied. In particular, we show that training and self-driving on a simulated road are possible through a practical approach to lane detection and keeping.
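
The paper applies an end-to-end CNN that maps camera frames directly to a steering command. A minimal PyTorch sketch of such a model is shown below; the layer sizes and the 66x200 input resolution are illustrative assumptions (PilotNet-style), not details taken from the paper.

```python
import torch
import torch.nn as nn

class EndToEndSteering(nn.Module):
    """Small end-to-end CNN: camera frame in, steering angle out (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),   # LazyLinear infers the flattened size
            nn.Linear(100, 1),               # single output: steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EndToEndSteering()
frame = torch.randn(1, 3, 66, 200)           # one RGB camera frame
steering = model(frame)                       # trained with MSE against recorded angles
```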

임베디드 시스템에서의 객체 분류를 위한 인공 신경망 경량화 연구 (Neural Network Model Compression Algorithms for Image Classification in Embedded Systems)

  • 신희중;오현동
    • 로봇학회논문지
    • /
    • Vol. 17, No. 2
    • /
    • pp.133-141
    • /
    • 2022
  • This paper introduces model compression algorithms that make a deep neural network smaller and faster for embedded systems. Model compression algorithms can be largely categorized into pruning, quantization, and knowledge distillation. In this study, gradual pruning, quantization-aware training, and knowledge distillation that learns the activation boundaries in the hidden layers of the teacher network are integrated. As a large deep neural network is compressed and accelerated by these algorithms, embedded computing boards can run it much faster with less memory usage while preserving reasonable accuracy. To evaluate the compressed networks, we measure the size, latency, and accuracy of DenseNet201 on image classification with the CIFAR-10 dataset on an NVIDIA Jetson Xavier.
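
Two of the compression techniques named in the abstract, pruning and knowledge distillation, can be sketched briefly in PyTorch. The sparsity level, temperature, and loss weighting below are illustrative assumptions, and the paper's activation-boundary distillation is a more specialized loss than the plain temperature-scaled version shown here.

```python
import torch
import torch.nn.functional as F
from torch.nn.utils import prune
from torchvision.models import densenet201

teacher = densenet201(num_classes=10)   # large teacher (assumed pretrained elsewhere)
student = densenet201(num_classes=10)   # in practice a smaller student would be used

# One step of gradual pruning: zero the 20% smallest-magnitude weights per conv layer.
for module in student.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.2)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KL term plus ordinary cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```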

심층신경망 기반의 뷰티제품 추천시스템 (Deep Neural Network-Based Beauty Product Recommender)

  • 송희석
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 26, No. 6
    • /
    • pp.89-101
    • /
    • 2019
  • Many researchers have long focused on designing beauty product recommendation systems because of customers' increased need for personalized and customized recommendations in the beauty product domain. In addition, as applications of deep neural network techniques have recently become active, various collaborative filtering techniques based on deep neural networks have been introduced. In this context, this study proposes a deep neural network model suitable for beauty product recommendation by applying Neural Collaborative Filtering and Generalized Matrix Factorization (NCF + GMF) to the task. This study also provides an implementation of a web API system to commercialize the proposed recommendation model. The overall performance of the NCF + GMF model was best both when the beauty product recommendation problem was defined as a rating score estimation problem and when it was defined as a binary classification problem. The NCF + GMF model also showed high performance in top-N recommendation.
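
A minimal PyTorch sketch of an NCF-style recommender that fuses a GMF branch (element-wise product of user and item embeddings) with an MLP branch, in the spirit of the NCF + GMF model the abstract describes. The embedding size, layer widths, and user/item counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuMF(nn.Module):
    """GMF branch + MLP branch fused into a single preference score."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_gmf = nn.Embedding(n_users, dim)
        self.item_gmf = nn.Embedding(n_items, dim)
        self.user_mlp = nn.Embedding(n_users, dim)
        self.item_mlp = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        self.out = nn.Linear(dim + 16, 1)    # fuse GMF and MLP features

    def forward(self, users, items):
        gmf = self.user_gmf(users) * self.item_gmf(items)   # generalized matrix factorization
        mlp = self.mlp(torch.cat([self.user_mlp(users), self.item_mlp(items)], dim=-1))
        return self.out(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)

model = NeuMF(n_users=1000, n_items=500)
score = model(torch.tensor([3]), torch.tensor([42]))
# Train with MSE for rating estimation, or a sigmoid + BCE loss for binary classification.
```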

유전 알고리즘 기반의 심층 학습 신경망 구조와 초모수 최적화 (Genetic algorithm based deep learning neural network structure and hyperparameter optimization)

  • 이상협;강도영;박장식
    • 한국멀티미디어학회논문지
    • /
    • Vol. 24, No. 4
    • /
    • pp.519-527
    • /
    • 2021
  • Alzheimer's disease is one of the challenges to tackle in the coming aging era, and attempts are being made to diagnose and predict it through various biomarkers. While the application of deep learning-based technologies to powerful imaging modalities has recently expanded across the medical industry, empirical design is not easy because the appropriate deep learning network architecture and categorical hyperparameters depend on the problem and data at hand. In this paper, we show the possibility of optimizing a deep learning neural network structure and its hyperparameters for classifying Alzheimer's disease in amyloid brain images, starting from a representative deep neural network architecture and using genetic algorithms. It was observed that an optimal network structure and hyperparameters were selected as the experimental values converged.
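
A minimal sketch of genetic-algorithm hyperparameter search of the kind described above. The search space, population size, and the placeholder fitness function are illustrative assumptions; in a real run the fitness of each candidate would be the validation accuracy of a network trained with those hyperparameters.

```python
import random

SEARCH_SPACE = {
    "layers":        [2, 3, 4, 5],
    "units":         [64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "activation":    ["relu", "tanh"],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(candidate):
    # Placeholder: build, train, and validate a network with these hyperparameters
    # and return its validation accuracy on the classification task.
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(c, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in c.items()}

population = [random_candidate() for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]                          # keep the fittest candidates
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```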

Network Traffic Classification Based on Deep Learning

  • Li, Junwei;Pan, Zhisong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 11
    • /
    • pp.4246-4267
    • /
    • 2020
  • As networks reach deep into all aspects of people's lives, the volume and complexity of network traffic are increasing, and traffic classification is becoming more and more important. Classifying traffic effectively is an important prerequisite for network management and planning and for ensuring network security. With the continuous development of deep learning, more and more traffic classification work uses it as the main method, achieving better results than traditional classification methods. In this paper, we provide a comprehensive review of network traffic classification based on deep learning. First, we introduce the research background of and progress in network traffic classification. Then, we summarize and compare deep learning-based traffic classification methods such as stacked autoencoders, one-dimensional convolutional neural networks, two-dimensional convolutional neural networks, three-dimensional convolutional neural networks, long short-term memory networks, and deep belief networks. In addition, we compare deep learning-based traffic classification with other methods, such as those based on port numbers, deep packet inspection, and classical machine learning. Finally, future research directions for deep learning-based network traffic classification are discussed.
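
Among the surveyed approaches, a one-dimensional CNN over raw packet or flow bytes is one of the simplest to sketch. The input length, layer sizes, and class count below are illustrative assumptions rather than values from any specific paper in the survey.

```python
import torch
import torch.nn as nn

class TrafficCNN1D(nn.Module):
    """1-D CNN that classifies a flow from its first N payload bytes."""
    def __init__(self, n_classes=12, n_bytes=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(32, 64, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Flatten(),
            nn.LazyLinear(n_classes),            # infers the flattened feature size
        )

    def forward(self, x):                        # x: (batch, 1, n_bytes), bytes scaled to [0, 1]
        return self.net(x)

model = TrafficCNN1D()
flows = torch.rand(8, 1, 784)                    # 8 flows, 784 normalized bytes each
logits = model(flows)                            # trained with cross-entropy over traffic classes
```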

ONNX기반 스파이킹 심층 신경망 변환 도구 (Conversion Tools of Spiking Deep Neural Network based on ONNX)

  • 박상민;허준영
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 20, No. 2
    • /
    • pp.165-170
    • /
    • 2020
  • Spiking neural networks operate by a different mechanism from conventional neural networks. In a conventional network, each neuron passes its input through an activation function that does not model any biological mechanism and forwards the output to the next neuron; deep architectures built this way, such as VGGNet, ResNet, SSD, and YOLO, have achieved strong results. Spiking neural networks, by contrast, behave more like the biological mechanism of real neurons than conventional activation functions do, but research on deep architectures using spiking neurons has been much less active than research on deep neural networks using conventional neurons. This paper proposes a method that loads a deep neural network model built from conventional neurons into a conversion tool and replaces those neurons with spiking neurons, converting the model into a spiking deep neural network.
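
Conceptually, such a conversion tool walks the ONNX graph of a trained network and swaps the conventional activation nodes for spiking neuron nodes. A minimal sketch using the onnx package is given below; the file names and the "SpikingIF" custom op type are hypothetical, and the paper's actual conversion rules are not reproduced here.

```python
import onnx

# Load a deep neural network previously exported to ONNX (e.g. from PyTorch or TensorFlow).
model = onnx.load("pretrained_dnn.onnx")          # hypothetical file name

SPIKING_OP = "SpikingIF"                          # hypothetical integrate-and-fire custom op
for node in model.graph.node:
    if node.op_type in ("Relu", "Sigmoid", "Tanh"):
        node.op_type = SPIKING_OP                 # replace the conventional activation
        node.domain = "ai.spiking.custom"         # custom operator domain

onnx.save(model, "converted_snn.onnx")
# The converted graph needs a backend that implements the spiking op (e.g. an SNN
# simulator); standard onnxruntime will not execute the custom node as-is.
```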

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • 한국정보기술학회 영문논문지
    • /
    • Vol. 8, No. 2
    • /
    • pp.119-127
    • /
    • 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting. Much research has been done to overcome these problems. In this paper, a deep neural network combining early stopping with the Adam optimizer is presented. This form of deep network is useful for handling big data because training is automatically stopped before overfitting occurs. Its generalization ability is also better than that of a plain deep neural network model.
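
A minimal Keras sketch of the combination described above, i.e. the Adam optimizer plus an early-stopping callback that halts training when the validation loss stops improving. The toy architecture, random stand-in data, and patience value are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data; in practice this would be the real training set.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, 1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping: stop when validation loss stops improving and restore the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

model.fit(x_train, y_train, validation_split=0.2,
          epochs=200, callbacks=[early_stop])
```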

GRADIENTS IN A DEEP NEURAL NETWORK AND THEIR PYTHON IMPLEMENTATIONS

  • Park, Young Ho
    • Korean Journal of Mathematics
    • /
    • Vol. 30, No. 1
    • /
    • pp.131-146
    • /
    • 2022
  • This is an expository article about the gradients in a deep neural network. It is hard to find a place where the gradients in a deep neural network are treated in detail in a systematic and mathematical way. We review and compute the gradients and Jacobians to derive the formulas for the gradients that appear in backpropagation, and implement them in vectorized form in Python.
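
A vectorized NumPy sketch of the kind of gradient formulas the article derives, here for a single hidden ReLU layer with a softmax output and cross-entropy loss. The shapes and the one-hidden-layer setting are illustrative assumptions.

```python
import numpy as np

def forward_backward(X, Y, W1, b1, W2, b2):
    """One hidden ReLU layer, softmax output, cross-entropy loss.
    X: (n, d) inputs, Y: (n, k) one-hot labels. Returns loss and gradients."""
    n = X.shape[0]
    Z1 = X @ W1 + b1                          # (n, h) pre-activations
    A1 = np.maximum(Z1, 0)                    # ReLU
    Z2 = A1 @ W2 + b2                         # (n, k) logits
    P = np.exp(Z2 - Z2.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # softmax probabilities
    loss = -np.sum(Y * np.log(P + 1e-12)) / n

    dZ2 = (P - Y) / n                         # gradient of the loss w.r.t. the logits
    dW2 = A1.T @ dZ2
    db2 = dZ2.sum(axis=0)
    dA1 = dZ2 @ W2.T                          # backpropagate through the output layer
    dZ1 = dA1 * (Z1 > 0)                      # ReLU gate
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)
    return loss, (dW1, db1, dW2, db2)
```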

얼굴인식 성능 향상을 위한 얼굴 전역 및 지역 특징 기반 앙상블 압축 심층합성곱신경망 모델 제안 (Compressed Ensemble of Deep Convolutional Neural Networks with Global and Local Facial Features for Improved Face Recognition)

  • 윤경신;최재영
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 8
    • /
    • pp.1019-1029
    • /
    • 2020
  • In this paper, we propose a novel knowledge distillation algorithm to create a compressed deep ensemble network that combines local and global features of face images. In order to transfer the high recognition performance of the ensemble of deep networks to a single deep network, the class prediction probabilities, i.e., the softmax outputs of the ensemble network, are used as soft targets for training the single network. By applying this knowledge distillation algorithm, the local feature information obtained by training the deep ensemble network on facial subregions of the face image is transferred to a single deep network, creating a so-called compressed ensemble DCNN. The experimental results demonstrate that the proposed compressed ensemble deep network can maintain the recognition performance of the complex ensemble of deep networks and is superior to a single deep network. In addition, the proposed method significantly reduces storage (memory) space and execution time compared to conventional ensemble deep networks developed for face recognition.
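
The soft-target idea in the abstract can be sketched in a few lines of PyTorch: average the softened softmax outputs of the ensemble members and use them to supervise a single student network. The temperature and loss weighting are illustrative assumptions, and the paper's global/local facial subregion inputs are not reproduced here.

```python
import torch
import torch.nn.functional as F

def ensemble_soft_targets(ensemble, x, T=2.0):
    """Average the temperature-softened softmax outputs of all ensemble members."""
    with torch.no_grad():
        probs = [F.softmax(member(x) / T, dim=1) for member in ensemble]
    return torch.stack(probs).mean(dim=0)

def distill_step(student, ensemble, x, labels, T=2.0, alpha=0.5):
    """Knowledge-distillation loss: soft targets from the ensemble plus hard labels."""
    soft_target = ensemble_soft_targets(ensemble, x, T)
    logits = student(x)
    soft_loss = F.kl_div(F.log_softmax(logits / T, dim=1), soft_target,
                         reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```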

DeepAct: A Deep Neural Network Model for Activity Detection in Untrimmed Videos

  • Song, Yeongtaek;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 1
    • /
    • pp.150-161
    • /
    • 2018
  • We propose a novel deep neural network model for detecting human activities in untrimmed videos. Human activity detection in a video involves two steps: extracting features that are effective for recognizing human activities in a long untrimmed video, and then detecting the activities from those extracted features. To extract rich features from video segments that can express the unique patterns of each activity, we employ two different convolutional neural network models, C3D and I-ResNet. For detecting human activities from the sequence of extracted feature vectors, we use BLSTM, a bi-directional recurrent neural network model. Through experiments on ActivityNet 200, a large-scale benchmark dataset, we show the high performance of the proposed DeepAct model.
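
A minimal PyTorch sketch of the detection stage described: a bidirectional LSTM over the sequence of per-segment feature vectors with a per-step activity classifier. The feature dimension, hidden size, and class count are illustrative assumptions, and the C3D/I-ResNet feature extractors are taken as given.

```python
import torch
import torch.nn as nn

class BLSTMDetector(nn.Module):
    """Bi-directional LSTM over per-segment video features, per-segment activity logits."""
    def __init__(self, feat_dim=2048, hidden=256, n_classes=201):  # 200 activities + background
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):                  # feats: (batch, n_segments, feat_dim)
        out, _ = self.blstm(feats)             # (batch, n_segments, 2 * hidden)
        return self.classifier(out)            # per-segment class logits

detector = BLSTMDetector()
segment_feats = torch.randn(2, 50, 2048)       # e.g. C3D / I-ResNet features per segment
logits = detector(segment_feats)               # shape (2, 50, 201)
```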