• Title/Summary/Keyword: Deep learning neural network

Deep Neural Network Weight Transformation for Spiking Neural Network Inference (스파이킹 신경망 추론을 위한 심층 신경망 가중치 변환)

  • Lee, Jung Soo;Heo, Jun Young
    • Smart Media Journal
    • /
    • v.11 no.3
    • /
    • pp.26-30
    • /
    • 2022
  • A spiking neural network applies the working principles of biological neurons. Because of this biological mechanism, it consumes less power for training and inference than conventional neural networks. As deep learning models grow huge and their operating costs increase exponentially, the spiking neural network is attracting attention as a third-generation neural network succeeding convolutional and recurrent neural networks, and related research is active. However, considerable research remains before spiking neural network models can be applied in industry, and the cost of retraining a model whenever a new model is adopted must also be addressed. In this paper, we propose a method that minimizes retraining cost by extracting the weights of an already trained deep learning model and converting them into weights for a spiking neural network model. Comparing inference results obtained with the converted weights against those of the original model confirmed that the weight conversion works correctly.
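
A minimal sketch of the general idea, not the paper's exact procedure: a common ANN-to-SNN conversion copies the weights of a trained ReLU network into integrate-and-fire neurons after layer-wise, data-based normalization so that spike rates approximate the original activations. The normalization scheme, number of time steps, and firing threshold below are assumptions.

```python
import numpy as np

def normalize_weights(weights, biases, activations):
    """Scale each layer's weights/biases by the maximum activation
    observed on a calibration set (data-based normalization)."""
    norm_w, norm_b = [], []
    prev_scale = 1.0
    for W, b, act in zip(weights, biases, activations):
        scale = np.max(act)                    # max activation of this layer
        norm_w.append(W * prev_scale / scale)  # rescale incoming weights
        norm_b.append(b / scale)               # rescale bias the same way
        prev_scale = scale
    return norm_w, norm_b

def if_network_forward(norm_w, norm_b, x, timesteps=100, v_th=1.0):
    """Run a simple rate-coded integrate-and-fire network; spike counts
    divided by timesteps approximate the source network's activations."""
    rates = x
    for W, b in zip(norm_w, norm_b):
        v = np.zeros(W.shape[1])
        spikes = np.zeros(W.shape[1])
        for _ in range(timesteps):
            v += rates @ W + b    # membrane potential accumulates input
            fired = v >= v_th
            spikes += fired
            v[fired] -= v_th      # soft reset after a spike
        rates = spikes / timesteps
    return rates
```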

A Novel Face Recognition Algorithm based on the Deep Convolution Neural Network and Key Points Detection Jointed Local Binary Pattern Methodology

  • Huang, Wen-zhun;Zhang, Shan-wen
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.363-372
    • /
    • 2017
  • This paper presents a novel face recognition algorithm based on a deep convolutional neural network and a key-point-detection-jointed local binary pattern methodology to enhance the accuracy of face recognition. We first propose a modified facial key-point localization method that improves on the traditional localization algorithm to better pre-process the original face images, and we combine grayscale and color information into a composite model of local information. We then optimize the multi-layer deep learning network, using the Fisher criterion as a reference to adjust the network structure more precisely. Furthermore, we modify the local binary pattern texture descriptor and combine it with the neural network to compensate for the deep network's difficulty in learning the local characteristics of face images. Simulation results demonstrate that the proposed algorithm is more robust and practical than other state-of-the-art algorithms, and it provides a new paradigm for applying deep learning to face recognition and a basis for further research.

Dust Prediction System based on Incremental Deep Learning (증강형 딥러닝 기반 미세먼지 예측 시스템)

  • Sung-Bong Jang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.301-307
    • /
    • 2023
  • Deep learning requires building a deep neural network, collecting a large amount of training data, and then training the network for a long time. If training does not proceed properly or overfitting occurs, training fails. With the deep learning tools developed so far, collecting training data and training the network take a long time. However, with the rapid spread of mobile environments and the growth of sensor data, demand for real-time deep learning technology that can dramatically reduce training time is increasing rapidly. In this study, a real-time deep learning system was implemented on an Arduino system equipped with a fine dust sensor. In the implemented system, fine dust data are measured every 30 seconds, and once 120 readings have accumulated, training is performed using the previously accumulated data together with the newly accumulated data as the dataset. The neural network consists of one input layer, one hidden layer, and one output layer. To evaluate the performance of the implemented system, training time and root mean square error (RMSE) were measured. In the experiment, the average training error was 0.04053796, and the average training time of one epoch was about 3,447 seconds.
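
A minimal sketch of the incremental scheme described above, under assumptions not taken from the paper (8 hidden units, 5 epochs per round, predicting the next reading from the previous one); only the 30-second sampling, the 120-sample window, and the one-hidden-layer topology come from the abstract.

```python
import numpy as np
import tensorflow as tf

WINDOW = 120  # readings accumulated before each training round (per the abstract)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),            # one input: previous dust reading
    tf.keras.layers.Dense(8, activation="relu"),  # single hidden layer
    tf.keras.layers.Dense(1),                     # one output: predicted next reading
])
model.compile(optimizer="adam", loss="mse")

history_x, history_y = [], []

def on_new_reading(prev_value, new_value):
    """Called every 30 s with the previous and current sensor values."""
    history_x.append([prev_value])
    history_y.append([new_value])
    if len(history_x) % WINDOW == 0:
        # retrain on everything accumulated so far plus the new window
        x = np.array(history_x, dtype=np.float32)
        y = np.array(history_y, dtype=np.float32)
        model.fit(x, y, epochs=5, verbose=0)
        rmse = float(np.sqrt(model.evaluate(x, y, verbose=0)))
        print(f"retrained on {len(x)} samples, RMSE={rmse:.4f}")
```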

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.119-127
    • /
    • 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting, and much research has been done to overcome these problems. In this paper, a deep neural network combining early stopping and the Adam optimizer is presented. This form of deep network is useful for handling big data because it automatically stops training before overfitting occurs, and its generalization ability is better than that of a plain deep neural network.
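
A minimal sketch of the combination the abstract describes, using the Keras EarlyStopping callback together with the Adam optimizer; the layer sizes, patience, and learning rate are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf

def build_and_train(x_train, y_train, x_val, y_val, num_classes):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # stop once validation loss has not improved for 5 epochs,
    # and restore the weights of the best epoch seen so far
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              epochs=200, callbacks=[early_stop], verbose=0)
    return model
```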

Improving Wind Speed Forecasts Using Deep Neural Network

  • Hong, Seokmin;Ku, SungKwan
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.327-333
    • /
    • 2019
  • Wind speed data constitute important weather information for aircraft flying at low altitude, such as drones. Currently, the accuracy of low-altitude wind predictions is much lower than that of high-altitude wind predictions. This study proposes deep neural networks as a method to improve wind speed forecast information. Deep neural networks mimic the learning processes of interacting neurons in the brain and are used in various fields, such as image, sound, and text recognition, natural language processing, and pattern recognition in time series. In this study, a deep neural network model is constructed that takes the wind prediction values generated by a numerical model as input in order to improve the wind speed forecasts. Using the ground wind speed data collected at the Boseong Meteorological Observation Tower, the wind speed forecasts obtained from the numerical model are compared with those obtained from the proposed model to verify its validity and applicability.
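
A minimal sketch of the post-processing idea above: a small regression network that takes numerical-model forecast values as input and is trained against observed wind speed. The feature count, layer sizes, and variable names are assumptions, not the paper's configuration.

```python
import tensorflow as tf

def build_correction_model(n_features):
    """n_features: number of numerical-model outputs fed to the network
    (e.g. forecast wind speed and related variables at one lead time)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),  # corrected wind speed (m/s)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# usage sketch: fit on (numerical forecasts, tower observations), then compare
# the corrected forecast with the raw numerical forecast on held-out data
# model = build_correction_model(n_features=4)
# model.fit(forecast_features, observed_speed, validation_split=0.2, epochs=100)
```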

Image-based Artificial Intelligence Deep Learning to Protect the Big Data from Malware (악성코드로부터 빅데이터를 보호하기 위한 이미지 기반의 인공지능 딥러닝 기법)

  • Kim, Hae Jung;Yoon, Eun Jun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.2
    • /
    • pp.76-82
    • /
    • 2017
  • To quickly detect malware, including ransomware, this study provides a method for analyzing malicious code through image analysis learned by deep learning. First, 2,400 malware samples are converted into image data and used to train a convolutional neural network. Subgraphs are then extracted from the abstracted images to build graphs whose summarized sets represent the malware, and the similarity between malware samples is analyzed experimentally. By classifying malware with deep learning in this way, the study shows the possibility of accurate malware detection.
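
A hedged sketch of the image-based part of this approach: a binary is reinterpreted as a fixed-size grayscale image and classified with a small convolutional network. The graph/subgraph summarization step described in the abstract is not reproduced, and the image size and network depth are assumptions.

```python
import numpy as np
import tensorflow as tf

def binary_to_image(path, width=64):
    """Read a file as raw bytes and lay them out as a width x width
    grayscale image, truncating or zero-padding as needed."""
    data = np.fromfile(path, dtype=np.uint8)
    buf = np.zeros(width * width, dtype=np.uint8)
    n = min(data.size, buf.size)
    buf[:n] = data[:n]                    # truncate or zero-pad to fixed size
    return buf.reshape(width, width, 1).astype(np.float32) / 255.0

def build_classifier(num_classes, width=64):
    """Small CNN over the byte-image representation of a sample."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(width, width, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
```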

A study on estimating the main dimensions of a small fishing boat using deep learning (딥러닝을 이용한 연안 소형 어선 주요 치수 추정 연구)

  • JANG, Min Sung;KIM, Dong-Joon;ZHAO, Yang
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.58 no.3
    • /
    • pp.272-280
    • /
    • 2022
  • The first step in designing a new vessel is to determine the principal dimensions of the design ship, such as length between perpendiculars, beam, draft and depth. To make this process easier, a database with a large amount of existing ship data and a regression analysis technique are needed. Recently, deep learning, a branch of artificial intelligence (AI), has been used for regression analysis. In this paper, deep learning neural networks are used for regression analysis to find the regression function between the input and output data. To find the neural network structure with the highest accuracy, the errors of neural network structures with varying numbers of layers and nodes are compared. The Python TensorFlow Keras API and the MATLAB Deep Learning Toolbox are used to build the deep learning neural networks. The constructed DNNs (deep neural networks) are helpful in determining the principal dimensions of a ship and save much time in the ship design process.
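
A minimal sketch of the regression setup, using the TensorFlow Keras API the abstract mentions: a fully connected network mapping known ship particulars to the principal dimensions. The input features, layer counts, and node counts actually compared in the paper are not reproduced here; n_inputs, hidden_layers, and the output ordering are assumptions.

```python
import tensorflow as tf

def build_dimension_regressor(n_inputs=1, n_outputs=4, hidden_layers=(32, 32)):
    """Regressor from assumed ship particulars to (Lpp, beam, depth, draft)."""
    layers = [tf.keras.layers.Input(shape=(n_inputs,))]
    for units in hidden_layers:                      # vary depth/width to compare errors
        layers.append(tf.keras.layers.Dense(units, activation="relu"))
    layers.append(tf.keras.layers.Dense(n_outputs))  # principal dimensions
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# usage sketch: sweep layer/node counts and keep the structure with the lowest
# validation error, as the abstract describes
# model = build_dimension_regressor(hidden_layers=(64, 64, 64))
# model.fit(x_train, y_train, validation_split=0.2, epochs=500, verbose=0)
```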

An Implementation of a Convolutional Accelerator based on a GPGPU for a Deep Learning (Deep Learning을 위한 GPGPU 기반 Convolution 가속기 구현)

  • Jeon, Hee-Kyeong;Lee, Kwang-yeob;Kim, Chi-yong
    • Journal of IKEEE
    • /
    • v.20 no.3
    • /
    • pp.303-306
    • /
    • 2016
  • In this paper, we propose a method to accelerate convolutional neural networks by utilizing a GPGPU. A convolutional neural network is a type of neural network that learns features of images and is suitable for image processing tasks that require learning from large amounts of data. The convolutional layers of a conventional CNN require a large number of multiplications, which makes real-time operation difficult in embedded environments. In this paper, we reduce the number of multiplications through the Winograd convolution algorithm and parallelize the convolution on a SIMT-based GPGPU. The experiment was conducted using ModelSim and TestDrive, and the results show that the processing time is improved by about 17% compared to conventional convolution.
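
A minimal sketch of the Winograd minimal-filtering idea the paper relies on, shown for the 1-D case F(2,3): two outputs of a 3-tap filter are computed with 4 multiplications instead of 6. The paper's 2-D, SIMT-parallel GPGPU version nests this transform over image tiles and is not reproduced here.

```python
import numpy as np

# standard F(2,3) transform matrices (Lavin & Gray, 2016)
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float32)
G = np.array([[1,    0,   0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0,    0,   1]], dtype=np.float32)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

def winograd_f23(d, g):
    """d: input tile of 4 samples, g: 3-tap filter -> 2 convolution outputs."""
    u = G @ g      # transform the filter (can be precomputed once per filter)
    v = B_T @ d    # transform the input tile
    m = u * v      # 4 elementwise multiplications replace 6
    return A_T @ m # inverse transform back to the 2 outputs

# quick check against direct correlation
d = np.array([1., 2., 3., 4.], dtype=np.float32)
g = np.array([0.5, 1., -1.], dtype=np.float32)
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```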

Neural Network Model Compression Algorithms for Image Classification in Embedded Systems (임베디드 시스템에서의 객체 분류를 위한 인공 신경망 경량화 연구)

  • Shin, Heejung;Oh, Hyondong
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.2
    • /
    • pp.133-141
    • /
    • 2022
  • This paper introduces model compression algorithms that make a deep neural network smaller and faster for embedded systems. Model compression algorithms can be largely categorized into pruning, quantization, and knowledge distillation. In this study, gradual pruning, quantization-aware training, and knowledge distillation that learns the activation boundary in the hidden layers of the teacher network are integrated. As a large deep neural network is compressed and accelerated by these algorithms, embedded computing boards can run it much faster with less memory usage while preserving reasonable accuracy. To evaluate the compressed networks, we measure the size, latency, and accuracy of DenseNet201 for image classification on the CIFAR-10 dataset, running on an NVIDIA Jetson Xavier.
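
A minimal sketch of one of the three techniques combined above, standard soft-target knowledge distillation; the paper's activation-boundary variant and its gradual-pruning and quantization-aware-training steps are not reproduced, and the temperature and weighting are assumptions.

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, labels,
                      temperature=4.0, alpha=0.1):
    """Weighted sum of the hard-label loss and the soft-target KL term."""
    hard = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.nn.softmax(student_logits))
    soft = tf.keras.losses.kl_divergence(
        tf.nn.softmax(teacher_logits / temperature),
        tf.nn.softmax(student_logits / temperature)) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft

@tf.function
def distill_step(teacher, student, optimizer, x, y):
    teacher_logits = teacher(x, training=False)     # frozen, pre-trained teacher
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)  # compact student being trained
        loss = tf.reduce_mean(
            distillation_loss(teacher_logits, student_logits, y))
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```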

GRADIENTS IN A DEEP NEURAL NETWORK AND THEIR PYTHON IMPLEMENTATIONS

  • Park, Young Ho
    • Korean Journal of Mathematics
    • /
    • v.30 no.1
    • /
    • pp.131-146
    • /
    • 2022
  • This is an expository article about the gradients in a deep neural network. It is hard to find a source where the gradients of a deep neural network are treated in detail in a systematic and mathematical way. We review and compute the gradients and Jacobians to derive the formulas for the gradients that appear in backpropagation, and we implement them in vectorized form in Python.
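
In the spirit of the article, a minimal vectorized sketch of the backpropagation gradients for a two-layer network with ReLU hidden units and softmax cross-entropy loss; the article's own derivations and notation are not reproduced, and the layer shapes are assumptions.

```python
import numpy as np

def forward_backward(X, Y, W1, b1, W2, b2):
    """X: (n, d) batch, Y: (n, k) one-hot labels. Returns loss and gradients."""
    n = X.shape[0]
    # forward pass
    Z1 = X @ W1 + b1                                  # (n, h) pre-activations
    A1 = np.maximum(Z1, 0)                            # ReLU
    Z2 = A1 @ W2 + b2                                 # (n, k) logits
    P = np.exp(Z2 - Z2.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                 # softmax probabilities
    loss = -np.sum(Y * np.log(P + 1e-12)) / n
    # backward pass: dL/dZ2 = (P - Y)/n for softmax + cross-entropy
    dZ2 = (P - Y) / n
    dW2 = A1.T @ dZ2
    db2 = dZ2.sum(axis=0)
    dA1 = dZ2 @ W2.T
    dZ1 = dA1 * (Z1 > 0)                              # ReLU Jacobian is a 0/1 mask
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)
    return loss, (dW1, db1, dW2, db2)
```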