• Title/Summary/Keyword: neural network learning

Supervised learning-based DDoS attacks detection: Tuning hyperparameters

  • Kim, Meejoung
    • ETRI Journal
    • /
    • v.41 no.5
    • /
    • pp.560-573
    • /
    • 2019
  • Two supervised learning algorithms, a basic neural network and a long short-term memory recurrent neural network, are applied to traffic including DDoS attacks. The joint effects of preprocessing methods and machine-learning hyperparameters on performance are investigated. Values representing attack characteristics are extracted from the datasets and preprocessed by two methods. Binary classification and two optimizers are used. Some hyperparameters are determined by exhaustive search for fast and accurate detection, while others are fixed as constants chosen from performance and data characteristics. Experiments are performed with TensorFlow on three traffic datasets. Three scenarios are considered to investigate the effect of learning earlier traffic on sequential traffic analysis and the effect of training on one dataset and applying the model to another, and to determine whether the algorithms can be used on recent attack traffic. Experimental results show that the chosen preprocessing methods, neural network architectures, hyperparameters, and optimizers are appropriate for DDoS attack detection, and the obtained results provide a criterion for the detection accuracy of attacks.
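
The paper itself includes no code; as a rough, hedged illustration of the kind of model it describes (an LSTM used as a binary attack/benign classifier trained in TensorFlow), the sketch below uses an assumed window length, feature count, and optimizer rather than the paper's actual settings.

```python
# Minimal sketch of an LSTM-based binary DDoS detector (assumed shapes, not the paper's exact setup).
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 20, 8          # hypothetical sequence length and per-step feature count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),                       # recurrent feature extractor
    tf.keras.layers.Dense(1, activation="sigmoid"), # binary attack / benign output
])
model.compile(optimizer="adam",                     # the paper compares two optimizers; Adam is one common choice
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for preprocessed traffic windows and 0/1 attack labels.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```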

Deep Learning Method for Identification and Selection of Relevant Features

  • Vejendla Lakshman
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.212-216
    • /
    • 2024
  • Feature selection has become a major focus of investigation, particularly in bioinformatics, where there are numerous applications. Deep learning is a useful tool for selecting features; however, not all algorithms are on an equal footing when it comes to selecting relevant features. Indeed, many methods have been proposed for selecting multiple features using deep learning techniques. Thanks to deep learning, neural networks have enjoyed a huge resurgence in the past few years. However, neural networks are black-box models, and few efforts have been made to examine their underlying process. In this work, a new algorithm for performing feature selection with deep learning networks is introduced. To evaluate the results, regression and classification problems are constructed, which allows each algorithm to be compared on several fronts: performance, computation time, and constraints. The results obtained are encouraging, since the goal is achieved by outperforming random forest in each case, and they show that the proposed method exhibits better performance than traditional methods.
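
The abstract does not specify the selection algorithm; one common way to rank features with a neural network, shown below purely as an assumed illustration, is to train a small network with an L1-regularized first layer and score each input by the magnitude of its outgoing weights.

```python
# Hypothetical illustration: rank input features by the L1 norm of their first-layer weights.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype("float32")
# Synthetic regression target in which only features 0 and 3 matter.
y = (3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=500)).astype("float32").reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu", name="first",
                          kernel_regularizer=tf.keras.regularizers.l1(1e-3)),  # sparsity pressure on the first layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)

w = model.get_layer("first").get_weights()[0]   # shape (10, 16): weights leaving each input
scores = np.abs(w).sum(axis=1)                  # importance score per input feature
print(np.argsort(scores)[::-1])                 # features 0 and 3 should rank near the top
```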

A Study on the Gender and Age Classification of Speech Data Using CNN (CNN을 이용한 음성 데이터 성별 및 연령 분류 기술 연구)

  • Park, Dae-Seo;Bang, Joon-Il;Kim, Hwa-Jong;Ko, Young-Jun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.11-21
    • /
    • 2018
  • Research is carried out to categorize voices using deep learning technology. The study examines neural-network-based sound classification research and proposes an improved neural network for voice classification. Related studies addressed urban sound data classification but showed poor performance with shallow neural networks. Therefore, in this paper the voice data are first preprocessed and feature values are extracted. Next, the voices are categorized by feeding the feature values into the previous sound classification network and into the proposed neural network. Finally, the classification performance of the two neural networks is compared and evaluated. The proposed neural network is organized deeper and wider so that learning is performed better. The results showed an accuracy of 84.8% for the related studies' neural network and 91.4% for the proposed neural network, about 6 percentage points higher.
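
As a hedged sketch of the kind of deeper/wider CNN the paper describes for voice classification, the following assumes MFCC-like input feature maps and an illustrative class count; it is not the authors' architecture.

```python
# Rough sketch of a CNN over MFCC-like feature maps for gender/age classes (shapes and class count assumed).
import numpy as np
import tensorflow as tf

N_MFCC, N_FRAMES, N_CLASSES = 40, 100, 6   # hypothetical: e.g. 2 genders x 3 age bands

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_MFCC, N_FRAMES, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),   # deeper/wider than a shallow baseline
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

X = np.random.rand(64, N_MFCC, N_FRAMES, 1).astype("float32")   # stand-in for preprocessed voice features
y = np.random.randint(0, N_CLASSES, size=(64,))
model.fit(X, y, epochs=1, verbose=0)
```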

APPLICATION OF COULOMB ENERGY NETWORK TO KOREAN RECOGNITION (Coulomb Energy Network를 이용한 한글인식 Neural Network)

  • Lee, Kyung-Hee;Lee, Won-Don
    • Annual Conference on Human and Language Technology
    • /
    • 1989.10a
    • /
    • pp.267-271
    • /
    • 1989
  • Recently, Scofield proposed a learning algorithm (a supervised learning algorithm) that can be applied to the Coulomb energy network. This learning algorithm is easily applicable to multi-layer networks, and because an error arising in one layer does not affect the other layers, the system can be organized in a modular fashion and each layer can be trained independently. In this paper, we implement a neural network for Hangul (Korean character) recognition using the Coulomb energy network, report the results of recognition experiments, and present a scheme (two-stage learning) for raising the recognition rate of the implemented network.
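
Scofield's supervised algorithm for Coulomb energy networks is not reproduced here; purely as a loose illustration of the potential-function idea behind such networks, the sketch below classifies a sample by the class whose stored prototypes exert the strongest inverse-distance "attraction" (prototype set, exponent, and data are illustrative assumptions).

```python
# Loose illustration of a Coulomb-style potential classifier (not Scofield's algorithm as published).
import numpy as np

def coulomb_scores(x, prototypes, labels, n_classes, power=2, eps=1e-9):
    """Sum an inverse-distance 'attraction' from each stored prototype, per class."""
    d = np.linalg.norm(prototypes - x, axis=1) ** power + eps
    attraction = 1.0 / d
    return np.array([attraction[labels == c].sum() for c in range(n_classes)])

rng = np.random.default_rng(1)
protos = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(3, 1, (20, 16))])  # two clusters of stored patterns
labels = np.array([0] * 20 + [1] * 20)

query = rng.normal(3, 1, 16)                                            # a sample near the class-1 cluster
print(np.argmax(coulomb_scores(query, protos, labels, n_classes=2)))    # expected: 1
```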


Face Recognition Based on Improved Fuzzy RBF Neural Network for Smart Device

  • Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1338-1347
    • /
    • 2013
  • Face recognition is the science of automatically identifying individuals based on their unique facial features. In order to avoid overfitting and reduce the computational burden, a new face recognition algorithm using PCA-Fisher linear discriminant (PCA-FLD) analysis and a fuzzy radial basis function neural network (RBFNN) is proposed in this paper. First, face features are extracted by the principal component analysis (PCA) method. Then, the extracted features are further processed by Fisher's linear discriminant technique to acquire lower-dimensional discriminant patterns, which are used as the input of the fuzzy RBFNN. The BP learning algorithm widely applied in fuzzy RBF neural networks has a low rate of convergence; therefore, an improved learning algorithm for the fuzzy RBF neural network based on Levenberg-Marquardt (L-M), which combines the gradient descent algorithm with the Gauss-Newton algorithm, is introduced in this paper. Experimental results on the ORL face database demonstrate that the proposed algorithm has satisfactory performance and a high recognition rate.
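
As a hedged sketch of the PCA followed by Fisher linear discriminant feature pipeline, the code below feeds the reduced features to an RBF-kernel SVM that merely stands in for the paper's fuzzy RBF neural network; the dataset shape and component counts are assumptions.

```python
# Sketch of the PCA -> Fisher LDA feature pipeline feeding a classifier (an RBF-kernel SVM
# stands in for the paper's fuzzy RBF neural network; dataset and dimensions are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_people, per_person, dim = 10, 8, 1024           # hypothetical: 10 subjects, 32x32 face vectors
X = rng.normal(size=(n_people * per_person, dim))
y = np.repeat(np.arange(n_people), per_person)

clf = make_pipeline(
    PCA(n_components=40),                         # principal component features
    LinearDiscriminantAnalysis(n_components=9),   # at most (classes - 1) discriminant directions
    SVC(kernel="rbf"),                            # placeholder for the fuzzy RBF neural network classifier
)
clf.fit(X, y)
print(clf.score(X, y))                            # training accuracy on the synthetic data
```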

A learning control of DC servomotor using neural network

  • Kawabata, Hiroaki;Yamada, Katsuhisa;Zhong, Zhang;Takeda, Yoji
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1994.10a
    • /
    • pp.703-707
    • /
    • 1994
  • This paper proposes a learning control method for a DC servomotor using a neural network. First, the pulse transfer function of the servo system with an unknown load is estimated; then the best gains of the I-PD control system are determined using a neural network. Each time the load changes, the best gains of the I-PD control system are computed by the neural network, and the best gains and the corresponding pulse transfer function are stored in memory. As the stored set of gains and pulse transfer functions grows, the learning control system can supply the most suitable I-PD gains instantly.
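
The I-PD structure referred to in the abstract applies integral action to the error while the proportional and derivative actions act on the measured output only; a minimal discrete-time sketch with placeholder gains (not the paper's tuned values) follows.

```python
# Minimal discrete-time I-PD controller sketch (placeholder gains; not the paper's tuned values).
class IPDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_y = 0.0

    def step(self, setpoint, y):
        # Integral term acts on the error; P and D terms act on the measured output only,
        # which avoids control kicks when the setpoint changes.
        self.integral += (setpoint - y) * self.dt
        u = (self.ki * self.integral
             - self.kp * y
             - self.kd * (y - self.prev_y) / self.dt)
        self.prev_y = y
        return u

# Example: drive a toy first-order motor model toward a unit setpoint.
ctrl = IPDController(kp=2.0, ki=5.0, kd=0.05, dt=0.01)
y = 0.0
for _ in range(500):
    u = ctrl.step(1.0, y)
    y += (-y + u) * 0.01          # toy plant: dy/dt = -y + u, Euler-integrated
print(round(y, 3))                # settles close to 1.0
```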


Control of the robot manipulators using fuzzy-neural network (퍼지 신경망을 이용한 로보트 매니퓰레이터 제어)

  • 김성현;김용호;심귀보;전홍태
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1992.10a
    • /
    • pp.436-440
    • /
    • 1992
  • As an approach to designing an intelligent controller, this paper proposes a new FNN (fuzzy neural network) control method using a hybrid combination of fuzzy logic control and a neural network. The proposed FNN controller has two important capabilities, namely adaptation and learning, which are realized by the following process. First, identification of the parameters and estimation of the states of the unknown plant are achieved by the MNN (model neural network), which is continuously trained on-line. Second, learning is performed by the FNN controller, with the error back-propagation algorithm adopted as the learning technique. The effectiveness of the proposed method is demonstrated by computer simulation of a two-d.o.f. robot manipulator.
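
As a rough illustration of on-line plant identification by a model neural network trained with error back-propagation, the sketch below fits a one-hidden-layer network to a toy nonlinear plant; the plant, network size, and learning rate are illustrative assumptions, not the paper's setup.

```python
# Rough sketch of on-line identification by a small "model neural network" trained with backpropagation
# (one hidden layer, numpy only; the plant and network sizes are illustrative, not the paper's setup).
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)   # hidden layer: input is (y_k, u_k)
W2, b2 = rng.normal(0, 0.5, (1, 8)), np.zeros(1)
lr = 0.05

def plant(y, u):
    return 0.8 * y + 0.4 * np.tanh(u)              # toy nonlinear plant standing in for the manipulator dynamics

y = 0.0
for k in range(2000):
    u = np.sin(0.05 * k)                           # persistent excitation input
    x = np.array([y, u])
    h = np.tanh(W1 @ x + b1)                       # forward pass
    y_hat = (W2 @ h + b2)[0]
    y_next = plant(y, u)
    e = y_hat - y_next                             # prediction error
    # Backpropagation (squared-error loss), one on-line gradient step per sample.
    gW2 = e * h[None, :]
    gb2 = np.array([e])
    gh = e * W2[0] * (1.0 - h ** 2)
    gW1 = np.outer(gh, x)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gh
    y = y_next
print(abs(e))                                       # prediction error after on-line training (should be small)
```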


ART2 Neural Network for the Detection of Tool Breakage (공구파단 검출을 위한 ART2 신경회로망)

  • 고태조;김희술;조동우
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.04b
    • /
    • pp.451-456
    • /
    • 1995
  • This study investigates the feasibility of real-time detection of tool breakage in face milling operations. The proposed methodology, using an ART2 neural network, avoids the cumbersome task of supervised learning or determining a threshold value. The features used in the research are the AR parameters estimated by recursive least squares (RLS), which experiments show to be good indicators of tool breakage. From the results of the off-line application, we conclude that an ART2 neural network can be successfully applied to the clustering of tool states in real time by virtue of its unsupervised learning.
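
The AR-parameter features mentioned in the abstract can be estimated on-line with recursive least squares; the sketch below shows a generic RLS estimator on a synthetic AR(2) signal (model order, forgetting factor, and signal are assumptions, not the paper's cutting-force data).

```python
# Sketch of recursive least squares (RLS) estimation of AR model parameters, the kind of feature vector
# the abstract feeds to the ART2 network (model order and forgetting factor are assumptions).
import numpy as np

def rls_ar(signal, order=4, lam=0.98, delta=100.0):
    """Return the AR parameter trajectory estimated on-line by RLS with forgetting factor lam."""
    theta = np.zeros(order)
    P = delta * np.eye(order)
    history = []
    for k in range(order, len(signal)):
        phi = signal[k - order:k][::-1]            # regressor: past samples, most recent first
        y = signal[k]
        e = y - phi @ theta                        # a-priori prediction error
        K = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + K * e
        P = (P - np.outer(K, phi) @ P) / lam
        history.append(theta.copy())
    return np.array(history)

rng = np.random.default_rng(4)
x = np.zeros(500)
for k in range(2, 500):                            # synthetic AR(2) signal standing in for a cutting-force record
    x[k] = 1.5 * x[k - 1] - 0.7 * x[k - 2] + 0.1 * rng.normal()
print(rls_ar(x, order=2)[-1])                      # final estimate, roughly [1.5, -0.7]
```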


Stable Predictive Control of Chaotic Systems Using Self-Recurrent Wavelet Neural Network

  • Yoo Sung Jin;Park Jin Bae;Choi Yoon Ho
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.43-55
    • /
    • 2005
  • In this paper, a predictive control method using a self-recurrent wavelet neural network (SRWNN) is proposed for chaotic systems. Since the SRWNN has a self-recurrent mother-wavelet layer, it can approximate complex nonlinear systems well even though it has fewer mother-wavelet nodes than the wavelet neural network (WNN). Thus, the SRWNN is used as a model predictor for predicting the dynamics of chaotic systems. The gradient descent method with adaptive learning rates is applied to train the parameters of the SRWNN-based predictor and controller. The adaptive learning rates are derived from the discrete Lyapunov stability theorem and are used to guarantee the convergence of the predictive controller. Finally, chaotic systems are provided to demonstrate the effectiveness of the proposed control strategy.
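
As a simplified, hedged sketch of the self-recurrence idea, the code below implements a single wavelet node whose input is shifted by its own previous activation through a feedback weight; the mother wavelet and parameter values are illustrative and not the exact SRWNN formulation of the paper.

```python
# Simplified sketch of a self-recurrent wavelet node: the mother wavelet phi(z) = z * exp(-z^2 / 2)
# is applied to an input shifted by the node's own previous activation (parameters are illustrative).
import numpy as np

def mother_wavelet(z):
    return z * np.exp(-0.5 * z ** 2)

class SelfRecurrentWaveletNode:
    def __init__(self, translation, dilation, feedback):
        self.m, self.d, self.theta = translation, dilation, feedback
        self.prev = 0.0                                   # stored previous activation (the self-recurrence)

    def __call__(self, u):
        z = (u + self.theta * self.prev - self.m) / self.d
        self.prev = mother_wavelet(z)
        return self.prev

# Feed a short chaotic-looking sequence through the node; the feedback term gives it memory of past inputs.
node = SelfRecurrentWaveletNode(translation=0.1, dilation=1.0, feedback=0.3)
x = 0.4
for _ in range(5):
    x = 3.9 * x * (1.0 - x)                               # logistic map as a stand-in chaotic signal
    print(round(node(x), 4))
```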

A Study on Compression of Connections in Deep Artificial Neural Networks (인공신경망의 연결압축에 대한 연구)

  • Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.5
    • /
    • pp.17-24
    • /
    • 2017
  • Recently, deep learning technologies using large or deep artificial neural networks have shown remarkable performance, and the increasing size of the network contributes to this performance improvement. However, the increase in the size of the neural network leads to an increase in the amount of computation, which causes problems such as circuit complexity, price, heat generation, and real-time constraints. In this paper, we propose and test a method to reduce the number of network connections by effectively pruning the redundant connections while keeping the performance within a desired range of that of the original neural network. In particular, we propose a simple method to recover performance by re-learning and to guarantee the desired performance by allocating an error-rate budget per layer, in order to account for the differences between layers. Experiments have been performed on typical neural network structures such as FCN (fully connected network) and CNN (convolutional neural network) structures and confirm that performance similar to that of the original neural network can be obtained with only about 1/10 of the connections.
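
As a minimal sketch of magnitude-based connection pruning of the kind the abstract describes, the code below keeps only the largest 10% of a layer's weights; the paper's per-layer error-budget allocation and re-learning steps are only noted in comments.

```python
# Minimal sketch of magnitude-based connection pruning: keep only the largest 10% of weights
# (the per-layer error-budget allocation and re-learning described in the abstract are omitted).
import numpy as np

def prune_to_ratio(weights, keep_ratio=0.1):
    """Zero out all but the largest-magnitude fraction `keep_ratio` of connections."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, flat.size - k)[flat.size - k]   # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(5)
W = rng.normal(size=(256, 128))                    # a stand-in fully connected layer
W_pruned, mask = prune_to_ratio(W, keep_ratio=0.1)
print(mask.mean())                                 # fraction of surviving connections, about 0.10
# In the paper's procedure the pruned network would then be re-trained (re-learning), with the pruning
# ratio per layer chosen so the overall accuracy stays within the desired error budget.
```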