• Title/Summary/Keyword: Gradient descent

Search results: 339

A Study on Wavelet Neural Network Based Generalized Predictive Control for Path Tracking of Mobile Robots (이동 로봇의 경로 추종을 위한 웨이블릿 신경 회로망 기반 일반형 예측 제어에 관한 연구)

  • Song, Yong-Tae;Oh, Joon-Seop;Park, Jin-Bae;Choi, Yoon-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.4
    • /
    • pp.457-466
    • /
    • 2005
  • In this paper, we propose a wavelet neural network (WNN) based predictive control method for path tracking of mobile robots with multiple inputs and outputs. In our control method, a WNN serves as a state predictor, combining the learning capability of artificial neural networks with the capability of wavelet decomposition. The WNN predictor is tuned by the gradient descent rule to minimize the errors between the WNN outputs and the states of the mobile robot. The control signals, linear velocity and angular velocity, are then calculated to minimize a predefined cost function of the errors between the reference states and the predicted states. Through computer simulations of tracking performance on varied tracks, we demonstrate the efficiency and feasibility of our predictive control system.
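The gradient-descent tuning rule described above can be sketched generically: the predictor's parameters are moved down the gradient of the squared error between predicted and measured states. The linear predictor, state dimensions, and learning rate below are illustrative assumptions, not the paper's WNN.

```python
import numpy as np

# Hypothetical one-step state predictor tuned by the gradient descent rule,
# minimizing the error between predicted and measured robot states
# (a generic linear predictor standing in for the paper's WNN).
rng = np.random.default_rng(0)

# Synthetic "measured" dynamics: x_{k+1} = A_true x_k + B_true u_k
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.1], [0.05]])

A = np.zeros((2, 2))          # predictor parameters to be tuned
B = np.zeros((2, 1))
eta = 0.1                     # learning rate (illustrative)

for _ in range(2000):
    x = rng.normal(size=(2, 1))
    u = rng.normal(size=(1, 1))
    x_next = A_true @ x + B_true @ u   # measured next state
    x_pred = A @ x + B @ u             # predictor output
    e = x_pred - x_next                # prediction error
    # Gradient descent on 0.5 * ||e||^2 with respect to A and B
    A -= eta * e @ x.T
    B -= eta * e @ u.T

print(np.max(np.abs(A - A_true)))      # small after training
```

Because the synthetic target is exactly realizable by the predictor, the parameter error shrinks toward zero; on real robot data the rule would instead settle at a residual prediction error.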

Ambient dose equivalent measurement with a CsI(Tl) based electronic personal dosimeter

  • Park, Kyeongjin;Kim, Jinhwan;Lim, Kyung Taek;Kim, Junhyeok;Chang, Hojong;Kim, Hyunduk;Sharma, Manish;Cho, Gyuseong
    • Nuclear Engineering and Technology
    • /
    • v.51 no.8
    • /
    • pp.1991-1997
    • /
    • 2019
  • In this manuscript, we present a method for the direct calculation of the ambient dose equivalent (H*(10)) for external gamma-ray exposure in the energy range of 40 keV to 2 MeV with an electronic personal dosimeter (EPD). The designed EPD consists of a 3 × 3 mm² PIN diode coupled to a 3 × 3 × 3 mm³ CsI(Tl) scintillator block. The spectrum-to-dose conversion function G(E) for estimating H*(10) was calculated by applying the gradient descent method to Monte-Carlo simulation data. The optimal parameters of G(E) were found, and the conversion of gamma spectra to H*(10) was verified using 241Am, 137Cs, 22Na, 54Mn, and 60Co radioisotopes. Furthermore, gamma spectra and H*(10) were obtained for an arbitrarily mixed multiple-isotope case through Monte-Carlo simulation in order to extend the verification to more general cases. The H*(10) obtained from the gamma spectra via the G(E) function was then compared with the H*(10) calculated by simulation. The relative difference of H*(10) for the various single-source spectra was within ±2.89%, and that for the multiple-isotope case was within ±5.56%.
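The idea of fitting a spectrum-to-dose conversion function by gradient descent can be sketched as follows: per-channel weights G are tuned so that the weighted channel counts reproduce a reference H*(10) value. All spectra and doses here are synthetic stand-ins, not the paper's Monte-Carlo data.

```python
import numpy as np

# Hedged sketch: fit per-channel conversion weights G so that the weighted
# sum of spectrum counts reproduces reference dose values, by full-batch
# gradient descent on a mean-squared-error loss.
rng = np.random.default_rng(1)

n_channels, n_spectra = 64, 200
G_true = np.linspace(0.01, 1.0, n_channels)   # assumed "true" conversion curve
spectra = rng.poisson(50.0, (n_spectra, n_channels)).astype(float)
doses = spectra @ G_true                      # reference dose per spectrum

H = spectra.T @ spectra / n_spectra           # quadratic term of the MSE loss
eta = 1.0 / np.linalg.eigvalsh(H).max()       # step size for guaranteed descent

G = np.zeros(n_channels)
for _ in range(5000):
    grad = spectra.T @ (spectra @ G - doses) / n_spectra
    G -= eta * grad

rel_err = np.abs(spectra @ G - doses) / doses
print(rel_err.max())
```

Choosing the step size from the largest eigenvalue of the quadratic term keeps the iteration stable regardless of the raw-count scale of the spectra.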

A Study on the Improvement of Fault Detection Capability for Fault Indicator using Fuzzy Clustering and Neural Network (퍼지클러스터링 기법과 신경회로망을 이용한 고장표시기의 고장검출 능력 개선에 관한 연구)

  • Hong, Dae-Seung;Yim, Hwa-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.3
    • /
    • pp.374-379
    • /
    • 2007
  • This paper focuses on the improvement of the fault detection algorithm in the FRTU (feeder remote terminal unit) on the feeder of a distribution power system. The FRTU applies fault detection schemes for phase faults and ground faults. In particular, its cold load pickup and inrush restraint functions distinguish fault current from normal load current. The FRTU shows an FI (Fault Indicator) when the current is over the pickup value or is an inrush current. STFT (Short Time Fourier Transform) analysis provides frequency and time information, and the FCM (Fuzzy C-Means clustering) algorithm extracts the characteristics of the harmonics. A neural network fault detector was trained by a gradient descent method to distinguish the inrush current from genuine fault conditions. In this paper, fault detection is improved by using FCM and the neural network. The test data were measured in an actual 22.9 kV distribution power system.
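The FCM step used here to extract harmonic characteristics can be sketched in a few lines. The data below are synthetic 2-D points, not STFT features from a real feeder, and the initialization is a simple deterministic choice.

```python
import numpy as np

# Minimal Fuzzy C-Means sketch: alternate between membership updates and
# fuzzily weighted center updates until the centers settle.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),   # cluster around (0, 0)
               rng.normal(3, 0.3, (50, 2))])  # cluster around (3, 3)

c, m = 2, 2.0                          # number of clusters, fuzzifier
centers = X[[0, len(X) // 2]].copy()   # simple deterministic initialization

for _ in range(50):
    # distances from every point to every center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / d ** (2 / (m - 1))       # unnormalized memberships
    u /= u.sum(axis=1, keepdims=True)  # rows sum to 1
    w = u ** m                         # fuzzified weights
    centers = (w.T @ X) / w.sum(axis=0)[:, None]

print(np.sort(centers[:, 0]).round(1))
```

The recovered centers sit near the two synthetic cluster means; in the paper's setting the inputs would be harmonic features and the memberships would feed the neural network fault detector.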

Design of Face Recognition algorithm Using PCA&LDA combined for Data Pre-Processing and Polynomial-based RBF Neural Networks (PCA와 LDA를 결합한 데이터 전 처리와 다항식 기반 RBFNNs을 이용한 얼굴 인식 알고리즘 설계)

  • Oh, Sung-Kwun;Yoo, Sung-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.61 no.5
    • /
    • pp.744-752
    • /
    • 2012
  • In this study, polynomial-based radial basis function neural networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system consisting of two parts, a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to high-dimensional pattern recognition problems. In the data preprocessing part, Principal Component Analysis (PCA), which is widely used in face recognition, reduces the dimensionality of the data while maintaining the recognition rate. However, because PCA operates on the whole face image, it cannot guarantee the recognition rate under changes of viewpoint. To compensate for this defect, Linear Discriminant Analysis (LDA) is used to enhance the separation between different classes. In this paper, we combine the PCA and LDA algorithms and design optimized pRBFNNs for the recognition module. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'if-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as two kinds of polynomials, constant and linear, whose coefficients are identified by back-propagation using the gradient descent method. The output of the pRBFNN model is obtained by a fuzzy inference method in the inference part. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Differential Evolution. The proposed pRBFNNs are applied to face image datasets (e.g., Yale, AT&T) and evaluated in terms of output performance and recognition rate.
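The combined PCA & LDA preprocessing chain can be sketched with standard library calls: PCA first reduces dimensionality, then LDA improves class separation in the reduced space. A k-nearest-neighbor classifier stands in for the paper's pRBFNN recognizer, and the scikit-learn digits dataset replaces the Yale/AT&T face images.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# PCA -> LDA preprocessing chain, fit on training data only.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=30).fit(X_tr)                      # dimensionality reduction
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)  # class separation

Z_tr = lda.transform(pca.transform(X_tr))
Z_te = lda.transform(pca.transform(X_te))

clf = KNeighborsClassifier().fit(Z_tr, y_tr)              # stand-in recognizer
acc = clf.score(Z_te, y_te)
print(acc)
```

Fitting both transforms on the training split only, then applying them to the test split, mirrors how the preprocessing part would be deployed ahead of the recognition part.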

Development of Real-time Rainfall Sensor Rainfall Estimation Technique using Optima Rainfall Intensity Technique (Optima Rainfall Intensity 기법을 이용한 실시간 강우센서 강우 산정기법 개발)

  • Lee, Byung Hun;Hwang, Sung Jin;Kim, Byung Sik
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2019.05a
    • /
    • pp.429-429
    • /
    • 2019
  • In recent years, localized and intense heavy rainfall has occurred frequently due to various environmental factors such as abnormal climate, and traffic congestion and road hazards have become serious social problems. Solving these problems requires research on real-time, short-term, mobile rainfall information technology and on methods for utilizing road weather information. This study aimed to develop a technique for producing rainfall information using the rain sensor installed in vehicles for the AW (AutoWiping) function. The rain sensor consists of four channels and collects 250 optical-signal samples per second, producing about 3.6 million data points per hour. Indoor artificial rainfall experiments were conducted at five rainfall levels, and the correlation between the sensor data and rainfall was defined as a W-S-R relationship. A Threshold Map method was developed to compute cumulative values for outdoor data, whose environment and data-generation conditions differ from those of the indoor experiments. To produce accurate rainfall information in real time from the large volume of sensor data, big-data processing techniques were used to average the thresholds of the indoor data by rainfall intensity and channel, generating a 4×5 Threshold Map (4 = channels, 5 = rainfall intensity classes). To select a big-data processing technique suitable for rain-sensor-based rainfall estimation, Gradient Descent and Optima Rainfall Intensity were applied, and the results were verified against ground-observed rainfall. The suitability of Optima Rainfall Intensity was confirmed, and rain-sensor rainfall estimates were produced for eight rainfall events observed in real time.


Movie Recommendation System based on Latent Factor Model (잠재요인 모델 기반 영화 추천 시스템)

  • Ma, Chen;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.1
    • /
    • pp.125-134
    • /
    • 2021
  • With the rapid development of the film industry, the number of films is increasing significantly, and a movie recommendation system can help predict users' preferences based on their past behavior or feedback. This paper proposes a movie recommendation system based on a latent factor model with mean and bias adjustments of the ratings. Singular value decomposition is used to decompose the rating matrix, and stochastic gradient descent is used to optimize the parameters of the least-squares loss function. Root mean square error (RMSE) is used to evaluate the performance of the proposed system, which we implement with the Surprise package. The simulation results show an RMSE of 0.671, indicating good performance compared to other work.
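The latent factor model with mean and bias adjustment can be sketched with plain stochastic gradient descent: each rating is predicted as r̂(u, i) = μ + b_u + b_i + p_u · q_i, and all parameters are nudged along the error gradient per observed rating. The ratings below are a small synthetic matrix, not MovieLens data, and the hyperparameters are illustrative.

```python
import numpy as np

# SGD for a biased latent factor model on a dense synthetic rating matrix.
rng = np.random.default_rng(3)
n_users, n_items, k = 20, 30, 2

R = 3.0 + rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_items))

mu = R.mean()                 # global mean adjustment
bu = np.zeros(n_users)        # per-user bias
bi = np.zeros(n_items)        # per-item bias
P = rng.normal(0, 0.1, (n_users, k))
Q = rng.normal(0, 0.1, (n_items, k))
lr, reg = 0.02, 0.001         # learning rate, L2 regularization

for _ in range(200):          # epochs over all (u, i) pairs
    for u in range(n_users):
        for i in range(n_items):
            e = R[u, i] - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))

pred = mu + bu[:, None] + bi[None, :] + P @ Q.T
print(np.sqrt(np.mean((R - pred) ** 2)))   # training RMSE
```

In practice a library such as Surprise, as used in the paper, performs this loop over only the observed ratings and evaluates RMSE on held-out data.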

Sparse and low-rank feature selection for multi-label learning

  • Lim, Hyunki
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.7
    • /
    • pp.1-7
    • /
    • 2021
  • In this paper, we propose a feature selection technique for multi-label classification. Many existing feature selection techniques select features by computing the relation between features and labels, for example with a mutual information measure. However, since the mutual information measure requires a joint probability that is difficult to estimate from a finite feature set, only a few features can be evaluated and only local optimization is possible. To move away from this local optimization problem, we propose a feature selection technique that constructs a low-rank space over the entire given feature space and selects features with sparsity. To this end, we design a regression-based objective function using the nuclear norm and propose a gradient descent algorithm to solve its optimization problem. In multi-label classification experiments on four datasets with three performance measures, the proposed method outperformed existing feature selection techniques. Experimental results also showed that performance is insensitive to changes in the parameter values of the proposed objective function.
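A standard way to handle a nuclear-norm regularizer with gradient-based optimization is proximal gradient descent: a gradient step on the least-squares loss followed by singular value soft-thresholding, the proximal operator of the nuclear norm. The data below are synthetic, and this is a generic illustration of the technique, not the paper's exact objective.

```python
import numpy as np

# Proximal gradient descent for 0.5 * ||X W - Y||_F^2 + lam * ||W||_*.
rng = np.random.default_rng(4)
n, d, t = 100, 20, 5                        # samples, features, labels
W_true = rng.normal(size=(d, 2)) @ rng.normal(size=(2, t))  # low-rank weights
X = rng.normal(size=(n, d))
Y = X @ W_true

lam = 0.5
eta = 1.0 / np.linalg.norm(X, 2) ** 2       # step size from the Lipschitz constant
W = np.zeros((d, t))
for _ in range(500):
    W = W - eta * X.T @ (X @ W - Y)         # gradient step on the smooth loss
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W = U @ np.diag(np.maximum(s - eta * lam, 0.0)) @ Vt  # soft-threshold singular values

rank_W = np.linalg.matrix_rank(W)
rel = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
print(rank_W, rel)
```

The soft-thresholding step is what drives small singular values exactly to zero, producing the low-rank solution that the sparse feature-scoring step can then exploit.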

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence
    • /
    • v.20 no.4
    • /
    • pp.389-396
    • /
    • 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the commonly used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was used as the activation function, cross-entropy error (CEE) as the loss function, and MNIST as the experimental dataset. The results suggest that 100-300 neurons, the Adam algorithm, and 200 training iterations are the most efficient configuration for this task. This study provides implications for algorithm development and a reference value for the number of neurons given new training data in the future.
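The update rules of the five optimizers compared in the study can be written side by side. They are applied here to a simple quadratic loss so their behavior is easy to check; this mirrors the algorithms themselves, not the paper's MNIST experiment, and the learning rates are illustrative.

```python
import numpy as np

def minimize(update, steps=200):
    """Run an optimizer on f(w) = ||w||^2 from a fixed starting point."""
    w, state = np.array([5.0, -3.0]), {}
    for t in range(1, steps + 1):
        g = 2 * w                      # gradient of f(w) = ||w||^2
        w = update(w, g, state, t)
    return w

def sgd(w, g, s, t, lr=0.1):
    return w - lr * g

def momentum(w, g, s, t, lr=0.1, beta=0.9):
    s['v'] = beta * s.get('v', 0) - lr * g     # velocity accumulates gradients
    return w + s['v']

def adagrad(w, g, s, t, lr=0.5, eps=1e-8):
    s['h'] = s.get('h', 0) + g * g             # sum of squared gradients
    return w - lr * g / (np.sqrt(s['h']) + eps)

def rmsprop(w, g, s, t, lr=0.05, rho=0.9, eps=1e-8):
    s['h'] = rho * s.get('h', 0) + (1 - rho) * g * g   # decayed average
    return w - lr * g / (np.sqrt(s['h']) + eps)

def adam(w, g, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    s['m'] = b1 * s.get('m', 0) + (1 - b1) * g         # first moment
    s['v'] = b2 * s.get('v', 0) + (1 - b2) * g * g     # second moment
    m_hat, v_hat = s['m'] / (1 - b1 ** t), s['v'] / (1 - b2 ** t)  # bias correction
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

for f in (sgd, momentum, adagrad, rmsprop, adam):
    print(f.__name__, np.round(np.linalg.norm(minimize(f)), 4))
```

On this toy loss all five reach the neighborhood of the minimum; the adaptive methods (AdaGrad, RMSProp, Adam) normalize the step per coordinate, which is what makes them behave differently from plain SGD on badly scaled problems like those in the study.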

Intelligent & Predictive Security Deployment in IOT Environments

  • Abdul ghani, ansari;Irfana, Memon;Fayyaz, Ahmed;Majid Hussain, Memon;Kelash, Kanwar;fareed, Jokhio
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.185-196
    • /
    • 2022
  • The Internet of Things (IoT) has become more and more widespread in recent years, so attackers place greater emphasis on IoT environments. The IoT connects a large number of smart devices via wired and wireless networks, incorporating sensors or actuators to produce and share meaningful information. Attackers employ IoT devices as bots to assault target servers; because of their resource limitations, these devices are easily infected with IoT malware. Distributed Denial of Service (DDoS) is one of the many security problems that can arise in an IoT context. A DDoS attack floods a target server with irrelevant requests in an effort to disrupt it fully or partially, blocking legitimate user requests from being processed. We explore an intelligent intrusion detection system (IIDS) using a particular sort of machine learning, Artificial Neural Networks (ANN), to handle and mitigate this type of cyber-attack. In this paper a Feed-Forward Neural Network (FNN) is tested for detecting DDoS attacks using a modified version of the KDD Cup 99 dataset. The aim is to determine the most effective and efficient back-propagation algorithm among several candidates and to check the capability of an ANN-based model as a classifier to counteract cyber-attacks in IoT environments. We found that, except for the Gradient Descent with Momentum algorithm, the success rate obtained by the other three optimized back-propagation algorithms is above 99.00%. The experimental findings show that the accuracy of the proposed ANN method is satisfactory.

Analysis of methods for the model extraction without training data (학습 데이터가 없는 모델 탈취 방법에 대한 분석)

  • Hyun Kwon;Yonggi Kim;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.57-64
    • /
    • 2023
  • In this study, we analyze how to steal a target model without training data. Input data are generated using a generative model, and a similar model is created by defining a loss function that pushes the predicted values of the target model and the similar model close to each other. The similar model is trained by gradient descent using the per-class logit values of the target model for the generated inputs. The TensorFlow machine learning library was used as the experimental environment, CIFAR10 and SVHN were used as datasets, and a ResNet model served as the target. The experiments show that the model stealing method produced similar models with accuracies of 86.18% on CIFAR10 and 96.02% on SVHN, yielding predictions close to the target model's. In addition, considerations on the model stealing method, its military use, and its limitations are analyzed.
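The data-free extraction idea can be shown in a toy form: query inputs are drawn from a generator (here plain random noise), and the similar model is trained by gradient descent so that its outputs match the target model's per-class logits. Both models below are tiny linear maps, standing in for the paper's ResNet target and its trained surrogate.

```python
import numpy as np

# Toy data-free model extraction: match the similar model's logits to the
# target's logits on generated queries, using gradient descent on MSE.
rng = np.random.default_rng(5)
d, c = 16, 10                              # input dim, number of classes

W_target = rng.normal(size=(d, c))         # frozen target model (black box)
W_sim = np.zeros((d, c))                   # similar model to be trained
lr = 0.05

for _ in range(2000):
    x = rng.normal(size=(32, d))           # "generated" query batch
    t = x @ W_target                       # target logits observed via queries
    s = x @ W_sim                          # similar model logits
    W_sim -= lr * x.T @ (s - t) / len(x)   # gradient step on mean squared logit error

x = rng.normal(size=(1000, d))             # fresh queries for evaluation
agree = np.mean((x @ W_sim).argmax(1) == (x @ W_target).argmax(1))
print(agree)
```

Prediction agreement on fresh queries is the analogue of the accuracy figures reported in the abstract; with deep models, the hard part the paper addresses is making the generator produce queries informative enough for this matching to work.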