• Title/Summary/Keyword: gradient descent optimization


Fraud Detection in E-Commerce

  • Alqethami, Sara;Almutanni, Badriah;AlGhamdi, Manal
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.200-206 / 2021
  • Fraud in e-commerce transactions has increased over the last decade, especially with the growing number of online stores and the lockdown that forced more people to pay for services and groceries online using their credit cards. Several machine learning methods have been proposed to detect fraudulent transactions. Neural networks showed promising results, but they have a few drawbacks that can be overcome using optimization methods. There are two categories of learning optimization methods: first-order methods, which use gradient information to construct the next training iteration, and second-order methods, which use Hessian (second-derivative) information to compute the iteration based on the optimization trajectory. There are also training refinement procedures that aim to enhance the original accuracy while potentially reducing the model size. This paper investigates the performance of several neural network models in detecting fraud in e-commerce transactions. The backpropagation model, which is classified as a first-order learning algorithm, achieved the best accuracy (96%) among all the models.
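
A minimal sketch of the first-order versus second-order distinction drawn in the abstract above, using a toy quadratic loss (the test function, step size, and variable names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 * w^T A w - b^T w (illustrative only).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(w):
    """First-order information: the gradient of f at w."""
    return A @ w - b

def hessian(w):
    """Second-order information: the Hessian of f (constant for a quadratic)."""
    return A

w = np.zeros(2)
lr = 0.1

# First-order update: step along the negative gradient.
w_first = w - lr * grad(w)

# Second-order (Newton-type) update: rescale the gradient by the inverse Hessian.
w_newton = w - np.linalg.solve(hessian(w), grad(w))

print("gradient step:", w_first)
print("Newton step:  ", w_newton)  # exact minimizer A^{-1} b for a quadratic
```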

SHADOWING PROPERTY FOR ADMM FLOWS

  • Yoon Mo Jung;Bomi Shin;Sangwoon Yun
    • Journal of the Korean Mathematical Society / v.61 no.2 / pp.395-408 / 2024
  • There have been numerous studies on the characteristics of the solutions of ordinary differential equations associated with optimization methods, including gradient descent methods and the alternating direction method of multipliers. To justify computer simulation of ODE solutions, pseudo-orbits need to be traced by real orbits; this is called the shadowing property in dynamics. In this paper, we demonstrate that the flow induced by the alternating direction method of multipliers (ADMM) for a C² strongly convex objective function has the eventual shadowing property. For the converse, we give a partial answer: convexity together with the eventual shadowing property guarantees a unique minimizer. In contrast, we show that the flow generated by a second-order ODE, which is related to the accelerated version of ADMM, does not have the eventual shadowing property.
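
For reference, the classical (discrete-time) shadowing property the abstract alludes to is stated below; the paper's eventual shadowing property for the ADMM flow is a variant of this notion, and its exact definition is not reproduced here.

```latex
% A sequence {x_k} is a \delta-pseudo-orbit of a map f if d(f(x_k), x_{k+1}) < \delta for all k.
% The system has the shadowing property if every sufficiently fine pseudo-orbit is traced by a true orbit:
\[
\forall \varepsilon > 0 \;\; \exists \delta > 0 : \quad
d\bigl(f(x_k), x_{k+1}\bigr) < \delta \ \ \forall k \ge 0
\;\;\Longrightarrow\;\;
\exists y : \ d\bigl(f^{k}(y), x_k\bigr) < \varepsilon \ \ \forall k \ge 0 .
\]
```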

Comparison of Different Deep Learning Optimizers for Modeling Photovoltaic Power

  • Poudel, Prasis;Bae, Sang Hyun;Jang, Bongseog
    • Journal of Integrative Natural Science / v.11 no.4 / pp.204-208 / 2018
  • This paper compares the performance of different optimizers for photovoltaic power modeling using artificial neural network deep learning techniques. Six deep learning optimizers are tested with Long Short-Term Memory (LSTM) networks in this study, namely Adam, Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), Adaptive Gradient (AdaGrad), and the variants Adamax and Nadam. To compare the optimization techniques, both highly and weakly fluctuating photovoltaic power outputs are examined; the power data are real measurements obtained from a site at Mokpo University. A prediction program was developed in Python with Keras to evaluate the optimizers. The prediction errors of each optimizer in both the high- and low-fluctuation cases show that Adam performs better than the other optimizers.
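
A minimal Keras sketch of the kind of comparison described above: the same small LSTM trained with each of the six optimizers and scored on held-out data. The synthetic series, window length, and layer sizes are placeholder assumptions, not the paper's Mokpo University dataset or network configuration:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: sliding windows over a synthetic, PV-like nonnegative series
# (the paper uses measured photovoltaic output; this only stands in for it).
rng = np.random.default_rng(0)
series = np.clip(np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000), 0, None)
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]
split = int(0.8 * len(X))

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])

# The six optimizers compared in the paper, by their Keras string names.
for name in ["adam", "sgd", "rmsprop", "adagrad", "adamax", "nadam"]:
    model = build_model()
    model.compile(optimizer=name, loss="mse")
    model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
    mse = model.evaluate(X[split:], y[split:], verbose=0)
    print(f"{name:8s} test MSE: {mse:.5f}")
```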

Study on the Effective Compensation of Quantization Error for Machine Learning in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 효율적인 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk
    • Journal of Broadcast Engineering / v.25 no.2 / pp.157-165 / 2020
  • In this paper, we propose an effective compensation scheme for the quantization error arising from quantized learning in machine learning on an embedded system. In machine learning based on gradient descent or nonlinear signal processing, the quantization error causes early vanishing of the gradient and degrades learning performance. To compensate for such quantization error, we derive an orthogonal compensation vector with respect to the maximum component of the gradient vector. Moreover, instead of the conventional constant learning rate, we propose an adaptive learning rate algorithm, based on a nonlinear optimization technique, that selects the step size without any inner loop. The simulation results show that the optimization solver based on the proposed quantized method achieves sufficient learning performance.
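
The paper's specific construction (a compensation vector orthogonal with respect to the largest gradient component, plus an adaptive step size) is not reproduced here; the sketch below only illustrates the general idea of feeding the quantization residual back into later steps so that coarsely quantized gradients do not vanish prematurely. The uniform quantizer, step sizes, and toy objective are assumptions for illustration:

```python
import numpy as np

def quantize(v, step=0.125):
    """Uniform quantizer standing in for low-precision embedded arithmetic."""
    return np.round(v / step) * step

def quantized_sgd_with_compensation(grad_fn, w, lr=0.1, steps=200):
    residual = np.zeros_like(w)      # accumulated quantization error
    for _ in range(steps):
        g = grad_fn(w)
        g_comp = g + residual        # add back the error carried from earlier steps
        g_q = quantize(g_comp)       # what the quantized learner actually uses
        residual = g_comp - g_q      # carry the new quantization error forward
        w = w - lr * g_q
    return w

# Toy strongly convex objective f(w) = 0.5 * ||w - target||^2 (illustrative only).
target = np.array([0.3, -0.7, 1.1])
print(quantized_sgd_with_compensation(lambda w: w - target, np.zeros(3)))
```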

Digital signal change through artificial intelligence machine learning method comparison and learning (인공지능 기계학습 방법 비교와 학습을 통한 디지털 신호변화)

  • Yi, Dokkyun;Park, Jieun
    • Journal of Digital Convergence / v.17 no.10 / pp.251-258 / 2019
  • In the future, various products will be created in various fields using artificial intelligence. In this age, it is very important to understand the operating principles of artificial intelligence learning methods and to use them correctly. This paper introduces the artificial intelligence learning methods known so far. Learning in artificial intelligence is based on the mathematical fixed-point iteration method: the GD (Gradient Descent) method, which adjusts the convergence speed based on fixed-point iteration; the Momentum method, which accumulates the gradients; and finally the Adam method, which mixes these methods. This paper describes the advantages and disadvantages of each method. In particular, the Adam method, having adaptivity, controls the learning ability of machine learning. We also analyze how these methods affect digital signals. The changes that digital signals undergo during the learning process are the basis for accurate application and accurate judgment in future work and research using artificial intelligence.
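
The three update rules the abstract compares, written in their standard textbook form (generic notation, not taken from the paper): plain gradient descent, momentum, and Adam with bias correction.

```latex
\begin{align*}
\text{GD:}\quad       & w_{t+1} = w_t - \eta\,\nabla f(w_t) \\
\text{Momentum:}\quad & v_{t+1} = \gamma v_t + \eta\,\nabla f(w_t), \qquad
                        w_{t+1} = w_t - v_{t+1} \\
\text{Adam:}\quad     & m_{t+1} = \beta_1 m_t + (1-\beta_1)\,\nabla f(w_t), \qquad
                        s_{t+1} = \beta_2 s_t + (1-\beta_2)\,\nabla f(w_t)^{\,2}, \\
                      & \hat m_{t+1} = \tfrac{m_{t+1}}{1-\beta_1^{t+1}}, \quad
                        \hat s_{t+1} = \tfrac{s_{t+1}}{1-\beta_2^{t+1}}, \quad
                        w_{t+1} = w_t - \eta\,\tfrac{\hat m_{t+1}}{\sqrt{\hat s_{t+1}}+\epsilon}
\end{align*}
```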

Iterative Feedback Tuning for Positive Feedback Time Delay Controller

  • Tsang Kai-Ming;Rad Ahmad B.;Chan Wai-Lok
    • International Journal of Control, Automation, and Systems / v.3 no.4 / pp.640-645 / 2005
  • Closed-loop, model-free optimization of positive feedback time delay controllers for dominant time delay systems is presented. Iterative feedback tuning (IFT) is applied to the tuning of the positive feedback time delay controller. Three experiments are carried out to perform the model-free gradient descent optimization. The initial controller parameters and the duration used in specifying the cost function are suggested. The effects of the step size, filter function and time weighting function on the performance of the optimized controller are given. Simulation and experimental studies are included to demonstrate the effectiveness of the tuning scheme.
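
For context, the standard IFT parameter update in the usual generic notation is given below; how the three closed-loop experiments assemble the gradient estimate for the positive feedback time delay controller is specific to the paper and is not reproduced here.

```latex
\[
\rho_{i+1} \;=\; \rho_i \;-\; \gamma_i\, R_i^{-1}\,
\widehat{\frac{\partial J}{\partial \rho}}(\rho_i),
\qquad
J(\rho) \;=\; \frac{1}{2N}\,\mathbb{E}\!\left[\sum_{t=1}^{N} \tilde y_t(\rho)^{2}\right],
\]
% where \tilde y_t is the (filtered, possibly time-weighted) deviation of the closed-loop
% output from the desired response, \gamma_i is the step size, and R_i is a positive
% definite scaling matrix (e.g. an approximate Hessian).
```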

A new training method of multilayer neural networks using a hybrid of backpropagation algorithm and dynamic tunneling system (후향전파 알고리즘과 동적터널링 시스템을 조합한 다층신경망의 새로운 학습방법)

  • 조용현
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.201-208 / 1996
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the backpropagation algorithm and a dynamic tunneling system. The backpropagation algorithm, which is a fast gradient descent method, is applied for high-speed optimization. The dynamic tunneling system, which is a deterministic method with a tunneling phenomenon, is applied for global optimization. When the backpropagation algorithm converges to a local minimum, an approximate initial point for escaping the local minimum is estimated by the dynamic tunneling system. In pattern classification experiments, the simulation results show that the performance of the proposed method is superior to that of the backpropagation algorithm with randomized initial point settings.
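
A rough sketch of the alternation the abstract describes: a gradient descent (backpropagation) phase that settles into a local minimum, followed by a phase that looks for a better restart point. The random search used for the second phase here is only a stand-in for the paper's deterministic dynamic tunneling system, and the toy loss, step sizes, and trial counts are assumptions:

```python
import numpy as np

def hybrid_train(loss, grad, w0, lr=0.05, gd_steps=500, tunnel_trials=50, radius=1.0, rounds=5):
    """Alternate a gradient-descent phase with a crude escape phase.

    The paper couples backpropagation with a deterministic dynamic tunneling
    system; the random search below only stands in for that tunneling step,
    to show the overall alternation, not the actual tunneling dynamics."""
    rng = np.random.default_rng(0)
    w = np.asarray(w0, dtype=float)
    for _ in range(rounds):
        # Phase 1: gradient descent (backpropagation) down to a local minimum.
        for _ in range(gd_steps):
            w = w - lr * grad(w)
        # Phase 2: look for a nearby point with lower loss to restart from.
        best = loss(w)
        for _ in range(tunnel_trials):
            cand = w + radius * rng.standard_normal(w.shape)
            if loss(cand) < best:
                w, best = cand, loss(cand)
                break
    return w

# Toy multimodal loss with many local minima (illustrative only).
loss = lambda w: np.sum(w ** 2) + 2.0 * np.sum(np.cos(3.0 * w))
grad = lambda w: 2.0 * w - 6.0 * np.sin(3.0 * w)
print(hybrid_train(loss, grad, np.array([2.5, -2.5])))
```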

An analysis of discursive constructs of AI-based mathematical objects used in the optimization content of AI mathematics textbooks (인공지능 수학교과서의 최적화 내용에서 사용하는 인공지능 기반 수학적 대상들에 대한 담론적 구성 분석)

  • Young-Seok Oh;Dong-Joong Kim
    • The Mathematical Education / v.63 no.2 / pp.319-334 / 2024
  • The purpose of this study was to reveal the discursive constructs of AI-based mathematical objects by analyzing how concrete objects used in the optimization content of AI mathematics textbooks are transformed into discursive objects through naming and discursive operation. For this purpose, we extracted concrete objects used in the optimization contents of five high school AI mathematics textbooks and developed a framework for analyzing the discursive constructs and discursive operations of AI-based mathematical objects that can analyze discursive objects. The results of the study showed that there are a total of 15 concrete objects used in the loss function and gradient descent sections of the optimization content, and one concrete object that emerges as an abstract d-object through naming and discursive operation. The findings of this study are not only significant in that they flesh out the discursive construction of AI-based mathematical objects in terms of the written curriculum and provide practical suggestions for students to develop AI-based mathematical discourse in an exploratory way, but also provide implications for the development of effective discursive construction processes and curricula for AI-based mathematical objects.

A New Design of Signal Constellation of the Spiral Quadrature Amplitude Modulation (나선 직교진폭변조 신호성상도의 새로운 설계)

  • Li, Shuang;Kang, Seog Geun
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.398-404 / 2020
  • In this paper, we propose a new design method for the signal constellation of spiral quadrature amplitude modulation (QAM) exploiting a modified gradient descent search algorithm, together with a binary mapping rule for it. Unlike the conventional method, the new method, which uses the constellation optimization algorithm with the maximum number of iterations as a parameter for the iterative design, is more robust to phase noise. In addition, the proposed binary mapping rule significantly reduces the average Hamming distance of the spiral constellation. As a result, the proposed spiral QAM constellation has much improved error performance compared to the conventional ones, even in a very severe phase noise environment. It is, therefore, considered that the proposed QAM may be a useful modulation format for coherent optical communication systems and orthogonal frequency division multiplexing (OFDM) systems.
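
A generic sketch of gradient-descent constellation refinement under an average-power constraint, for orientation only: the pairwise inverse-distance cost, spiral seed, learning rate, and iteration budget are illustrative assumptions and do not reproduce the paper's modified search algorithm or its binary mapping rule:

```python
import numpy as np

def refine_constellation(points, iters=3000, lr=1e-4):
    """Gradient descent on a pairwise inverse-squared-distance potential,
    renormalized to unit average symbol energy after every step."""
    pts = points.astype(complex).copy()
    M = len(pts)
    for _ in range(iters):
        g = np.zeros(M, dtype=complex)
        for i in range(M):
            diff = pts[i] - np.delete(pts, i)
            d2 = np.abs(diff) ** 2
            g[i] = np.sum(-2.0 * diff / d2 ** 2)   # gradient of sum_j 1/|p_i - p_j|^2
        pts = pts - lr * g                          # descent step pushes points apart
        pts /= np.sqrt(np.mean(np.abs(pts) ** 2))   # keep unit average power
    return pts

# Illustrative spiral seed for a 16-point constellation.
k = np.arange(16)
seed = (0.3 + 0.15 * k) * np.exp(1j * 0.9 * k)
optimized = refine_constellation(seed)
print(np.round(optimized, 3))
```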

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.647-654 / 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and the accuracy of deep learning methods is the basis for making artificial intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive user data, such as photographs and voices, will hold the data indefinitely. Users cannot delete their data and cannot limit the purposes for which it is used. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we design a privacy-preserving deep learning technique that allows multiple workers to use a neural network model jointly in a multi-party system without sharing their input datasets. We propose a method that selectively shares small subsets using an optimization algorithm based on modified stochastic gradient descent, and we confirm that it can facilitate training with increased learning accuracy while protecting private information.
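
A generic sketch of selective sharing on top of stochastic gradient descent, in the spirit of the abstract above: each party computes a gradient on its private data and uploads only a small, sparse subset to a coordinating server. The top-k selection rule, the least-squares toy model, and the parameter names are assumptions for illustration, not the paper's exact modified algorithm:

```python
import numpy as np

def top_fraction(grad, frac=0.1):
    """Keep only the largest-magnitude fraction of gradient entries: the small
    subset a worker chooses to share. Everything else stays local."""
    k = max(1, int(frac * grad.size))
    idx = np.argpartition(np.abs(grad.ravel()), -k)[-k:]
    shared = np.zeros_like(grad.ravel())
    shared[idx] = grad.ravel()[idx]
    return shared.reshape(grad.shape)

def multiparty_round(global_w, parties, lr=0.1, frac=0.1):
    """One training round: every party computes a local gradient on its private
    data and uploads only a sparse selection; the server averages the uploads."""
    uploads = []
    for X, y in parties:                      # private (X, y) never leaves the party
        residual = X @ global_w - y
        g = X.T @ residual / len(y)           # gradient of a toy least-squares loss
        uploads.append(top_fraction(g, frac))
    return global_w - lr * np.mean(uploads, axis=0)

# Illustrative run with three parties holding disjoint synthetic data.
rng = np.random.default_rng(0)
true_w = rng.standard_normal(20)
parties = []
for _ in range(3):
    X = rng.standard_normal((200, 20))
    parties.append((X, X @ true_w + 0.01 * rng.standard_normal(200)))
w = np.zeros(20)
for _ in range(300):
    w = multiparty_round(w, parties)
print("distance to true weights:", np.linalg.norm(w - true_w))
```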