• Title/Summary/Keyword: Learning Speed


A Case Study of Rapid AI Service Deployment - Iris Classification System

  • Yonghee LEE
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.4
    • /
    • pp.29-34
    • /
    • 2023
  • The flow from developing a machine learning model to deploying it in a production environment presents challenges. Efficient and reliable deployment is critical for realizing the true value of machine learning models, and bridging this gap between development and deployment has become a pivotal concern in the machine learning community. FastAPI, a modern and fast web framework for building APIs with Python, has gained substantial popularity for its speed, ease of use, and asynchronous capabilities. This paper focused on leveraging FastAPI for deploying machine learning models, addressing questions of integration, scalability, and performance in a production setting. In this work, we explored the seamless integration of machine learning models into FastAPI applications, enabling real-time predictions and showing the possibility of scaling up to a more diverse range of use cases. We discussed the intricacies of integrating popular machine learning frameworks with FastAPI, ensuring smooth interaction between data processing, model inference, and API responses. This study elucidated the integration of machine learning models into production environments using FastAPI, exploring its capabilities, features, and best practices. We delved into the potential of FastAPI to provide a robust and efficient solution for deploying machine learning systems: handling real-time predictions, managing input/output data, and ensuring optimal performance and reliability.

Implementation of Intelligent Agent Based on Reinforcement Learning Using Unity ML-Agents (유니티 ML-Agents를 이용한 강화 학습 기반의 지능형 에이전트 구현)

  • Young-Ho Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.205-211
    • /
    • 2024
  • The purpose of this study is to implement an agent that intelligently performs tracking and movement through reinforcement learning using the Unity engine and ML-Agents. We conducted an experiment comparing the learning performance of training one agent in a single simulation environment with that of training several agents in parallel in a multi-simulation environment. The experimental results confirmed that the parallel training method is about 4.9 times faster than the single training method in terms of learning speed, and that learning is more stable and effective in terms of learning stability.
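
The speedup reported above comes from several simulation copies generating experience for a single learner at the same time. The toy tabular Q-learning sketch below (plain Python, not ML-Agents code; the 1-D chase task and all hyperparameters are invented for illustration) shows the mechanism: with `num_envs` environment copies, every training iteration collects `num_envs` transitions into one shared value table instead of one.

```python
import random

ACTIONS = (-1, 1)
TARGET = 5

def step(pos, action):
    # Toy stand-in for the tracking task: move along a line toward a target.
    new_pos = max(0, min(10, pos + action))
    reward = 1.0 if new_pos == TARGET else -0.1
    return new_pos, reward

def train(num_envs, iterations=200, alpha=0.5, gamma=0.9, eps=0.2):
    random.seed(0)
    q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}
    positions = [0] * num_envs
    for _ in range(iterations):
        for i in range(num_envs):  # each env copy contributes a transition
            s = positions[i]
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            positions[i] = 0 if s2 == TARGET else s2  # episode reset at target
    return q

q_parallel = train(num_envs=8)  # 8 envs: 8x the experience per iteration
```

In ML-Agents itself this corresponds to launching several simulation environments feeding one trainer; the roughly linear growth in transitions per unit of wall-clock time is what produces the reported speedup.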

An Improved Domain-Knowledge-based Reinforcement Learning Algorithm

  • Jang, Si-Young;Suh, Il-Hong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1309-1314
    • /
    • 2003
  • If an agent can learn using prior knowledge, it is expected to speed up learning through its interaction with the environment. In this paper, we present an improved reinforcement learning algorithm using domain knowledge, represented by problem-independent features and their classifiers; here, neural networks are employed as the knowledge classifiers. To show the validity of our proposed algorithm, computer simulations are presented for the navigation problems of a mobile robot and a micro aerial vehicle (MAV).
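
One simple reading of "learning with domain knowledge" is to let a knowledge source bias the agent's initial value estimates. The sketch below is an assumption-laden illustration, not the paper's algorithm: a hand-coded distance-to-goal score stands in for the neural-network knowledge classifiers, and seeding the Q-table with it lets tabular Q-learning on a small corridor reach the goal far sooner than learning from scratch.

```python
import random

GOAL = 9

def knowledge_potential(state):
    # Stand-in for a knowledge classifier: states closer to the goal score higher.
    return -abs(GOAL - state)

def train(use_knowledge, episodes=40, alpha=0.5, gamma=0.95, eps=0.1, step_cap=10_000):
    random.seed(1)
    if use_knowledge:
        # Bias initial value estimates with the domain-knowledge scores.
        q = {(s, a): float(knowledge_potential(max(0, min(9, s + a))))
             for s in range(10) for a in (-1, 1)}
    else:
        q = {(s, a): 0.0 for s in range(10) for a in (-1, 1)}
    steps = 0
    for _ in range(episodes):
        s = 0
        while s != GOAL and steps < step_cap:
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda x: q[(s, x)])
            s2 = max(0, min(9, s + a))
            r = 1.0 if s2 == GOAL else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in (-1, 1)) - q[(s, a)])
            s = s2
            steps += 1
    return steps

steps_plain = train(use_knowledge=False)   # blind exploration burns the step budget
steps_guided = train(use_knowledge=True)   # knowledge-seeded values point toward the goal
```

The plain learner sees no reward until it stumbles onto the goal, while the guided one follows the knowledge gradient from the first episode, which is the speedup effect the abstract describes.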


ACCELERATION OF MACHINE LEARNING ALGORITHMS BY TCHEBYCHEV ITERATION TECHNIQUE

  • LEVIN, MIKHAIL P.
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.22 no.1
    • /
    • pp.15-28
    • /
    • 2018
  • Machine Learning algorithms are now widely used to process Big Data in various applications, and many of these applications run in real time, so the speed of Machine Learning algorithms is a critical issue. However, most modern iterative Machine Learning algorithms use a successive iteration technique well known in Numerical Linear Algebra. This technique converges slowly, needs many iterations to reach a solution, and therefore requires a lot of processing time even on modern multi-core computers and clusters. The Tchebychev iteration technique, also well known in Numerical Linear Algebra, is an attractive candidate for decreasing the number of iterations in iterative Machine Learning algorithms, and thereby their running time, which is especially important in run-time applications. In this paper we consider the use of Tchebychev iterations to accelerate the well-known K-Means and SVM (Support Vector Machine) algorithms in Machine Learning. Examples of the usage of our approach on modern multi-core computers under the Apache Spark framework are considered and discussed.
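
The contrast the abstract draws can be reproduced on a small linear system: plain successive (Richardson) iteration versus Chebyshev (Tchebychev) iteration, which uses bounds on the spectrum to choose its step sizes. The example below is a generic numerical illustration of that acceleration; the test matrix, spectral bounds, and tolerances are illustrative, not the paper's K-Means or SVM formulation.

```python
import numpy as np

def richardson(A, b, lmin, lmax, tol=1e-8, maxiter=500):
    alpha = 2.0 / (lmin + lmax)   # optimal fixed step for spectrum [lmin, lmax]
    x = np.zeros_like(b)
    for k in range(1, maxiter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + alpha * r
    return x, maxiter

def chebyshev(A, b, lmin, lmax, tol=1e-8, maxiter=500):
    d = (lmax + lmin) / 2.0       # center of the spectrum
    c = (lmax - lmin) / 2.0       # half-width of the spectrum
    x = np.zeros_like(b)
    r = b - A @ x
    p, alpha = None, 0.0
    for k in range(1, maxiter + 1):
        if np.linalg.norm(r) < tol:
            return x, k
        if k == 1:
            p, alpha = r.copy(), 1.0 / d
        else:
            # Chebyshev three-term recurrence for the step parameters.
            beta = 0.5 * (c * alpha) ** 2 if k == 2 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)
    return x, maxiter

A = np.diag([1.0, 2.0, 5.0, 10.0])   # eigenvalues known to lie in [1, 10]
b = np.ones(4)
x_rich, iters_rich = richardson(A, b, 1.0, 10.0)
x_cheb, iters_cheb = chebyshev(A, b, 1.0, 10.0)
```

With the spectrum bounded in [1, 10], Richardson contracts the residual by about 9/11 per sweep, while Chebyshev approaches (sqrt(10) - 1)/(sqrt(10) + 1) ≈ 0.52 — which is where the reduction in iteration count comes from.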

Face recognition Based on Super-resolution Method Using Sparse Representation and Deep Learning (희소표현법과 딥러닝을 이용한 초고해상도 기반의 얼굴 인식)

  • Kwon, Ohseol
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.2
    • /
    • pp.173-180
    • /
    • 2018
  • This paper proposes a method to improve the performance of face recognition on low-resolution facial images via a super-resolution method using sparse representation and deep learning. Recently, there has been much research on ultra-high-resolution images using deep learning techniques, but real-time face recognition is still a work in progress. In this paper, we combine sparse representation and deep learning to generate super-resolution images and thereby improve face recognition performance. We have also improved processing speed by applying the sparse representation in a parallel structure. Finally, experimental results show that the proposed method is superior to conventional methods on various images.

A Study on the Learning Efficiency of Multilayered Neural Networks using Variable Slope (기울기 조정에 의한 다층 신경회로망의 학습효율 개선방법에 대한 연구)

  • 이형일;남재현;지선수
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.20 no.42
    • /
    • pp.161-169
    • /
    • 1997
  • A variety of learning methods are used for neural networks. Among them, the backpropagation algorithm is the most widely used in areas such as image processing, speech recognition, and pattern recognition. Despite its popularity in these applications, its main problem is running time: too much time is spent on learning. This paper suggests a method that maximizes the convergence speed of learning. Such a reduction in the learning time of the backpropagation algorithm is achieved by adaptively adjusting the slope of the activation function according to the total error, which we name the variable slope algorithm. Experimental results comparing the variable slope algorithm against the conventional backpropagation algorithm and other variations show an improvement in performance over previous algorithms.
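
The idea of a variable slope can be illustrated on a single sigmoid neuron: the slope of the activation is re-scaled from the total error after each epoch, which scales the backpropagated gradient while the error is large. The schedule used below (slope = 1 + total error), the task (logical AND), and all hyperparameters are illustrative stand-ins; the paper's exact adjustment rule is not reproduced here.

```python
import math

def sigmoid(z, lam):
    # Activation with adjustable slope lam: sigmoid(lam * z).
    return 1.0 / (1.0 + math.exp(-lam * z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0
lam, lr = 1.0, 1.0

for epoch in range(2000):
    total_error = 0.0
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        out = sigmoid(z, lam)
        err = out - y
        total_error += err * err
        # dE/dw_j = err * lam * out * (1 - out) * x_j  (slope scales the gradient)
        grad_common = err * lam * out * (1.0 - out)
        w[0] -= lr * grad_common * x[0]
        w[1] -= lr * grad_common * x[1]
        b -= lr * grad_common
    lam = 1.0 + total_error  # hypothetical schedule: steeper slope while error is large

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b, lam)) for x, _ in data]
```

While the total error is large the steeper slope enlarges the gradient and speeds up early learning; as the error shrinks, the slope relaxes back toward 1.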


Torque Ripple Minimization of PMSM Using Parameter Optimization Based Iterative Learning Control

  • Xia, Changliang;Deng, Weitao;Shi, Tingna;Yan, Yan
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.425-436
    • /
    • 2016
  • In this paper, a parameter optimization based iterative learning control strategy is presented for permanent magnet synchronous motor (PMSM) control. The paper analyzes the mechanism by which iterative learning control suppresses PMSM torque ripple and discusses the impact of the controller parameters on the steady-state and dynamic performance of the system. Based on this analysis, an optimization problem is constructed, and the expression for the optimal controller parameter is obtained so that the controller parameter can be adjusted online. Experimental research is carried out on a 5.2 kW PMSM. The results show that the proposed parameter optimization based iterative learning control achieves lower torque ripple during steady-state operation and a short regulating time in dynamic response, thus satisfying the demands on both the steady-state and dynamic performance of the speed regulating system.
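
The core of iterative learning control is that the same trajectory repeats trial after trial, so the input can be corrected with the previous trial's tracking error. Below is a minimal P-type ILC sketch on a toy first-order plant; the plant numbers and the learning gain `gamma` (the kind of parameter the paper optimizes online) are illustrative, not the paper's PMSM model.

```python
import math

A_PLANT, B_PLANT, T = 0.3, 0.5, 50
# Periodic reference, analogous to the per-revolution torque profile.
ref = [math.sin(2 * math.pi * t / T) for t in range(T + 1)]

def run_trial(u):
    # One repetition of the task: y[t+1] = a*y[t] + b*u[t], starting from rest.
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = A_PLANT * y[t] + B_PLANT * u[t]
    return y

def ilc(gamma, iterations=30):
    u = [0.0] * T
    max_errs = []
    for _ in range(iterations):
        y = run_trial(u)
        e = [ref[t] - y[t] for t in range(T + 1)]
        max_errs.append(max(abs(v) for v in e))
        # P-type ILC update: correct each input with the next-step tracking error.
        u = [u[t] + gamma * e[t + 1] for t in range(T)]
    return max_errs

errs = ilc(gamma=1.0)   # peak tracking error per iteration, decreasing trial by trial
```

Convergence requires |1 - gamma*b| < 1 for this plant; a larger gain learns faster but tolerates less model error, which is exactly the steady-state/dynamics trade-off the paper's parameter optimization addresses.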

NETLA Based Optimal Synthesis Method of Binary Neural Network for Pattern Recognition

  • Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.216-221
    • /
    • 2004
  • This paper describes an optimal synthesis method of binary neural networks for pattern recognition. Our objective is to minimize the number of connections and the number of neurons in the hidden layer by using a Newly Expanded and Truncated Learning Algorithm (NETLA) for multilayered neural networks. The synthesis method in NETLA uses the Expanded Sum of Products (ESP) of the Boolean expressions and is based on the multilayer perceptron. It can optimize a given binary neural network in the binary space without any iterative learning, unlike the conventional Error Back Propagation (EBP) algorithm. Furthermore, NETLA can reduce the number of required hidden-layer neurons and connections, so this learning algorithm can speed up training for pattern recognition problems. The superiority of NETLA over other learning algorithms is demonstrated by a practical application to the approximation problem of a circular region.

Optimal Heating Load Identification using a DRNN (DRNN을 이용한 최적 난방부하 식별)

  • Chung, Kee-Chull;Yang, Hai-Won
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.10
    • /
    • pp.1231-1238
    • /
    • 1999
  • This paper presents an approach to optimal heating load identification using Diagonal Recurrent Neural Networks (DRNN). The DRNN captures the dynamic nature of a system, and since it is not fully connected, training is much faster than for a fully connected recurrent neural network. The architecture of the DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, where the hidden layer is comprised of self-recurrent neurons, each feeding its output only into itself. In this study, dynamic backpropagation (DBP) with the delta-bar-delta learning method is used to train an optimal heating load identifier. The delta-bar-delta learning method is an empirical method that gradually adapts the learning rate during the training period in order to improve accuracy in a short time. Simulation results based on experimental data show that the proposed model is superior to the other methods in most cases, with regard to both learning speed and identification accuracy.
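
The delta-bar-delta rule mentioned above gives every weight its own learning rate, raised additively while an exponentially smoothed gradient keeps its sign and cut multiplicatively when the sign flips. Below is a sketch of the rule on a decoupled quadratic objective; the gain settings and the objective are illustrative choices, not the paper's DRNN training setup.

```python
KAPPA, PHI, THETA = 0.002, 0.1, 0.7   # increase step, decay factor, smoothing

curvature = [1.0, 4.0]                # decoupled quadratic: f(w) = 0.5 * sum(c * w^2)
w = [1.0, 1.0]
eta = [0.05, 0.05]                    # one learning rate per weight
delta_bar = [0.0, 0.0]                # smoothed gradient per weight
losses = []

for epoch in range(100):
    losses.append(0.5 * sum(c * wi * wi for c, wi in zip(curvature, w)))
    for j in range(len(w)):
        delta = curvature[j] * w[j]           # gradient component for weight j
        if delta_bar[j] * delta > 0:
            eta[j] += KAPPA                   # consistent sign: grow linearly
        elif delta_bar[j] * delta < 0:
            eta[j] *= (1.0 - PHI)             # sign flip: shrink geometrically
        delta_bar[j] = (1.0 - THETA) * delta + THETA * delta_bar[j]
        w[j] -= eta[j] * delta
```

The asymmetric schedule (linear growth, geometric decay) is what lets the rate climb on long, consistent error slopes yet back off quickly when the weight starts to oscillate, which is the "improve accuracy in a short time" behavior the abstract refers to.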


Batch-mode Learning in Neural Networks (신경회로망에서 일괄 학습)

  • 김명찬;최종호
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.3
    • /
    • pp.503-511
    • /
    • 1995
  • A batch-mode algorithm is proposed to increase the speed of learning in the error backpropagation algorithm, with a variable learning rate and variable momentum parameters, for classification problems. The objective function is normalized with respect to the number of patterns and output nodes. The gradient of the objective function is also normalized when updating the connection weights, to increase the effect of the backpropagated error. The learning rate and momentum parameters are determined from a function of the gradient norm and the number of weights: the learning rate depends on the square root of the gradient norm, while the momentum parameters depend on the gradient norm. In two typical classification problems, simulation results demonstrate the effectiveness of the proposed algorithm.
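
A rough sketch of the scheme described above: a batch objective normalized by the pattern count, the weight update taken along the normalized gradient, and the learning rate and momentum derived from the gradient norm. The specific functional forms below (learning rate proportional to the square root of the gradient norm, momentum proportional to the norm and capped) are stand-ins consistent with the abstract, not the paper's exact formulas, and the single-output regression task is invented for illustration.

```python
import math

def loss_and_grad(w, data):
    # Mean squared error, normalized by the number of patterns (one output node).
    n = len(data)
    loss, grad = 0.0, [0.0] * len(w)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        loss += err * err / n
        for j, xj in enumerate(x):
            grad[j] += 2.0 * err * xj / n
    return loss, grad

data = [((1.0, 1.0), 3.0), ((1.0, 0.0), 1.0), ((0.0, 1.0), 2.0), ((1.0, 2.0), 5.0)]
w, velocity, losses = [0.0, 0.0], [0.0, 0.0], []

for epoch in range(200):                      # one update per full batch
    loss, g = loss_and_grad(w, data)
    losses.append(loss)
    gnorm = math.sqrt(sum(v * v for v in g))
    if gnorm == 0.0:
        break
    eta = 0.05 * math.sqrt(gnorm)             # learning rate from sqrt of gradient norm
    mu = min(0.5, 0.5 * gnorm)                # momentum from gradient norm, capped
    for j in range(len(w)):
        velocity[j] = mu * velocity[j] - eta * g[j] / gnorm   # normalized gradient step
        w[j] += velocity[j]
```

Because both parameters shrink with the gradient norm, the update takes large, heavily smoothed steps early on and anneals itself as the batch error flattens out.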
