• Title/Summary/Keyword: distributed learning


Load Balancing Scheme for Machine Learning Distributed Environment (기계학습 분산 환경을 위한 부하 분산 기법)

  • Kim, Younggwan; Lee, Jusuk; Kim, Ajung; Hong, Jiman
    • Smart Media Journal, v.10 no.1, pp.25-31, 2021
  • As machine learning becomes more common, the development of applications that use it is increasing rapidly, and so is research on machine learning platforms that support such development. Despite this growth, research on load balancing suited to machine learning platforms remains insufficient. In this paper, we therefore propose a load balancing scheme that can be applied to distributed machine learning environments. The proposed scheme organizes the distributed servers in a level hash table structure and assigns each machine learning task to a server in consideration of that server's performance. We implemented the distributed servers and compared the performance with an existing hashing scheme: the proposed scheme achieved an average 26% speed improvement and reduced the number of tasks waiting to be assigned to a server by more than 38%.
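
A minimal sketch of the kind of performance-aware assignment the abstract describes is given below; the server names, scores, and the scoring heuristic are illustrative assumptions, and the paper's level hash table structure is not reproduced.

```python
# Minimal sketch of performance-aware task assignment for a pool of distributed
# servers. Server names, scores, and the scoring heuristic are illustrative
# assumptions; the paper's level hash table structure is not reproduced here.

class LoadBalancer:
    def __init__(self, servers):
        # servers: mapping of server id -> relative performance score
        self.servers = servers
        self.queues = {s: [] for s in servers}

    def assign(self, task_id):
        # Prefer faster servers, penalised by how many tasks are already waiting.
        chosen = max(self.servers,
                     key=lambda s: self.servers[s] / (1 + len(self.queues[s])))
        self.queues[chosen].append(task_id)
        return chosen

lb = LoadBalancer({"srv-a": 4.0, "srv-b": 2.0, "srv-c": 1.0})
print([lb.assign(f"task-{i}") for i in range(6)])
```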

Fuzzy Q-learning using Distributed Eligibility (분포 기여도를 이용한 퍼지 Q-learning)

  • 정석일; 이연정
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.5, pp.388-394, 2001
  • Reinforcement learning is a kind of unsupervised learning method in which an agent learns control rules from experience acquired by interacting with its environment. Eligibility is used to resolve the credit-assignment problem, one of the important problems in reinforcement learning. Conventional eligibilities, such as the accumulating eligibility and the replacing eligibility, make ineffective use of the rewards acquired during learning, since only the one executed action for a visited state is learned. In this paper, we propose a new eligibility, called the distributed eligibility, with which not only the executed action but also neighboring actions in a visited state are learned. A fuzzy Q-learning algorithm using the proposed eligibility is applied to a cart-pole balancing problem and shows faster learning than conventional methods.
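
The sketch below illustrates the idea of spreading eligibility over the actions neighboring the executed one in a tabular TD(λ)-style update; the Gaussian spread, sizes, and learning constants are assumptions, not the paper's fuzzy formulation.

```python
# Tabular sketch: eligibility is distributed over actions neighboring the one
# actually executed, so they also receive a share of the TD update. The
# Gaussian spread, sizes, and learning constants are illustrative assumptions.
import numpy as np

n_states, n_actions = 10, 5
Q = np.zeros((n_states, n_actions))   # action values
E = np.zeros((n_states, n_actions))   # eligibility traces
alpha, gamma, lam, sigma = 0.1, 0.99, 0.9, 1.0

def distributed_eligibility_step(s, a, r, s_next):
    global Q, E
    # Spread eligibility around the executed action instead of marking only it.
    actions = np.arange(n_actions)
    E[s] += np.exp(-((actions - a) ** 2) / (2 * sigma ** 2))
    # TD(lambda)-style update applied through the traces, then decay them.
    delta = r + gamma * Q[s_next].max() - Q[s, a]
    Q += alpha * delta * E
    E *= gamma * lam

distributed_eligibility_step(s=3, a=2, r=1.0, s_next=4)
print(Q[3])   # neighbors of action 2 in state 3 also received credit
```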


Distributed AI Learning-based Proof-of-Work Consensus Algorithm (분산 인공지능 학습 기반 작업증명 합의알고리즘)

  • Won-Boo Chae; Jong-Sou Park
    • The Journal of Bigdata, v.7 no.1, pp.1-14, 2022
  • The proof-of-work consensus algorithm used by most blockchains causes a massive waste of computing resources in the form of mining. Useful proof-of-work consensus algorithms have been studied to reduce this waste, but resource waste and mining centralization problems remain when blocks are created. In this paper, the problem of resource waste in block generation is addressed by replacing the relatively inefficient computation used for block generation with distributed artificial intelligence model training. In addition, by providing fair rewards to the nodes participating in the training process, nodes with weak computing power are motivated to participate, while performance similar to the existing centralized AI training method is maintained. To show the validity of the proposed methodology, we implemented a blockchain network capable of distributed AI training, experimented with reward distribution through resource verification, and compared the results of the existing centralized training method with the blockchain-based distributed AI training method. As future work, the paper concludes by discussing problems and development directions that may arise when scaling up the blockchain main network and the artificial intelligence model.
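
As a rough, hedged illustration of the "useful work" idea, the self-contained toy below makes a block's proof a reproducible training result instead of a hash puzzle; the toy training step, scoring rule, and acceptance tolerance are all assumptions, not the paper's protocol.

```python
# Self-contained toy of a "useful work" round: the miner's proof is a model
# update whose reported validation score must be reproducible by other nodes.
# The toy training step, scoring rule, and tolerance are assumptions.
import hashlib
import random

def train_step(weights, data):
    # Stand-in for one round of distributed model training (the useful work).
    return [w + 0.01 * sum(data) / len(data) for w in weights]

def score(weights, data):
    # Stand-in validation metric that every node can recompute.
    return -sum((w - d) ** 2 for w, d in zip(weights, data))

def propose_block(prev_hash, weights, shard):
    new_weights = train_step(weights, shard)
    body = f"{prev_hash}|{new_weights}"
    return {"hash": hashlib.sha256(body.encode()).hexdigest(),
            "weights": new_weights,
            "reported": score(new_weights, shard)}

def validate(block, shard, tol=1e-9):
    # Peers accept the block only if the reported score is reproducible.
    return abs(score(block["weights"], shard) - block["reported"]) < tol

shard = [random.random() for _ in range(4)]
block = propose_block("genesis", [0.0] * 4, shard)
print(validate(block, shard))   # True: the training result stands in for the proof
```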

Performance Factor of Distributed Processing of Machine Learning using Spark (스파크를 이용한 머신러닝의 분산 처리 성능 요인)

  • Ryu, Woo-Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.16 no.1, pp.19-24, 2021
  • In this paper, we study the performance factors of machine learning in a distributed environment using Apache Spark and present an efficient distributed processing method based on experiments. The work first identifies the performance factors when performing machine learning on a distributed cluster, classified into cluster performance, data size, and Spark engine configuration. In addition, a performance study of regression analysis using Spark MLlib running on a Hadoop cluster is performed while varying the node configuration and the Spark executor settings. The experiments confirm that the effective number of executors is affected by the number of data blocks, while, depending on the cluster size, its maximum and minimum are bounded by the number of cores and the number of worker nodes, respectively.
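
The small helper below encodes one reading of that result, namely that the effective executor count follows the number of data blocks but is capped by the total cores and floored by the number of worker nodes; the function and its arguments are illustrative, not code from the paper.

```python
# Sketch of the reported relationship: effective executors track the number of
# data blocks, bounded above by total cores and below by the worker count.
# The helper and its default of one core per executor are assumptions.
def effective_executors(data_blocks, worker_nodes, cores_per_node, cores_per_executor=1):
    max_executors = worker_nodes * cores_per_node // cores_per_executor
    return max(worker_nodes, min(data_blocks, max_executors))

# e.g. a 100-block dataset on 4 workers with 8 cores each
print(effective_executors(data_blocks=100, worker_nodes=4, cores_per_node=8))  # 32
print(effective_executors(data_blocks=10,  worker_nodes=4, cores_per_node=8))  # 10
```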

Cooperative Detection of Moving Source Signals in Sensor Networks (센서 네트워크 환경에서 움직이는 소스 신호의 협업 검출 기법)

  • Nguyen, Minh N.H.; Chuan, Pham; Hong, Choong Seon
    • Journal of KIISE, v.44 no.7, pp.726-732, 2017
  • In practical distributed sensing and prediction applications over wireless sensor networks (WSN), environmental sensing is highly dynamic because of noisy sensory information from moving source signals. Recent distributed online convex optimization frameworks have been developed as promising approaches for approximately solving stochastic learning problems over a network of sensors in a distributed manner. Neglecting the consequences of mobility in the original distributed saddle point algorithm (DSPA) can strongly affect the convergence rate and the stability of the learning results. In this paper, we propose an integrated sliding-window mechanism in order to stabilize predictions and achieve better convergence rates in a cooperative moving-source detection scenario.
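
A sketch of the sliding-window idea is shown below: each sensor averages its most recent local iterates so that a moving source does not destabilize the shared prediction. The window size, the toy consensus/gradient step, and the class structure are assumptions layered on a generic distributed update, not the DSPA implementation.

```python
# Sketch: a sensor node stabilizes its prediction by averaging the last few
# local iterates (the sliding window). The window size, learning rate, and the
# simple consensus-plus-gradient step are illustrative assumptions.
from collections import deque
import numpy as np

class SlidingWindowNode:
    def __init__(self, dim, window=5, lr=0.1):
        self.x = np.zeros(dim)
        self.history = deque(maxlen=window)
        self.lr = lr

    def step(self, grad, neighbor_estimates):
        # Consensus with neighbors, then a local gradient step.
        self.x = np.mean([self.x, *neighbor_estimates], axis=0) - self.lr * grad
        self.history.append(self.x.copy())
        # The stabilized prediction is the window average, not the raw iterate.
        return np.mean(self.history, axis=0)

node = SlidingWindowNode(dim=2)
print(node.step(grad=np.array([0.3, -0.1]), neighbor_estimates=[np.array([0.1, 0.2])]))
```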

Unification of Deep Learning Model trained by Parallel Learning in Security environment

  • Lee, Jong-Lark
    • Journal of the Korea Society of Computer and Information, v.26 no.12, pp.69-75, 2021
  • Deep learning, the most widely used technique in artificial intelligence, has models that are gradually becoming larger and more complex. As a deep learning model grows, a large amount of data is required to train it, but there are cases in which the data cannot be integrated for training because it is distributed among several owners or because of security issues. In that situation, we perform parallel learning for each user that owns data and then study how to integrate the results. For this, distributed learning was performed per owner under security settings modeled as a V-environment and an H-environment, and the distributed learning results were integrated using Average, Max, and AbsMax. Applying this to the Fashion-MNIST data, we confirmed that, in the V-environment, accuracy did not differ significantly from the result obtained by training on the integrated data. In the H-environment there was a difference, but meaningful results were still obtained.
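
The three merge rules named in the abstract can be sketched as layer-wise tensor operations, as below; the tensor shapes and the merge loop are illustrative assumptions.

```python
# Sketch of the Average, Max, and AbsMax rules for merging the weights of
# models trained in parallel; shapes and the layer-by-layer loop are assumptions.
import numpy as np

def merge(weight_sets, rule="average"):
    merged = []
    for layers in zip(*weight_sets):          # same layer from every model
        stack = np.stack(layers)
        if rule == "average":
            merged.append(stack.mean(axis=0))
        elif rule == "max":
            merged.append(stack.max(axis=0))
        elif rule == "absmax":
            # Keep, element-wise, the value with the largest magnitude.
            idx = np.abs(stack).argmax(axis=0)
            merged.append(np.take_along_axis(stack, idx[None, ...], axis=0)[0])
    return merged

w_a = [np.array([[0.5, -0.9], [0.1, 0.3]])]
w_b = [np.array([[-0.7, 0.2], [0.4, -0.1]])]
for rule in ("average", "max", "absmax"):
    print(rule, merge([w_a, w_b], rule)[0])
```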

Performance Improvement of Evolution Strategies using Reinforcement Learning

  • Sim, Kwee-Bo; Chun, Ho-Byung
    • International Journal of Fuzzy Logic and Intelligent Systems, v.1 no.1, pp.125-130, 2001
  • In this paper, we propose a new type of evolution strategy combined with reinforcement learning. We use the variance of fitness caused by mutation to form reinforcement signals that estimate and control the mutation step length, which improves the convergence rate. We also use Cauchy-distributed mutation to improve the ability to converge globally, since Cauchy-distributed mutation is more likely to escape from a local minimum or move away from a plateau. After an outline of the history of evolution strategies, we explain how evolution strategies can be combined with reinforcement learning, yielding reinforcement evolution strategies. The performance of the proposed method is evaluated by comparison with conventional evolution strategies on several test problems.
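
A hedged sketch of the two ingredients named in the abstract, Cauchy-distributed mutation and a step length driven by offspring fitness variance, is given below; the adaptation rule and the sphere objective are illustrative assumptions, not the paper's exact update.

```python
# Sketch: Cauchy-distributed mutation plus a step length adapted by a signal
# derived from the offspring fitness variance. The adaptation rule, elitist
# selection, and sphere objective are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return -np.sum(x ** 2)                 # toy objective: maximize -||x||^2

def es_step(parent, sigma, offspring=10):
    children = [parent + sigma * rng.standard_cauchy(parent.shape)
                for _ in range(offspring)]
    fits = np.array([fitness(c) for c in children])
    # Reinforcement signal from fitness variance: a large spread suggests the
    # step length is too coarse, so shrink it; a small spread, grow it.
    sigma *= 0.8 if fits.var() > abs(fitness(parent)) else 1.2
    best = children[fits.argmax()]
    # Elitist selection: keep the parent if no child improves on it.
    return (best, sigma) if fitness(best) > fitness(parent) else (parent, sigma)

x, sigma = np.ones(5), 1.0
for _ in range(50):
    x, sigma = es_step(x, sigma)
print(fitness(x), sigma)
```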


A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Wu, Shengbin; Wang, Yibai
    • Journal of Information Processing Systems, v.17 no.3, pp.441-452, 2021
  • Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, the goal of multi-task learning is to establish models for multiple related tasks simultaneously and to obtain their underlying common structure. However, traditional federated multi-task learning models not only place strict requirements on the data distribution but also demand large amounts of computation and converge slowly, which hinders their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust the learning of task similarity according to the distribution of the federated nodes' data. The proposed model has a general framework for finding optimal solutions and can be used to deal with various data types. Experiments show that our model achieves the best results on different datasets. Notably, it still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a locally optimal solution within a limited number of training iterations.
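
One common way to impose a rank constraint on the stacked per-task weight vectors is a truncated-SVD projection, sketched below; the shapes and the projection step are illustrative assumptions, not the paper's optimization scheme.

```python
# Sketch of a rank constraint on the stacked per-task weight vectors: project
# W (tasks x features) onto the closest matrix of rank at most r, so related
# tasks share a low-dimensional structure. Shapes and the projection are
# illustrative assumptions, not the paper's algorithm.
import numpy as np

def project_to_rank(W, r):
    # Truncated SVD gives the best rank-r approximation of W.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

tasks, features, rank = 6, 20, 2
W = np.random.default_rng(1).normal(size=(tasks, features))
W_low = project_to_rank(W, rank)
print(np.linalg.matrix_rank(W_low))   # 2: tasks now share a 2-dimensional structure
```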

A General Distributed Deep Learning Platform: A Review of Apache SINGA

  • Lee, Chonho; Wang, Wei; Zhang, Meihui; Ooi, Beng Chin
    • Communications of the Korean Institute of Information Scientists and Engineers, v.34 no.3, pp.31-34, 2016
  • This article reviews Apache SINGA, a general distributed deep learning (DL) platform. The system components and architecture are presented, along with how to configure and run SINGA for different types of distributed training using model/data partitioning. In addition, several features and the performance of SINGA are compared with those of other popular DL tools.

Combination Methods for Distribution Codes (분산 부호의 결합 기법)

  • Chung, Jin-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.365-366, 2022
  • A distributed code is a type of linear code that can be used for coding and for privacy in federated learning. In a distributed code, private or confidential information pieces are mutually independent, because the information of one codeword is not contained in the other codewords. In this paper, we examine the properties of these distributed codes and present techniques for synthesizing new sets of distributed codes from previously known ones. In addition, we propose several scenarios in which the combined codes can be used.
