• Title/Abstract/Keyword: distributed learning


FedGCD: Federated Learning Algorithm with GNN based Community Detection for Heterogeneous Data

  • Wooseok Shin;Jitae Shin
    • 인터넷정보학회논문지
    • /
    • Vol. 24, No. 6
    • /
    • pp.1-11
    • /
    • 2023
  • Federated learning (FL) is a groundbreaking machine learning paradigm that allows multiple participants to collaboratively train models in a cloud environment, all while maintaining the privacy of their raw data. This approach is invaluable in applications involving sensitive or geographically distributed data. However, one of the challenges in FL is dealing with heterogeneous and non-independent and identically distributed (non-IID) data across participants, which can result in suboptimal model performance compared to traditional machine learning methods. To tackle this, we introduce FedGCD, a novel FL algorithm that employs Graph Neural Network (GNN)-based community detection to enhance model convergence in federated settings. In our experiments, FedGCD consistently outperformed existing FL algorithms in various scenarios: for instance, in a non-IID environment, it achieved an accuracy of 0.9113, a precision of 0.8798, and an F1-score of 0.8972. In a semi-IID setting, it demonstrated the highest accuracy at 0.9315 and an impressive F1-score of 0.9312. We also introduce a new metric, nonIIDness, to quantitatively measure the degree of data heterogeneity. Our results indicate that FedGCD not only addresses the challenges of data heterogeneity and non-IIDness but also sets new benchmarks for FL algorithms. The community detection approach adopted in FedGCD has broader implications, suggesting that it could be adapted for other distributed machine learning scenarios, thereby improving model performance and convergence across a range of applications.
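The abstract does not include the algorithm itself, but the core idea can be sketched: cluster clients whose local updates look alike, then aggregate within communities before averaging globally. The sketch below substitutes a simple cosine-similarity grouping for the paper's GNN-based community detection; all names, thresholds, and data are illustrative, not FedGCD's actual implementation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def detect_communities(client_updates, threshold=0.5):
    """Greedily group clients whose update directions are similar.

    Stand-in for FedGCD's GNN-based community detection: clients whose
    local updates point the same way are assumed to hold similarly
    distributed data.
    """
    communities = []
    for cid, upd in enumerate(client_updates):
        for com in communities:
            if cosine(upd, client_updates[com[0]]) >= threshold:
                com.append(cid)
                break
        else:
            communities.append([cid])
    return communities

def aggregate(global_w, client_updates, communities):
    """Two-level FedAvg-style aggregation: average within each
    community first, then average the community means."""
    means = [np.mean([client_updates[c] for c in com], axis=0)
             for com in communities]
    return global_w + np.mean(means, axis=0)

# Toy round: 6 clients drawn from 2 latent data distributions (non-IID).
rng = np.random.default_rng(0)
w = np.zeros(4)
updates = [rng.normal(+1, 0.1, 4) for _ in range(3)] + \
          [rng.normal(-1, 0.1, 4) for _ in range(3)]
coms = detect_communities(updates)
w = aggregate(w, updates, coms)
print(coms, w)   # two communities are recovered before aggregation
```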

Naive Bayes Learning Algorithm Based on the Map-Reduce Programming Model

  • 강대기
    • 한국정보통신학회: Conference Proceedings
    • /
    • 한국해양정보통신학회 2011 Fall Conference
    • /
    • pp.208-209
    • /
    • 2011
  • In this paper, we introduce an approach to performing Naive Bayes learning and inference on top of the Map-Reduce model. To this end, we applied the distributed Naive Bayes learning algorithm of Apache Mahout to benchmark data sets from the University of California, Irvine (UCI) repository. The experiments show that Mahout's distributed Naive Bayes learner performs comparably to the conventional Naive Bayes implementation in WEKA. This result suggests that Map-Reduce-based systems such as Apache Mahout can make a substantial contribution to machine learning in big-data environments.
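As a rough illustration of the approach (not Apache Mahout's actual API), the toy Python sketch below expresses multinomial Naive Bayes training as map and reduce steps over (label, tokens) records; the record set and key layout are invented for the example.

```python
from collections import Counter
from functools import reduce

# Toy (label, tokens) training records, as a mapper would receive them.
records = [
    ("spam", ["win", "cash", "now"]),
    ("ham",  ["meeting", "at", "noon"]),
    ("spam", ["cash", "prize"]),
]

def mapper(record):
    """Emit ((label, token), 1) pairs plus a per-label document count."""
    label, tokens = record
    yield (("__doc__", label), 1)
    for tok in tokens:
        yield ((label, tok), 1)

def reducer(counts, pair):
    """Sum the counts for each key (the shuffle phase is implicit)."""
    key, n = pair
    counts[key] += n
    return counts

counts = reduce(reducer, (p for r in records for p in mapper(r)), Counter())
# counts now holds the sufficient statistics of multinomial Naive Bayes;
# with Laplace smoothing they yield P(token | label) and P(label).
print(counts[("spam", "cash")], counts[("__doc__", "spam")])
```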


Empirical Performance Evaluation of Communication Libraries for Multi-GPU based Distributed Deep Learning in a Container Environment

  • Choi, HyeonSeong;Kim, Youngrang;Lee, Jaehwan;Kim, Yoonhee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 3
    • /
    • pp.911-931
    • /
    • 2021
  • Recently, most cloud services use the Docker container environment to provide their services. However, there has been little research evaluating the performance of communication libraries for multi-GPU distributed deep learning in a Docker container environment. In this paper, we propose an efficient communication architecture for multi-GPU deep learning in Docker containers by evaluating the performance of various communication libraries. We compare the parameter server architecture and the all-reduce architecture, the two typical distributed deep learning architectures. Further, we analyze two multi-GPU resource allocation policies: allocating a single GPU to each Docker container and allocating multiple GPUs to each container. We also examine the scalability of collective communication by increasing the number of GPUs from one to four. In our experiments, we compare OpenMPI and MPICH, two representative open-source MPI libraries, and NCCL, NVIDIA's collective communication library for multi-GPU settings. For the parameter server architecture, we show that using CUDA-aware OpenMPI with multiple GPUs per Docker container reduces communication latency by up to 75%. We also show that using NCCL in the all-reduce architecture reduces communication latency by up to 93% compared to the other libraries.
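For readers who want to reproduce this kind of measurement, the sketch below shows one plausible way to time all-reduce latency with PyTorch's NCCL backend. It is not the authors' benchmark code; the tensor size, iteration counts, and torchrun launch line are assumptions.

```python
# Hypothetical micro-benchmark of all-reduce latency over NCCL, launched
# with e.g. `torchrun --nproc_per_node=4 bench_allreduce.py`.
import time
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")          # one process per GPU
rank = dist.get_rank()
torch.cuda.set_device(rank)
x = torch.randn(16 * 1024 * 1024, device="cuda")  # ~64 MB of float32

for _ in range(5):                               # warm-up iterations
    dist.all_reduce(x)
torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(20):
    dist.all_reduce(x)                           # sum across all GPUs
torch.cuda.synchronize()
if rank == 0:
    print(f"mean all-reduce latency: {(time.perf_counter() - start) / 20:.4f} s")
dist.destroy_process_group()
```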

Design of a Distributed Processing Framework Based on an H-RTGL One-Class Classifier for Big Data

  • 김도균;최진영
    • 품질경영학회지
    • /
    • Vol. 48, No. 4
    • /
    • pp.553-566
    • /
    • 2020
  • Purpose: The purpose of this study was to design a framework for generating a one-class classification algorithm based on hyper-rectangles (H-RTGL) in a distributed environment connected by a network. Methods: First, we devised a one-class classifier based on H-RTGL that can be executed by distributed computing nodes, considering both model and data parallelism. We then designed supporting components for the execution of distributed processing. Finally, we validated both the effectiveness and the efficiency of the classifier generated by the proposed framework through a numerical experiment using data sets from the UCI Machine Learning Repository. Results: We designed a distributed processing framework capable of H-RTGL-based one-class classification in a distributed environment consisting of physically separated computing nodes. It includes components implementing model and data parallelism, which enable distributed generation of the classifier. The numerical experiment showed no significant change in classification performance, as assessed by a statistical test, while elapsed time was reduced on data sets of considerable size owing to distributed processing. Conclusion: Based on these results, we conclude that applying distributed processing to classifier generation preserves classification performance while improving the efficiency of classification algorithms. We also discuss the limitations of our work and suggest directions for future research.
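The hyper-rectangle idea lends itself to a compact illustration. The sketch below assumes a heavily simplified variant of H-RTGL, one merged interval per feature: each node fits local min/max bounds over its data shard (data parallelism) and a coordinator merges them. The padding margin and data are invented for the example.

```python
import numpy as np

def fit_local_rectangle(shard):
    """Each computing node fits an axis-aligned hyper-rectangle
    (per-feature min/max) over its shard of normal-class data."""
    return shard.min(axis=0), shard.max(axis=0)

def merge_rectangles(rects, margin=0.05):
    """Coordinator merges node-local rectangles and pads the result."""
    lows = np.min([lo for lo, _ in rects], axis=0)
    highs = np.max([hi for _, hi in rects], axis=0)
    pad = margin * (highs - lows)
    return lows - pad, highs + pad

def predict(X, rect):
    """A point belongs to the target (normal) class iff it lies inside
    the merged hyper-rectangle on every feature."""
    lo, hi = rect
    return np.all((X >= lo) & (X <= hi), axis=1)

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, (300, 4))
shards = np.array_split(normal, 3)          # data parallelism: 3 nodes
rect = merge_rectangles([fit_local_rectangle(s) for s in shards])
outliers = rng.normal(6, 1, (5, 4))
print(predict(normal[:5], rect), predict(outliers, rect))
```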

Dynamic Resource Adjustment Operator Based on Autoscaling for Improving Distributed Training Job Performance on Kubernetes

  • 정진원;유헌창
    • 정보처리학회논문지:컴퓨터 및 통신 시스템
    • /
    • Vol. 11, No. 7
    • /
    • pp.205-216
    • /
    • 2022
  • One of the many tools used for distributed deep learning training is Kubeflow, which runs on the container orchestration tool Kubernetes, and TensorFlow training jobs can be managed with the operator that Kubeflow provides by default. However, for distributed training jobs based on the parameter server architecture, the scheduling policy of the existing operator does not consider the task affinity of the distributed training job and provides no way to allocate or release resources dynamically. This can lead to long job completion times and low resource utilization. This paper therefore proposes a new operator that schedules distributed deep learning training jobs efficiently in order to shorten job completion time and raise resource utilization. We implemented the new operator by modifying the existing one and evaluated its performance experimentally; the results show that the proposed scheduling policy can reduce the average job completion time by up to 84% and increase average CPU utilization by up to 92%.
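The actual operator is a modification of Kubeflow's existing training operator; as a language-neutral illustration of the dynamic resource adjustment it describes, the toy Python function below sketches one autoscaling rule a reconciliation loop might apply. The thresholds and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JobStatus:
    workers: int            # current worker replicas
    cpu_utilization: float  # average CPU utilization across workers (0..1)

def desired_workers(status, lo=0.3, hi=0.8, min_w=1, max_w=8):
    """Toy autoscaling rule in the spirit of the proposed operator:
    add a worker while the job is CPU-bound, release one when workers
    sit mostly idle, and otherwise leave the allocation untouched."""
    if status.cpu_utilization > hi and status.workers < max_w:
        return status.workers + 1
    if status.cpu_utilization < lo and status.workers > min_w:
        return status.workers - 1
    return status.workers

# One reconciliation step of the (hypothetical) operator control loop:
status = JobStatus(workers=2, cpu_utilization=0.91)
print(desired_workers(status))  # -> 3: scale the training job out
```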

Efficient Distributed Consensus Optimization Based on Patterns and Groups for Federated Learning

  • 강승주;천지영;노건태;정익래
    • 인터넷정보학회논문지
    • /
    • Vol. 23, No. 4
    • /
    • pp.73-85
    • /
    • 2022
  • In the era of the Fourth Industrial Revolution, where artificial intelligence maximizes automation and connectivity, collecting and using data to update models is becoming ever more important. To build a model with AI techniques, data generally must be gathered in one place before it can be used for updates, which can violate users' privacy. This paper introduces federated learning, a distributed machine learning method in which participants collaboratively update a model without directly sharing their locally stored data, and reviews work on serverless distributed consensus optimization among participants. In addition, we propose a pattern- and group-based distributed consensus optimization algorithm that generates patterns and groups using an algorithm based on the Kirkman Triple System and performs updates and communication in parallel. The proposed algorithm guarantees at least as much privacy as existing distributed consensus optimization algorithms and reduces the communication time required for the model to converge.
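To make the pattern/group idea concrete, the sketch below hard-codes a Kirkman Triple System of order 9 and lets each triple average its members' parameters in parallel each round. This illustrates group-wise serverless consensus under those groupings; it is not the paper's exact algorithm.

```python
import numpy as np

# A Kirkman Triple System of order 9: four rounds, each partitioning the
# 9 participants into 3 disjoint triples, so that every pair of
# participants shares a group exactly once across the rounds.
KTS_9 = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],
    [(0, 3, 6), (1, 4, 7), (2, 5, 8)],
    [(0, 4, 8), (1, 5, 6), (2, 3, 7)],
    [(0, 5, 7), (1, 3, 8), (2, 4, 6)],
]

rng = np.random.default_rng(2)
models = rng.normal(size=(9, 4))      # each participant's parameters

for groups in KTS_9:                  # rounds run sequentially,
    for g in groups:                  # groups within a round in parallel
        mean = np.mean([models[i] for i in g], axis=0)
        for i in g:                   # serverless consensus step:
            models[i] = mean          # members adopt the group average

print(np.ptp(models, axis=0))         # spread -> ~0: consensus reached
```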

A Mathematical Model for File Migration and Load Balancing in Distributed Systems

  • 문원식
    • 디지털산업정보학회논문지
    • /
    • Vol. 13, No. 4
    • /
    • pp.153-162
    • /
    • 2017
  • Advances in communication technologies and the decreasing cost of computers have made distributed computer systems an attractive alternative for satisfying the information needs of large organizations. This paper presents a distributed algorithm for performance improvement through load balancing and file migration in distributed systems. We employed a sender-initiated strategy for task migration and used learning automata with several internal states for file migration. A task is migrated according to the load information of a computer, and a file is migrated to the destination processor when its automaton is in the right boundary state. We also describe an analytical model of load balancing with file migration to verify the proposed algorithm. Analytical and simulation results show that our algorithm is well suited to distributed system environments.
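As an illustration of the file-migration rule described above (migrate only when the automaton reaches the right boundary state), here is a minimal learning-automaton sketch; the state depth, update signal, and access-ratio threshold are assumptions, not the paper's parameterization.

```python
class MigrationAutomaton:
    """Learning automaton whose 2*depth internal states form a chain
    from "strongly stay" (state 1) to "strongly migrate" (state 2*depth).

    Remote-heavy access patterns move the state right, local-heavy
    patterns move it left; the file migrates only when the automaton
    reaches the right boundary state, echoing the paper's rule."""

    def __init__(self, depth=3):
        self.depth = depth
        self.state = 1                          # start fully biased to "stay"

    def update(self, remote_access_ratio):
        step = 1 if remote_access_ratio > 0.5 else -1
        self.state = max(1, min(2 * self.depth, self.state + step))
        return self.state == 2 * self.depth     # right boundary: migrate now

automaton = MigrationAutomaton()
for ratio in [0.9, 0.8, 0.9, 0.7, 0.95]:        # a run of remote-heavy accesses
    migrate = automaton.update(ratio)
print(migrate)                                   # True: the file should move
```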

Privacy-Preserving Deep Learning using Collaborative Learning of Neural Network Model

  • Hye-Kyeong Ko
    • International journal of advanced smart convergence
    • /
    • Vol. 12, No. 2
    • /
    • pp.56-66
    • /
    • 2023
  • The goal of deep learning is to extract complex features from multidimensional data and use those features to build models that connect inputs and outputs. Deep learning learns nonlinear features and functions from complex data, and the user data employed to train deep learning models has become a focus of privacy concerns. Companies that collect users' sensitive personal information, such as images and voices, retain this data for indefinite periods; users can neither delete their personal information nor limit the purposes for which it is used. This study designs a privacy-preserving deep learning method that uses distributed collaborative learning so that multiple participants can train neural network models collaboratively without sharing their input data sets. To prevent direct leaks of personal information, participants' training data sets are never exposed during model training, unlike in traditional deep learning, so the personal information in the data is protected. The method selectively shares parameter subsets via an optimization algorithm based on a modified distributed stochastic gradient descent, and the results show that it can learn with improved accuracy while protecting personal information.
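The selective-sharing step can be sketched in a few lines: each participant uploads only the largest-magnitude fraction of its gradient, keeping the rest private. The fraction, learning rate, and random gradients below are illustrative stand-ins for the paper's modified distributed SGD.

```python
import numpy as np

def select_top_k(grad, share_fraction=0.1):
    """Keep only the largest-magnitude share_fraction of gradient
    entries; everything else stays private to the participant."""
    k = max(1, int(share_fraction * grad.size))
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

rng = np.random.default_rng(3)
global_w = np.zeros(100)
for participant in range(5):                    # one collaborative round
    local_grad = rng.normal(size=100)           # from private local data
    global_w -= 0.1 * select_top_k(local_grad)  # upload only a subset
print(np.count_nonzero(global_w))               # at most 5 * 10 entries
```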

A Study on Patent Literature Classification Using Distributed Representations of Technical Terms

  • 최윤수;최성필
    • 한국문헌정보학회지
    • /
    • Vol. 53, No. 2
    • /
    • pp.179-199
    • /
    • 2019
  • The purpose of this study is to examine various feature-extraction methods together with machine learning and deep learning models, and to identify, through experiments, the methodology that yields the best performance for patent document classification. For feature extraction, we compared the traditional bag-of-words (BoW) approach with word-embedding vectors, a form of distributed representation; for corpus construction, we compared morphological analysis with a multigram-based approach. We then verified classification performance using both traditional machine learning models and deep learning models. The experiments show that a deep learning model combined with distributed representations and morphological analysis achieved the best classification performance, outperforming traditional machine learning methods by 5.71%, 18.84%, and 21.53% in section-, class-, and subclass-level classification experiments, respectively.
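A minimal sketch of the winning pipeline, distributed representations over morphologically analyzed tokens feeding a classifier, might look as follows. The toy corpus, gensim Word2Vec parameters, and logistic-regression classifier are assumptions for illustration, not the paper's exact models.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy patent snippets (already tokenized, e.g. by morphological analysis).
docs = [
    ["semiconductor", "wafer", "etching", "process"],
    ["wafer", "lithography", "semiconductor", "mask"],
    ["antibody", "protein", "binding", "assay"],
    ["protein", "sequence", "assay", "antibody"],
]
labels = [0, 0, 1, 1]  # 0: electronics section, 1: biotech section

# Distributed representation: train word vectors, then represent each
# document as the mean of its word vectors.
w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, seed=7)
X = np.array([np.mean([w2v.wv[t] for t in d], axis=0) for d in docs])

clf = LogisticRegression().fit(X, labels)
test = ["etching", "mask", "wafer"]
vec = np.mean([w2v.wv[t] for t in test], axis=0).reshape(1, -1)
print(clf.predict(vec))   # expected: [0] (electronics)
```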

The Hidden Object Searching Method for Distributed Autonomous Robotic Systems

  • Yoon, Han-Ul;Lee, Dong-Hoon;Sim, Kwee-Bo
    • 제어로봇시스템학회: Conference Proceedings
    • /
    • 제어로봇시스템학회 ICCAS 2005
    • /
    • pp.1044-1047
    • /
    • 2005
  • In this paper, we present an object-search strategy for distributed autonomous robotic systems (DARS). DARS are systems consisting of multiple autonomous robotic agents among which the required functions are distributed. For instance, the agents should recognize their surroundings wherever they are located and generate rules to act upon by themselves. We introduce a strategy by which multiple DARS robots search for a hidden object in an unknown area. First, we present an area-based action-making process that determines the direction changes of the robots during their maneuvers. Second, we present a Q-learning adaptation that enhances the area-based action-making process. Third, we introduce the coordinate system used to represent a robot's current location. Finally, we show experimental results in which the hidden object is found using hexagon-based Q-learning.
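A tabular Q-learning loop for object search can be sketched briefly. The version below uses a square grid rather than the paper's hexagon-based cells, and the grid size, rewards, and hyperparameters are invented for the example.

```python
import numpy as np

# Tabular Q-learning for finding a hidden object on a toy 5x5 grid
# (the paper uses hexagonal cells; a square grid keeps the sketch short).
SIZE, GOAL = 5, (4, 4)                    # the hidden object's cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
rng = np.random.default_rng(4)
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                      # training episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        dr, dc = ACTIONS[a]
        nxt = (min(SIZE - 1, max(0, s[0] + dr)),
               min(SIZE - 1, max(0, s[1] + dc)))
        r = 1.0 if nxt == GOAL else -0.01         # reward on discovery
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[s][a])
        s = nxt

print(int(np.argmax(Q[0, 0])))  # learned first move from the start cell
```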
