• Title/Summary/Keyword: Distributed Learning Environment


개방·공유·참여의 대학 교육환경 구축 사례 (A Framework for Open, Flexible and Distributed Learning Environment for Higher Education)

  • 강명희;유지원
    • 지식경영연구 / Vol. 9, No. 4 / pp.17-33 / 2008
  • This study proposes University 2.0 as a model case of an open, flexible, and distributed learning environment for higher education, based on theoretical foundations and perspectives. As Web 2.0 technologies enter the field of education, the ways information and knowledge are generated and disseminated have changed drastically. Professors are no longer the only source of knowledge. Students using the internet often become prosumers of knowledge who search for and access information on the web as well as publish their own knowledge there. The concept and framework of University 2.0 are introduced for implementing this new interactive learning paradigm as an open, flexible, and distributed learning environment for higher education. University 2.0 incorporates online and offline learning environments with various educational media. Furthermore, it employs various learning strategies and integrates formal and informal learning through learning communities. Both instructors and students in a University 2.0 environment are expected to be active knowledge generators as well as creative designers of their own learning and teaching.

Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / Vol. 16, No. 2 / pp.161-168 / 2024
  • As the size of big data models grows, distributed training is emerging as an essential element of large-scale machine learning tasks. In this paper, we propose ParamHub for distributed data training. During the training process, this agent uses the provided data to adjust various aspects of the model's parameters, such as the model structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. Furthermore, it operates autonomously, collecting and updating data in a distributed environment, thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and learning processes can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while providing the flexibility to be applied in various learning environments.
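
Illustrative sketch (not the paper's implementation): the coordination the abstract describes, in which each agent holds its own copy of the parameters, applies local updates, and reconciles with its peers, can be pictured roughly as follows. The class, method names, and the simple averaging rule are all assumptions made for illustration.

```python
# Hypothetical ParamHub-style agent: local updates plus peer averaging.
# Not the paper's code; names and the merge rule are assumptions.
import numpy as np

class ParamHubAgent:
    def __init__(self, param_shape):
        self.params = np.zeros(param_shape)   # local view of model parameters

    def local_update(self, gradient, lr=0.01):
        """Apply a locally computed gradient step."""
        self.params -= lr * gradient

    def merge_from_peers(self, peer_params):
        """Average local parameters with those reported by peer agents."""
        stacked = np.stack([self.params] + list(peer_params))
        self.params = stacked.mean(axis=0)

# Usage: two agents train independently, then reconcile their parameters.
a, b = ParamHubAgent((4,)), ParamHubAgent((4,))
a.local_update(np.ones(4)); b.local_update(2 * np.ones(4))
a.merge_from_peers([b.params])
```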

Unification of Deep Learning Model trained by Parallel Learning in Security environment

  • Lee, Jong-Lark
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 12 / pp.69-75 / 2021
  • Deep learning, currently the most widely used approach in artificial intelligence, is becoming ever larger and more complex in structure. The larger a deep learning model becomes, the more data is needed to train it; however, the data is often distributed across several owners, and security concerns can make it impossible to pool the data for training. We studied how to perform distributed training per data-owning party and then unify the results in situations where the same deep learning model is required but, for security reasons, the data must be processed in separate locations. To this end, we assumed two security settings, a V-environment and an H-environment, carried out distributed training per owner, and unified the distributed training results using Average, Max, and AbsMax operators. Applying this to the mnist-fashion data, we confirmed that in the V-environment the unified model shows little difference in accuracy from a model trained on the pooled data, while in the H-environment a difference exists but meaningful results can still be obtained.
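
The abstract names three unification operators (Average, Max, AbsMax) applied to models trained separately per data owner. A minimal sketch of how these could operate element-wise on weight tensors follows; the paper's layer-by-layer handling is not reproduced, so treat this as an illustration only.

```python
# Element-wise unification of same-shape weight arrays from separately
# trained models. An illustration of the three named operators, not the
# paper's code.
import numpy as np

def unify(weights, mode="average"):
    """weights: list of same-shape arrays, one per separately trained model."""
    w = np.stack(weights)
    if mode == "average":
        return w.mean(axis=0)                 # element-wise mean
    if mode == "max":
        return w.max(axis=0)                  # element-wise maximum
    if mode == "absmax":
        # keep, per element, the value with the largest magnitude
        idx = np.abs(w).argmax(axis=0)
        return np.take_along_axis(w, idx[None, ...], axis=0)[0]
    raise ValueError(mode)

site_a = np.array([[0.5, -1.2], [0.3, 0.9]])
site_b = np.array([[-0.7, 1.0], [0.4, -1.1]])
print(unify([site_a, site_b], "absmax"))      # [[-0.7 -1.2], [ 0.4 -1.1]]
```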

Empirical Performance Evaluation of Communication Libraries for Multi-GPU based Distributed Deep Learning in a Container Environment

  • Choi, HyeonSeong;Kim, Youngrang;Lee, Jaehwan;Kim, Yoonhee
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 3 / pp.911-931 / 2021
  • Recently, most cloud services have used the Docker container environment to provide their services. However, there has been no research evaluating the performance of communication libraries for multi-GPU based distributed deep learning in a Docker container environment. In this paper, we propose an efficient communication architecture for multi-GPU based deep learning in a Docker container environment by evaluating the performance of various communication libraries. We compare the performance of the parameter server architecture and the All-reduce architecture, which are typical distributed deep learning architectures. Further, we analyze the performance of two separate multi-GPU resource allocation policies: allocating a single GPU to each Docker container and allocating multiple GPUs to each Docker container. We also experiment with the scalability of collective communication by increasing the number of GPUs from one to four. Through experiments, we compare OpenMPI and MPICH, which are representative open-source MPI libraries, and NCCL, which is NVIDIA's collective communication library for the multi-GPU setting. In the parameter server architecture, we show that using CUDA-aware OpenMPI with multiple GPUs per Docker container reduces communication latency by up to 75%. We also show that using NCCL in the All-reduce architecture reduces communication latency by up to 93% compared to the other libraries.
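
For readers unfamiliar with the All-reduce pattern the paper benchmarks, a minimal sketch using PyTorch's torch.distributed with the NCCL backend is shown below. It is not the paper's benchmark code, and it assumes the launcher and container runtime supply the process-group configuration (MASTER_ADDR, MASTER_PORT, rank, world size).

```python
# Gradient averaging via NCCL All-reduce, one process per GPU.
# A sketch of the pattern under test, not the paper's benchmark.
import torch
import torch.distributed as dist

def average_gradients(model):
    """All-reduce each gradient across workers, then divide by world size."""
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world

# Typical setup, assuming the launcher provides rank/world-size env vars:
# dist.init_process_group(backend="nccl")
# torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
```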

쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법 (Distributed In-Memory Caching Method for ML Workload in Kubernetes)

  • 윤동현;송석일
    • Journal of Platform Technology / Vol. 11, No. 4 / pp.71-79 / 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on that analysis, propose a distributed in-memory caching technique that improves their performance. The core of a machine learning workload is model training, which is a computation-intensive task. Running machine learning workloads in a Kubernetes-based cloud environment, where the computing framework and storage are separated, allows resources to be allocated effectively, but I/O must travel over the network, so latency can occur. The proposed technique improves the performance of machine learning workloads executed in such an environment: taking Kubeflow, the Kubernetes-based machine learning pipeline management tool, into account, it preloads the data required by a machine learning workload into the distributed in-memory cache.
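
The paper's cache implementation is not reproduced here. Purely as an illustration, the sketch below preloads training files into Redis, used as a stand-in for the distributed in-memory cache (the "cache-service" host name is hypothetical), so that a subsequent pipeline step reads from memory rather than over the network.

```python
# Preload a dataset into an in-memory cache before training starts.
# Redis stands in for the paper's cache; names here are assumptions.
import pathlib
import redis  # pip install redis

cache = redis.Redis(host="cache-service", port=6379)  # hypothetical service

def preload(data_dir):
    """Load every file under data_dir into the cache, keyed by path."""
    for path in pathlib.Path(data_dir).rglob("*"):
        if path.is_file():
            cache.set(str(path), path.read_bytes())

def read(path):
    """Serve from cache if present, falling back to storage on a miss."""
    blob = cache.get(path)
    return blob if blob is not None else pathlib.Path(path).read_bytes()
```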

U-Learning: An Interactive Social Learning Model

  • Caytiles, Ronnie D.;Kim, Hye-jin
    • International Journal of Internet, Broadcasting and Communication / Vol. 5, No. 1 / pp.9-13 / 2013
  • This paper presents the concepts of ubiquitous computing technology used to construct a ubiquitous learning environment that enables learning to take place anywhere at any time. This ubiquitous learning environment is described as an environment that supports students' learning using digital media in geographically distributed environments. The u-learning model is a web-based e-learning system that enables learners to acquire knowledge and skills through interaction between them and the ubiquitous learning environment. Students are allowed to be in an environment of their interest. The communication between devices and the embedded computers in the environment allows learners to learn while they are moving, hence attaching them to their learning environment.

기계학습 분산 환경을 위한 부하 분산 기법 (Load Balancing Scheme for Machine Learning Distributed Environment)

  • 김영관;이주석;김아정;홍지만
    • 스마트미디어저널 / Vol. 10, No. 1 / pp.25-31 / 2021
  • As machine learning becomes widespread, application development based on machine learning is proceeding actively, and research on machine learning platforms to support such development is also being pursued actively. Nevertheless, research on load balancing appropriate to machine learning platforms is still lacking. This paper therefore proposes a load balancing scheme for distributed machine learning environments. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning jobs to servers in consideration of each server's performance. We then implemented the distributed servers and compared the proposed scheme with an existing hashing scheme: it showed an average speed improvement of about 26% and reduced the number of jobs waiting without being assigned to a server by about 38% or more.
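
As a rough illustration of performance-aware assignment over a hashed server table: the paper's level hash table details are not reproduced, so the two-slot structure and the capacity score below are simplified assumptions.

```python
# Simplified two-level hashed server table with performance-aware job
# assignment. An illustration of the idea, not the paper's scheme.
import hashlib

class LevelHashBalancer:
    def __init__(self, servers, top_size=8, bottom_size=4):
        # servers: dict of name -> performance score (higher = faster)
        self.servers = servers
        self.load = {name: 0 for name in servers}   # outstanding jobs
        names = list(servers)
        self.top = [names[i % len(names)] for i in range(top_size)]
        self.bottom = [names[i % len(names)] for i in range(bottom_size)]

    def assign(self, job_id):
        h1 = int(hashlib.sha256(job_id.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(job_id.encode()).hexdigest(), 16)
        # two candidate slots, one per level of the table
        candidates = {self.top[h1 % len(self.top)],
                      self.bottom[h2 % len(self.bottom)]}
        # pick the candidate with the most spare capacity per unit load
        best = max(candidates,
                   key=lambda s: self.servers[s] / (1 + self.load[s]))
        self.load[best] += 1
        return best

lb = LevelHashBalancer({"s1": 1.0, "s2": 2.0, "s3": 1.5})
print([lb.assign(f"job-{i}") for i in range(5)])
```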

Autonomous and Asynchronous Triggered Agent Exploratory Path-planning Via a Terrain Clutter-index using Reinforcement Learning

  • Kim, Min-Suk;Kim, Hwankuk
    • Journal of information and communication convergence engineering / Vol. 20, No. 3 / pp.181-188 / 2022
  • An intelligent distributed multi-agent system (IDMS) using reinforcement learning (RL) poses a challenging and intricate problem, in which one or more agents aim to achieve their specific goals (sub-goals and a final goal) while moving through a complex and cluttered environment. The IDMS environment provides a cumulative reward for each action based on the policy of the learning process. Most actions involve interacting with the given IDMS environment, which therefore provides the following elements: a starting agent state, multiple obstacles, agent goals, and a clutter index. The reward is also reflected by the RL-based agents, which can move randomly or intelligently to reach their respective goals, improving the agents' learning performance. We extend different cases of intelligent multi-agent systems from our previous works: (a) a proposed environment-clutter-based index for agent sub-goal selection and an analysis of its effect, and (b) a newly proposed RL reward scheme based on the environmental clutter index, used to identify and analyze the prerequisites and conditions for improving the overall system.
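
Illustrative only, since the paper's reward formula is not given here: a clutter-index-based reward shaping of the kind the abstract describes could look like the following sketch, where every symbol, weight, and bonus value is an assumption.

```python
# Hypothetical reward shaping: each step pays a base cost, cluttered
# cells cost more, and sub-goal/goal arrival pays a bonus. Not the
# paper's formula; all constants are illustrative assumptions.
def step_reward(cell_clutter, reached_subgoal, reached_goal,
                step_cost=-0.01, clutter_weight=0.1):
    reward = step_cost - clutter_weight * cell_clutter
    if reached_subgoal:
        reward += 0.5
    if reached_goal:
        reward += 1.0
    return reward

print(step_reward(cell_clutter=0.8, reached_subgoal=False, reached_goal=False))
```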

Reinforcement learning multi-agent using unsupervised learning in a distributed cloud environment

  • Gu, Seo-Yeon;Moon, Seok-Jae;Park, Byung-Joon
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 2 / pp.192-198 / 2022
  • Companies build and use their own data analysis systems in the distributed cloud according to their business characteristics. However, as businesses and data types become more complex and diverse, the demand for more efficient analytics has increased. In response to these demands, in this paper we propose an unsupervised-learning-based data analysis agent to which reinforcement learning is applied for effective data analysis. The proposed agent consists of a reinforcement learning processing manager module and an unsupervised learning manager module. These two modules configure agents with k-means clustering on multiple nodes and then perform distributed training on multiple data sets. This enables data analysis in a relatively short time compared to conventional systems that analyze large-scale data in one batch.
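
A minimal sketch of the distributed k-means pattern described above, using scikit-learn as a stand-in (the agent's module structure is not shown): each node clusters its own data partition, and the per-node centroids are merged by a second clustering pass.

```python
# Per-node k-means followed by a merge pass over the pooled centroids.
# An illustration of the distributed pattern, not the paper's agent code.
import numpy as np
from sklearn.cluster import KMeans

def local_kmeans(partition, k):
    """Run k-means on one node's data partition; return its centroids."""
    return KMeans(n_clusters=k, n_init=10).fit(partition).cluster_centers_

def merge_centroids(all_centroids, k):
    """Cluster the pooled per-node centroids into k global centroids."""
    pooled = np.vstack(all_centroids)
    return KMeans(n_clusters=k, n_init=10).fit(pooled).cluster_centers_

rng = np.random.default_rng(0)
partitions = [rng.normal(loc=c, size=(100, 2)) for c in (0.0, 5.0, 10.0)]
global_centroids = merge_centroids([local_kmeans(p, 3) for p in partitions], 3)
```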

FedGCD: Federated Learning Algorithm with GNN based Community Detection for Heterogeneous Data

  • Wooseok Shin;Jitae Shin
    • 인터넷정보학회논문지 / Vol. 24, No. 6 / pp.1-11 / 2023
  • Federated learning (FL) is a groundbreaking machine learning paradigm that allows multiple participants to collaboratively train models in a cloud environment, all while maintaining the privacy of their raw data. This approach is invaluable in applications involving sensitive or geographically distributed data. However, one of the challenges in FL is dealing with heterogeneous and non-independent and identically distributed (non-IID) data across participants, which can result in suboptimal model performance compared to traditional machine learning methods. To tackle this, we introduce FedGCD, a novel FL algorithm that employs Graph Neural Network (GNN)-based community detection to enhance model convergence in federated settings. In our experiments, FedGCD consistently outperformed existing FL algorithms in various scenarios: for instance, in a non-IID environment, it achieved an accuracy of 0.9113, a precision of 0.8798, and an F1-score of 0.8972. In a semi-IID setting, it demonstrated the highest accuracy at 0.9315 and an impressive F1-score of 0.9312. We also introduce a new metric, nonIIDness, to quantitatively measure the degree of data heterogeneity. Our results indicate that FedGCD not only addresses the challenges of data heterogeneity and non-IIDness but also sets new benchmarks for FL algorithms. The community detection approach adopted in FedGCD has broader implications, suggesting that it could be adapted for other distributed machine learning scenarios, thereby improving model performance and convergence across a range of applications.
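
The paper's exact definition of nonIIDness is not reproduced here. Purely as a hedged illustration, heterogeneity across client label distributions could be scored as in the sketch below, where the total-variation formulation is an assumption: 0 means every client matches the global label distribution, and larger values mean more skew.

```python
# Hypothetical heterogeneity score: mean total-variation distance between
# each client's label distribution and the global one. An illustrative
# stand-in for the paper's nonIIDness metric, not its definition.
import numpy as np

def non_iidness(client_label_counts):
    """client_label_counts: array of shape (clients, classes)."""
    counts = np.asarray(client_label_counts, dtype=float)
    client_dist = counts / counts.sum(axis=1, keepdims=True)
    global_dist = counts.sum(axis=0) / counts.sum()
    tv = 0.5 * np.abs(client_dist - global_dist).sum(axis=1)
    return tv.mean()

iid = [[50, 50], [50, 50]]
skewed = [[100, 0], [0, 100]]
print(non_iidness(iid), non_iidness(skewed))   # 0.0 and 0.5
```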