• Title/Summary/Keyword: Distributed Training


Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology
    • /
    • v.11 no.4
    • /
    • pp.71-79
    • /
    • 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Performing machine learning workloads in a Kubernetes-based cloud environment, in which the computing framework and storage are separated, allows resources to be allocated effectively, but delays can occur because IO must be performed over the network. In this paper, we propose a distributed in-memory caching technique to improve the performance of machine learning workloads performed in such an environment. In particular, we propose a new method of precaching the data required by machine learning workloads into the distributed in-memory cache, taking into account Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.

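
The precaching idea in the abstract above can be sketched as follows: before the training stage of a pipeline runs, the data it will read is pulled from slow remote storage into an in-memory cache, so training-time reads hit local memory instead of the network. All class and function names here are illustrative assumptions, not the paper's implementation.

```python
class InMemoryCache:
    """A minimal stand-in for one node of a distributed in-memory cache."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def contains(self, key):
        return key in self._store

def fetch_from_remote_storage(shard_id):
    # Placeholder for a network read (e.g. from object storage); in
    # practice this is the slow path the cache is meant to hide.
    return [shard_id * 10 + i for i in range(4)]

def precache(cache, shard_ids):
    """Prefetch the shards a downstream training step will need."""
    for shard_id in shard_ids:
        if not cache.contains(shard_id):
            cache.put(shard_id, fetch_from_remote_storage(shard_id))

def read_shard(cache, shard_id):
    """Training-time read: hit the cache first, fall back to storage."""
    if cache.contains(shard_id):
        return cache.get(shard_id)              # fast path: local memory
    data = fetch_from_remote_storage(shard_id)  # slow path: network IO
    cache.put(shard_id, data)
    return data

cache = InMemoryCache()
precache(cache, shard_ids=[0, 1, 2])   # done while earlier pipeline stages run
batch = read_shard(cache, 1)           # served from memory during training
```

In a Kubeflow Pipelines setting, the `precache` step would run as its own pipeline component ahead of the training component.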

The Parallel ANN(Artificial Neural Network) Simulator using Mobile Agent (이동 에이전트를 이용한 병렬 인공신경망 시뮬레이터)

  • Cho, Yong-Man;Kang, Tae-Won
    • The KIPS Transactions:PartB
    • /
    • v.13B no.6 s.109
    • /
    • pp.615-624
    • /
    • 2006
  • The objective of this paper is to implement a parallel multi-layer ANN (Artificial Neural Network) simulator based on a mobile agent system that executes in parallel in a virtual parallel distributed computing environment. At the parallelization level, a multi-layer neural network can be parallelized by training session, training data, layer, node, and weight. In this study, we developed and evaluated a simulator that parallelizes the ANN at the training-session and training-data levels, because these levels generate relatively little network traffic. The results verify that the parallelization speedup is about 3.3 times for training-session and training-data parallelization. The significance of this paper is that the performance of ANN execution on a virtual parallel computer is similar to that of ANN execution on an existing supercomputer. Therefore, we believe the virtual parallel computer can be considerably helpful in developing neural networks, because it reduces the extra training time required.
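
The training-data parallelization described above can be sketched in miniature: each "worker" computes the gradient of the loss on its own shard of the training data, and the shards' gradients are averaged into a single update. The toy 1-D linear model and learning rate below are illustrative assumptions; the parallel workers are simulated sequentially, since the point is the split-compute-average structure.

```python
def shard_gradient(w, shard):
    """Mean gradient of the squared error 0.5*(w*x - y)^2 over one shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def parallel_step(w, shards, lr=0.1):
    """One synchronous update: each worker's gradient, then the average."""
    grads = [shard_gradient(w, s) for s in shards]   # one gradient per worker
    avg = sum(grads) / len(grads)                    # combine the updates
    return w - lr * avg

# Data generated from y = 2x, split across two workers.
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]
shards = [data[:2], data[2:]]

w = 0.0
for _ in range(200):
    w = parallel_step(w, shards)
# w converges toward the true coefficient 2.0
```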

A General Distributed Deep Learning Platform: A Review of Apache SINGA

  • Lee, Chonho;Wang, Wei;Zhang, Meihui;Ooi, Beng Chin
    • Communications of the Korean Institute of Information Scientists and Engineers
    • /
    • v.34 no.3
    • /
    • pp.31-34
    • /
    • 2016
  • This article reviews Apache SINGA, a general distributed deep learning (DL) platform. The system components and architecture are presented, as well as how to configure and run SINGA for different types of distributed training using model/data partitioning. In addition, its features and performance are compared with those of other popular DL tools.

Effects of Hospice Volunteer Training on Spiritual Well-Being (호스피스 자원봉사자 교육이 영적 안녕에 미치는 효과)

  • Min, Sun;Jeong, Gyeong-In;Ju, Ri-Ae
    • Korean Journal of Hospice Care
    • /
    • v.3 no.2
    • /
    • pp.12-18
    • /
    • 2003
  • The purpose of this study was to promote the popularization of hospice services by providing information about the influence of hospice training on participants. We compared pre-training and post-training differences using a questionnaire. This study involved 59 volunteers participating in hospice training held by a hospice center located in K-city. The questionnaire was composed of 41 items: 21 items of general information and 20 items on spiritual well-being. We applied Choi's translated version (1990) of the scale originally developed by Paloutzian and Ellison (1982) to assess changes in participants' spiritual well-being scores. Participants were asked to fill out the questionnaire before and after the hospice training. The data were analyzed by frequency and paired t-test. The results were as follows: there was a significant difference in participants' spiritual well-being scores. Compared with pre-training (3.51), spiritual well-being scores improved post-training (3.69) (t=-2.45, p<.05). The results of this study indicate that hospice training improves participants' spiritual well-being scores. In conclusion, hospice training should be popularized in the near future.

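
The pre/post comparison above uses a paired t-test. A minimal illustration of how that statistic is computed from per-participant score pairs, using made-up toy scores (not the study's data):

```python
import math
from statistics import mean, stdev

# Toy pre- and post-training scores for the same six participants.
pre  = [3.2, 3.6, 3.4, 3.5, 3.7, 3.3]
post = [3.5, 3.8, 3.6, 3.6, 3.9, 3.6]

# Paired t-test: work with the per-participant differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
# |t| beyond the critical value for n-1 degrees of freedom indicates a
# significant change in the paired means.
```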

Energy-efficient data transmission technique for wireless sensor networks based on DSC and virtual MIMO

  • Singh, Manish Kumar;Amin, Syed Intekhab
    • ETRI Journal
    • /
    • v.42 no.3
    • /
    • pp.341-350
    • /
    • 2020
  • In a wireless sensor network (WSN), the data transmission technique based on the cooperative multiple-input multiple-output (CMIMO) scheme reduces the energy consumption of sensor nodes quite effectively by utilizing the space-time block coding scheme. However, in networks with high node density, the scheme is ineffective due to the high degree of correlated data. Therefore, to enhance the energy efficiency in high node density WSNs, we implemented the distributed source coding (DSC) with the virtual multiple-input multiple-output (MIMO) data transmission technique in the WSNs. The DSC-MIMO first compresses redundant source data using the DSC and then sends it to a virtual MIMO link. The results reveal that, in the DSC-MIMO scheme, energy consumption is lower than that in the CMIMO technique; it is also lower in the DSC single-input single-output (SISO) scheme, compared to that in the SISO technique at various code rates, compression rates, and training overhead factors. The results also indicate that the energy consumption per bit is directly proportional to the velocity and training overhead factor in all the energy saving schemes.
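
The reason DSC before (virtual) MIMO saves energy can be sketched as a back-of-the-envelope model: DSC removes correlated redundancy, so fewer bits cross the radio link, at the cost of a training overhead for the MIMO channel. The energy figure, compression ratio, and overhead form below are illustrative assumptions, not the paper's derivation.

```python
def transmit_energy(bits, energy_per_bit):
    """Radio energy for sending a given number of bits (joules)."""
    return bits * energy_per_bit

def dsc_mimo_energy(source_bits, compression_ratio, energy_per_bit,
                    training_overhead=0.1):
    """Energy after DSC compression, inflated by MIMO training overhead."""
    payload = source_bits * compression_ratio        # DSC-compressed bits
    return transmit_energy(payload, energy_per_bit) * (1 + training_overhead)

E_plain = transmit_energy(1_000_000, 50e-9)        # uncompressed transmission
E_dsc   = dsc_mimo_energy(1_000_000, 0.4, 50e-9)   # 60% redundancy removed
# E_dsc is lower despite the training overhead, because far fewer bits are sent.
```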

A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Wu, Shengbin;Wang, Yibai
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.441-452
    • /
    • 2021
  • Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, the goal of multi-task learning is to simultaneously build models for multiple related tasks and to recover their shared underlying structure. However, traditional federated multi-task learning models not only place strict requirements on the data distribution, but also demand large amounts of computation and converge slowly, which has hindered their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust task-similarity learning according to the distribution of federated node data. The proposed model has a general framework for finding optimal solutions, which can be used to handle various data types. Experiments show that our model achieves the best results on different datasets. Notably, our model still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm is able to converge to a local optimal solution within a limited number of training iterations.
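
The rank constraint on the task weight vectors can be sketched as a projection step: after local updates, the server stacks the per-task weights into a matrix and truncates its SVD to rank r, tying related tasks to a shared low-rank structure. This is a generic low-rank projection under assumed toy dimensions, not the paper's optimization procedure.

```python
import numpy as np

def low_rank_project(W, r):
    """Project a tasks-by-features weight matrix W onto rank <= r."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[r:] = 0.0                       # keep only the r largest singular values
    return U @ np.diag(s) @ Vt

# Four tasks, five features; pretend local training produced these weights.
rng = np.random.default_rng(0)
W_local = rng.normal(size=(4, 5))     # one row of weights per task
W_shared = low_rank_project(W_local, r=2)
# W_shared has rank <= 2: all four tasks now live in a 2-dim weight subspace.
```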

Entrepreneurship and Training Programs for Young Entrepreneurs in the New Era: An Empirical Study from Indonesia

  • MUSLIM, Abdul;NADIROH, Nadiroh;ARINI, Dewi Eka
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.10 no.1
    • /
    • pp.169-179
    • /
    • 2023
  • This study aims to determine the factors that influence training programs in increasing entrepreneurial success, as a new model for developing entrepreneurship training in a new era. It is intended to provide suggestions for building an entrepreneurship training model for Beginner Young Entrepreneurs (BYE), organized by the Ministry of Youth and Sports of Indonesia. The study used a quantitative method, collecting data through a Google Forms questionnaire distributed via a WhatsApp group. It employs a sample of 358 BYE training participants from 2017-2020, and the data were processed using Amos SEM software to analyze the factors that influence entrepreneurial success. The results showed that entrepreneurial motivation partially mediates the effect of training on the success of BYE participants. Furthermore, the key factor for increasing entrepreneurial motivation is challenging young people to start businesses. This study recommends that BYE program policymakers build a training model that draws on many practical case studies to increase motivation, an important mediator of entrepreneurial success. Meanwhile, to boost the morale of training participants, it is necessary to add significant real challenges for participants to start entrepreneurship. Moreover, future studies should add other independent variables, such as personality.

Development on AR-Based Operator Training Simulator(OTS) for Chemical Process Capable of Multi-Collaboration (다중협업이 가능한 AR 기반 화학공정 운전원 교육 시뮬레이터(OTS-Simulator) 개발)

  • Lee, Jun-Seo;Ma, Byung-Chol;An, Su-Bin
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.1
    • /
    • pp.22-30
    • /
    • 2022
  • In order to prevent chemical accidents caused by human error, a chemical accident prevention and response training program using advanced technology was developed. After designing a virtual process based on a previously built pilot plant, chemical accident response contents were developed. A part of the pilot facility was remodeled for content realization and given a remote control function. In addition, a DCS program that can control facilities in a virtual environment was developed, and an operator training simulator (OTS) for chemical processes that can respond to virtual chemical accidents was developed in conjunction with AR. Through this, trainees can build operating skills by directly operating the equipment, and by responding to virtual chemical accidents, they can develop emergency response capabilities. If next-generation OTSs like the one in this study are widely distributed in the chemical industry, they are expected to greatly contribute to the prevention of chemical accidents caused by human error.

The Application of Distributed Synthetic Environment Data to a Military Simulation (분포형 합성환경자료의 군사시뮬레이션 적용)

  • Cho, Nae-Hyun;Park, Jong-Chul;Kim, Man-Kyu
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.4
    • /
    • pp.235-247
    • /
    • 2010
  • Environmental factors are very important in war game models supporting military training. Most war game models in the Korean armed forces apply the same weather conditions to all operation areas; as a result, they fail to produce high-fidelity simulation results. For this reason, this study develops techniques for a high-fidelity war game that can apply distributed synthetic atmospheric environment modeling data to a military simulation. The main technique developed in this study applies regionally distributed precipitation data to the 2D-GIS-based Simplified Detection Probability Model (SDPM) developed for this study. In doing so, this study shows that diversely distributed local weather conditions can be applied to a military simulation depending on the model resolution (from theater level to engineering level), the use (from training model to analytical model), and the description level (from corps level to battalion level).

Systematic Research on Privacy-Preserving Distributed Machine Learning (프라이버시를 보호하는 분산 기계 학습 연구 동향)

  • Min Seob Lee;Young Ah Shin;Ji Young Chun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.76-90
    • /
    • 2024
  • Although artificial intelligence (AI) can be utilized in various domains such as smart cities and healthcare, its use is limited by concerns about the exposure of personal and sensitive information. In response, the concept of distributed machine learning has emerged, wherein learning occurs locally before a global model is trained, mitigating the concentration of data on a central server. However, carrying out the overall learning phase collaboratively among multiple participants still poses threats to data privacy. In this paper, we systematically analyze recent trends in privacy protection within the realm of distributed machine learning, considering factors such as the presence of a central server, the distribution environment of the training datasets, and performance variations among participants. In particular, we focus on key distributed machine learning techniques, including horizontal federated learning, vertical federated learning, and swarm learning. We examine the privacy protection mechanisms within these techniques and explore potential directions for future research.
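
Of the techniques surveyed above, horizontal federated learning is the most widely used: clients share a model architecture, train on their own rows of data, and a server averages their weights (weighted by sample count) without ever seeing the raw data. A minimal FedAvg-style aggregation sketch, with toy weights and client sizes as illustrative assumptions:

```python
def federated_average(client_weights, client_sizes):
    """Sample-count-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += (n / total) * w[j]   # weight by share of the data
    return global_w

# Two clients with different amounts of local data.
w_a, n_a = [1.0, 2.0], 100     # client A's locally trained weights
w_b, n_b = [3.0, 4.0], 300     # client B holds 3x as much data
w_global = federated_average([w_a, w_b], [n_a, n_b])
# w_global = [2.5, 3.5]: pulled toward the larger client's weights.
```

Only the weight vectors cross the network; each client's training examples stay local, which is the privacy property the survey examines.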