• Title/Summary/Keyword: Parallel-SGD


A survey on parallel training algorithms for deep neural networks (심층 신경망 병렬 학습 방법 연구 동향)

  • Yook, Dongsuk; Lee, Hyowon; Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea / v.39 no.6 / pp.505-514 / 2020
  • Since a large amount of training data is typically needed to train Deep Neural Networks (DNNs), a parallel training approach is required. The Stochastic Gradient Descent (SGD) algorithm is one of the most widely used methods for training DNNs. However, because SGD is an inherently sequential process, some form of approximation scheme is needed to parallelize it. In this paper, we review various efforts to parallelize the SGD algorithm, and analyze the computational overhead, the communication overhead, and the effects of the approximations.
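The abstract notes that SGD is inherently sequential and must be approximated to run in parallel. As a minimal sketch of one commonly surveyed approximation, synchronous data-parallel SGD with gradient averaging (an all-reduce-style step), the snippet below trains a toy linear model across simulated workers; the names and values (num_workers, lr, the toy problem) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of synchronous data-parallel SGD: each worker computes a gradient on
# its own data shard, the gradients are averaged (an all-reduce-style step),
# and every replica applies the same update. Toy problem, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data: y = X @ w_true + noise
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(1024, 2))
y = X @ w_true + 0.01 * rng.normal(size=1024)

num_workers = 4
lr = 0.1
shards_X = np.array_split(X, num_workers)   # each worker's data shard
shards_y = np.array_split(y, num_workers)

w = np.zeros(2)                              # replicated model parameters
for step in range(100):
    # Each worker computes the gradient of the squared loss on its shard.
    grads = []
    for Xi, yi in zip(shards_X, shards_y):
        residual = Xi @ w - yi
        grads.append(Xi.T @ residual / len(yi))
    # "All-reduce": average the workers' gradients so every replica
    # sees the same aggregated gradient.
    g = np.mean(grads, axis=0)
    w -= lr * g                              # identical update on every replica

print("learned w:", w)                       # approaches w_true
```

In a real distributed setting, the averaging step is where the communication overhead analyzed in the survey arises; asynchronous variants trade that synchronization cost for staleness in the gradients.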

High-quality data collection for machine learning using block chain (블록체인을 활용한 양질의 기계학습용 데이터 수집 방안 연구)

  • Kim, Youngrang; Woo, Junghoon; Lee, Jaehwan; Shin, Ji Sun
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.1 / pp.13-19 / 2019
  • The accuracy of machine learning is greatly affected by the amount and the quality of the training data. Collecting training data from the existing Web carries the risk of gathering data unrelated to the actual learning task, and it is impossible to guarantee the transparency of the data. In this paper, we propose a method in which data is collected directly and in parallel by the blocks of a blockchain structure, and the data collected by each block is compared with the data held by the other blocks so that only good data is selected. In the proposed system, the blocks share data with one another through the chain of blocks and use the All-reduce structure of Parallel-SGD to select only high-quality data, by comparison with the other blocks' data, to construct a training data set. Also, to verify the performance of the proposed architecture, we confirm on an existing benchmark data set that only the original images, and none of the modulated images, are selected as good data.
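The abstract describes blocks comparing their collected data with other blocks' data, reusing the All-reduce communication pattern of Parallel-SGD, to keep only good items. The exact selection rule is not given in the abstract; the sketch below is one hypothetical peer-comparison filter in which an item is kept only when a majority of blocks hold a byte-identical copy (detected by hashing). Every name here (digest, select_good_data, the quorum rule) is an illustrative assumption, not the paper's method.

```python
# Hypothetical peer-comparison filter: keep only items confirmed by a
# majority of blocks, using content hashes to compare items across blocks.
import hashlib
from collections import Counter

def digest(item: bytes) -> str:
    """Content fingerprint used to compare items across blocks."""
    return hashlib.sha256(item).hexdigest()

def select_good_data(blocks: list[list[bytes]]) -> list[bytes]:
    """Keep items whose fingerprint appears in a majority of blocks."""
    counts = Counter()
    for block in blocks:
        # Count each fingerprint at most once per block.
        counts.update(set(digest(item) for item in block))
    quorum = len(blocks) // 2 + 1
    good = {h for h, c in counts.items() if c >= quorum}
    seen, selected = set(), []
    for block in blocks:
        for item in block:
            h = digest(item)
            if h in good and h not in seen:
                seen.add(h)
                selected.append(item)
    return selected

# Three blocks collected the same original image; one block also holds a
# modulated copy that the other blocks do not confirm.
original = b"original-image-bytes"
blocks = [[original], [original, b"modulated-image-bytes"], [original]]
print(len(select_good_data(blocks)))  # 1 -> only the original survives
```

Under such a rule, a modulated copy held by a single block fails the quorum check, which mirrors the benchmark test described in the abstract, where only the original image should be selected from among the altered ones.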