• Title/Summary/Keyword: Federated Learning

Search results: 55

Design of Block Codes for Distributed Learning in VR/AR Transmission

  • Seo-Hee Hwang;Si-Yeon Pak;Jin-Ho Chung;Daehwan Kim;Yongwan Kim
    • Journal of Information and Communication Convergence Engineering, v.21 no.4, pp.300-305, 2023
  • Audience reactions to remote virtual performances must be compressed before being transmitted to the server. The server, which aggregates these data for group insights, requires a distribution code for the transfer. Recently, distributed learning algorithms such as federated learning have gained attention as alternatives that satisfy both information-security and efficiency requirements. In distributed learning, no individual user has access to the complete information, and the objective is to achieve a learning effect similar to that achievable with the complete information. It is therefore important to distribute interdependent information among users and to aggregate this information after training. In this paper, we present a new extension technique for minimal codes that generates a new minimal code, with a different length and Hamming weight, from the product of an arbitrary vector and a given minimal code. The proposed technique can thus generate minimal codes with previously unknown parameters. We also present a scenario in which these combined methods can be applied.
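
As background for the minimal-code construction mentioned above, the following sketch enumerates the codewords of a small binary linear code from its generator matrix and checks the standard minimality condition (no nonzero codeword's support contains the support of a different nonzero codeword). The [7,3] simplex code and the helper names are illustrative assumptions; the paper's vector-product extension itself is not reproduced here.

```python
import numpy as np
from itertools import product

def codewords(G):
    """All codewords of the binary linear code generated by the k x n matrix G."""
    k, _ = G.shape
    return [np.mod(np.array(m) @ G, 2) for m in product([0, 1], repeat=k)]

def is_minimal(code):
    """A binary code is minimal if no nonzero codeword's support contains
    the support of a different nonzero codeword."""
    nonzero = [c for c in code if c.any()]
    for a in nonzero:
        for b in nonzero:
            if not np.array_equal(a, b) and np.all(b <= a):  # supp(b) contained in supp(a)
                return False
    return True

# The [7,3] binary simplex code (every nonzero codeword has weight 4) is a
# classic example of a minimal code.
G = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
print(is_minimal(codewords(G)))  # expected: True
```

The brute-force check is only practical for small dimensions; it is meant to make the minimality property concrete, not to scale.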

Efficient distributed consensus optimization based on patterns and groups for federated learning (연합학습을 위한 패턴 및 그룹 기반 효율적인 분산 합의 최적화)

  • Kang, Seung Ju;Chun, Ji Young;Noh, Geontae;Jeong, Ik Rae
    • Journal of Internet Computing and Services, v.23 no.4, pp.73-85, 2022
  • In the era of the fourth industrial revolution, where automation and connectivity are maximized by artificial intelligence, the importance of collecting and utilizing data for model updates is increasing. To build a model with artificial intelligence technology, data usually has to be gathered in one place so that the model can be updated, but this can infringe on users' privacy. In this paper, we introduce federated learning, a distributed machine learning method that can update models cooperatively without directly sharing the distributed stored data, and review work on optimizing distributed consensus among participants without a central server. In addition, we propose a pattern- and group-based distributed consensus optimization algorithm that generates patterns and groups based on the Kirkman Triple System and performs updates and communication in parallel. This algorithm guarantees more privacy than existing distributed consensus optimization algorithms and reduces the communication time until the model converges.
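
To make the pattern-and-group idea concrete, here is a small illustration under my own assumptions, not the paper's algorithm: the resolvable Kirkman triple system on 9 participants serves as a communication schedule, each round's triples run in parallel and average their local parameters, and every pair of participants shares a group exactly once across the rounds.

```python
import numpy as np

# A Kirkman triple system on 9 participants: four rounds (parallel classes) of
# disjoint triples in which every pair of participants shares a triple exactly once.
ROUNDS = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],
    [(0, 3, 6), (1, 4, 7), (2, 5, 8)],
    [(0, 4, 8), (1, 5, 6), (2, 3, 7)],
    [(0, 5, 7), (1, 3, 8), (2, 4, 6)],
]

rng = np.random.default_rng(0)
params = rng.normal(size=(9, 4))  # toy local model parameters, one row per participant

for groups in ROUNDS:
    for triple in groups:          # triples within a round are independent -> parallelizable
        idx = list(triple)
        params[idx] = params[idx].mean(axis=0)  # serverless in-group averaging

print(np.allclose(params, params[0]))  # all participants reach the same consensus model
```

In this toy run the schedule already reaches exact consensus, and no participant ever exchanges raw data, only parameters within its current triple.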

Edge Computing Model based on Federated Learning for COVID-19 Clinical Outcome Prediction in the 5G Era

  • Ruochen Huang;Zhiyuan Wei;Wei Feng;Yong Li;Changwei Zhang;Chen Qiu;Mingkai Chen
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.4, pp.826-842, 2024
  • As 5G and AI continue to develop, their adoption in the healthcare industry has surged, while the COVID-19 pandemic has posed immense challenges to the global health system. This study proposes an edge computing model based on federated learning (FL) for predicting the clinical outcomes of COVID-19 patients during hospitalization. The model aims to address the challenges posed by the pandemic, such as the need for sophisticated predictive models, privacy concerns, and the non-IID nature of COVID-19 data. It uses the FATE framework, known for its privacy-preserving technologies, to enhance predictive precision while ensuring data privacy and effectively managing data heterogeneity. The model's ability to generalize across diverse datasets and its adaptability to real-world clinical settings are supported by the use of SHAP values, which streamline the training process by identifying influential features and thus reduce computational overhead without compromising predictive precision. The study demonstrates that the proposed model achieves precision comparable to specific machine learning models when dataset sizes are identical and surpasses traditional models when larger training data volumes are employed. Performance improves further when the model is trained on datasets from diverse nodes, leading to superior generalization, especially in scenarios with insufficient node features. The integration of FL with edge computing contributes significantly to reliable prediction of COVID-19 patient outcomes with greater privacy. The research contributes to healthcare technology by providing a practical solution for early intervention and personalized treatment plans, leading to improved patient outcomes and efficient resource allocation during public health crises.
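
A small sketch of the "rank features, keep the most influential ones, then train on the reduced set" step described above. The paper computes importance with SHAP values inside the FATE framework on clinical data; here a synthetic dataset and scikit-learn's impurity-based importances stand in for both, so every name and number below is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical tabular data (the paper's COVID-19 data is not public here).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Rank features by importance (the paper uses SHAP values; impurity importance is a stand-in).
ranker = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top_k = np.argsort(ranker.feature_importances_)[::-1][:10]

# Retrain on the reduced feature set to cut per-client computation before federation.
reduced = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top_k], y_tr)
print("full-feature accuracy:", ranker.score(X_te, y_te))
print("top-10-feature accuracy:", reduced.score(X_te[:, top_k], y_te))
```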

A Conceptual Architecture for Ethic-Friendly AI

  • Oktian, Yustus-Eko;Brian, Stanley;Lee, Sang-Gon
    • Journal of the Korea Society of Computer and Information, v.27 no.4, pp.9-17, 2022
  • State-of-the-art AI systems pose many ethical issues, ranging from massive data collection to bias in algorithms. In response, this paper proposes a more ethic-friendly AI architecture that combines Federated Learning (FL) and blockchain. We discuss the importance of each issue and provide requirements for an ethical AI system, showing how our solutions can achieve more ethical paradigms. By committing to our design, adopters can deliver AI services more ethically.
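
One way the FL-plus-blockchain combination is often illustrated, sketched here under my own assumptions rather than as the paper's design, is to anchor a digest of every client update on a tamper-evident ledger so that training contributions can be audited without exposing any raw data.

```python
import hashlib, json, time

class UpdateLedger:
    """Toy hash-chained ledger that records digests of FL model updates for auditability.
    Illustrative only; the paper describes a conceptual architecture, not this code."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "payload": "genesis", "ts": 0.0}]

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def record_update(self, client_id, update_bytes):
        block = {
            "index": len(self.chain),
            "prev": self._hash(self.chain[-1]),          # link to the previous block
            "payload": {"client": client_id,
                        "update_digest": hashlib.sha256(update_bytes).hexdigest()},
            "ts": time.time(),
        }
        self.chain.append(block)
        return block

ledger = UpdateLedger()
ledger.record_update("client-7", b"serialized model delta")
print(len(ledger.chain))  # 2: genesis block plus one recorded update
```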

A Study on Federated Learning of Non-IID MNIST Data (Non-IID MNIST 데이터의 연합학습 연구)

  • Joowon Lee;Joonil Bang;Jongwoo Baek;Hwajong Kim
    • Proceedings of the Korean Society of Computer Information Conference, 2023.07a, pp.533-534, 2023
  • In this paper, we assume data owners (clients) that hold unevenly distributed (Non-IID) data and apply federated learning so that deep learning can be performed without directly moving the original data between the data owners. To construct the experimental environment, the MNIST handwritten digit dataset was partitioned so that each client held a large number of samples of only a single digit, and the partitions were distributed to the clients. When a handwriting classification model was trained with federated learning, its accuracy was 85.5%, versus 90.2% for a centrally trained model, i.e., roughly 95% of the centralized performance. This shows that the performance drop under federated learning is small and that federated learning can replace centralized learning in such special situations.
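
A minimal sketch of the kind of Non-IID partition used in the experiment above: each of ten clients receives only a single digit class. scikit-learn's small 8x8 digits dataset stands in for MNIST so the snippet runs offline; the names here are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.datasets import load_digits

# Extreme label-skew split: client c holds only samples of digit c.
X, y = load_digits(return_X_y=True)
client_indices = {c: np.where(y == c)[0] for c in range(10)}

for c, idx in client_indices.items():
    print(f"client {c}: {len(idx)} samples, labels present: {np.unique(y[idx])}")
```

A federated round would then train a local copy of the classifier on each client's shard and average the resulting weights on the server, as in standard FedAvg.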


Federated Learning-Internet of Underwater Things (연합 학습기반 수중 사물 인터넷)

  • Shrutika Sinha;G., Pradeep Reddy;Soo-Hyun Park
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.140-142, 2023
  • Federated learning (FL) is a new paradigm in machine learning (ML) that enables multiple devices to collaboratively train a shared ML model without sharing their local data. FL is well suited to applications where data is sensitive or difficult to transmit in large volumes, or where collaborative learning is required. The Internet of Underwater Things (IoUT) is a network of underwater devices that collect and exchange data. This data can be used for a variety of applications, such as monitoring water quality, detecting marine life, and tracking underwater vehicles. However, the harsh underwater environment makes it difficult to collect and transmit data in large volumes. FL can address these challenges by enabling devices to train a shared ML model without having to transmit their data to a central server, which helps protect the privacy of the data and improves the efficiency of training. In this context, this paper provides a brief overview of Fed-IoUT, highlighting its various applications, challenges, and opportunities.
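
Since the abstract above describes the core FL idea of training a shared model without moving local data, here is a generic sketch of the FedAvg aggregation step (size-weighted parameter averaging). It is textbook FedAvg under toy assumptions, not anything specific to the IoUT setting.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (the core FedAvg aggregation step).
    client_weights: one list of numpy arrays per client, all with matching shapes."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy example: three clients, each with a two-tensor model and different data sizes.
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[100, 50, 25])
print([p.shape for p in global_model])  # [(4, 3), (3,)]
```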

Study on Evaluation Method of Task-Specific Adaptive Differential Privacy Mechanism in Federated Learning Environment (연합 학습 환경에서의 Task-Specific Adaptive Differential Privacy 메커니즘 평가 방안 연구)

  • Assem Utaliyeva;Yoon-Ho Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.34 no.1, pp.143-156, 2024
  • Federated Learning (FL) has emerged as a potent methodology for decentralized model training across multiple collaborators, eliminating the need for data sharing. Although FL is lauded for its capacity to preserve data privacy, it is not impervious to various types of privacy attacks. Differential Privacy (DP), recognized as the gold standard among privacy-preservation techniques, is widely employed to counteract these vulnerabilities. This paper makes a specific contribution by applying an existing task-specific adaptive DP mechanism to the FL environment. Our comprehensive analysis evaluates the impact of this mechanism on the performance of a shared global model, with particular attention to varying data distribution and partitioning schemes. This study deepens the understanding of the complex interplay between privacy and utility in FL, providing a validated methodology for securing data without compromising performance.
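
For context, the sketch below shows the standard clip-and-add-Gaussian-noise step that DP mechanisms in FL build on. The paper's contribution is a task-specific *adaptive* way of setting such parameters, which is not reproduced here, so the fixed `clip_norm` and `noise_multiplier` values are purely illustrative.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update to a bounded L2 norm, then add calibrated Gaussian noise
    (the standard Gaussian mechanism; fixed parameters here are illustrative only)."""
    rng = rng or np.random.default_rng(0)
    flat = np.concatenate([p.ravel() for p in update])
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
    clipped = [p * scale for p in update]
    return [p + rng.normal(0.0, noise_multiplier * clip_norm, size=p.shape) for p in clipped]

# Example: sanitize a two-tensor model update before sending it to the aggregator.
noisy_update = dp_sanitize([np.ones((3, 2)), np.ones(2)])
print([p.shape for p in noisy_update])  # [(3, 2), (2,)]
```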

Gradient Leakage Defense Strategy based on Discrete Cosine Transform (이산 코사인 변환 기반 Gradient Leakage 방어 기법)

  • Park, Jae-hun;Kim, Kwang-su
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.2-4, 2021
  • In distributed machine learning systems, sharing gradients was long considered safe because the original training data are not shared. However, recent studies have found that a malicious attacker can completely restore the original training data from shared gradients. The Gradient Leakage Attack is a technique that restores the original training data by exploiting this vulnerability. In this study, we present an image transformation method based on the Discrete Cosine Transform to defend against the Gradient Leakage Attack in the federated learning setting, where training is performed on local devices and gradients are shared with the server. Experiments show that, with our image transformation method, the original data cannot be completely restored by the Gradient Leakage Attack.
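
A rough sketch of moving a training image into the DCT domain before local training, in the spirit of the defense described above. The low-frequency masking, the 28x28 random stand-in image, and the parameter choices are my assumptions, not the authors' exact transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_transform(image, keep=16):
    """Apply a 2-D DCT and keep only the top-left (low-frequency) keep x keep block,
    one simple way to train on a transformed representation instead of raw pixels."""
    coeffs = dctn(image, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return coeffs * mask

image = np.random.default_rng(0).random((28, 28))   # stand-in for a training image
transformed = dct_transform(image)
reconstruction = idctn(transformed, norm="ortho")    # the most a leaked gradient could reveal
print(np.abs(image - reconstruction).mean())         # residual error vs. the original pixels
```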


5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning (DDPG 및 연합학습 기반 5G 네트워크 자원 할당과 트래픽 예측)

  • Seok-Woo Park;Oh-Sung Lee;In-Ho Ra
    • Smart Media Journal, v.13 no.4, pp.33-48, 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to the key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, by dynamically optimizing network slicing and resource allocation with machine learning (ML) and deep learning (DL) techniques. The proposed methodology uses prediction models for network traffic and resource allocation, and employs Federated Learning (FL) to optimize network bandwidth and latency while enhancing privacy and security. Specifically, this paper covers the implementation of algorithms and models such as Random Forest and LSTM in detail, thereby presenting methodologies for automating and adding intelligence to 5G network operations. Finally, the performance gains achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for optimizing network slicing and resource management are proposed for various industrial applications.
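
A tiny sketch of sliding-window traffic forecasting of the kind described above. The paper also covers LSTM models and DDPG-based allocation; a Random Forest on lagged samples of a synthetic traffic trace keeps this illustration small, and every value below is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic traffic trace with a daily pattern (288 five-minute slots per day) plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

# Build lagged windows: predict the next sample from the previous 12.
window = 12
X = np.stack([traffic[i:i + window] for i in range(traffic.size - window)])
y = traffic[window:]

split = int(0.8 * len(y))
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:split], y[:split])
print("test MAE:", np.abs(model.predict(X[split:]) - y[split:]).mean())
```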

AI Model Repository for Realizing IoT On-device AI (IoT 온디바이스 AI 실현을 위한 AI 모델 레포지토리)

  • Lee, Seokjun;Choe, Chungjae;Sung, Nakmyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.597-599, 2022
  • When an IoT device performs on-device AI, it must use various AI models selectively according to the target service and surrounding environment. An AI model may also be updated through additional training, such as federated learning, or by adopting an improved technique. Hence, for successful on-device AI, an IoT device should be able to selectively acquire various AI models or update a previous AI model to a new one. In this paper, we propose an AI model repository to tackle this issue. The repository supports AI model registration, searching, management, and deployment, along with a dashboard for practical use. We implemented it using Node.js and Vue.js to verify that it works as intended.
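
The real system above is a Node.js/Vue.js service; the in-memory Python sketch below only illustrates what its register and search operations might look like, with hypothetical class and field names rather than the actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelEntry:
    name: str
    version: str
    target_device: str
    uri: str
    tags: list = field(default_factory=list)

class ModelRepository:
    """Minimal in-memory sketch of register/search for AI models (illustrative names only)."""
    def __init__(self):
        self._entries = []

    def register(self, entry: ModelEntry):
        self._entries.append(entry)

    def search(self, target_device=None, tag=None):
        return [e for e in self._entries
                if (target_device is None or e.target_device == target_device)
                and (tag is None or tag in e.tags)]

repo = ModelRepository()
repo.register(ModelEntry("anomaly-detector", "1.2.0", "raspberry-pi-4",
                         "https://models.example/anomaly/1.2.0", ["federated"]))
print(repo.search(target_device="raspberry-pi-4"))
```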
