Title/Summary/Keyword: Privacy-Preserving Machine Learning

Privacy-Preserving in the Context of Data Mining and Deep Learning

  • Altalhi, Amjaad;AL-Saedi, Maram;Alsuwat, Hatim;Alsuwat, Emad
    • International Journal of Computer Science & Network Security, v.21 no.6, pp.137-142, 2021
  • Machine-learning systems have proven their worth in industries such as healthcare and banking by helping to extract valuable inferences. Data in these crucial sectors is traditionally stored in databases distributed across multiple environments, which makes accessing and combining it difficult. Moreover, these sources contain sensitive information, so the data cannot be shared outside their source organization. Privacy-Preserving Machine Learning (PPML) addresses this challenge with cryptographic techniques, enabling knowledge discovery while maintaining data privacy. This paper discusses privacy preservation in data mining, which has a wide variety of uses including business intelligence, medical diagnostic systems, image processing, web search, and scientific discovery. It also discusses privacy preservation in deep learning (DL), which achieves exceptional accuracy in image detection, speech recognition, and natural language processing compared with other branches of machine learning, and which can detect errors in the data as well as unauthorized access to systems or data insertion by unauthorized persons.

Systematic Research on Privacy-Preserving Distributed Machine Learning (프라이버시를 보호하는 분산 기계 학습 연구 동향)

  • Min Seob Lee;Young Ah Shin;Ji Young Chun
    • The Transactions of the Korea Information Processing Society, v.13 no.2, pp.76-90, 2024
  • Although artificial intelligence (AI) can be applied in domains such as smart cities and healthcare, its use is limited by concerns about the exposure of personal and sensitive information. In response, distributed machine learning has emerged, in which training occurs locally before a global model is formed, mitigating the concentration of data on a central server. However, the collaborative learning phase among multiple participants still poses threats to data privacy. In this paper, we systematically analyze recent trends in privacy protection for distributed machine learning, considering factors such as the presence of a central server, the distribution environment of the training datasets, and performance variations among participants. In particular, we focus on key distributed machine learning techniques, including horizontal federated learning, vertical federated learning, and swarm learning; we examine the privacy protection mechanisms within these techniques and explore directions for future research.
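As a concrete illustration of the horizontal federated learning setting surveyed above, here is a minimal FedAvg-style sketch in NumPy: each client takes a local gradient step, and the server averages the client models weighted by local dataset size. The toy linear model, the random client data, and the single-step local training are assumptions for illustration, not the protocols analyzed in the paper.

    import numpy as np

    # Toy horizontal FL: clients train locally; the server aggregates
    # parameters weighted by local dataset size (FedAvg-style).
    def local_step(w, X, y, lr=0.1):
        # One gradient step of linear least squares on a client's local data.
        grad = 2 * X.T @ (X @ w - y) / len(y)
        return w - lr * grad

    rng = np.random.default_rng(0)
    w_global = np.zeros(3)
    clients = [(rng.normal(size=(20, 3)), rng.normal(size=20))
               for _ in range(4)]

    for _ in range(50):                       # communication rounds
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_step(w_global, X, y))
            sizes.append(len(y))
        # Server: weighted average; raw data never leaves the clients.
        w_global = np.average(updates, axis=0, weights=sizes)

Vertical federated learning and swarm learning split features and coordination differently, but the same privacy question applies: what does the aggregation step reveal about each participant's data?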

Privacy-Preserving K-means Clustering using Homomorphic Encryption in a Multiple Clients Environment (다중 클라이언트 환경에서 동형 암호를 이용한 프라이버시 보장형 K-평균 클러스터링)

  • Kwon, Hee-Yong;Im, Jong-Hyuk;Lee, Mun-Kyu
    • The Journal of Korean Institute of Next Generation Computing, v.15 no.4, pp.7-17, 2019
  • Machine learning is one of the most accurate approaches to predicting and analyzing various phenomena. K-means clustering is a machine learning technique that partitions given data into clusters of similar items. Because analysis based on more data generally yields better performance, K-means clustering can be performed in a model with a server that computes the centroids of the clusters and a number of clients that provide data to the server. However, in this model, if the clients' data contain private information, the server can infringe the clients' privacy. In this paper, to solve this problem in a multi-client setting, we propose a privacy-preserving K-means clustering method that conceals private information using homomorphic encryption.
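The abstract does not specify the scheme's API, so the sketch below illustrates the core idea with the additively homomorphic Paillier scheme from the `phe` library (an assumption for illustration, not the paper's construction): clients encrypt their points, the server sums ciphertexts per cluster, and only the key holder decrypts the aggregates needed to update centroids.

    import numpy as np
    from phe import paillier  # additively homomorphic Paillier scheme

    pub, priv = paillier.generate_paillier_keypair(n_length=1024)

    # Clients: encrypt each 1-D data point before sending it to the server.
    points = [1.0, 1.2, 5.0, 5.5, 5.1]
    enc_points = [pub.encrypt(x) for x in points]

    # Server: given cluster assignments, sum ciphertexts per cluster.
    # Adding Paillier ciphertexts yields an encryption of the sum.
    assign = [0, 0, 1, 1, 1]
    k = 2
    enc_sums = [pub.encrypt(0) for _ in range(k)]
    counts = [0] * k
    for c, a in zip(enc_points, assign):
        enc_sums[a] = enc_sums[a] + c
        counts[a] += 1

    # Key holder: decrypt only the aggregates to obtain new centroids.
    centroids = [priv.decrypt(s) / n for s, n in zip(enc_sums, counts)]
    print(centroids)  # approximately [1.1, 5.2]

A full protocol must also compute the cluster assignments without revealing the points; this sketch glosses over that step and shows only the encrypted aggregation.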

Secure Training Support Vector Machine with Partial Sensitive Part

  • Park, Saerom
    • Journal of the Korea Society of Computer and Information, v.26 no.4, pp.1-9, 2021
  • In this paper, we propose a training algorithm for a support vector machine (SVM) with a sensitive variable. Although machine learning models enable automatic decision making in real-world applications, regulations prohibit sensitive information from being used, in order to protect privacy. In particular, protection of legally protected attributes such as race, gender, and disability is compulsory. We present an efficient least squares SVM (LSSVM) training algorithm that uses fully homomorphic encryption (FHE) to protect a partially sensitive attribute. Our framework posits that the data owner holds both non-sensitive attributes and a sensitive attribute, while the machine learning service provider (MLSP) receives the non-sensitive attributes and an encrypted sensitive attribute. As a result, the data owner can obtain the encrypted model parameters without exposing sensitive information to the MLSP. In the inference phase, both the non-sensitive attributes and the sensitive attribute are encrypted, and all computations are conducted in the encrypted domain. Through experiments on real data, we show that the proposed method implements a privacy-preserving sensitive LSSVM with FHE whose performance is comparable to the original LSSVM algorithm. In addition, we demonstrate that the efficient sensitive LSSVM with FHE significantly reduces computational cost with only a small degradation in performance.
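For background, LSSVM training reduces to solving a single linear system, which is what makes it attractive under FHE, where comparisons and iterative convergence checks are expensive. The sketch below shows a standard plain-domain LSSVM classifier in NumPy; it illustrates the underlying algorithm, not the paper's encrypted variant, and the linear kernel and regularization value are assumptions.

    import numpy as np

    def lssvm_train(X, y, gamma=10.0):
        # LSSVM training: solve the (n+1)x(n+1) linear system
        # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
        n = len(y)
        K = X @ X.T                      # linear kernel matrix
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]           # bias b, coefficients alpha

    def lssvm_predict(X_train, b, alpha, X_new):
        return np.sign(X_new @ X_train.T @ alpha + b)

    # Tiny sanity check on linearly separable data.
    X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([-1.0, -1.0, 1.0, 1.0])
    b, alpha = lssvm_train(X, y)
    print(lssvm_predict(X, b, alpha, X))  # [-1. -1.  1.  1.]

Because the whole computation is one kernel evaluation plus one linear solve, it maps to the addition and multiplication operations that FHE schemes natively support.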

A Study on Privacy Preserving Machine Learning (프라이버시 보존 머신러닝의 연구 동향)

  • Han, Woorim;Lee, Younghan;Jun, Sohee;Cho, Yungi;Paek, Yunheung
    • Proceedings of the Korea Information Processing Society Conference, 2021.11a, pp.924-926, 2021
  • AI (Artificial Intelligence) is being utilized in various fields and services to make human life more convenient. Unfortunately, today's ML (Machine Learning) systems have many security vulnerabilities, raising privacy concerns because some AI models require individuals' private data for training. Such concerns have led to interest in ML systems that can preserve the privacy of individuals' data. This paper introduces the latest research on attacks that infringe data privacy and on the corresponding defense techniques.

TPMP: A Privacy-Preserving Technique for DNN Prediction Using ARM TrustZone (TPMP : ARM TrustZone을 활용한 DNN 추론 과정의 기밀성 보장 기술)

  • Song, Suhyeon;Park, Seonghwan;Kwon, Donghyun
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.3, pp.487-499, 2022
  • Machine learning techniques such as deep learning have been widely adopted in recent years. Recently, deep learning has been performed inside trusted execution environments (TEEs) such as ARM TrustZone to improve security on edge and embedded devices with limited computing resources. However, the secure memory available to a TEE is small, so large DNN models cannot run inside it directly. To mitigate this problem, we propose TPMP, which uses the limited memory of the TEE efficiently through DNN model partitioning. Through optimized memory scheduling, TPMP achieves high confidentiality for the DNN by running models in the TEE that could not be executed with existing memory scheduling methods. TPMP requires a similar amount of computational resources to previous methodologies.
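The abstract does not detail TPMP's scheduling algorithm, so the sketch below only illustrates the general idea of partitioned execution under a memory budget: consecutive layers are grouped into partitions that fit the budget and are loaded into secure memory one partition at a time. The layer sizes, the budget, and the greedy grouping rule are all hypothetical.

    # Toy illustration of DNN partitioning under a TEE memory budget.
    def partition_layers(layer_sizes, budget):
        # Greedily group consecutive layers so each group fits the budget.
        parts, cur, cur_size = [], [], 0
        for i, size in enumerate(layer_sizes):
            if size > budget:
                raise ValueError(f"layer {i} alone exceeds the budget")
            if cur_size + size > budget:
                parts.append(cur)
                cur, cur_size = [], 0
            cur.append(i)
            cur_size += size
        if cur:
            parts.append(cur)
        return parts

    layer_sizes = [3, 5, 2, 6, 4, 1]   # hypothetical per-layer memory (MB)
    budget = 8                         # hypothetical secure-memory budget
    for p, layers in enumerate(partition_layers(layer_sizes, budget)):
        # In a real system: load weights into the TEE, run, then free them.
        print(f"partition {p}: layers {layers}")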

Edge Computing Model based on Federated Learning for COVID-19 Clinical Outcome Prediction in the 5G Era

  • Ruochen Huang;Zhiyuan Wei;Wei Feng;Yong Li;Changwei Zhang;Chen Qiu;Mingkai Chen
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.4, pp.826-842, 2024
  • As 5G and AI continue to develop, their adoption in the healthcare industry has surged. The COVID-19 pandemic has posed immense challenges to the global health system. This study proposes an edge computing model based on federated learning (FL) for predicting the clinical outcomes of COVID-19 patients during hospitalization. The model addresses challenges posed by the pandemic, such as the need for sophisticated predictive models, privacy concerns, and the non-IID nature of COVID-19 data. It uses the FATE framework, known for its privacy-preserving technologies, to enhance predictive precision while ensuring data privacy and managing data heterogeneity. SHAP values streamline the training process by identifying influential features, reducing computational overhead without compromising predictive precision, and they highlight the model's ability to generalize across diverse datasets and adapt to real-world clinical settings. The study demonstrates that the proposed model matches the precision of specific machine learning models when dataset sizes are identical and surpasses traditional models when larger training volumes are employed. Performance improves further when the model is trained on datasets from diverse nodes, yielding superior generalization, especially in scenarios where individual nodes have insufficient features. The integration of FL with edge computing contributes significantly to reliable prediction of COVID-19 patient outcomes with greater privacy. The research contributes to healthcare technology by providing a practical solution for early intervention and personalized treatment plans, leading to improved patient outcomes and efficient resource allocation during public health crises.
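The abstract mentions using SHAP values to identify influential features and shrink the training workload. The sketch below shows one plausible version of that step with the `shap` library and a scikit-learn model; the model choice, the synthetic data, and the top-k selection rule are assumptions for illustration, not the paper's pipeline.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                   # stand-in clinical features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(float)  # outcome driven by 2 features

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # Mean |SHAP value| per feature ranks each feature's influence.
    sv = shap.TreeExplainer(model).shap_values(X)    # (n_samples, n_features)
    importance = np.abs(sv).mean(axis=0)
    top_k = sorted(np.argsort(importance)[::-1][:4].tolist())
    print("features kept for training:", top_k)

    X_reduced = X[:, top_k]  # the FL model would then train on these columns

Dropping low-attribution features before federated training reduces both per-round computation and communication, which is presumably why the paper reports lower overhead without a precision loss.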

A Survey of Privacy-Preserving Classification Methods (프라이버시 보존 분류 방법 동향 분석)

  • Kim, Pyung;Moon, Su-Bin;Jo, Eun-Ji;Lee, Younho
    • Review of KIISC, v.27 no.3, pp.33-41, 2017
  • Classification algorithms in machine learning are used in a variety of application services such as medical diagnosis, genomic data interpretation, spam detection, face recognition, and credit scoring. In such services, the classification algorithm is often trained on data containing users' sensitive information, and the classification results are often tied to user privacy as well. Therefore, when the owner of the training data, the user of the application service, and the service provider reside in different security domains, privacy problems can arise. This paper introduces privacy-preserving classification protocols (PPCP), which make it possible to provide classification services while addressing this problem. Specifically, we analyze the privacy requirements of PPCP and introduce the cryptographic primitives that existing studies use for privacy protection. Finally, we survey and analyze existing work on privacy-preserving classification protocols designed with those cryptographic primitives.

The Impact of Various Degrees of Composite Minimax Approximate Polynomials on Convolutional Neural Networks over Fully Homomorphic Encryption (다양한 차수의 합성 미니맥스 근사 다항식이 완전 동형 암호 상에서의 컨볼루션 신경망 네트워크에 미치는 영향)

  • Junghyun Lee;Jong-Seon No
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.861-868, 2023
  • Fully homomorphic encryption is one of the key technologies for providing deep learning-based data analysis while maintaining security. Because of the constraints on operations over fully homomorphically encrypted data, the non-arithmetic functions used in deep learning must be approximated by polynomials. Until now, the degrees of the approximation polynomials constructed from composite minimax polynomials have been set uniformly across layers, which limits effective network design over fully homomorphic encryption. This study proves theoretically that assigning a different degree of composite minimax approximation polynomial to each layer poses no issue for inference on convolutional neural networks.
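To make the setting concrete, the sketch below approximates ReLU on [-1, 1] with Chebyshev interpolants at several degrees. This is a stand-in for the paper's composite minimax construction (which is not reproduced here); the point is that under FHE only such polynomials, not the exact ReLU, can be evaluated, and different layers could use different degrees.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    relu = lambda x: np.maximum(x, 0.0)

    # Chebyshev interpolants of ReLU on [-1, 1]: a simple proxy for the
    # (composite) minimax approximations evaluated under FHE.
    xs = np.linspace(-1, 1, 2001)
    for deg in (7, 15, 31):            # per-layer degrees could differ
        p = C.Chebyshev.interpolate(relu, deg, domain=[-1, 1])
        err = np.max(np.abs(p(xs) - relu(xs)))
        print(f"degree {deg:2d}: max |error| ~ {err:.4f}")

Higher degrees shrink the approximation error but consume more multiplicative depth in the ciphertext, which is exactly the per-layer trade-off the paper's result makes safe to tune.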

A STUDY OF USING CKKS HOMOMORPHIC ENCRYPTION OVER THE LAYERS OF A CONVOLUTIONAL NEURAL NETWORK MODEL

  • Castaneda, Sebastian Soler;Nam, Kevin;Joo, Youyeon;Paek, Yunheung
    • Proceedings of the Korea Information Processing Society Conference, 2022.05a, pp.161-164, 2022
  • Homomorphic Encryption (HE) schemes have recently been gaining ground as a reliable way to preserve users' information, since they allow user data to be stored and operated on in encrypted form. In addition, several neural network models combined with HE schemes have been developed as prospective tools for privacy-preserving machine learning. These works demonstrated that it is possible to match the accuracy of non-encrypted models, although there is always a trade-off in computation time. In this work, we evaluate the implementation of CKKS HE scheme operations over the layers of a LeNet5 convolutional inference model; owing to the limitations of the evaluation environment, the scope of this work does not extend to developing a complete encrypted LeNet5 model. The evaluation was performed on the MNIST dataset using the Microsoft SEAL (MSEAL) open-source homomorphic encryption library through its Python port (PyFhel). We also describe the behavior of the encrypted model, the limitations we faced, and related and future work.
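Since the abstract names PyFhel, the sketch below shows a minimal CKKS round trip with one ciphertext-ciphertext multiplication, the kind of primitive an encrypted convolution layer is assembled from. The parameters (ring size, scale, moduli) are illustrative assumptions, and the exact calls may differ across Pyfhel versions.

    import numpy as np
    from Pyfhel import Pyfhel

    # Minimal CKKS round trip with Pyfhel (parameters are illustrative).
    HE = Pyfhel()
    HE.contextGen(scheme='ckks', n=2**14, scale=2**30,
                  qi_sizes=[60, 30, 30, 30, 60])
    HE.keyGen()
    HE.relinKeyGen()

    x = np.array([0.5, -1.25, 2.0, 0.0], dtype=np.float64)
    w = np.array([1.5, 0.5, -1.0, 2.0], dtype=np.float64)

    ctxt_x = HE.encryptFrac(x)
    ctxt_w = HE.encryptFrac(w)

    # Element-wise product under encryption: the basic operation from which
    # encrypted convolutions are built (plus rotations for summation).
    ctxt_prod = ctxt_x * ctxt_w
    ~ctxt_prod                      # relinearize after multiplication
    HE.rescale_to_next(ctxt_prod)   # manage the CKKS scale

    print(HE.decryptFrac(ctxt_prod)[:4])  # ~[0.75, -0.625, -2.0, 0.0]

Each multiplication consumes one level of the modulus chain, which is why full encrypted models must budget their multiplicative depth layer by layer, the trade-off this paper measures.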