Title/Summary/Keyword: Data Privacy Model

A Study of Split Learning Model to Protect Privacy (프라이버시 침해에 대응하는 분할 학습 모델 연구)

  • Ryu, Jihyeon; Won, Dongho; Lee, Youngsook
    • Convergence Security Journal / v.21 no.3 / pp.49-56 / 2021
  • Artificial intelligence is now regarded as an essential technology in our society, and privacy invasion by AI systems has become a serious problem. Split learning, proposed at MIT in 2019 for privacy protection, is a federated-learning-style technique in which no raw data is shared. In this study, we investigate a safe and accurate split learning model that applies well-known differential privacy mechanisms to manage data securely. We train SVHN and GTSRB on split learning models with 15 different differential privacy configurations and check whether training remains stable. Finally, by mounting a training-data extraction attack, we quantitatively derive, via MSE, the differential privacy budget needed to prevent the attack.
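
The core mechanism here is adding calibrated noise to the activations that cross the split. As a minimal sketch of the idea (not the authors' exact model: the layer sizes, the Laplace mechanism, and the clipping bound are all assumptions), the client side of a differentially private split learning model might look like this:

```python
import torch
import torch.nn as nn

class SplitClient(nn.Module):
    """Client half of a split learning model: the layers before the cut."""
    def __init__(self, epsilon: float = 1.0, sensitivity: float = 1.0):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.epsilon = epsilon          # privacy budget (assumed value)
        self.sensitivity = sensitivity  # activation clipping bound (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.front(x)
        # Clip so a sensitivity bound on the activations actually holds.
        h = torch.clamp(h, -self.sensitivity, self.sensitivity)
        # Laplace noise at the cut layer; only this noisy "smashed"
        # representation is ever sent to the server. The scale follows the
        # standard Laplace mechanism schematically; a rigorous budget over the
        # whole activation map needs composition accounting, which the paper
        # instead probes empirically via MSE.
        lap = torch.distributions.Laplace(0.0, self.sensitivity / self.epsilon)
        return h + lap.sample(h.shape)

client = SplitClient(epsilon=1.0)
smashed = client(torch.randn(8, 3, 32, 32))  # an SVHN-sized batch
print(smashed.shape)  # torch.Size([8, 16, 16, 16])
```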

Time Series Crime Prediction Using a Federated Machine Learning Model

  • Salam, Mustafa Abdul; Taha, Sanaa; Ramadan, Mohamed
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.119-130 / 2022
  • Crime is a common social problem that affects quality of life. As the number of crimes increases, it becomes necessary to build models that predict how many crimes may occur in a given period, identify the characteristics of a person who may commit a particular crime, and identify places where a particular crime may occur. Data privacy is the main challenge organizations face when building this type of predictive model. Federated learning (FL) is a promising approach that overcomes data security and privacy challenges, as it enables organizations to build a machine learning model on distributed datasets without sharing raw data or violating data privacy. In this paper, a federated long short-term memory (LSTM) model is proposed and compared with a traditional LSTM model. The proposed model is developed using TensorFlow Federated (TFF) and the Keras API to predict the number of crimes, and is applied to the Boston crime dataset. Its parameters are fine-tuned to obtain minimum loss and maximum accuracy. Compared with the traditional LSTM model, the federated LSTM model achieves lower loss and better accuracy, at the cost of higher training time.
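
The abstract names the stack concretely: a Keras LSTM trained with TensorFlow Federated. A minimal sketch of that combination follows; the window length, layer sizes, and optimizer are assumptions, and TFF API names vary somewhat across versions:

```python
import tensorflow as tf
import tensorflow_federated as tff

WINDOW = 12  # past time steps fed to the LSTM (assumed)

def build_keras_lstm():
    # Small LSTM regressor over a univariate crime-count series.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])

# Element spec of each client's tf.data.Dataset of (window, next_count) batches.
input_spec = (
    tf.TensorSpec(shape=(None, WINDOW, 1), dtype=tf.float32),
    tf.TensorSpec(shape=(None, 1), dtype=tf.float32),
)

def model_fn():
    # TFF requires a fresh Keras model to be built inside this function.
    return tff.learning.models.from_keras_model(
        build_keras_lstm(),
        input_spec=input_spec,
        loss=tf.keras.losses.MeanSquaredError(),
    )

# Weighted FedAvg: each client trains locally, the server averages the updates.
trainer = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
)
state = trainer.initialize()
# round = trainer.next(state, per_client_datasets)  # -> round.state, round.metrics
```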

A Study on Synthetic Data Generation Based Safe Differentially Private GAN (차분 프라이버시를 만족하는 안전한 GAN 기반 재현 데이터 생성 기술 연구)

  • Kang, Junyoung; Jeong, Sooyong; Hong, Dowon; Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.945-956 / 2020
  • Publishing data is essential for receiving high-quality services from many applications. However, if the original data is published as-is, sensitive information (political orientation, disease, etc.) may be revealed. Many studies have therefore proposed generating and publishing synthetic data instead of the original data to preserve privacy. Even so, simply generating and publishing synthetic data still carries a risk of privacy leakage through various attacks (linkage attacks, inference attacks, etc.). In this paper, to prevent such leakage of sensitive information, we propose a synthetic data generation algorithm that preserves privacy by applying differential privacy, the state-of-the-art privacy protection technique, to a GAN, which is drawing attention as a generative model for synthetic data. The generative model uses a CGAN for efficient learning of labeled data and applies Rényi differential privacy, a relaxation of differential privacy, in consideration of data utility. The utility of the generated data is validated and compared using various classifiers.
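
In a DP-GAN of this kind, the privacy guarantee is typically obtained by training the discriminator with DP-SGD, since only the discriminator sees real data; Opacus, for instance, tracks the budget with Rényi-DP accounting, which matches the paper's use of Rényi differential privacy. A toy sketch, where the architecture, dataset, and hyperparameters are assumptions rather than the paper's:

```python
import torch
import torch.nn as nn
from opacus import PrivacyEngine  # RDP-based accounting under the hood

# Toy conditional discriminator: input = (sample, one-hot label). Sizes assumed.
disc = nn.Sequential(nn.Linear(28 * 28 + 10, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

real = torch.utils.data.TensorDataset(torch.randn(512, 28 * 28), torch.randint(0, 10, (512,)))
loader = torch.utils.data.DataLoader(real, batch_size=64)

# Only the discriminator touches real data in a DP-GAN, so DP-SGD
# (per-sample gradient clipping + Gaussian noise) is applied to it alone.
engine = PrivacyEngine()
disc, opt, loader = engine.make_private(
    module=disc, optimizer=opt, data_loader=loader,
    noise_multiplier=1.1, max_grad_norm=1.0,  # assumed DP-SGD hyperparameters
)

bce = nn.BCEWithLogitsLoss()
for x, y in loader:
    cond = torch.nn.functional.one_hot(y, 10).float()
    opt.zero_grad()
    # Real-data half of the discriminator loss; the generator update (on fake
    # samples only) needs no DP treatment and is omitted here.
    loss = bce(disc(torch.cat([x, cond], dim=1)), torch.ones(len(x), 1))
    loss.backward()
    opt.step()  # clipped, noised gradient step
```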

Hybrid Recommendation Algorithm for User Satisfaction-oriented Privacy Model

  • Sun, Yinggang; Zhang, Hongguo; Zhang, Luogang; Ma, Chao; Huang, Hai; Zhan, Dongyang; Qu, Jiaxing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3419-3437 / 2022
  • Anonymization is an important technology for privacy protection when releasing data. Usually, before publishing data, the data publisher must anonymize the original data and then publish the anonymized result. However, for data publishers with little or no background in anonymization techniques, configuring appropriate parameters for data with different characteristics is difficult. To address this problem, this paper adds a resource pool of historical configuration schemes to the traditional anonymization process, from which configuration parameters can be recommended automatically. On this basis, a user satisfaction-oriented hybrid recommendation algorithm for privacy models is formed. The algorithm includes a forward recommendation process and a reverse recommendation process, which handle data anonymization for users with different levels of anonymization knowledge. It suits a wide population, providing a simpler, more efficient, and automated solution for data anonymization, reducing data-processing time and improving the quality of the anonymized data, which enhances data protection capabilities.
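
The abstract does not spell out the matching procedure, but recommending parameters from a pool of historical configurations can be sketched as a similarity lookup. Everything below (the dataset traits used, the scoring rule, the `HistoricalConfig` and `recommend_k` names) is hypothetical illustration, not the paper's algorithm:

```python
from dataclasses import dataclass

@dataclass
class HistoricalConfig:
    """One past anonymization job: dataset traits and the parameters that worked."""
    n_rows: int
    n_quasi_identifiers: int
    k: int                 # k-anonymity parameter chosen for that job
    satisfaction: float    # user satisfaction score recorded afterwards (0..1)

POOL = [  # hypothetical resource pool of historical configuration schemes
    HistoricalConfig(10_000, 4, k=5, satisfaction=0.9),
    HistoricalConfig(50_000, 8, k=10, satisfaction=0.8),
    HistoricalConfig(2_000, 3, k=3, satisfaction=0.7),
]

def recommend_k(n_rows: int, n_qi: int) -> int:
    """Forward recommendation: pick k from the most similar, best-rated past job."""
    def score(c: HistoricalConfig) -> float:
        distance = abs(c.n_rows - n_rows) / max(n_rows, 1) + abs(c.n_quasi_identifiers - n_qi)
        return distance - c.satisfaction  # prefer close, well-rated configurations
    return min(POOL, key=score).k

print(recommend_k(12_000, 4))  # -> 5
```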

Privacy Control Using GRBAC In An Extended Role-Based Access Control Model (확장된 역할기반 접근제어 모델에서 GRBAC을 이용한 프라이버시 제어)

  • Park Chong hwa; Kim Ji hong; Kim Dong kyoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.167-175 / 2005
  • Privacy enforcement has been one of the most important problems in the IT area. Privacy protection can be achieved by enforcing privacy policies within an organization's online and offline data processing systems. Traditional security models are more or less inappropriate for enforcing basic privacy requirements such as purpose binding. This paper proposes a new approach in which a privacy control model is derived by integrating an existing security model. To this end, we use an extended role-based access control model as the underlying security mechanism, which provides context-based access control by combining RBAC with domain-type enforcement. To implement the privacy control model we use GRBAC (Generalized Role-Based Access Control), which is expressive enough to deal with privacy preferences. A small hospital model is considered as an application of the model.
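
Purpose binding, the requirement the abstract singles out, means a request must state a purpose and that purpose must match the one the data was collected for, on top of the ordinary role check. A hypothetical sketch (the roles, actions, and `Permission` structure are illustrative, not the paper's GRBAC formalism):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Permission:
    role: str
    action: str
    purpose: str

POLICY = {  # illustrative hospital policy in the spirit of the case study
    Permission("doctor", "read", "treatment"),
    Permission("nurse", "read", "treatment"),
    Permission("billing_clerk", "read", "payment"),
}

def access_allowed(role: str, action: str, stated_purpose: str,
                   collection_purpose: str) -> bool:
    """Grant access only when the role/action/purpose triple is permitted AND
    the stated purpose matches the purpose the data was collected for."""
    return (Permission(role, action, stated_purpose) in POLICY
            and stated_purpose == collection_purpose)

# A doctor reading a record for treatment succeeds; the same read for
# marketing fails the purpose-binding check even though the role is allowed.
print(access_allowed("doctor", "read", "treatment", "treatment"))  # True
print(access_allowed("doctor", "read", "marketing", "treatment"))  # False
```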

A Study on Privacy Attitude and Protection Intent of MyData Users: The Effect of Privacy Cynicism (마이데이터 이용자의 프라이버시 태도와 보호의도에 관한 연구: 프라이버시 냉소주의의 영향)

  • Jung, Hae-Jin; Lee, Jin-Hyuk
    • Informatization Policy / v.29 no.2 / pp.37-65 / 2022
  • This article uses a structural equation model to analyze how the privacy attitudes of MyData users relate to privacy protection intentions through the four dimensions of privacy cynicism (distrust, uncertainty, powerlessness, and resignation). First, MyData users' internet skills had a statistically significant negative effect on 'resignation' among the privacy cynicism dimensions. Second, privacy risks have a positive effect on 'distrust' in MyData operators, 'uncertainty' about privacy control, and 'powerlessness'. Third, privacy concerns have a positive effect on the 'distrust' and 'uncertainty' dimensions and a negative effect on 'resignation'. Fourth, only 'resignation' showed a negative effect on privacy protection intention. Overall, MyData users' internet skills are a variable that can alleviate privacy cynicism, whereas privacy risks and privacy concerns reinforce it; within privacy cynicism, 'resignation' offsets privacy concerns and lowers privacy protection intentions.
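
The hypothesized paths can be written down directly in structural-equation form. A sketch using the `semopy` package follows; the variable names and the reduction of each construct to a single observed column are simplifying assumptions, not the authors' measurement model:

```python
import pandas as pd
from semopy import Model

# Path model sketched from the abstract: risks and concerns feed the cynicism
# dimensions, internet skills dampen resignation, and the cynicism dimensions
# (of which only resignation is expected to matter) feed protection intention.
DESC = """
distrust ~ privacy_risk + privacy_concern
uncertainty ~ privacy_risk + privacy_concern
powerlessness ~ privacy_risk
resignation ~ internet_skills + privacy_concern
protection_intention ~ distrust + uncertainty + powerlessness + resignation
"""

df = pd.read_csv("mydata_survey.csv")  # hypothetical survey file, one row per user
model = Model(DESC)
model.fit(df)
print(model.inspect())  # path estimates and p-values
```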

Case Study on Local Differential Privacy in Practice : Privacy Preserving Survey (로컬 차분 프라이버시 실제 적용 사례연구 : 프라이버시 보존형 설문조사)

  • Jeong, Sooyong; Hong, Dowon; Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.1 / pp.141-156 / 2020
  • Differential privacy, which is used to collect and analyze data while preserving privacy, has been widely applied in privacy-preserving data applications. In the local model of differential privacy, each user adds noise to his or her own data via randomized response before releasing it, so each user's privacy is preserved while the analyst can still compute useful statistics from the collected data. Local differential privacy has been used by global companies such as Google, Apple, and Microsoft to collect and analyze data from users. In this paper, we compare and analyze the local differential privacy methods used in practice, and then study the applicability of local differential privacy to practical survey and opinion-poll scenarios.
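
Randomized response, the mechanism behind the survey scenario, is easy to show end to end: each respondent perturbs their own answer locally, and the analyst debiases the aggregate. A self-contained sketch (the 30% true rate and p = 0.5 are arbitrary choices for the simulation):

```python
import math
import random

def randomized_response(truth: bool, p: float = 0.5) -> bool:
    """Classic randomized response: with probability p answer truthfully,
    otherwise answer uniformly at random. This satisfies epsilon-LDP with
    epsilon = ln((1+p)/(1-p)); p = 0.5 gives epsilon = ln(3)."""
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports: list, p: float = 0.5) -> float:
    """Debias the aggregate: E[report=True] = p*pi + (1-p)/2, solve for pi."""
    r = sum(reports) / len(reports)
    return (r - (1 - p) / 2) / p

# Simulate a sensitive yes/no survey question where 30% of respondents say yes.
random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(f"epsilon = {math.log(3):.2f}, estimated rate = {estimate_true_rate(reports):.3f}")
```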

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong; Namgyu Kim
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.165-184 / 2023
  • Deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has recently shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through fine-tuning. However, concerns have been raised that privacy may be violated in this process: the data owner's privacy may be violated when large amounts of data are handed to the model owner for fine-tuning, and conversely, if the model owner discloses the entire model to the data owner, the model's structure and weights are exposed, violating the privacy of the model. The concept of offsite tuning was recently proposed to fine-tune language models while protecting privacy in such situations, but that work has the limitation of not providing a concrete way to apply the methodology to text classification models. In this study, we propose a concrete method that applies offsite tuning with an additional classifier to protect both model and data privacy when performing multi-class fine-tuning on Korean documents. To evaluate the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms both the zero-shot model and the offsite model in classification accuracy.
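
Offsite tuning, as described, exchanges small trainable adapters plus a frozen lossy "emulator" instead of either the raw data or the full model. A conceptual sketch with the paper's extra classification head bolted on; all layer sizes and the two-layer emulator are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

D = 256  # hidden size (assumed)

class OffsiteClassifier(nn.Module):
    """Data-owner side of offsite tuning: the model owner ships small trainable
    adapters and a frozen, compressed emulator of its private middle layers;
    only the adapter weights ever travel back."""
    def __init__(self, emulator: nn.Module, num_classes: int = 5):
        super().__init__()
        self.adapter_in = nn.Linear(D, D)    # trainable, returned to model owner
        self.emulator = emulator             # frozen stand-in for the private middle
        self.adapter_out = nn.Linear(D, D)   # trainable, returned to model owner
        self.classifier = nn.Linear(D, num_classes)  # the paper's extra classifier
        for p in self.emulator.parameters():
            p.requires_grad = False          # never updated by the data owner

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.adapter_out(self.emulator(self.adapter_in(h))))

# e.g. a distilled 2-layer emulator standing in for a much deeper private stack
emulator = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))
model = OffsiteClassifier(emulator)
logits = model(torch.randn(4, D))  # 4 document embeddings -> 5 field logits
print(logits.shape)  # torch.Size([4, 5])
```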

A Privacy-aware Graph-based Access Control System for the Healthcare Domain

  • Tian, Yuan; Song, Biao; Hassan, M. Mehedi; Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.10 / pp.2708-2730 / 2012
  • The growing concern for the protection of personal information has made it critical to implement effective technologies for privacy and data management. Observing the limitations of existing approaches, we found an urgent need for a flexible, privacy-aware system able to meet privacy preservation needs at both the role level and the personal level. We propose a conceptual system that addresses these two requirements: a graph-based access control model to safeguard patient privacy. We present a case study from the healthcare field; while our model was tested in healthcare, it is generic and can be adapted to other fields. Proof-of-concept demos are provided to evaluate the efficacy of our system. Finally, based on hospital scenarios, we present experimental results demonstrating the performance of our system and compare them with existing privacy-aware systems. As a result, a high quality of medical care service is ensured while patient privacy is preserved.
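
A graph-based access decision of this general shape reduces to path reachability: role-level permissions are edges shared by everyone in a role, and personal-level preferences remove or add edges for one patient. A hypothetical sketch with `networkx` (the node naming scheme is invented for illustration):

```python
import networkx as nx

# Subjects, roles, and records are nodes; a directed path from subject to
# record means access is allowed.
G = nx.DiGraph()
G.add_edge("dr_lee", "role:doctor")             # role assignment
G.add_edge("role:doctor", "record:patient_42")  # role-level permission
G.add_edge("nurse_kim", "role:nurse")
G.add_edge("role:nurse", "record:patient_42")

# Personal-level override: patient 42 blocks the nurse role from their record.
G.remove_edge("role:nurse", "record:patient_42")

def can_access(subject: str, resource: str) -> bool:
    """Access is granted iff some directed path connects subject to resource."""
    return (G.has_node(subject) and G.has_node(resource)
            and nx.has_path(G, subject, resource))

print(can_access("dr_lee", "record:patient_42"))    # True
print(can_access("nurse_kim", "record:patient_42")) # False
```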

Mobile Payment Use in Light of Privacy Protection and Provider's Market Control

  • Mohammad Bakhsh; Hyein Jeong; Lingyu Zhao; One-Ki Daniel Lee
    • Asia Pacific Journal of Information Systems / v.31 no.3 / pp.257-276 / 2021
  • This study investigates the factors that facilitate or hinder people's use of mobile payment, drawing on theoretical perspectives on individuals' privacy protection motivation and perceived market conditions. Survey data (n = 200) were collected through a web-based platform and used to test a theoretical model. The results show that one's privacy protection power is formed by various individual and technological factors (perceived data exposure, self-efficacy, and response efficacy) and in turn determines his or her intention to use mobile payment. Moreover, the relationship between privacy protection power and mobile payment use is conditional on the service provider's perceived market control: when the provider's market control is perceived as high, people use mobile payment regardless of their privacy protection power, whereas when it is perceived as low, the decision depends on the degree of privacy protection power. The findings help explain why some people are more receptive to mobile payment than others.
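
The conditional relationship the abstract describes (the effect of privacy protection power depending on perceived market control) is a moderation effect, conventionally tested with an interaction term. A sketch on synthetic data, since the survey data is not public and all coefficients below are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Does perceived market control moderate the effect of privacy protection
# power on intention to use mobile payment? The interaction term carries
# the moderation effect described in the abstract.
rng = np.random.default_rng(42)
n = 200  # matching the study's sample size
df = pd.DataFrame({
    "protection_power": rng.normal(size=n),
    "market_control": rng.normal(size=n),
})
df["use_intention"] = (0.5 * df.protection_power
                       + 0.3 * df.market_control
                       - 0.4 * df.protection_power * df.market_control
                       + rng.normal(scale=0.5, size=n))

# `a * b` in the formula expands to a + b + a:b (main effects + interaction).
model = smf.ols("use_intention ~ protection_power * market_control", data=df).fit()
print(model.summary().tables[1])  # the interaction coefficient tests moderation
```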