• Title/Summary/Keyword: Differential privacy

Concealment of iris features based on artificial noises

  • Jiao, Wenming;Zhang, Heng;Zang, Qiyan;Xu, Weiwei;Zhang, Shuaiwei;Zhang, Jian;Li, Hongran
    • ETRI Journal / v.41 no.5 / pp.599-607 / 2019
  • Although iris recognition is considered one of the safest methods of biometric verification, studies have shown that iris features may be illegally used. To protect iris features and further improve the security of iris recognition and verification, this study applies the Gaussian and Laplace mechanisms of differential privacy to hide iris features. The efficiency of the algorithm and the image quality, evaluated with an image hashing algorithm, are used as indicators to compare the two mechanisms. The experimental results indicate that the security of an iris image can be significantly improved using differential privacy protection.
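
The paper does not publish code; the following is a minimal, hypothetical sketch of the two mechanisms it names, applied to a normalized iris feature vector. The sensitivity values, epsilon, and delta are illustrative assumptions, not the paper's parameters.

    import numpy as np

    def laplace_mechanism(features, epsilon, l1_sensitivity):
        """Add Laplace noise calibrated to the L1 sensitivity (epsilon-DP)."""
        scale = l1_sensitivity / epsilon
        return features + np.random.laplace(0.0, scale, size=features.shape)

    def gaussian_mechanism(features, epsilon, delta, l2_sensitivity):
        """Add Gaussian noise calibrated to the L2 sensitivity ((epsilon, delta)-DP, epsilon < 1)."""
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
        return features + np.random.normal(0.0, sigma, size=features.shape)

    # Illustrative use: a stand-in feature vector in [0, 1]^256, not real iris data.
    iris_features = np.random.rand(256)
    noisy_lap = laplace_mechanism(iris_features, epsilon=1.0, l1_sensitivity=1.0)
    noisy_gau = gaussian_mechanism(iris_features, epsilon=0.5, delta=1e-5, l2_sensitivity=1.0)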

Privacy-Preserving Method to Collect Health Data from Smartband

  • Moon, Su-Mee;Kim, Jong-Wook
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.113-121 / 2020
  • With the rapid development of information and communication technology (ICT), various sensors are being embedded in wearable devices. Consequently, these devices can continuously collect data, including health data, from individuals. The collected health data can be used not only for healthcare services but also for analyzing an individual's lifestyle by combining it with other external data, helping to make an individual's life more convenient and healthier. However, collecting health data may lead to privacy issues, since the data is personal and can reveal sensitive insights about the individual. Thus, in this paper, we present a method to collect an individual's health data from a smart band in a privacy-preserving manner. We leverage local differential privacy to achieve our goal. Additionally, we propose a way to find feature points from health data, which allows for an effective trade-off between the degree of privacy and accuracy. We carry out experiments to demonstrate the effectiveness of the proposed approach, and the results show that, with the proposed method, the error rate can be reduced by up to 77%.
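
A minimal sketch (not the authors' code) of the local-perturbation step: each user clips a bounded health reading and adds Laplace noise scaled to that range before transmission. The heart-rate range and epsilon are assumptions, and the paper's feature-point selection is omitted here.

    import numpy as np

    LOW, HIGH = 40.0, 200.0   # assumed valid heart-rate range (bpm)

    def ldp_perturb(value, epsilon):
        """Each user perturbs their own reading locally before sending it."""
        clipped = np.clip(value, LOW, HIGH)
        scale = (HIGH - LOW) / epsilon          # sensitivity = width of the range
        return clipped + np.random.laplace(0.0, scale)

    # The collector only ever sees perturbed values; the mean stays unbiased.
    true_rates = np.random.normal(75, 10, size=10_000)
    reported = np.array([ldp_perturb(v, epsilon=2.0) for v in true_rates])
    print(true_rates.mean(), reported.mean())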

Statistical disclosure control for public microdata: present and future (마이크로데이터 공표를 위한 통계적 노출제어 방법론 고찰)

  • Park, Min-Jeong;Kim, Hang J.
    • The Korean Journal of Applied Statistics / v.29 no.6 / pp.1041-1059 / 2016
  • The increasing demand from researchers and policy makers for microdata has also increased related privacy and security concerns. During the past two decades, a large volume of literature on statistical disclosure control (SDC) has been published in international journals. This review paper introduces relatively recent SDC approaches to the communities of Korean statisticians and statistical agencies. In addition to the traditional masking techniques (such as microaggregation and noise addition), we introduce an online analytic system, differential privacy, and synthetic data. For each approach, we highlight an application example, its pros and cons, and the underlying methodology, so that the paper can assist statistical agencies that seek a practical SDC approach.
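
To make the traditional masking techniques named above concrete, here is a small, generic illustration of microaggregation (replace each group of roughly k sorted records with its group mean) combined with additive noise. The grouping rule, group size, and noise level are illustrative, not taken from the paper.

    import numpy as np

    def microaggregate(x, k=5):
        """Sort values, form groups of k (the last group may be smaller), and replace each value by its group mean."""
        order = np.argsort(x)
        out = np.empty_like(x, dtype=float)
        for start in range(0, len(x), k):
            idx = order[start:start + k]
            out[idx] = x[idx].mean()
        return out

    def add_noise(x, rel_sd=0.1):
        """Classical noise addition: Gaussian noise proportional to the data's spread."""
        return x + np.random.normal(0.0, rel_sd * x.std(), size=x.shape)

    income = np.random.lognormal(mean=10, sigma=0.5, size=1000)   # synthetic microdata column
    masked = add_noise(microaggregate(income, k=5))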

An Enhanced Data Utility Framework for Privacy-Preserving Location Data Collection

  • Jong Wook Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.69-76 / 2024
  • Recent advances in sensor and mobile technologies have made it possible to collect user location data. This location information is used as a valuable asset in various industries, resulting in increased demand for location data collection and sharing. However, because location data contains sensitive user information, indiscriminate collection can lead to privacy issues. Recently, geo-indistinguishability (Geo-I), a variant of differential privacy, has been widely used to protect the privacy of location data. While Geo-I effectively protects users' locations, the perturbation it introduces reduces the utility of the collected location data. Therefore, this paper proposes a method that uses Geo-I to collect user location data effectively while maintaining data utility. The proposed method exploits the users' prior distribution to improve overall data utility while protecting accurate location information. Experimental results using real data show that the proposed method significantly improves the usefulness of the collected data compared to existing methods.
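
As a rough illustration of the Geo-I (planar Laplace) perturbation the paper builds on: the noise radius of the planar Laplace distribution follows Gamma(2, 1/epsilon) and the angle is uniform on the circle. Coordinates are treated as planar (e.g. meters), and the prior-based utility optimization proposed in the paper is not shown.

    import numpy as np

    def geo_indistinguishable(x, y, epsilon):
        """Planar Laplace mechanism: report a location perturbed around (x, y)."""
        theta = np.random.uniform(0.0, 2.0 * np.pi)           # direction, uniform
        r = np.random.gamma(shape=2.0, scale=1.0 / epsilon)   # radius ~ Gamma(2, 1/epsilon)
        return x + r * np.cos(theta), y + r * np.sin(theta)

    # epsilon acts as "privacy per unit distance": epsilon=0.01 per meter gives
    # perturbations on the order of a few hundred meters.
    reported = geo_indistinguishable(x=1250.0, y=730.0, epsilon=0.01)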

Performance Analysis of Perturbation-based Privacy Preserving Techniques: An Experimental Perspective

  • Ritu Ratra;Preeti Gulia;Nasib Singh Gill
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.81-88 / 2023
  • In the present scenario, enormous amounts of data are produced every second. These data also contain private information from sources including media platforms, the banking sector, finance, healthcare, and criminal histories. Data mining is a method for searching through and analyzing massive volumes of data to find usable information. Preserving personal data during data mining has become difficult, so privacy-preserving data mining (PPDM) is used to protect it. Data perturbation is one of several tactics used by PPDM as a data privacy protection mechanism: datasets are perturbed in order to preserve personal information, addressing both data accuracy and data privacy. This paper explores and compares several perturbation strategies that may be used to protect data privacy. For the experiments, two perturbation techniques based on random projection and principal component analysis were used: Improved Random Projection Perturbation (IRPP) and Enhanced Principal Component Analysis based Technique (EPCAT). The Naive Bayes classification algorithm is used for the data mining task, and the methods are assessed in terms of precision, run time, and accuracy. The random projection-based technique (IRPP) is found to be the best perturbation method under Naive Bayes classification for both the cardiovascular and hypothyroid datasets.
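
The paper's IRPP and EPCAT implementations are not public; the sketch below only shows the generic idea of random-projection perturbation followed by Naive Bayes classification, using scikit-learn and a synthetic dataset (both assumptions, not the paper's setup or data).

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Random projection: release X @ R.T instead of X, hiding the original attributes.
    rng = np.random.default_rng(0)
    k = 10                                          # reduced dimensionality (illustrative)
    R = rng.normal(size=(k, X.shape[1])) / np.sqrt(k)
    X_perturbed = X @ R.T

    X_tr, X_te, y_tr, y_te = train_test_split(X_perturbed, y, test_size=0.3, random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)
    print("accuracy on perturbed data:", clf.score(X_te, y_te))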

DRM-FL: A Decentralized and Randomized Mechanism for Privacy Protection in Cross-Silo Federated Learning Approach (DRM-FL: Cross-Silo Federated Learning 접근법의 프라이버시 보호를 위한 분산형 랜덤화 메커니즘)

  • Firdaus, Muhammad;Latt, Cho Nwe Zin;Aguilar, Mariz;Rhee, Kyung-Hyune
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.264-267 / 2022
  • Recently, federated learning (FL) has gained prominence as a viable approach for enhancing user privacy and data security by allowing collaborative multi-party model learning without exchanging sensitive data. Despite this, most present FL systems still depend on a centralized aggregator to generate a global model by gathering all submitted models from users, which can expose user privacy and create the risk of various threats from malicious users. To solve these issues, we propose a secure FL framework that employs differential privacy to counter membership inference attacks during the collaborative FL model training process and uses a blockchain to replace the centralized aggregator server.
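
The blockchain-based aggregation itself is not reproduced here; this is only an assumed sketch of the differential-privacy step in such a framework: each client clips its model update and adds Gaussian noise before sharing it, which is the standard way DP is used to blunt membership inference in FL. The clipping norm and noise multiplier are illustrative.

    import numpy as np

    def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
        """Clip the client's model update to a fixed L2 norm and add Gaussian noise."""
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
        return clipped + noise

    # A decentralized aggregator (the blockchain in the paper) would average these noisy updates.
    client_updates = [np.random.randn(1000) * 0.05 for _ in range(8)]
    global_update = np.mean([privatize_update(u) for u in client_updates], axis=0)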

Privacy Model Recommendation System Based on Data Feature Analysis

  • Seung Hwan Ryu;Yongki Hong;Gihyuk Ko;Heedong Yang;Jong Wan Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.81-92 / 2023
  • A privacy model is a technique that quantitatively restricts the possibility and degree of privacy breaches caused by privacy attacks. Representative models include k-anonymity, l-diversity, t-closeness, and differential privacy. While many privacy models have been studied, research on selecting the most suitable model for a given dataset has been relatively limited. In this study, we develop a system for recommending a suitable privacy model to prevent privacy breaches. To achieve this, we analyze the data features that need to be considered when selecting a model, such as data type, distribution, frequency, and range. Based on privacy model background knowledge that captures the relationships between data features and models, we recommend the most appropriate model. Finally, we validate the feasibility and usefulness of the approach by implementing a prototype recommendation system.
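
The paper's background-knowledge base is not public; the toy function below only illustrates the general shape of such a recommender, mapping a few dataset features to one of the models named above. The rules themselves are purely hypothetical and not the paper's logic.

    def recommend_privacy_model(data_type, has_sensitive_attribute, released_as, record_count):
        """Hypothetical rules mapping dataset features to a privacy model."""
        if released_as == "aggregate_statistics":
            return "differential privacy"   # noise on query results / statistics
        if data_type == "categorical" and has_sensitive_attribute:
            return "l-diversity"            # guards against sensitive-value homogeneity
        if data_type == "numerical" and has_sensitive_attribute:
            return "t-closeness"            # controls the sensitive-value distribution
        return "k-anonymity"                # default: quasi-identifier generalization

    print(recommend_privacy_model("categorical", True, "microdata", 50_000))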

Collecting Health Data from Wearable Devices by Leveraging Salient Features in a Privacy-Preserving Manner

  • Moon, Su-Mee;Kim, Jong-Wook
    • Journal of the Korea Society of Computer and Information / v.25 no.10 / pp.59-67 / 2020
  • With the development of wearable devices, an individual's health status can be checked in real time and risks can be predicted. For example, an application has been developed to detect an emergency situation of a patient with heart disease and contact a guardian through the analysis of health data such as heart rate and electrocardiogram. However, leaked health data can cause serious harm, as it relates directly to a person's life. Therefore, a method to protect personal information is essential when collecting health data, and this study proposes a method of collecting data while protecting the personal information of the data owner through local differential privacy (LDP). A previous study introduced a technique that searches for a fixed number k of feature points and transmits only those feature points, rather than all data, to the data collector. This study then shows how performance can be improved by up to 75% using an algorithm that finds the optimal number of feature points k.
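
As a rough, assumed illustration of the feature-point idea only: pick the k samples where the signal changes most (largest second difference); only those points would then be perturbed and sent under LDP. The selection rule and the value of k are illustrative, not the paper's algorithm for choosing the optimal k.

    import numpy as np

    def salient_points(series, k):
        """Return indices of the k points with the largest local change (|second difference|)."""
        second_diff = np.abs(np.diff(series, n=2))
        # +1 compensates for the index shift introduced by the second difference.
        return np.sort(np.argsort(second_diff)[-k:] + 1)

    heart_rate = 75 + 10 * np.sin(np.linspace(0, 6, 200)) + np.random.normal(0, 1, 200)
    idx = salient_points(heart_rate, k=10)
    print(idx, heart_rate[idx])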

Analysis of privacy issues and countermeasures in neural network learning (신경망 학습에서 프라이버시 이슈 및 대응방법 분석)

  • Hong, Eun-Ju;Lee, Su-Jin;Hong, Do-won;Seo, Chang-Ho
    • Journal of Digital Convergence / v.17 no.7 / pp.285-292 / 2019
  • With the popularization of PCs, SNS, and IoT, a great deal of data is generated, and the amount is increasing exponentially. Artificial neural network learning, which makes use of these huge amounts of data, has attracted attention in many fields in recent years. Artificial neural networks have shown tremendous potential in speech recognition and image recognition and are widely applied to a variety of complex areas such as medical diagnosis, artificial intelligence games, and face recognition, with results accurate enough to surpass humans. Despite these many advantages, privacy problems still exist in artificial neural network learning. The training data include various information, including personal and sensitive information, so privacy can be exposed by malicious attackers. Privacy risks also arise when an attacker interferes with training to degrade the model, or attacks a model whose training is complete. In this paper, we analyze recently proposed attacks on neural network models and the corresponding privacy protection methods.
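
One attack commonly analyzed in this line of work is membership inference. A minimal, generic sketch (not tied to this paper) is the loss-threshold attack: examples whose loss under the trained model falls below a threshold are guessed to be training members. The loss distributions and threshold below are synthetic stand-ins.

    import numpy as np

    def membership_inference(losses, threshold):
        """Guess 'member' (1) when the model's loss on an example is below the threshold."""
        return (losses < threshold).astype(int)

    # Toy data: training members tend to have lower loss than unseen examples.
    member_losses = np.random.exponential(0.2, size=1000)
    nonmember_losses = np.random.exponential(0.6, size=1000)
    losses = np.concatenate([member_losses, nonmember_losses])
    truth = np.concatenate([np.ones(1000), np.zeros(1000)])

    guess = membership_inference(losses, threshold=0.3)
    print("attack accuracy:", (guess == truth).mean())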

Privacy-Preserving Cloud Data Security: Integrating the Novel Opacus Encryption and Blockchain Key Management

  • S. Poorani;R. Anitha
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.3182-3203 / 2023
  • With the growing adoption of cloud-based technologies, maintaining the privacy and security of cloud data has become a pressing issue. Privacy-preserving encryption schemes are a promising approach for achieving cloud data security, but they require careful design and implementation to be effective. The integrated approach to cloud data security that we propose in this work uses CogniGate (an orchestrated permissions protocol), index trees, blockchain key management, and the novel Opacus encryption. Opacus encryption is a homomorphic encryption scheme that enables computation on encrypted data, making it a powerful tool for cloud data security. The CogniGate protocol enables more flexibility and control over access to cloud data by allowing fine-grained limitations on access depending on user parameters. Index trees provide an efficient data structure for storing and retrieving encrypted data, while blockchain key management ensures the secure and decentralized storage of encryption keys. The performance evaluation focuses on key aspects, including the computation cost for the data owner, the computation cost for data sharers, the average time cost of index construction, the query cost for data providers, and the time cost of key generation. The results show that the integrated approach safeguards cloud data while preserving privacy, maintaining usability, and delivering high performance. In addition, we explore the role of differential privacy in our integrated approach, showing how it can be used to further enhance privacy protection without compromising performance. We also discuss the key management challenges associated with our approach and propose a novel blockchain-based key management system that leverages smart contracts and consensus mechanisms to ensure the secure and decentralized storage of encryption keys.
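
Opacus encryption is the authors' own scheme and no implementation is given; as a stand-in, the toy Paillier example below (deliberately tiny, insecure key; Python 3.8+ for pow(x, -1, n)) only illustrates what "computation on encrypted data" means for an additively homomorphic scheme: multiplying ciphertexts yields a ciphertext of the sum of the plaintexts.

    import random
    from math import gcd

    # Toy Paillier parameters (far too small for real use).
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)        # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)          # inverse of L(g^lam mod n^2) mod n

    def encrypt(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = encrypt(20), encrypt(22)
    print(decrypt((c1 * c2) % n2))   # additive homomorphism: prints 42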