• Title/Summary/Keyword: Differential-privacy

48 search results

A parametric bootstrap test for comparing differentially private histograms (모수적 부트스트랩을 이용한 차등정보보호 히스토그램의 동질성 검정)

  • Son, Juhee;Park, Min-Jeong;Jung, Sungkyu
    • The Korean Journal of Applied Statistics / v.35 no.1 / pp.1-17 / 2022
  • We propose a homogeneity test for two differentially private histograms based on the parametric bootstrap. The test can be applied when the original raw histograms are not available and only the differentially private histograms and the privacy level α are known. We also extend the test to the case where different histograms have different privacy levels. The 2020 resident population data of Korea and the U.S. are used to demonstrate the efficacy of the proposed test procedure. The proposed test controls the type I error rate at the nominal level and has high power, whereas a conventional test procedure fails. While the differential privacy framework formally controls the risk of privacy leakage, the utility of such a framework is questionable. This work also suggests that the power of a carefully designed test may be a viable measure of utility.
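As a rough illustration of the general idea (not the authors' exact procedure), the sketch below releases two Laplace-noised histograms and runs a parametric bootstrap homogeneity test; the test statistic, noise calibration, and all names are illustrative assumptions.

```python
# Parametric-bootstrap homogeneity test for two Laplace-noised histograms (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram(counts, eps):
    """Release histogram counts with Laplace noise of scale 1/eps (sensitivity 1)."""
    return counts + rng.laplace(scale=1.0 / eps, size=len(counts))

def statistic(h1, h2):
    """Distance between the proportion vectors of two (possibly noisy) histograms."""
    p1, p2 = np.clip(h1, 0, None), np.clip(h2, 0, None)
    p1, p2 = p1 / max(p1.sum(), 1e-12), p2 / max(p2.sum(), 1e-12)
    return float(np.sum((p1 - p2) ** 2))

def bootstrap_test(h1, h2, n1, n2, eps1, eps2, B=2000):
    """Parametric bootstrap: simulate noisy histogram pairs under H0 (same distribution)."""
    pooled = np.clip(h1 + h2, 0, None)
    p0 = pooled / pooled.sum()                       # estimated common distribution
    t_obs = statistic(h1, h2)
    t_null = np.empty(B)
    for b in range(B):
        c1 = rng.multinomial(n1, p0).astype(float)   # raw counts under H0
        c2 = rng.multinomial(n2, p0).astype(float)
        t_null[b] = statistic(dp_histogram(c1, eps1), dp_histogram(c2, eps2))
    return float((t_null >= t_obs).mean())           # bootstrap p-value

# Toy example: two samples from the same distribution, different privacy levels.
p = np.array([0.2, 0.3, 0.5])
c1 = rng.multinomial(500, p).astype(float)
c2 = rng.multinomial(800, p).astype(float)
print(bootstrap_test(dp_histogram(c1, 1.0), dp_histogram(c2, 0.5), 500, 800, 1.0, 0.5))
```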

New Higher-Order Differential Computation Analysis on Masked White-Box AES (마스킹 화이트 박스 AES에 대한 새로운 고차 차분 계산 분석 기법)

  • Lee, Yechan;Jin, Sunghyun;Kim, Hanbit;Kim, HeeSeok;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.1 / pp.1-15 / 2020
  • After the differential computation analysis (DCA) attack, a side-channel analysis of white-box cryptography, was proposed, Lee et al. proposed a masked white-box cryptography scheme based on table encoding to counter DCA. Existing higher-order DCA does not take this table-encoding-based masking structure into account, so it cannot be applied to the countermeasure suggested by Lee et al. In this paper, we propose a new higher-order DCA method that can be applied to table-encoding-based masked implementations, and we demonstrate its effectiveness by recovering the secret key of the masked white-box cryptography suggested by Lee et al. in practice.
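For readers unfamiliar with higher-order DCA, the toy sketch below shows generic second-order DCA with a centered-product combining function on simulated single-bit share traces; it uses a random 8-bit S-box and plain Boolean masking, not the table-encoding-specific attack proposed in the paper.

```python
# Generic second-order DCA on simulated Boolean-masked traces (toy sketch).
import numpy as np

rng = np.random.default_rng(1)
SBOX = rng.permutation(256)                    # toy 8-bit S-box (random permutation)
TRUE_KEY, N = 0x3A, 5000

# Simulate single-bit software traces of the two Boolean shares (value ^ mask, mask).
plaintexts = rng.integers(0, 256, N)
masks = rng.integers(0, 2, N)
target_bit = SBOX[plaintexts ^ TRUE_KEY] & 1
traces = np.column_stack([target_bit ^ masks, masks]).astype(float)

# Second-order combining: centered product of the two trace samples.
combined = (traces[:, 0] - traces[:, 0].mean()) * (traces[:, 1] - traces[:, 1].mean())

def score(key_guess):
    """Absolute correlation between the combined trace and the predicted LSB."""
    pred = (SBOX[plaintexts ^ key_guess] & 1).astype(float)
    return abs(np.corrcoef(combined, pred)[0, 1])

scores = [score(k) for k in range(256)]
print("recovered key:", int(np.argmax(scores)), " true key:", TRUE_KEY)
```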

Concealment of iris features based on artificial noises

  • Jiao, Wenming;Zhang, Heng;Zang, Qiyan;Xu, Weiwei;Zhang, Shuaiwei;Zhang, Jian;Li, Hongran
    • ETRI Journal / v.41 no.5 / pp.599-607 / 2019
  • Although iris recognition is considered to be the safest method of biometric verification, studies have shown that iris features may be used illegally. To protect iris features and further improve the security of iris recognition and verification, this study applies the Gaussian and Laplace mechanisms of differential privacy to hide iris features. The efficiency of the algorithm and the image quality, evaluated with an image hashing algorithm, are used as indicators to assess these mechanisms. The experimental results indicate that the security of an iris image can be significantly improved using differential privacy protection.
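A minimal sketch of the two mechanisms mentioned in the abstract, applied to a stand-in feature vector; the sensitivity, ε, and δ values are assumptions and not the paper's calibration for iris codes.

```python
# Laplace and Gaussian mechanisms applied to a stand-in feature vector (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(x, eps, sensitivity=1.0):
    return x + rng.laplace(scale=sensitivity / eps, size=x.shape)

def gaussian_mechanism(x, eps, delta, sensitivity=1.0):
    # Classical calibration: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps (eps < 1).
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps
    return x + rng.normal(scale=sigma, size=x.shape)

features = rng.random(512)                  # stand-in for an iris feature vector
noisy_lap = laplace_mechanism(features, eps=0.5)
noisy_gau = gaussian_mechanism(features, eps=0.5, delta=1e-5)
print(np.abs(noisy_lap - features).mean(), np.abs(noisy_gau - features).mean())
```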

Privacy-Preserving Method to Collect Health Data from Smartband

  • Moon, Su-Mee;Kim, Jong-Wook
    • Journal of the Korea Society of Computer and Information / v.25 no.4 / pp.113-121 / 2020
  • With the rapid development of information and communication technology (ICT), various sensors are being embedded in wearable devices. Consequently, these devices can continuously collect data, including health data, from individuals. The collected health data can be used not only for healthcare services but also for analyzing an individual's lifestyle by combining it with other external data, which helps make an individual's life more convenient and healthier. However, collecting health data may lead to privacy issues, since the data is personal and can reveal sensitive insights about the individual. Thus, in this paper, we present a method to collect an individual's health data from a smart band in a privacy-preserving manner. We leverage local differential privacy to achieve our goal. Additionally, we propose a way to find feature points in the health data, which allows an effective trade-off between the degree of privacy and accuracy. We carry out experiments to demonstrate the effectiveness of the proposed approach, and the results show that, with the proposed method, the error rate can be reduced by up to 77%.
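A minimal sketch of the overall idea, under assumed bounds and ε: each reading is perturbed on-device with the Laplace mechanism, and a simple local-extrema rule stands in for the paper's feature-point selection.

```python
# Local DP perturbation of bounded heart-rate readings on-device (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
LO, HI = 40.0, 180.0            # assumed valid heart-rate range (bpm)

def local_dp_report(value, eps):
    """Each user perturbs a single reading; sensitivity is the value range."""
    noisy = value + rng.laplace(scale=(HI - LO) / eps)
    return float(np.clip(noisy, LO, HI))

def feature_points(series):
    """Keep only local extrema so fewer values must be privatized."""
    s = np.asarray(series)
    idx = [i for i in range(1, len(s) - 1)
           if (s[i] - s[i - 1]) * (s[i + 1] - s[i]) < 0]
    return [0] + idx + [len(s) - 1]

readings = [72, 75, 90, 110, 95, 80, 78, 120, 85]
keep = feature_points(readings)
reports = {i: local_dp_report(readings[i], eps=2.0) for i in keep}
print(reports)
```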

Statistical disclosure control for public microdata: present and future (마이크로데이터 공표를 위한 통계적 노출제어 방법론 고찰)

  • Park, Min-Jeong;Kim, Hang J.
    • The Korean Journal of Applied Statistics / v.29 no.6 / pp.1041-1059 / 2016
  • The increasing demand from researchers and policy makers for microdata has also increased related privacy and security concerns. During the past two decades, a large volume of literature on statistical disclosure control (SDC) has been published in international journals. This review paper introduces relatively recent SDC approaches to the communities of Korean statisticians and statistical agencies. In addition to the traditional masking techniques (such as microaggregation and noise addition), we introduce online analytic systems, differential privacy, and synthetic data. For each approach, an application example is highlighted, along with the methodology and its pros and cons, so that the paper can assist statistical agencies seeking a practical SDC approach.
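To make two of the traditional masking techniques named above concrete, here is a small sketch of univariate microaggregation and noise addition; the group size and noise level are illustrative choices, not recommendations from the review.

```python
# Univariate microaggregation and noise addition (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

def microaggregate(x, k=3):
    """Sort values, form groups of k, and release each group's mean."""
    order = np.argsort(x)
    out = np.empty_like(x, dtype=float)
    for start in range(0, len(x), k):
        idx = order[start:start + k]
        out[idx] = x[idx].mean()
    return out

def noise_addition(x, rel_sd=0.1):
    """Add noise proportional to the variable's standard deviation."""
    return x + rng.normal(scale=rel_sd * x.std(), size=len(x))

income = rng.lognormal(mean=10, sigma=0.5, size=11)   # stand-in microdata column
print(microaggregate(income, k=3))
print(noise_addition(income))
```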

An Enhanced Data Utility Framework for Privacy-Preserving Location Data Collection

  • Jong Wook Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.69-76 / 2024
  • Recent advances in sensor and mobile technologies have made it possible to collect user location data. This location information is used as a valuable asset in various industries, resulting in increased demand for location data collection and sharing. However, because location data contains sensitive user information, indiscriminate collection can lead to privacy issues. Recently, geo-indistinguishability (Geo-I), a variant of differential privacy, has been widely used to protect the privacy of location data. While Geo-I effectively protects users' locations, the perturbation it introduces reduces the utility of the collected location data. Therefore, this paper proposes a method that uses Geo-I to collect user location data effectively while maintaining its utility. The proposed method exploits the users' prior location distribution to improve overall data utility while still protecting their exact locations. Experimental results using real data show that the proposed method significantly improves the usefulness of the collected data compared to existing methods.
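A minimal sketch of the planar Laplace mechanism that underlies Geo-I (Andres et al., CCS 2013), applied to planar coordinates in metres; the paper's prior-based utility improvement is not shown, and the ε value is illustrative.

```python
# Planar Laplace mechanism for geo-indistinguishability (toy sketch).
import numpy as np
from scipy.special import lambertw

rng = np.random.default_rng(0)

def planar_laplace(x, y, eps):
    """Perturb a planar location: uniform angle, radius from the radial inverse CDF."""
    theta = rng.uniform(0, 2 * np.pi)
    p = rng.uniform(0, 1)
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / eps   # inverse CDF of the radius
    return x + r * np.cos(theta), y + r * np.sin(theta)

x, y = 1000.0, 2000.0                 # planar coordinates in metres
print(planar_laplace(x, y, eps=0.01))  # eps per metre
```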

Performance Analysis of Perturbation-based Privacy Preserving Techniques: An Experimental Perspective

  • Ritu Ratra;Preeti Gulia;Nasib Singh Gill
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.81-88 / 2023
  • In the present scenario, enormous amounts of data are produced every second, and these data contain private information from sources including media platforms, the banking sector, finance, healthcare, and criminal histories. Data mining is a method for searching and analyzing massive volumes of data to find usable information, and preserving personal data during data mining has become difficult, so privacy-preserving data mining (PPDM) is used to do so. Data perturbation is one of several tactics used in PPDM: datasets are perturbed in order to preserve personal information, with the aim of retaining both data accuracy and data privacy. This paper explores and compares several perturbation strategies that may be used to protect data privacy. For this experiment, two perturbation techniques based on random projection and principal component analysis were used: Improved Random Projection Perturbation (IRPP) and the Enhanced Principal Component Analysis based Technique (EPCAT). The Naive Bayes classification algorithm is used for the data mining task, and the methods are assessed in terms of precision, run time, and accuracy. The random projection-based technique (IRPP) is found to be the best perturbation method under Naive Bayes classification for both the cardiovascular and hypothyroid datasets.
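A minimal sketch of random-projection perturbation followed by Naive Bayes classification, in the spirit of the comparison; IRPP and EPCAT are the paper's specific variants, while this uses a plain Gaussian random projection and a stand-in dataset rather than the cardiovascular or hypothyroid data.

```python
# Random-projection perturbation followed by Naive Bayes classification (toy sketch).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)     # stand-in dataset
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Project both splits with the same secret random matrix (k < d hides original attributes).
d, k = X.shape[1], 15
R = rng.normal(size=(d, k)) / np.sqrt(k)
acc_orig = accuracy_score(yte, GaussianNB().fit(Xtr, ytr).predict(Xte))
acc_pert = accuracy_score(yte, GaussianNB().fit(Xtr @ R, ytr).predict(Xte @ R))
print(f"original accuracy: {acc_orig:.3f}, perturbed accuracy: {acc_pert:.3f}")
```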

DRM-FL: A Decentralized and Randomized Mechanism for Privacy Protection in Cross-Silo Federated Learning Approach (DRM-FL: Cross-Silo Federated Learning 접근법의 프라이버시 보호를 위한 분산형 랜덤화 메커니즘)

  • Firdaus, Muhammad;Latt, Cho Nwe Zin;Aguilar, Mariz;Rhee, Kyung-Hyune
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.264-267 / 2022
  • Recently, federated learning (FL) has gained prominence as a viable approach for enhancing user privacy and data security by allowing collaborative multi-party model learning without exchanging sensitive data. Despite this, most current FL systems still depend on a centralized aggregator to generate a global model by gathering all submitted models from users, which could expose user privacy and invite various threats from malicious users. To solve these issues, we propose a secure FL framework that employs differential privacy to counter membership inference attacks during collaborative FL model training and uses blockchain to replace the centralized aggregator server.
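A minimal sketch of the differential-privacy side of such a framework: each client clips its model update and adds Gaussian noise before sharing, and the noisy updates are then averaged. The blockchain-based decentralized aggregation is out of scope here, and the clipping norm and noise multiplier are assumed values.

```python
# Differentially private federated averaging of client updates (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
CLIP, NOISE_MULT = 1.0, 1.1          # assumed L2 clipping norm and noise multiplier

def privatize_update(update):
    """Clip the update to L2 norm CLIP, then add Gaussian noise before sharing."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP / norm)
    return clipped + rng.normal(scale=NOISE_MULT * CLIP, size=update.shape)

def aggregate(updates):
    """FedAvg over the already-privatized updates."""
    return np.mean(updates, axis=0)

client_updates = [rng.normal(size=10) for _ in range(5)]   # stand-in local gradients
global_delta = aggregate([privatize_update(u) for u in client_updates])
print(global_delta)
```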

Privacy Model Recommendation System Based on Data Feature Analysis

  • Seung Hwan Ryu;Yongki Hong;Gihyuk Ko;Heedong Yang;Jong Wan Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.81-92 / 2023
  • A privacy model is a technique that quantitatively restricts the possibility and degree of privacy breaches caused by privacy attacks. Representative models include k-anonymity, l-diversity, t-closeness, and differential privacy. While many privacy models have been studied, research on selecting the most suitable model for a given dataset has been relatively limited. In this study, we develop a system for recommending a suitable privacy model to prevent privacy breaches. To achieve this, we analyze the data features that need to be considered when selecting a model, such as data type, distribution, frequency, and range. Based on privacy model background knowledge, which includes information about the relationships between data features and models, the system recommends the most appropriate model. Finally, we validate its feasibility and usefulness by implementing a prototype recommendation system.
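A purely hypothetical sketch of how such a feature-based recommendation could look as a rule table; the features and rules below are invented for illustration and are much coarser than the paper's background knowledge base.

```python
# Hypothetical rule-based privacy-model recommender (illustrative only).
from dataclasses import dataclass

@dataclass
class DataFeatures:
    has_sensitive_attribute: bool   # e.g., diagnosis or salary column present
    numeric_query_release: bool     # statistical / interactive query release?
    sensitive_values_skewed: bool   # one value dominates the sensitive column

def recommend(f: DataFeatures) -> str:
    if f.numeric_query_release:
        return "differential privacy"          # noise on query answers
    if not f.has_sensitive_attribute:
        return "k-anonymity"                   # only quasi-identifiers to protect
    if f.sensitive_values_skewed:
        return "t-closeness"                   # guard against skewness attacks
    return "l-diversity"                       # diversify sensitive values

print(recommend(DataFeatures(True, False, True)))   # -> t-closeness
```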

Highly Reliable Differential Privacy Technique Utilizing Error Correction Encoding (오류 정정 부호를 활용한 고신뢰 차등 프라이버시 기법)

  • Seung-ha Ji;So-Eun Jeon;Il-Gu Lee
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.243-244 / 2024
  • As the number of IoT devices increases rapidly, the amount of data transmitted and received over networks has grown, and strengthening the security of the data transmission process has become increasingly important. Conventionally, data is protected by applying differential privacy (DP), which adds artificial noise to the data. However, DP reduces the machine learning accuracy of legitimate users who receive the DP-protected data. In this paper, we propose EN-DP (Encoding-based DP), a data-encoding-based DP scheme for highly reliable data transmission. Experimental results show that EN-DP can improve the gap in learning accuracy between legitimate users and attackers by up to 17.16% compared with conventional models.
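One way to picture the encoding-plus-DP idea, as a hypothetical toy and not the EN-DP scheme itself: each bit is repetition-encoded before randomized-response flipping, so a receiver that knows the code can majority-decode away part of the noise (the privacy-budget cost of repeating each bit is ignored here).

```python
# Repetition coding combined with randomized-response noise (hypothetical toy sketch).
import numpy as np

rng = np.random.default_rng(0)
REP, P_FLIP = 3, 0.2                       # repetition factor, per-bit flip probability

def encode(bits):
    return np.repeat(bits, REP)

def randomized_response(bits):
    flips = rng.random(bits.shape) < P_FLIP
    return bits ^ flips

def decode(noisy):
    """Majority vote over each group of REP received bits."""
    return (noisy.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)

data = rng.integers(0, 2, 1000)
received = randomized_response(encode(data))
raw_error = (randomized_response(data) != data).mean()      # no coding
dec_error = (decode(received) != data).mean()               # with coding
print(f"bit error without coding: {raw_error:.3f}, with coding: {dec_error:.3f}")
```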