• Title/Summary/Keyword: Privacy Protection Model

A Privacy-preserving and Energy-efficient Offloading Algorithm based on Lyapunov Optimization

  • Chen, Lu; Tang, Hongbo; Zhao, Yu; You, Wei; Wang, Kai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2490-2506 / 2022
  • In Mobile Edge Computing (MEC), attackers can infer and mine sensitive user information by eavesdropping on the wireless channel status and the offloading usage pattern, leading to user privacy leakage. To solve this problem, this paper proposes a Privacy-preserving and Energy-efficient Offloading Algorithm (PEOA) based on Lyapunov optimization. In this method, a continuous Markov process offloading model with a buffer queue strategy is built first. Then the amount of privacy of the offloading usage pattern in the wireless channel is defined. Finally, by introducing Lyapunov optimization, the problem of minimizing average energy consumption over the continuous state transition process with privacy constraints in the infinite time domain is transformed into a per-timeslot minimization problem, which reduces algorithmic complexity and helps obtain the optimal solution while maintaining low energy consumption. The experimental results show that, compared with other methods, PEOA keeps the amount of privacy accumulated in the system near zero while sustaining low average energy consumption. This makes it difficult for attackers to infer sensitive user information through offloading usage patterns, thus effectively protecting user privacy and safety.
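
The per-slot decision described in this abstract follows the general Lyapunov drift-plus-penalty pattern. The following is a minimal sketch of that pattern only, not the paper's PEOA: a virtual queue accumulates privacy leakage beyond a per-slot budget, and each timeslot the offloading action minimizing V·energy + Q·leakage is chosen. The action set, cost functions, and constants are hypothetical placeholders.

```python
import random

# Minimal drift-plus-penalty sketch (illustrative only, not the paper's PEOA).
# Each timeslot we pick an offloading fraction that minimizes
#   V * energy_cost(a) + Q * privacy_leakage(a),
# where Q is a virtual queue tracking privacy leakage accumulated beyond a
# per-slot budget. All functions and constants below are hypothetical.

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of the task offloaded
V = 10.0                                 # energy/privacy tradeoff weight
PRIVACY_BUDGET = 0.3                     # tolerated leakage per slot

def energy_cost(a, channel_gain):
    # Local execution is costly; offloading cost grows when the channel is poor.
    return (1.0 - a) * 1.0 + a * (0.8 / channel_gain)

def privacy_leakage(a):
    # Offloading more reveals more about the usage pattern (toy model).
    return a

def run(timeslots=1000):
    Q = 0.0
    total_energy = 0.0
    for _ in range(timeslots):
        g = random.uniform(0.5, 2.0)     # observed channel state this slot
        a = min(ACTIONS, key=lambda x: V * energy_cost(x, g) + Q * privacy_leakage(x))
        total_energy += energy_cost(a, g)
        Q = max(Q + privacy_leakage(a) - PRIVACY_BUDGET, 0.0)  # virtual queue update
    return total_energy / timeslots, Q

if __name__ == "__main__":
    avg_energy, final_Q = run()
    print(f"avg energy per slot: {avg_energy:.3f}, residual privacy queue: {final_Q:.3f}")
```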

A Study on Synthetic Data Generation Based Safe Differentially Private GAN (차분 프라이버시를 만족하는 안전한 GAN 기반 재현 데이터 생성 기술 연구)

  • Kang, Junyoung; Jeong, Sooyong; Hong, Dowon; Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.945-956 / 2020
  • Publishing data is essential for receiving high-quality services from many applications. However, if the original data is published as it is, there is a risk that sensitive information (political tendency, disease, etc.) may be revealed. Therefore, much research has proposed generating and publishing synthetic data instead of the original data to preserve privacy. Still, even when synthetic data is simply generated and published, privacy can leak through various attacks (linkage attack, inference attack, etc.). In this paper, to prevent the leakage of such sensitive information, we propose a synthetic data generation algorithm that preserves privacy by applying differential privacy, the latest privacy protection technique, to GAN, which is drawing attention as a generative model for synthetic data. The generative model uses CGAN for efficient learning of labeled data and applies Rényi differential privacy, a relaxation of differential privacy, considering the utility of the data. The utility of the generated data is validated and compared through various classifiers.
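
The privacy guarantee described here rests on a DP-SGD-style training step: clip each example's gradient, average, and add Gaussian noise, with Rényi differential privacy used to account for the cumulative loss. A minimal NumPy sketch of that single step is shown below, assuming illustrative values for the clip norm and noise multiplier; it is not the authors' CGAN training code.

```python
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                        rng=np.random.default_rng(0)):
    """Clip each example's gradient to clip_norm, average, and add Gaussian noise.

    This is the core DP-SGD step used when training a differentially private
    GAN discriminator; the hyperparameters here are illustrative only.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    mean_grad = np.mean(clipped, axis=0)
    # Noise scaled so the sensitivity of the averaged gradient is covered.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch, size=mean_grad.shape)
    return mean_grad + noise

# Toy usage: 32 per-example gradients for a 10-parameter model.
grads = [np.random.randn(10) for _ in range(32)]
print(dp_average_gradient(grads))
```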

Antecedents Affecting the Information Privacy Concerns in Personalized Recommendation Service of OTT

  • Yujin Kim; Hyung-Seok Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.4 / pp.161-175 / 2024
  • In this paper, we examined the causes of privacy concern and related factors in the personalized recommendation service of OTT. On the basis of the Big Five personality model, we established factors such as agreeableness, neuroticism, conscientiousness, extraversion, and openness to experience. Additionally, we established factors such as the accuracy, diversity, and novelty of OTT recommendation services, and perceived transparency. We analyzed the relationship between privacy concern, service benefit, and intention to give personal information. Finally, we analyzed the mediating effect of service benefit on the relationship between privacy concern and intention to give personal information. The results of this study showed that (1) neuroticism, extraversion, and openness to experience had significant effects on privacy concern, (2) perceived transparency had a significant effect on privacy concern, (3) privacy concern and service benefit had significant effects on intention to give personal information, and (4) in a multi-group analysis of low and high service-benefit groups to verify the moderating effect of service benefit, a significant difference was observed between privacy concern and intention to give personal information. The findings of the study are expected to help OTT firms understand users' privacy protection behaviors.

Tag Identification Process Model with Scalability for Protecting Privacy of RFID on the Grid Environment (그리드 환경에서 RFID 프라이버시 보호를 위한 확장성을 가지는 태그 판별 처리 모델)

  • Shin, Myeong-Sook; Kim, Choong-Woon; Lee, Joon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.6 / pp.1010-1015 / 2008
  • RFID systems are rapidly being adopted in various fields. For RFID to become widespread, however, we must address the privacy invasion that arises when tag information is read by unauthorized parties. Among existing approaches to this problem, M. Ohkubo's scheme is the safest, but it demands enormous computation to identify a tag as the number of tags increases. This paper therefore proposes porting the scheme to a Grid environment to maintain privacy protection while reducing tag identification time. We also propose a tag identification process model that applies an even-division algorithm so that SPs of equal size are assigned to each node. When the proposed model runs on k Grid nodes at once, the tag identification time is reduced to 1/k.
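
In Ohkubo's scheme, which this work builds on, a tag outputs G(s) and then updates its secret to H(s); the back-end server must search every tag's hash chain to identify an observed output, and it is this search that the Grid is used to divide. The sketch below illustrates the chain search and an even partition of the candidate secrets across k nodes, with SHA-256 standing in for H and G; it is an assumption-laden illustration, not the paper's tag identification process model.

```python
import hashlib

def H(x: bytes) -> bytes:                     # secret-update hash (stand-in)
    return hashlib.sha256(b"H" + x).digest()

def G(x: bytes) -> bytes:                     # tag-output hash (stand-in)
    return hashlib.sha256(b"G" + x).digest()

def identify(observed, initial_secrets, max_steps=100):
    """Brute-force search over every tag's hash chain (Ohkubo-style back end)."""
    for tag_id, s in initial_secrets.items():
        for step in range(max_steps):
            if G(s) == observed:
                return tag_id, step
            s = H(s)
    return None

def even_partitions(initial_secrets, k):
    """Split the tag secrets into k roughly equal partitions, one per Grid node."""
    items = list(initial_secrets.items())
    size = -(-len(items) // k)                # ceiling division
    return [dict(items[i:i + size]) for i in range(0, len(items), size)]

# Toy usage: 8 tags, identification work divided across k = 4 nodes.
secrets = {f"tag{i}": bytes([i]) * 16 for i in range(8)}
observed = G(H(H(secrets["tag5"])))           # output of tag5 after two updates
for node, part in enumerate(even_partitions(secrets, 4)):
    hit = identify(observed, part)
    if hit:
        print(f"node {node} identified {hit[0]} at chain step {hit[1]}")
```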

QSDB: An Encrypted Database Model for Privacy-Preserving in Cloud Computing

  • Liu, Guoxiu; Yang, Geng; Wang, Haiwei; Dai, Hua; Zhou, Qiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.3375-3400 / 2018
  • With the advent of database-as-a-service (DAAS) and cloud computing, more and more data owners are motivated to outsource their data to cloud databases for convenience and cost savings. However, providing security for the database-as-a-service model in cloud computing has become challenging, because adversaries may try to gain access to sensitive data, and curious or malicious administrators may capture and leak data. To realize privacy preservation, sensitive data should be encrypted before outsourcing. In this paper, we present a secure and practical system over encrypted cloud data, called QSDB (queryable and secure database), which simultaneously supports SQL query operations. The proposed system can store and process floating point numbers without compromising data security. To balance the tradeoff between data privacy protection and query processing efficiency, QSDB utilizes three different encryption models to encrypt data. Our strategy is to process as many queries as possible at the cloud server; encryption of queries and decryption of encrypted query results are performed at the client. Experiments on real-world data sets demonstrate the efficiency and practicality of the proposed system.
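
One of the simplest building blocks such a system can use is deterministic encryption, which lets the server evaluate equality predicates over ciphertexts. The sketch below illustrates that idea with an HMAC as a stand-in token generator; QSDB's actual encryption models, SQL support, and floating-point handling are not reproduced here.

```python
import hmac, hashlib

KEY = b"client-secret-key"                      # held only by the client

def det_token(value: str) -> str:
    """Deterministic token: equal plaintexts map to equal tokens.

    An HMAC is used purely for equality matching in this sketch; a real
    system would also store recoverable ciphertext alongside the token.
    """
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# Client side: tokenize rows before outsourcing them.
plaintext_rows = [("alice", "cardiology"), ("bob", "oncology"), ("carol", "cardiology")]
outsourced = [(det_token(name), det_token(dept)) for name, dept in plaintext_rows]

# Client side: rewrite  SELECT * WHERE dept = 'cardiology'  into a token match.
query_token = det_token("cardiology")

# Server side: evaluates the equality predicate over tokens only.
matches = [row for row in outsourced if row[1] == query_token]
print(f"server returns {len(matches)} encrypted rows to the client")
```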

Privacy-Preserving Deep Learning using Collaborative Learning of Neural Network Model

  • Hye-Kyeong Ko
    • International journal of advanced smart convergence / v.12 no.2 / pp.56-66 / 2023
  • The goal of deep learning is to extract complex features from multidimensional data and use those features to create models that connect input and output. Deep learning is a process of learning nonlinear features and functions from complex data, and the user data employed to train deep learning models has become the focus of privacy concerns. Companies that collect users' sensitive personal information, such as images and voices, retain this data for indefinite periods of time; users can neither delete their personal information nor limit the purposes for which it is used. This study designed a deep learning method that employs privacy protection technology based on distributed collaborative learning, so that multiple participants can train neural network models collaboratively without sharing their input datasets. To prevent direct leaks of personal information, participants are not shown one another's training datasets during model training, unlike in traditional deep learning, so the personal information in the data is protected. The study used a method that selectively shares parameter subsets via an optimization algorithm based on modified distributed stochastic gradient descent, and the results showed that it is possible to learn with improved accuracy while protecting personal information.
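
The selective-sharing idea, where each participant trains locally and uploads only a subset of its parameter updates in the spirit of distributed selective SGD, can be sketched as follows. The toy least-squares objective, the fraction of gradients shared, and the selection rule are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

DIM, THETA, LR = 20, 0.1, 0.05      # model size, fraction of grads shared, step size
rng = np.random.default_rng(1)

def local_gradient(w, data):
    """Toy local objective: least squares on this participant's private data."""
    X, y = data
    return X.T @ (X @ w - y) / len(y)

def select_top(grad, fraction):
    """Keep only the largest-magnitude coordinates; zero the rest before upload."""
    k = max(1, int(fraction * grad.size))
    keep = np.argsort(np.abs(grad))[-k:]
    shared = np.zeros_like(grad)
    shared[keep] = grad[keep]
    return shared

# Three participants with private datasets that never leave their machines.
datasets = [(rng.normal(size=(50, DIM)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(DIM)

for _ in range(100):
    uploads = [select_top(local_gradient(w_global, d), THETA) for d in datasets]
    w_global -= LR * np.mean(uploads, axis=0)   # server aggregates shared updates only

print("norm of aggregated gradient after training:",
      np.linalg.norm(np.mean([local_gradient(w_global, d) for d in datasets], axis=0)))
```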

Development of a New Instrument to Measuring Concerns for Corporate Information Privacy Management (국내 기업개인정보보호 측정항목과 관리모형 개발에 관한 연구)

  • Lee, Sung-Joong; Lee, Young-Jai
    • Journal of Information Technology Applications and Management / v.16 no.4 / pp.79-92 / 2009
  • With the rising reliance on market estimation through customer analysis in customer-centered marketing, the amount of personal data owned by corporations is increasing rapidly. There has been a corresponding rise in customers' interest in personal information protection, and the problem of personal information leakage has become a serious issue. The purpose of this research is to develop a diagnosis model for personal information protection suited to the Korean corporate environment and, on this basis, to present diagnostic instruments that can be applied to domestic corporations. The diagnosis model is a structural equation model that schematizes the synthetic effect that administration factors and measurement items have on the protection of personal information owned by corporations. We develop the model, consisting of the administration factors for personal information protection and the measurement items of each factor, using the development method of a standardized structural equation model. We then present a tool through which the administration factors and measurement items verified through this model can be used to diagnose personal information protection in corporations. This diagnostic tool can be utilized as a useful instrument to prevent the leakage of personal information in corporations.

Implemention of Location Information Privacy Self Control System (위치정보 프라이버시 자기제어 시스템의 구현)

  • Yang, Pyoung Woo; Nam, Kwang Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.211-214 / 2009
  • This paper describes a location information privacy self-control system. We also propose a model of location privacy control for nationwide use, together with its supporting system. This research is applicable to location privacy protection in commercial location-based services such as wireless telecommunications, the T-Money transportation card system, GIS platforms, and various other service-providing systems.

A study on the application of PbD considering the GDPR principle (GDPR원칙을 고려한 PbD 적용 방안에 관한 연구)

  • Youngcheon Yoo; Soonbeom Kwon; Hwansoo Lee
    • Convergence Security Journal / v.22 no.4 / pp.109-118 / 2022
  • Countries around the world have recognized the importance of personal information protection and have discussed protecting the rights of data subjects in various forms such as laws, regulations, and guidelines. PbD (Privacy by Design) is one of the concepts commonly emphasized as a precautionary measure for the protection of personal information, and it is starting to attract attention as an essential element for protecting the privacy of data subjects. However, the concept of PbD, which calls for prioritizing individual privacy from the outset of system development or service operation, still remains at a declarative level, and there has been relatively little discussion of concrete methods to implement it. Therefore, this study discusses which principles and rights should be prioritized to implement PbD, based on the basic principles of the GDPR and the rights of data subjects. This study is meaningful in that it suggests a plan for the practical implementation of PbD by presenting the privacy considerations that should be prioritized when developing systems or services in the domestic environment.

A Conditional Randomized Response Model for Detailed Survey

  • Lee, Gi-Sung; Hong, Ki-Hak
    • Communications for Statistical Applications and Methods / v.7 no.3 / pp.721-729 / 2000
  • In this paper, we propose a new conditional randomized response model that improves the model of Carr et al. with respect to variance and the protection of respondents' privacy. We show that the suggested model is more effective and more protective of privacy than Loynes' model and the Carr et al. model.
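
As background, the classical randomized response technique that such models refine can be illustrated with Warner's original design: each respondent answers the sensitive question with probability p and its complement otherwise, and the sensitive proportion is recovered from the observed "yes" rate as (lambda + p - 1)/(2p - 1). The simulation below sketches only this basic estimator, not the conditional model proposed in the paper.

```python
import random

def warner_survey(true_pi=0.3, p=0.7, n=100_000, rng=random.Random(42)):
    """Simulate Warner's randomized response and recover the sensitive proportion."""
    yes = 0
    for _ in range(n):
        member = rng.random() < true_pi          # respondent's true (sensitive) status
        ask_direct = rng.random() < p            # device picks direct vs. complementary question
        answer = member if ask_direct else not member
        yes += answer
    lam = yes / n                                # observed proportion of "yes" answers
    pi_hat = (lam + p - 1) / (2 * p - 1)         # Warner's unbiased estimator
    return pi_hat

print(f"estimated proportion: {warner_survey():.3f} (true value 0.3)")
```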
