Search results: Data Privacy


Privacy Model Recommendation System Based on Data Feature Analysis

  • Seung Hwan Ryu;Yongki Hong;Gihyuk Ko;Heedong Yang;Jong Wan Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.81-92 / 2023
  • A privacy model is a technique that quantitatively restricts the possibility and degree of privacy breaches caused by privacy attacks. Representative models include k-anonymity, l-diversity, t-closeness, and differential privacy. While many privacy models have been studied, research on selecting the most suitable model for a given dataset has been relatively limited. In this study, we develop a system that recommends a suitable privacy model to prevent privacy breaches. To achieve this, we analyze the data features that need to be considered when selecting a model, such as data type, distribution, frequency, and range. Based on privacy model background knowledge that captures the relationships between data features and models, the system recommends the most appropriate model. Finally, we validate its feasibility and usefulness by implementing a prototype recommendation system.
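
  A hedged illustration of the idea described above: a minimal rule-based sketch that maps coarse dataset features to one of the four representative models. The feature names, thresholds, and rules below are illustrative assumptions, not the background knowledge actually used in the paper.

    # Minimal sketch of feature-driven privacy model recommendation
    # (feature names and rules are assumed for illustration).
    def recommend_privacy_model(features: dict) -> str:
        """Map coarse dataset features to a representative privacy model."""
        release_mode = features.get("release_mode", "table")      # "table" | "query"
        has_sensitive = features.get("has_sensitive_attribute", False)
        sensitive_type = features.get("sensitive_type")           # "categorical" | "numeric"

        # Interactive or statistical query releases suit differential privacy.
        if release_mode == "query":
            return "differential privacy"
        # No sensitive attribute: protecting identity alone may be enough.
        if not has_sensitive:
            return "k-anonymity"
        # Categorical sensitive values: require diversity within each group.
        if sensitive_type == "categorical":
            return "l-diversity"
        # Numeric or skewed sensitive values: bound the distance between the
        # group distribution and the overall distribution.
        return "t-closeness"

    if __name__ == "__main__":
        print(recommend_privacy_model({"has_sensitive_attribute": True,
                                       "sensitive_type": "numeric",
                                       "release_mode": "table"}))   # t-closeness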

Privacy Level Indicating Data Leakage Prevention System

  • Kim, Jinhyung;Park, Choonsik;Hwang, Jun;Kim, Hyung-Jong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.558-575 / 2013
  • The purpose of a data leakage prevention system is to protect corporate information assets. The system monitors packet exchanges between internal systems and the Internet, filters packets according to the data security policy defined by each company, or discretionarily deletes important data included in packets in order to prevent leakage of corporate information. However, the system may also monitor employees' personal information, which can violate their privacy. Therefore, it is necessary to find not only a solution for detecting leakage of significant information, but also a way to minimize the leakage of internal users' personal information. In this paper, we propose two models for representing the level of personal information disclosure during data leakage detection. One model measures only the disclosure frequencies of keywords that are defined as personal data; these frequencies are used to indicate the privacy violation level. The other model represents the context of a privacy violation using a private data matrix. Each row of the matrix represents the disclosure counts for personal data keywords in a given time period, and each column represents the disclosure count of a certain keyword during the entire observation interval. Using the suggested matrix model, we can represent an abstracted context of the privacy violation situation. Experiments on privacy violation situations are also presented to demonstrate the usability of the suggested models.
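
  A small sketch of the two representations described above: per-keyword disclosure frequencies and the private data matrix whose rows are time windows and columns are keywords. The keyword list and sample packet payloads are assumptions made for illustration.

    # Sketch of keyword-frequency counting and the private data matrix
    # (keyword list and packet payloads are assumed for illustration).
    from collections import Counter

    KEYWORDS = ["name", "phone", "address", "ssn"]

    def keyword_counts(packets):
        """Count disclosures of each personal-data keyword in a batch of packets."""
        counts = Counter()
        for payload in packets:
            for kw in KEYWORDS:
                counts[kw] += payload.lower().count(kw)
        return counts

    def privacy_matrix(windows):
        """Rows = observation windows, columns = keyword disclosure counts."""
        return [[keyword_counts(pkts)[kw] for kw in KEYWORDS] for pkts in windows]

    def violation_level(matrix):
        """Simplest frequency-based indicator: total disclosures over the interval."""
        return sum(sum(row) for row in matrix)

    if __name__ == "__main__":
        windows = [["NAME=alice;PHONE=010-1234"], ["address: Seoul", "ssn 000000"]]
        m = privacy_matrix(windows)
        print(m)                    # [[1, 1, 0, 0], [0, 0, 1, 1]]
        print(violation_level(m))   # 4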

Reversible Data Hiding in Permutation-based Encrypted Images with Strong Privacy

  • Shiu, Chih-Wei;Chen, Yu-Chi;Hong, Wien
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.1020-1042 / 2019
  • Reversible data hiding in encrypted images (RDHEI) enables real-time cloud applications; for example, the cloud, acting as a data hider, automatically embeds a timestamp in the encrypted image uploaded by a content owner. Many existing RDHEI methods satisfy only user privacy, in which the data hider does not learn the original image, but leak owner privacy, in which the receiver can obtain the original image by decryption and extraction. In the literature, the method of Zhang et al. provides weak content-owner privacy, in which the content owner and data hider must share a data-hiding key. In this paper, we address the stronger notion, called strong content-owner privacy, and achieve it by presenting a new reversible data hiding scheme for encrypted images. In the proposed method, image decryption and message extraction are controlled separately by different types of keys, so the two functionalities are decoupled to solve the privacy problem. At the technical level, the original image is segmented along a Hilbert space-filling curve. To keep the image private, the segments are transformed into an encrypted image using a random permutation; the encrypted image does not reveal significant information about the original one. Data embedding is realized with pixel histogram-style hiding, since the pixel histogram is preserved before and after encryption. The proposed method is modular and can convert specific reversible data hiding schemes into schemes for encrypted images with content-owner privacy. Finally, our experimental results show that the image quality is 50.85 dB when the average payload is 0.12 bpp.
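
  A simplified sketch of the key property the abstract relies on: a keyed permutation of pixel positions leaves the pixel histogram unchanged, so histogram-style hiding behaves the same before and after this kind of encryption. The Hilbert-curve segmentation and the actual embedding scheme of the paper are omitted; function names and the key are assumptions.

    # Permutation-based "encryption" preserves the pixel histogram
    # (toy sketch; not the paper's full RDHEI construction).
    import numpy as np

    def permute_encrypt(image, key):
        """Shuffle pixel positions with a key-seeded permutation."""
        rng = np.random.default_rng(key)
        flat = image.flatten()
        perm = rng.permutation(flat.size)
        return flat[perm].reshape(image.shape)

    def permute_decrypt(cipher, key):
        """Invert the key-seeded permutation."""
        rng = np.random.default_rng(key)
        perm = rng.permutation(cipher.size)
        flat = np.empty(cipher.size, dtype=cipher.dtype)
        flat[perm] = cipher.flatten()
        return flat.reshape(cipher.shape)

    if __name__ == "__main__":
        img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        enc = permute_encrypt(img, key=42)
        same_hist = np.array_equal(np.bincount(img.ravel(), minlength=256),
                                   np.bincount(enc.ravel(), minlength=256))
        print(same_hist)                                          # True
        print(np.array_equal(permute_decrypt(enc, key=42), img))  # True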

A Privacy-Preserving Health Data Aggregation Scheme

  • Liu, Yining;Liu, Gao;Cheng, Chi;Xia, Zhe;Shen, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3852-3864 / 2016
  • Patients' health data is very sensitive, and access to an individual's health data should be strictly restricted. However, many data consumers may need to use aggregated health data. For example, insurance companies need such data to set premium levels for health insurance. Therefore, privacy-preserving data aggregation solutions for health data have both theoretical importance and application potential. In this paper, we propose a privacy-preserving health data aggregation scheme using differential privacy. In our scheme, patients' health data are aggregated by the local healthcare center before being used by data consumers, which prevents individuals' data from being leaked. Moreover, compared with existing schemes in the literature, our work enjoys two additional benefits: 1) it not only resists many well-known attacks in open wireless networks, but also achieves resilience against the human-factor-aware differential aggregation attack; 2) no trusted third party is employed in our proposed scheme, so it achieves robustness and does not suffer from the single-point-of-failure problem.
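
  A minimal sketch of the differential privacy step described above: the local healthcare center releases a noisy sum instead of raw patient values, using the Laplace mechanism. Epsilon, the value bounds, and the sample readings are assumptions; the paper's full scheme additionally resists network attacks and avoids a trusted third party, which this sketch does not model.

    # Laplace-mechanism sum at the aggregator (toy sketch, assumed parameters).
    import numpy as np

    def dp_sum(values, lower, upper, epsilon, rng=None):
        """Differentially private sum: replacing one bounded record changes the
        sum by at most (upper - lower), which is used as the sensitivity."""
        rng = rng or np.random.default_rng()
        clipped = np.clip(values, lower, upper)
        noise = rng.laplace(0.0, (upper - lower) / epsilon)
        return float(clipped.sum() + noise)

    if __name__ == "__main__":
        heart_rates = [72, 88, 65, 91, 78]        # assumed patient readings
        print(dp_sum(heart_rates, lower=40, upper=200, epsilon=1.0))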

Suggestions for Applications of Anonymous Data under the Revised Data Privacy Acts (데이터 3법 시대의 익명화된 데이터 활용에 대한 제언)

  • Chun, Ji Young;Noh, Geontae
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.3 / pp.503-512 / 2020
  • The revised data privacy acts allow the disclosure of data after personal information has been anonymized. Such anonymized data is expected to be useful in research and services, but there are serious concerns about privacy breaches such as re-identification of individuals from the anonymized data. In this paper, we show that identifying individuals from public data is not very difficult, which also raises questions about the reliability of the public data. We suggest that users understand the trade-offs between data disclosure and privacy protection so that they can use data securely under the revised data privacy acts.
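
  To make the re-identification risk concrete, here is a toy linkage attack: "anonymized" records are joined with a public register on quasi-identifiers. The column names and rows are invented purely for illustration and are not the data examined in the paper.

    # Toy linkage attack on quasi-identifiers (all data invented for illustration).
    anonymized = [   # direct identifiers removed, quasi-identifiers kept
        {"zip": "06236", "birth_year": 1987, "sex": "F", "diagnosis": "asthma"},
        {"zip": "06237", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
    ]
    public_register = [   # e.g., a public listing that still carries names
        {"name": "Kim", "zip": "06236", "birth_year": 1987, "sex": "F"},
        {"name": "Lee", "zip": "06237", "birth_year": 1990, "sex": "M"},
    ]
    QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

    def link(anon_rows, public_rows, keys=QUASI_IDENTIFIERS):
        """Re-identify a record whenever its quasi-identifiers match uniquely."""
        matches = []
        for a in anon_rows:
            hits = [p for p in public_rows if all(p[k] == a[k] for k in keys)]
            if len(hits) == 1:
                matches.append((hits[0]["name"], a["diagnosis"]))
        return matches

    if __name__ == "__main__":
        print(link(anonymized, public_register))
        # [('Kim', 'asthma'), ('Lee', 'diabetes')]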

Big Data Security and Privacy: A Taxonomy with Some HPC and Blockchain Perspectives

  • Alsulbi, Khalil;Khemakhem, Maher;Basuhail, Abdullah;Eassa, Fathy;Jambi, Kamal Mansur;Almarhabi, Khalid
    • International Journal of Computer Science & Network Security / v.21 no.7 / pp.43-55 / 2021
  • The amount of Big Data generated from multiple sources is continuously increasing. Traditional storage methods lack the capacity for such massive amounts of data. Consequently, most organizations have shifted to the use of cloud storage as an alternative option to store Big Data. Despite the significant developments in cloud storage, it still faces many challenges, such as privacy and security concerns. This paper discusses Big Data, its challenges, and different classifications of security and privacy challenges. Furthermore, it proposes a new classification of Big Data security and privacy challenges and offers some perspectives to provide solutions to these challenges.

Privacy Assurance and Consumer Behaviors in e-Business Environments (e-비즈니스 환경에서 기업의 개인정보보호 활동이 소비자 행위에 미치는 영향)

  • Park, JaeYoung;Jung, Woo-Jin;Lee, SangKeun;Kim, Beomsoo
    • The Journal of Society for e-Business Studies / v.23 no.4 / pp.1-17 / 2018
  • Recently, most online firms have been trying to provide personalized services based on customer data. However, customers are reluctant to give their information to online firms because of concerns about data breaches. Online firms seek to increase customer trust by assuring the protection of personal information through privacy seals (e.g., e-privacy) or data breach insurance. This research examines the effects of privacy assurance (i.e., privacy seals and data breach insurance) on consumer behavior in an online environment. An experiment based on a hypothetical scenario was conducted using a between-subjects 2 (type of privacy assurance) + 1 (control) design. We found that both privacy seals and data breach insurance increase perceived privacy trust. In addition, privacy seals have a positive effect on the intention to provide personal information through perceived privacy trust. Finally, for the group with a high (low) disposition to trust, higher perceived privacy trust is formed through privacy seals (data breach insurance). Theoretical and practical implications are discussed.

Privacy measurement method using a graph structure on online social networks

  • Li, XueFeng;Zhao, Chensu;Tian, Keke
    • ETRI Journal / v.43 no.5 / pp.812-824 / 2021
  • Recently, with increasing Internet usage, the number of users of online social networks (OSNs) has grown, and consequently privacy leakage has become more serious. However, few studies have investigated the gap between users' privacy and their actual behaviors. In particular, users' desire to change their privacy status is not supported by their privacy literacy. Presenting an accurate measurement of users' privacy status can cultivate their privacy literacy. However, the highly interactive nature of interpersonal communication on OSNs has led privacy to be viewed as a communal issue. Because a large number of redundant users on social networks are unrelated to a given user's privacy, existing algorithms are no longer applicable. To solve this problem, we propose a structural similarity measurement method suited to the characteristics of social networks. The proposed method excludes redundant users and combines attribute information to measure the privacy status of users. Using this approach, users can intuitively recognize their privacy status on OSNs. Experiments using real data show that our method can effectively and accurately help users improve their privacy disclosures.
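
  A minimal sketch of the idea above, assuming a simple Jaccard index over neighbor sets as the structural similarity: neighbors below a similarity threshold are treated as redundant and excluded, and the privacy score combines the remaining neighbors' attribute disclosures. The similarity measure, threshold, and scoring are assumptions, not the paper's exact formulation.

    # Structural-similarity filtering plus attribute-based privacy score (sketch).
    def jaccard(neighbors, u, v):
        """Structural similarity = Jaccard index of the two users' neighbor sets."""
        nu, nv = neighbors[u], neighbors[v]
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0

    def privacy_score(user, neighbors, disclosed, threshold=0.2):
        """Fraction of the user's attributes also exposed by structurally similar neighbors."""
        relevant = [v for v in neighbors[user] if jaccard(neighbors, user, v) >= threshold]
        if not relevant or not disclosed[user]:
            return 0.0
        exposed = {a for v in relevant for a in disclosed[v]} & disclosed[user]
        return len(exposed) / len(disclosed[user])

    if __name__ == "__main__":
        neighbors = {"alice": {"bob", "carol"}, "bob": {"alice", "carol"},
                     "carol": {"alice", "bob"}, "dave": {"eve"}, "eve": {"dave"}}
        disclosed = {"alice": {"city", "employer"}, "bob": {"city"},
                     "carol": {"employer"}, "dave": set(), "eve": set()}
        print(privacy_score("alice", neighbors, disclosed))   # 1.0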

Privacy-Preserving k-Bits Inner Product Protocol (프라이버시 보장 k-비트 내적연산 기법)

  • Lee, Sang Hoon;Kim, Kee Sung;Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.1 / pp.33-43 / 2013
  • Research on data mining, which can manage large amounts of information efficiently, has grown with the drastic increase of information. Privacy-preserving data mining can protect the privacy of data owners. There are several privacy-preserving association rule, clustering, and classification protocols. A privacy-preserving association rule protocol is used to find association rules among data, which is often useful for marketing. In this paper, we propose a privacy-preserving k-bits inner product protocol based on Shamir's secret sharing.
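
  A simplified illustration of the Shamir building block the abstract names: shares are linear, so shareholders can locally weight shares of a secret bit-vector by a public vector and reconstruct only the inner product. This is not the paper's full protocol; the prime, the threshold, and the vectors are assumptions for illustration.

    # Shamir sharing and a linear inner product on shares (toy parameters).
    import random

    P = 2**61 - 1   # prime modulus for the toy field

    def share(secret, n, t):
        """Split `secret` into n Shamir shares with threshold t (degree t-1 polynomial)."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the shared polynomial at x = 0."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    if __name__ == "__main__":
        x = [1, 0, 1, 1]          # secret bit-vector
        y = [1, 1, 0, 1]          # public weight vector
        n, t = 3, 2
        bit_shares = [share(b, n, t) for b in x]
        # Each shareholder k combines its shares locally; linearity makes the
        # result a valid share of the inner product <x, y>.
        ip_shares = [(k + 1, sum(y[i] * bit_shares[i][k][1] for i in range(len(x))) % P)
                     for k in range(n)]
        print(reconstruct(ip_shares[:t]))   # 2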

A Study on the Privacy Concern of e-commerce Users: Focused on Information Boundary Theory (전자상거래 이용자의 프라이버시 염려에 관한 연구 : 정보경계이론을 중심으로)

  • Kim, Jong-Ki;Oh, Da-Woon
    • The Journal of Information Systems / v.26 no.2 / pp.43-62 / 2017
  • Purpose: This study provides empirical support for a model that explains the formation of privacy concerns from the perspective of Information Boundary Theory. It investigates an integrated model suggesting that privacy concerns are formed by an individual's disposition to value privacy, privacy awareness, awareness of privacy policy, and government legislation. Information Boundary Theory suggests that the boundaries of an individual's information space depend on personal characteristics and the environmental factors of e-commerce. When receiving a request for personal information from an e-commerce website, an individual assesses the risk through a risk-control assessment, and the perception of intrusion gives rise to privacy concerns. Design/methodology/approach: The hypotheses were tested empirically with data collected in a survey that included items measuring the constructs in the model. The survey was aimed at university students, and a causal modeling statistical technique (PLS) was used for data analysis. Findings: The results indicated significant relationships among the environmental factors of e-commerce websites, individuals' personal privacy characteristics, and privacy concerns. Both an individual's awareness of institutional privacy assurance in e-commerce and their privacy characteristics affect the risk-control assessment of information disclosure, which becomes an essential component of privacy concerns.