• Title/Summary/Keyword: Data Privacy

Search Results: 1,286

The Legal Justice of Conferring Criminal Negligence on Chief Privacy Officers (CPO) (개인정보관리자의 책임과 벌칙의 형평성)

  • Kim, Beom-Soo
    • Journal of Information Technology Services / v.10 no.4 / pp.21-32 / 2011
  • The recently revised "Telecommunications Business Promotion and Personal Data Protection Act" is an important legal milestone in promoting the Korean telecommunications infrastructure and industry as well as in protecting individuals' personal data and rights to privacy. Special characteristics of information security and privacy protection services, including their public-goods nature, adaptiveness, relativity, multi-dimensionality, and incompleteness, are reviewed. The responsibility of chief security/privacy officers in the IT industry and the fairness and effectiveness of the criminal negligence provisions in the Telecommunications Act are analyzed. An assessment of the rationale behind the act, together with a survey of related laws and cases in other countries, leads to the following recommendations: (i) revise the act and develop new systems for data protection, (ii) grant a stay of execution or reduce the sentence given extenuating circumstances, or (iii) recognize the use of technical and managerial data protection measures as grounds for exemption from criminal negligence.

Spatial Statistic Data Release Based on Differential Privacy

  • Cai, Sujin;Lyu, Xin;Ban, Duohan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.10 / pp.5244-5259 / 2019
  • With the continuous development of LBS (Location-Based Service) applications, privacy protection has become an urgent problem. Differential privacy is grounded in rigorous mathematical theory and provides strong privacy guarantees even when the attacker is assumed to have worst-case background knowledge; it has been applied to research directions such as data query, release, and mining. The central difficulty is ensuring data availability while protecting privacy. Spatial multidimensional data are usually released by partitioning the domain into disjoint subsets and then generating a hierarchical index. Traditional data-dependent partition methods must allocate part of the privacy budget to the partitioning process and split the budget among all steps, which is inefficient. To address these issues, a novel two-step partition algorithm is proposed. First, we partition the original dataset into fixed grids, inject noise, and synthesize a dataset according to the noisy counts. Second, we perform an IH-Tree (Improved H-Tree) partition on the synthetic dataset and use the resulting partition keys to split the original dataset. The algorithm saves the privacy budget otherwise allocated to the partitioning process and obtains a more accurate release. It has been tested on three real-world datasets and compared with state-of-the-art algorithms. The experimental results show that the relative errors of range queries are considerably reduced, especially on the large-scale dataset.
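To make the first step of such a grid-then-index release concrete, the sketch below partitions 2-D points into a fixed grid and perturbs each cell count with the Laplace mechanism. It is a minimal illustration, not the paper's IH-Tree pipeline; the grid size, privacy budget epsilon, and unit-square domain are assumptions chosen for the example.

```python
import numpy as np

def noisy_grid_counts(points, bins=16, epsilon=0.5, domain=((0.0, 1.0), (0.0, 1.0))):
    """Partition 2-D points into a fixed grid and add Laplace noise to each cell count.

    Adding or removing one point changes exactly one cell count by 1, so the
    sensitivity of the count vector is 1 and Laplace(1/epsilon) noise suffices.
    """
    (xmin, xmax), (ymin, ymax) = domain
    counts, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=bins, range=[[xmin, xmax], [ymin, ymax]]
    )
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return counts + noise

# Illustrative use on synthetic locations in the unit square.
rng = np.random.default_rng(0)
pts = rng.random((10_000, 2))
noisy = noisy_grid_counts(pts, bins=16, epsilon=0.5)
```

A synthetic dataset could then be drawn in proportion to the (clipped) noisy counts before any data-dependent partitioning, which is what lets the second step run without spending additional privacy budget.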

Privacy Preserving Data Mining Methods and Metrics Analysis (프라이버시 보존형 데이터 마이닝 방법 및 척도 분석)

  • Hong, Eun-Ju;Hong, Do-won;Seo, Chang-Ho
    • Journal of Digital Convergence / v.16 no.10 / pp.445-452 / 2018
  • In a world where nearly everything in life is being digitized, the amount of data is increasing exponentially. These data are processed into new data through collection and analysis, and the new data are used for a variety of purposes in hospitals, finance, and businesses. However, because the underlying data contain sensitive personal information, there is a risk of privacy exposure during collection and analysis. One solution is privacy-preserving data mining (PPDM), a set of methods for extracting useful information from data while preserving privacy. In this paper, we survey PPDM and analyze various metrics for evaluating the privacy and utility of data.
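As one concrete example of the kind of metric such a survey compares, the sketch below computes the k-anonymity level of a released table over a chosen set of quasi-identifiers. It is a generic illustration using pandas, not a metric taken from this particular paper; the column names and toy records are made up.

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.

    A table is k-anonymous if every combination of quasi-identifier values
    is shared by at least k records.
    """
    return int(df.groupby(quasi_identifiers).size().min())

# Illustrative table: age band and ZIP prefix act as quasi-identifiers.
data = pd.DataFrame({
    "age_band":  ["20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip3":      ["123",   "123",   "456",   "456",   "456"],
    "diagnosis": ["A",     "B",     "A",     "C",     "B"],
})
print(k_anonymity(data, ["age_band", "zip3"]))  # -> 2
```

A utility metric would typically be measured alongside this, for example the error of aggregate queries on the anonymized table relative to the original.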

A Solution to Privacy Preservation in Publishing Human Trajectories

  • Li, Xianming;Sun, Guangzhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3328-3349 / 2020
  • With the rapid development of ubiquitous computing and location-based services (LBSs), human trajectory data and associated activities are increasingly easy to record. Inappropriately publishing trajectory data may leak users' privacy. We therefore study publishing trajectory data while preserving privacy, which we call privacy-preserving activity trajectory publishing (PPATP), and propose S-PPATP to solve this problem. S-PPATP comprises three steps: modeling, algorithm design, and algorithm adjustment. During modeling, two user models describe users' behaviors: one based on a Markov chain and the other based on a hidden Markov model. We assume a potential adversary who intends to infer users' privacy, defined as a set of sensitive information. An adversary model is then proposed to define the adversary's background knowledge and inference method. Additionally, privacy requirements and a data quality metric are defined for assessment. During algorithm design, we propose two publishing algorithms corresponding to the user models and prove that both satisfy the privacy requirement. We then perform a comparative analysis of utility, efficiency, and speedup techniques. Finally, we evaluate our algorithms through experiments on several datasets. The results verify that the proposed algorithms preserve users' privacy. We also measure utility and discuss the privacy-utility tradeoff that real-world data publishers may face.
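The Markov-chain user model mentioned in the abstract can be illustrated in a few lines: the sketch below estimates first-order transition probabilities between discretized locations from a set of trajectories. This is only a generic sketch of such a model, not the authors' S-PPATP implementation; the location labels are hypothetical.

```python
from collections import defaultdict

def fit_markov_chain(trajectories):
    """Estimate first-order transition probabilities between discrete locations.

    Each trajectory is a list of location labels; counts are normalized per row.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return {
        a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
        for a, nxt in counts.items()
    }

# Toy trajectories over three labeled regions.
trajs = [["home", "work", "gym", "home"], ["home", "work", "home"]]
print(fit_markov_chain(trajs))
```

An adversary with background knowledge of such transition probabilities could then infer likely visits to sensitive locations, which is the kind of inference the publishing algorithms are designed to bound.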

PAPG: Private Aggregation Scheme based on Privacy-preserving Gene in Wireless Sensor Networks

  • Zeng, Weini;Chen, Peng;Chen, Hairong;He, Shiming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4442-4466 / 2016
  • This paper proposes PAPG, a privacy-preserving aggregation scheme for sensor networks based on a purpose-built P-Gene. The P-Gene is constructed using an erasable data-hiding technique: each sensory data item can be hidden by the collecting sensor node, thereby protecting the privacy of that item. The hidden data can then be reported directly to the cluster head, which aggregates the data, and the aggregation result can be recovered from the hidden data at the cluster head. The P-Genes protect the privacy of each data item without additional data exchange or encryption. Because P-Genes can be generated flexibly, the proposed PAPG scheme adapts to dynamically changing reporting nodes. Apart from its favorable resistance to data loss, extensive analyses and simulations demonstrate that the PAPG scheme efficiently preserves privacy while incurring low communication and computation overhead.
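The paper's P-Gene construction is not reproduced here, but the general idea of hiding individual readings so that only their aggregate is recoverable can be sketched with pairwise additive masks that cancel in the sum, as below. The modulus and toy readings are assumptions for illustration only.

```python
import random

def hide_readings(readings, modulus=2**16):
    """Additively mask each reading with pairwise random values that cancel in the sum.

    Generic masking sketch (not the paper's P-Gene construction): node i adds
    r_ij for j > i and subtracts r_ji for j < i, so the cluster head recovers
    the exact sum without seeing any individual reading.
    """
    n = len(readings)
    pairwise = {(i, j): random.randrange(modulus)
                for i in range(n) for j in range(i + 1, n)}
    hidden = []
    for i, x in enumerate(readings):
        mask = (sum(pairwise[(i, j)] for j in range(i + 1, n))
                - sum(pairwise[(j, i)] for j in range(i)))
        hidden.append((x + mask) % modulus)
    return hidden

readings = [17, 23, 5, 41]
hidden = hide_readings(readings)
print(sum(hidden) % 2**16 == sum(readings))  # True: the sum is preserved
```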

Standard Implementation for Privacy Framework and Privacy Reference Architecture for Protecting Personally Identifiable Information

  • Shin, Yong-Nyuo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.3 / pp.197-203 / 2011
  • Personally Identifiable Information (PII) is information that identifies, or can be used to identify, contact, or locate the person to whom it pertains, or that is or might be linked to a natural person directly or indirectly. To recognize such data as PII when it is processed within information and communication technologies, it must be determined at which stage the information identifies, or can be associated with, an individual. There has been ongoing research on privacy protection mechanisms for PII, which has become a hot topic in international standardization in the form of a privacy framework and a privacy reference architecture. Data processing flow models should be developed as an integral component of privacy risk assessments; such diagrams are also the basis for categorizing PII. A data processing flow can show areas where PII carries a certain level of sensitivity or importance and consequently requires stronger safeguarding measures. This paper proposes a standard format satisfying the ISO/IEC 29100 "Privacy Framework" and shows an implementation example of a privacy reference architecture that implements privacy controls for the processing of PII in information and communication technology.
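As a rough illustration of how a data processing flow model might tag PII with sensitivity so that high-sensitivity flows receive stronger safeguards, the sketch below defines a tiny flow record. The field names, sensitivity tiers, and systems are invented for the example and are not taken from ISO/IEC 29100 or the paper's reference architecture.

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    LOW = 1       # e.g. display name
    MODERATE = 2  # e.g. email address
    HIGH = 3      # e.g. national ID, health data

@dataclass
class PiiFlow:
    """One edge in a data processing flow: which PII moves where, and how sensitive it is."""
    field: str
    source: str
    destination: str
    sensitivity: Sensitivity

flows = [
    PiiFlow("email", "signup_form", "crm_db", Sensitivity.MODERATE),
    PiiFlow("national_id", "kyc_service", "compliance_store", Sensitivity.HIGH),
]

# Fields rated HIGH would require stronger safeguards in the architecture.
print([f.field for f in flows if f.sensitivity is Sensitivity.HIGH])
```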

Examining Factors that Determine the Use of Social Media Privacy Settings: Focused on the Mediating Effect of Implementation Intention to Use Privacy Settings

  • Jongki Kim;Jianbo Wang
    • Asia pacific journal of information systems / v.30 no.4 / pp.919-945 / 2020
  • Social media platforms such as Instagram and Facebook carry potential security risks, which raise public concerns about privacy. However, most people rarely make active efforts to protect their personal data, even though they express increasing concern about privacy. This study therefore examines the factors that determine social media users' use of privacy settings and tests for the existence of the privacy paradox in this context. In addition, it investigates the mediating effect of implementation intention on the relationship between intention and behavior. We collected data through questionnaires from undergraduate and graduate students in South Korea. After a pilot test (n = 92) and a set of face-to-face interviews, 266 usable responses were retained for data analysis. The results confirm the existence of the privacy paradox regarding the use of social media privacy settings, and implementation intention positively mediates the relationship between intention and behavior in this context. To the best of our knowledge, this study is the first in the information privacy literature to introduce the notion of implementation intention, which explains and predicts actual behavior far better than (behavioral) intention alone.
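To show what a mediating effect means operationally, the sketch below runs a simple Baron-Kenny-style regression check on synthetic data: intention predicts the mediator (implementation intention), and the direct effect of intention on behavior shrinks once the mediator is included. This is an illustrative sketch only; the study's actual measurement model and analysis method are not specified here, and all numbers are simulated.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: intention -> implementation intention -> behavior.
rng = np.random.default_rng(1)
n = 266
intention = rng.normal(size=n)
impl_intention = 0.6 * intention + rng.normal(scale=0.8, size=n)
behavior = 0.1 * intention + 0.5 * impl_intention + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Ordinary least squares with an intercept."""
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

path_a = ols(impl_intention, intention)             # intention -> mediator
path_bc = ols(behavior, intention, impl_intention)  # both -> behavior
print(path_a.params, path_bc.params)
# A much smaller direct coefficient on intention once the mediator is
# included is the pattern consistent with mediation.
```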

Practical Privacy-Preserving DBSCAN Clustering Over Horizontally Partitioned Data (다자간 환경에서 프라이버시를 보호하는 효율적인 DBSCAN 군집화 기법)

  • Kim, Gi-Sung;Jeong, Ik-Rae
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.3 / pp.105-111 / 2010
  • We propose a practical privacy-preserving clustering protocol over horizontally partitioned data. We extend the DBSCAN clustering algorithm into a distributed protocol in which data providers mix real data with fake data to provide privacy. Our protocol is very efficient, whereas previous privacy-preserving protocols for distributed environments are not practical for real applications. The efficiency of our privacy-preserving clustering protocol over horizontally partitioned data is comparable to that of privacy-preserving clustering protocols in non-distributed environments.
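The core trick of mixing fake records with real ones before clustering can be sketched with scikit-learn's DBSCAN, as below. For simplicity the sketch pools both parties' disguised data in one place, whereas the paper's protocol keeps the computation distributed; all parameters and the synthetic points are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

def mix_with_fakes(real, n_fake, low=0.0, high=10.0):
    """Mix real points with uniform fake points; remember which rows are real."""
    fake = rng.uniform(low, high, size=(n_fake, real.shape[1]))
    data = np.vstack([real, fake])
    is_real = np.array([True] * len(real) + [False] * n_fake)
    return data, is_real

# Two "parties" each holding part of the horizontally partitioned data.
party_a = rng.normal(loc=[2, 2], scale=0.3, size=(50, 2))
party_b = rng.normal(loc=[7, 7], scale=0.3, size=(50, 2))

mixed_a, real_a = mix_with_fakes(party_a, n_fake=30)
mixed_b, real_b = mix_with_fakes(party_b, n_fake=30)

pooled = np.vstack([mixed_a, mixed_b])
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pooled)

# Each party keeps only the cluster labels of its own real points.
real_mask = np.concatenate([real_a, real_b])
real_labels = labels[real_mask]
```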

Predicting Information Self-Disclosure on Facebook: The Interplay Between Concern for Privacy and Need for Uniqueness

  • Kim, Yeuseung
    • International Journal of Contents / v.15 no.4 / pp.74-81 / 2019
  • This study examined the relationships among information privacy concern, need for uniqueness (NFU), and disclosure behavior to explain the personal factors that drive data sharing on Facebook. The results of an online survey of 222 Facebook users show that, among the diverse data social media users disclose online, four distinct factors can be identified: basic personal data, private data, personal opinions, and personal photos. In general, privacy concern is negatively related, and NFU positively related, to the willingness to self-disclose information. Overall, NFU was a better predictor of willingness to disclose than privacy concern, gender, or age. While privacy concern has been identified as an influential factor when users evaluate social networking sites, the findings contribute to the literature by demonstrating that an individual's need to express individuality on social media can override privacy concerns.

Deriving ratings from a private P2P collaborative scheme

  • Okkalioglu, Murat;Kaleli, Cihan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.9 / pp.4463-4483 / 2019
  • Privacy-preserving collaborative filtering schemes take privacy concerns as their primary consideration without neglecting prediction accuracy. Different schemes have been proposed for different data partitioning scenarios, such as a central server, two-party, multi-party, or peer-to-peer networks. The privacy promises claimed under these partitioning scenarios have recently been investigated; however, to the best of our knowledge, no peer-to-peer privacy-preserving scheme has been subjected to such scrutiny. In this paper, we apply three different attack techniques that utilize auxiliary information to derive the private ratings of peers, and we conduct experiments with varying privacy protection parameters to evaluate to what extent peers' data can be reconstructed.
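The general flavor of such auxiliary-information attacks can be illustrated with a simple linkage sketch: an attacker who knows a target's ratings on a few items matches them against masked peer profiles. This is a generic example, not one of the paper's three attack techniques; the masking scheme (zero-mean additive noise) and all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Disguised peer profiles: true ratings plus zero-mean noise (a common masking scheme).
true_ratings = rng.integers(1, 6, size=(20, 50)).astype(float)   # 20 peers, 50 items
disguised = true_ratings + rng.normal(scale=1.0, size=true_ratings.shape)

# Auxiliary information: the attacker knows the target's ratings on a few items.
target, known_items = 7, [3, 12, 25, 40, 44]
aux = true_ratings[target, known_items]

def link_profile(disguised, aux, known_items):
    """Match the auxiliary ratings against each disguised profile by squared distance."""
    errors = ((disguised[:, known_items] - aux) ** 2).sum(axis=1)
    return int(np.argmin(errors))

print(link_profile(disguised, aux, known_items) == target)  # often True despite masking
```

Once a disguised profile is linked to a known user, the attacker can estimate the remaining private ratings from it, which is the kind of reconstruction the paper's experiments quantify.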