• Title/Summary/Keyword: Privacy Data

Search results: 1,309

Mobile Payment Use in Light of Privacy Protection and Provider's Market Control

  • Mohammad Bakhsh;Hyein Jeong;Lingyu Zhao;One-Ki Daniel Lee
    • Asia Pacific Journal of Information Systems, v.31 no.3, pp.257-276, 2021
  • This study investigates the factors that facilitate or hinder people's use of mobile payment, drawing on theoretical perspectives on individuals' privacy protection motivation and perceived market conditions. Survey data (n = 200) were collected through a web-based platform and used to test a theoretical model. The results show that one's privacy protection power is formed by various individual and technological factors (i.e., perceived data exposure, self-efficacy, and response efficacy), and in turn determines his/her intention to use mobile payment. Moreover, the relationship between privacy protection power and mobile payment use is conditional on the provider's perceived market control: when the provider's market control is perceived to be high, one uses mobile payment regardless of his/her privacy protection power, whereas when it is perceived to be low, the decision depends on the degree of privacy protection power. The findings help explain why some people are more receptive to mobile payment than others.

Privacy Model Recommendation System Based on Data Feature Analysis

  • Seung Hwan Ryu;Yongki Hong;Gihyuk Ko;Heedong Yang;Jong Wan Kim
    • Journal of the Korea Society of Computer and Information, v.28 no.9, pp.81-92, 2023
  • A privacy model is a technique that quantitatively restricts the possibility and degree of privacy breaches caused by privacy attacks. Representative models include k-anonymity, l-diversity, t-closeness, and differential privacy. While many privacy models have been studied, research on selecting the most suitable model for a given dataset has been relatively limited. In this study, we develop a system that recommends a suitable privacy model to prevent privacy breaches. To achieve this, we analyze the data features that need to be considered when selecting a model, such as data type, distribution, frequency, and range. Based on background knowledge about privacy models, including the relationships between data features and models, the system recommends the most appropriate model. Finally, we validate the feasibility and usefulness of the approach by implementing a prototype recommendation system.
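
As a concrete reference point for one of the models the abstract names, here is a minimal Python sketch (field names are hypothetical, not from the paper) that computes the k-anonymity level of a dataset over a set of quasi-identifier columns; the size of the smallest equivalence class is exactly the kind of data feature such a recommender would need to inspect.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Size of the smallest equivalence class over the quasi-identifier
    columns; the dataset is k-anonymous for this k and no larger one."""
    classes = Counter(
        tuple(record[qi] for qi in quasi_identifiers) for record in records
    )
    return min(classes.values())

# Toy dataset: age and zip code act as quasi-identifiers.
data = [
    {"age": 34, "zip": "13488", "disease": "flu"},
    {"age": 34, "zip": "13488", "disease": "cold"},
    {"age": 51, "zip": "13490", "disease": "flu"},
]
print(k_anonymity(data, ["age", "zip"]))  # -> 1: the 51-year-old is unique
```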

Privacy Level Indicating Data Leakage Prevention System

  • Kim, Jinhyung;Park, Choonsik;Hwang, Jun;Kim, Hyung-Jong
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.3, pp.558-575, 2013
  • The purpose of a data leakage prevention system is to protect corporate information assets. The system monitors the packet exchanges between internal systems and the Internet, filters packets according to the data security policy defined by each company, or selectively deletes important data from packets in order to prevent leakage of corporate information. However, such a system may also monitor employees' personal information, violating their privacy. Therefore, it is necessary to find not only a solution for detecting leakage of significant information, but also a way to minimize the leakage of internal users' personal information. In this paper, we propose two models for representing the level of personal information disclosure during data leakage detection. One model measures only the disclosure frequencies of keywords that are defined as personal data; these frequencies are used to indicate the privacy violation level. The other model represents the context of privacy violation using a private data matrix. Each row of the matrix represents the disclosure counts for personal data keywords in a given time period, and each column represents the disclosure count of a certain keyword during the entire observation interval. Using the suggested matrix model, we can represent an abstracted context of the privacy violation situation. Experiments on privacy violation situations are also presented to demonstrate the usability of the suggested models.
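
A minimal sketch of the private data matrix described above, assuming hypothetical keywords and counts: rows are time periods, columns are personal-data keywords, and the frequency-only model reduces to a sum over the matrix.

```python
import numpy as np

# Hypothetical personal-data keywords and four observation periods.
keywords = ["name", "phone", "ssn"]

# matrix[t][j] = disclosure count of keyword j during time period t;
# a row is one period, a column is one keyword over the whole interval.
matrix = np.array([
    [2, 0, 1],
    [0, 3, 0],
    [1, 1, 0],
    [4, 0, 2],
])

per_keyword = matrix.sum(axis=0)   # column view: whole observation interval
per_period = matrix.sum(axis=1)    # row view: a single time period

# The frequency-only model reduces to an overall disclosure count.
violation_level = int(matrix.sum())
print(dict(zip(keywords, per_keyword.tolist())), violation_level)
```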

Reversible Data Hiding in Permutation-based Encrypted Images with Strong Privacy

  • Shiu, Chih-Wei;Chen, Yu-Chi;Hong, Wien
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.2, pp.1020-1042, 2019
  • Reversible data hiding in encrypted images (RDHEI) enables some real-time cloud applications; for example, the cloud, acting as a data-hider, automatically embeds a timestamp in the encrypted image uploaded by a content owner. Many existing RDHEI methods satisfy only user privacy, in which the data-hider does not learn the original image, but leak owner privacy, in which the receiver can obtain the original image by decryption and extraction. In the literature, the method of Zhang et al. provides only weak content-owner privacy, in which the content owner and data-hider must share a data-hiding key. In this paper, we address the stronger notion, called strong content-owner privacy, and achieve it by presenting a new reversible data hiding scheme for encrypted images. In the proposed method, image decryption and message extraction are controlled by different types of keys, so the two functionalities are decoupled to solve the privacy problem. At the technical level, the original image is segmented along a Hilbert space-filling curve. To preserve image privacy, the segments are transformed into an encrypted image by random permutation; the encrypted image does not reveal significant information about the original one. Data embedding can be realized by pixel histogram-style hiding, since the histogram is preserved by the permutation and can therefore be exploited before or after encryption. The proposed method is modular, converting certain existing reversible data hiding schemes into RDHEI schemes with content-owner privacy. Finally, our experimental results show that the image quality is 50.85 dB at an average payload of 0.12 bpp.
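
A much-simplified Python sketch of the two ingredients the abstract names, position permutation and histogram-style hiding; it omits the Hilbert-curve segmentation, the separate key types, and the extraction/recovery procedure, so it illustrates the idea rather than the authors' scheme.

```python
import numpy as np

def permute_encrypt(pixels, key):
    """Encrypt by permuting pixel positions with a key-seeded RNG (a toy
    stand-in for Hilbert-curve segmentation plus segment permutation)."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(pixels.size)
    return pixels.reshape(-1)[perm].reshape(pixels.shape), perm

def embed_histogram(pixels, bits):
    """Toy histogram-shift embedding: free the bin next to the histogram
    peak, then encode each bit at a peak-valued pixel (peak -> peak + 1
    encodes a 1). A position permutation leaves the histogram unchanged,
    which is why this style of hiding survives the encryption step."""
    flat = pixels.reshape(-1).astype(np.int32)
    peak = int(np.bincount(flat, minlength=256).argmax())
    flat[flat > peak] += 1                      # open the peak + 1 bin
    slots = np.flatnonzero(flat == peak)[:len(bits)]
    flat[slots] += np.asarray(bits, dtype=np.int32)
    return flat.reshape(pixels.shape)
```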

Clustering-Based Federated Learning for Enhancing Data Privacy in Internet of Vehicles

  • Zilong Jin;Jin Wang;Lejun Zhang
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.6, pp.1462-1477, 2024
  • With the evolving complexity of connected vehicle features, the volume and diversity of data generated during driving continue to escalate. Enabling data sharing among interconnected vehicles holds promise for improving users' driving experiences and alleviating traffic congestion. Yet, the unintentional disclosure of users' private information through data sharing poses a risk, potentially compromising the interests of vehicle users and, in certain cases, endangering driving safety. Federated learning (FL) is a newly emerged distributed machine learning paradigm that is expected to play a prominent role in privacy-preserving learning for autonomous vehicles. While FL holds significant potential to enhance the architecture of the Internet of Vehicles (IoV), the dynamic mobility of vehicles poses a considerable challenge to integrating FL with vehicular networks. In this paper, a novel clustered FL framework is proposed that reduces communication overhead and protects data privacy. By assessing the similarity among feature vectors, vehicles are categorized into distinct clusters, and an optimal vehicle in each cluster is elected as the cluster head, which improves the efficiency of personalized data processing and model training while reducing communication overhead. Simultaneously, a Local Differential Privacy (LDP) mechanism is incorporated during local training to safeguard vehicle privacy. Simulation results on the 20newsgroups and MNIST datasets validate the effectiveness of the proposed scheme, indicating that it ensures data privacy while reducing communication overhead.
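
A minimal sketch of the two mechanisms the abstract combines, written with assumed choices (cosine similarity, a greedy threshold, the Laplace mechanism) that the abstract does not specify:

```python
import numpy as np

def cluster_vehicles(features, threshold=0.9):
    """Greedily group vehicles whose feature vectors have cosine
    similarity >= threshold with an existing cluster head; the first
    vehicle in each cluster serves as its head."""
    clusters = []
    for i, f in enumerate(features):
        for c in clusters:
            head = features[c[0]]
            cos = f @ head / (np.linalg.norm(f) * np.linalg.norm(head))
            if cos >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])    # vehicle i heads a new cluster
    return clusters

def ldp_perturb(update, clip=1.0, epsilon=1.0):
    """Clip a local model update and add Laplace noise before it leaves
    the vehicle, so the server never sees the raw update."""
    clipped = np.clip(update, -clip, clip)
    noise = np.random.laplace(0.0, 2 * clip / epsilon, size=update.shape)
    return clipped + noise

feats = np.random.rand(10, 5)                  # 10 vehicles, 5 features
print(cluster_vehicles(feats, threshold=0.95))
print(ldp_perturb(np.random.randn(5), epsilon=0.5))
```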

A Privacy-Preserving Health Data Aggregation Scheme

  • Liu, Yining;Liu, Gao;Cheng, Chi;Xia, Zhe;Shen, Jian
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.8, pp.3852-3864, 2016
  • Patients' health data are very sensitive, and access to an individual's health data should be strictly restricted. However, many data consumers may need to use aggregated health data; for example, insurance companies need such data to set premium levels for health insurance. Therefore, privacy-preserving data aggregation solutions for health data have both theoretical importance and practical application potential. In this paper, we propose a privacy-preserving health data aggregation scheme using differential privacy. In our scheme, patients' health data are aggregated by the local healthcare center before being used by data consumers, which prevents individuals' data from being leaked. Moreover, compared with existing schemes in the literature, our work enjoys two additional benefits: 1) it not only resists many well-known attacks in open wireless networks, but also achieves resilience against the human-factor-aware differential aggregation attack; 2) no trusted third party is employed, hence the scheme is robust and does not suffer from the single-point-of-failure problem.
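
A minimal sketch of the core primitive, assuming the scheme releases a Laplace-perturbed sum over a clipped value range (the abstract confirms differential privacy but not these exact choices):

```python
import numpy as np

def dp_aggregate(values, epsilon, lower, upper):
    """Release a differentially private sum of patients' readings.
    With one record changing within [lower, upper], the sensitivity of
    the clipped sum is (upper - lower), so Laplace noise with scale
    (upper - lower) / epsilon yields epsilon-differential privacy."""
    clipped = np.clip(values, lower, upper)
    noise = np.random.laplace(0.0, (upper - lower) / epsilon)
    return clipped.sum() + noise

# e.g., blood-pressure readings released by the local healthcare center
readings = np.array([118, 131, 125, 142, 120])
print(dp_aggregate(readings, epsilon=0.5, lower=80, upper=200))
```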

Suggestions for Applications of Anonymous Data under the Revised Data Privacy Acts (데이터 3법 시대의 익명화된 데이터 활용에 대한 제언)

  • Chun, Ji Young;Noh, Geontae
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.3, pp.503-512, 2020
  • The revised data privacy acts allow the disclosure of data after personal information has been anonymized. Such anonymized data is expected to be useful in research and services, but there are serious concerns about privacy breaches, such as re-identification of individuals from the anonymized data. In this paper, we show that identifying individuals from public data is not very difficult, which also raises questions about the reliability of the public data. We suggest that users understand the trade-offs between data disclosure and privacy protection so that they can use data securely under the revised data privacy acts.
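
To illustrate the general risk the paper raises (this is not the paper's experiment, and all names and fields are made up), a linkage attack needs nothing more than a join on shared quasi-identifiers:

```python
# Toy linkage attack: join an "anonymized" release with a public roster
# on shared quasi-identifiers. All names and fields are hypothetical.
anonymized = [
    {"age": 29, "zip": "06236", "sex": "F", "diagnosis": "asthma"},
    {"age": 47, "zip": "04524", "sex": "M", "diagnosis": "diabetes"},
]
public_roster = [
    {"name": "Kim", "age": 29, "zip": "06236", "sex": "F"},
    {"name": "Lee", "age": 47, "zip": "04524", "sex": "M"},
]

quasi_identifiers = ("age", "zip", "sex")
for record in anonymized:
    matches = [p for p in public_roster
               if all(p[q] == record[q] for q in quasi_identifiers)]
    if len(matches) == 1:        # a unique match re-identifies the record
        print(matches[0]["name"], "->", record["diagnosis"])
```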

Big Data Security and Privacy: A Taxonomy with Some HPC and Blockchain Perspectives

  • Alsulbi, Khalil;Khemakhem, Maher;Basuhail, Abdullah;Eassa, Fathy;Jambi, Kamal Mansur;Almarhabi, Khalid
    • International Journal of Computer Science & Network Security, v.21 no.7, pp.43-55, 2021
  • The amount of Big Data generated from multiple sources is continuously increasing. Traditional storage methods lack the capacity for such massive amounts of data. Consequently, most organizations have shifted to the use of cloud storage as an alternative option to store Big Data. Despite the significant developments in cloud storage, it still faces many challenges, such as privacy and security concerns. This paper discusses Big Data, its challenges, and different classifications of security and privacy challenges. Furthermore, it proposes a new classification of Big Data security and privacy challenges and offers some perspectives to provide solutions to these challenges.

Privacy Assurance and Consumer Behaviors in e-Business Environments (e-비즈니스 환경에서 기업의 개인정보보호 활동이 소비자 행위에 미치는 영향)

  • Park, JaeYoung;Jung, Woo-Jin;Lee, SangKeun;Kim, Beomsoo
    • The Journal of Society for e-Business Studies, v.23 no.4, pp.1-17, 2018
  • Recently, most online firms have tried to provide personalized services based on customer data. However, customers are reluctant to give their information to online firms because of concerns about data breaches. Online firms seek to increase customers' trust by assuring the protection of personal information through a privacy seal (e.g., e-privacy) or data breach insurance. This research examines the effects of privacy assurance (i.e., privacy seal, data breach insurance) on consumer behavior in online environments. An experiment based on a hypothetical scenario was conducted using a between-subjects 2 (type of privacy assurance) + 1 (control) design. We found that both the privacy seal and data breach insurance increased perceived privacy trust. In addition, the privacy seal has a positive effect on the intention to provide personal information through perceived privacy trust. Finally, for the group with a high (low) disposition to trust, higher perceived privacy trust is formed through the privacy seal (data breach insurance). Theoretical and practical implications are discussed.

Privacy measurement method using a graph structure on online social networks

  • Li, XueFeng;Zhao, Chensu;Tian, Keke
    • ETRI Journal, v.43 no.5, pp.812-824, 2021
  • Recently, with an increase in Internet usage, the number of users of online social networks (OSNs) has grown, and privacy leakage has consequently become more serious. However, few studies have investigated the gap between users' privacy attitudes and their actual behaviors. In particular, users' desire to change their privacy status is not supported by their privacy literacy. Presenting an accurate measurement of users' privacy status can cultivate their privacy literacy. However, the highly interactive nature of interpersonal communication on OSNs has led privacy to be viewed as a communal issue. Because a large number of redundant users on social networks are unrelated to a given user's privacy, existing algorithms are no longer applicable. To solve this problem, we propose a structural similarity measurement method suited to the characteristics of social networks. The proposed method excludes redundant users and combines attribute information to measure the privacy status of users. Using this approach, users can intuitively recognize their privacy status on OSNs. Experiments using real data show that our method can effectively and accurately help users improve their privacy disclosures.
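
A minimal sketch of the general idea, assuming neighbor-set (Jaccard) overlap as the structural similarity; the paper's exact metric and its combination with attribute information are not specified in the abstract:

```python
def jaccard_similarity(graph, u, v):
    """Structural similarity of two users as neighbor-set overlap."""
    nu, nv = graph[u], graph[v]
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

def privacy_relevant_neighbors(graph, user, threshold=0.2):
    """Keep only neighbors whose structure overlaps with the target
    user's; the rest are the 'redundant' users the paper excludes."""
    return sorted(v for v in graph[user]
                  if jaccard_similarity(graph, user, v) >= threshold)

# Adjacency sets for a toy OSN.
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice"},
}
print(privacy_relevant_neighbors(graph, "alice"))  # -> ['bob', 'carol']
```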