• Title/Summary/Keyword: Privacy Data


A Model for Privacy Preserving Publication of Social Network Data (소셜 네트워크 데이터의 프라이버시 보호 배포를 위한 모델)

  • Sung, Min-Kyung;Chung, Yon-Dohn
    • Journal of KIISE:Databases / v.37 no.4 / pp.209-219 / 2010
  • Online social network services, which are growing rapidly, store tremendous amounts of data that are analyzed in many research areas. To enhance the usefulness of this information, companies and public institutions publish their data for various purposes. However, a social network containing information about individuals may cause a privacy disclosure problem. Eliminating identifiers such as names is not sufficient for privacy protection, since private information can be inferred from the structural information of the social network. In this paper, we consider a new complex attack type that uses both content and structure information, and propose a model, $\ell$-degree diversity, for the privacy-preserving publication of social network data against such attacks. $\ell$-degree diversity is the first model to apply $\ell$-diversity to social network data publication, and experiments show that it achieves a high data preservation rate.
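
As a hedged illustration of the underlying idea (not the paper's exact algorithm), the check below treats a published network as satisfying a degree-based $\ell$-diversity condition when every group of nodes sharing the same degree contains at least $\ell$ distinct sensitive values. The node names and sensitive labels are invented for the example.

```python
from collections import defaultdict

def satisfies_l_degree_diversity(degrees, sensitive, l):
    # Group nodes by degree; an attacker who knows a victim's degree
    # can narrow the victim down to one such group, so every group
    # must contain at least l distinct sensitive values.
    groups = defaultdict(set)
    for node, degree in degrees.items():
        groups[degree].add(sensitive[node])
    return all(len(values) >= l for values in groups.values())

# Hypothetical 4-node network: node degrees and sensitive attributes.
degrees = {"a": 2, "b": 2, "c": 1, "d": 1}
sensitive = {"a": "flu", "b": "asthma", "c": "flu", "d": "cold"}
print(satisfies_l_degree_diversity(degrees, sensitive, 2))  # True
```

If both degree-1 nodes carried the same sensitive value, the check would fail: knowing a victim's degree would reveal the value.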

A Privacy Protection Method in Social Networks Considering Structure and Content Information (소셜 네트워크에서 구조정보와 내용정보를 고려한 프라이버시 보호 기법)

  • Sung, Minh-Kyoung;Lee, Ki-Yong;Chung, Yon-Dohn
    • Journal of the Korea Society of Computer and Information / v.15 no.1 / pp.119-128 / 2010
  • Recently, social network services have been growing rapidly, and this trend is expected to continue. Social network data can be published for various purposes such as statistical analysis and population studies. When the data are published, however, they may disclose the personal privacy of some people, since they can be combined with external information. Therefore, a social network data holder has to remove the identifiers of persons and modify data that could disclose their privacy when combined with external information. The utility of the data is maximized when its modification is minimized. In this paper, we propose a privacy protection method for social network data that considers both structural and content information. Previous work either did not consider content information in the social network or distorted too much structural information. We also verify the effectiveness and applicability of the proposed method under various experimental conditions.

A Comparative Study of the Effects of Consumer Innovativeness, Self-esteem, and Need for Cognition on Online Activity before and after COVID-19

  • Myung Gwan Lee;Sang Hyeok Park;Seung Hee Oh
    • Journal of Information Technology Applications and Management / v.30 no.5 / pp.121-139 / 2023
  • This study sought to identify factors affecting online activity before and after the COVID-19 pandemic. To this end, the effects of consumer innovativeness, self-esteem, and need for cognition on activity in online media such as the Internet and social media were investigated, along with whether privacy concerns had a moderating effect. For this study, survey data from 2019 (before the outbreak of COVID-19) to 2021 (after the outbreak) of the 'Korea Media Panel Survey' conducted by the Korea Information Society Development Institute were used for analysis. The results for Internet activity are as follows. Before the outbreak of COVID-19, hedonic innovativeness and social innovativeness had a positive effect, and cognitive innovativeness a negative effect, on increasing Internet activity; privacy concerns had no moderating effect. After the outbreak of COVID-19, need for cognition was found to have a positive effect on increasing social media activity. In addition, a moderating effect of privacy concerns was found in the relationship between need for cognition and Internet activity. No privacy concern effect appeared before the outbreak of COVID-19, while after the outbreak a privacy concern effect appeared for functional innovativeness and need for cognition. This study aims to present implications for companies seeking to understand the characteristics of online consumers using the Internet and social media after the pandemic.

An Empirical Research on Information Privacy Concern in the IoT Era (사물인터넷 시대의 정보 프라이버시 염려에 대한 실증 연구)

  • Park, Cheon-Woong;Kim, Jun-Woo
    • Journal of Digital Convergence / v.14 no.2 / pp.65-72 / 2016
  • This study built a theoretical framework for empirical analysis based on the relationships among information privacy risk, information privacy experience, information privacy policy, and information control, drawing on studies of provision intention. To analyze the relationships among these factors and the intention to offer personal information, this study reviewed the concept of information privacy and related studies, and established a research model of information privacy. The results of this study are as follows. First, information privacy risk, information privacy experience, information privacy policy, and information control have positive effects on information privacy concern. Second, information privacy concern has a negative effect on the intention to provide personal information.

Analysis of privacy issues and countermeasures in neural network learning (신경망 학습에서 프라이버시 이슈 및 대응방법 분석)

  • Hong, Eun-Ju;Lee, Su-Jin;Hong, Do-won;Seo, Chang-Ho
    • Journal of Digital Convergence / v.17 no.7 / pp.285-292 / 2019
  • With the popularization of PCs, SNS, and IoT, a great deal of data is generated, and the amount is increasing exponentially. Artificial neural network learning, which uses huge amounts of data, has attracted attention in many fields in recent years. It has shown tremendous potential in speech recognition and image recognition, and is widely applied to a variety of complex areas such as medical diagnosis, artificial intelligence games, and face recognition. The results of artificial neural networks are accurate enough to surpass real human beings. Despite these many advantages, privacy problems still exist in artificial neural network learning. Training data for neural networks includes various kinds of information, including personally sensitive information, so privacy can be exposed by malicious attackers. Privacy risks arise when an attacker interferes with learning to degrade it, or attacks a model that has completed learning. In this paper, we analyze attack methods against recently proposed neural network models and the corresponding privacy protection methods.

Privacy Control Using GRBAC In An Extended Role-Based Access Control Model (확장된 역할기반 접근제어 모델에서 GRBAC을 이용한 프라이버시 제어)

  • Park Chong hwa;Kim Ji hong;Kim Dong kyoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.167-175 / 2005
  • Privacy enforcement has been one of the most important problems in the IT area. Privacy protection can be achieved by enforcing privacy policies within an organization's online and offline data processing systems. Traditional security models are more or less inappropriate for enforcing basic privacy requirements, such as purpose binding. This paper proposes a new approach in which a privacy control model is derived by integrating an existing security model. To this end, we use an extended role-based access control model as the existing security mechanism, which provides context-based access control by combining RBAC and domain-type enforcement. For the implementation of the privacy control model we use GRBAC (Generalized Role-Based Access Control), which is expressive enough to deal with privacy preferences. A small hospital model is considered as an application of this model.
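
A minimal sketch of the purpose-binding requirement the abstract mentions, assuming a hypothetical small-hospital policy (the roles, purposes, and operations below are invented examples, not the paper's GRBAC specification):

```python
# Policy table: (role, purpose) pairs map to the operations they permit.
PERMISSIONS = {
    ("doctor", "treatment"): {"read_record", "write_record"},
    ("nurse", "treatment"): {"read_record"},
    ("clerk", "billing"): {"read_invoice"},
}

def check_access(role, purpose, operation):
    # Purpose binding: the operation is allowed only when the stated
    # access purpose matches a policy entry for the role, so a role's
    # permission cannot be reused under a different purpose.
    return operation in PERMISSIONS.get((role, purpose), set())

print(check_access("nurse", "treatment", "read_record"))  # True
print(check_access("nurse", "billing", "read_record"))    # False
```

Plain RBAC would grant the second request too, since it checks only the role; binding the purpose into the lookup key is what blocks it.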

A Study on Anesthesia and Operating Room (OR) Nurses' Perception and Performance of Privacy Protection Behavior for Patients Undergoing General Anesthesia Surgery and Patients' Satisfaction with Operating Room Hospitalization Experience (프라이버시 보호 행동에 대한 전신마취 수술환자와 마취⋅수술실 간호사의 인식, 실천 정도 및 전신마취 수술환자의 입원경험 만족도 연구)

  • Park, Suk Jong;Ham, Sang Hee;Baek, Gum Sun;An, Soomin
    • Journal of East-West Nursing Research / v.29 no.1 / pp.24-32 / 2023
  • Purpose: This study aims to examine the level of perception and performance of privacy protection behavior of anesthesia and operating room (OR) nurses for patients who underwent surgery under general anesthesia. Methods: Data were collected from August 2020 to January 2021 from a total of 101 participants, consisting of 49 patients and 52 nurses. Independent t-tests and Pearson's correlations were conducted using SPSS 21. Results: Anesthesia and OR nurses showed the highest score in patient privacy, followed by patient information management and body privacy, and the lowest score in communication. There was a significant difference between patient information and communication. Conclusion: Anesthesia and OR nurses had the highest level of perception and performance of patient privacy protection behavior for body privacy, and the lowest for communication. In addition, there was a significant difference in patient information management and communication. To protect the privacy of patients undergoing surgery under general anesthesia, efforts are needed to build standardized nursing knowledge, attitudes, and practice.

Privacy-Preserving Clustering on Time-Series Data Using Fourier Magnitudes (시계열 데이타 클러스터링에서 푸리에 진폭 기반의 프라이버시 보호)

  • Kim, Hea-Suk;Moon, Yang-Sae
    • Journal of KIISE:Databases / v.35 no.6 / pp.481-494 / 2008
  • In this paper we propose Fourier-magnitude-based privacy-preserving clustering of time-series data. The previous privacy-preserving method, called the DFT coefficient method, has a critical problem in privacy preservation itself, since the original time-series data may be reconstructed from the privacy-preserved data. In contrast, the proposed DFT magnitude method has the excellent characteristic that reconstructing the original data is almost impossible, since it uses only DFT magnitudes and discards DFT phases. In this paper, we first explain why reconstruction is easy in the DFT coefficient method and why it is difficult in the DFT magnitude method. We then propose a notion of distance-order preservation, which can be used both to estimate clustering accuracy and to select DFT magnitudes. The degree of distance-order preservation measures how many time-series preserve their relative distance orders before and after privacy preservation. Using this degree, we present greedy strategies for selecting magnitudes in the DFT magnitude method: the strategies select DFT magnitudes so as to maximize the degree of distance-order preservation, and thereby achieve relatively high clustering accuracy. Finally, we empirically show that the degree of distance-order preservation is an excellent measure that reflects clustering accuracy well. In addition, experimental results show that our greedy strategies in the DFT magnitude method are comparable to the DFT coefficient method in clustering accuracy. These results indicate that, compared with the DFT coefficient method, the DFT magnitude method provides an excellent degree of privacy preservation as well as comparable clustering accuracy.
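
A hedged sketch of the two ingredients the abstract describes: publishing DFT magnitudes only, and measuring distance-order preservation over triples of series. Keeping simply the first k magnitudes is a simplification; the paper proposes greedy selection of magnitudes to maximize the preservation degree.

```python
import numpy as np

def dft_magnitudes(series, k):
    # Publish only the magnitudes of the first k DFT coefficients;
    # discarding the phases makes reconstructing the series hard.
    return np.abs(np.fft.fft(series))[:k]

def distance_order_preservation(original, transformed):
    # Fraction of (i, j, k) triples whose relative distance order
    # d(i,j) <= d(i,k) is the same before and after transformation.
    n, kept, total = len(original), 0, 0
    for i in range(n):
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                before = (np.linalg.norm(original[i] - original[j])
                          <= np.linalg.norm(original[i] - original[k]))
                after = (np.linalg.norm(transformed[i] - transformed[j])
                         <= np.linalg.norm(transformed[i] - transformed[k]))
                kept += before == after
                total += 1
    return kept / total

rng = np.random.default_rng(0)
data = [rng.normal(size=16) for _ in range(5)]
masked = [dft_magnitudes(s, 4) for s in data]
print(distance_order_preservation(data, masked))
```

The printed fraction is the quantity the paper's greedy strategies try to maximize when choosing which magnitudes to keep.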

Sharing and Privacy in PHRs: Efficient Policy Hiding and Update Attribute-based Encryption

  • Liu, Zhenhua;Ji, Jiaqi;Yin, Fangfang;Wang, Baocang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.323-342 / 2021
  • Personal health records (PHRs) form an electronic medical system that enables patients to acquire, manage, and share their health data. Nevertheless, data confidentiality and user privacy in PHRs have not been handled completely. As a fine-grained access control over health data, ciphertext-policy attribute-based encryption (CP-ABE) can guarantee data confidentiality. However, existing CP-ABE solutions for PHRs face new challenges in access control, such as policy privacy disclosure and dynamic policy update. To address these problems, we propose a privacy protection and dynamic share system (PPADS) based on CP-ABE for PHRs, which supports full policy hiding and flexible access control. In the system, the attribute information of the access policy is fully hidden by an attribute bloom filter. Moreover, the data user produces a transforming key that allows the PHR cloud to change the access policy dynamically. Furthermore, security analysis shows that PPADS is selectively secure under the standard model. Finally, performance comparisons and simulation results demonstrate that PPADS is suitable for PHRs.
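
A generic Bloom filter, sketched below, illustrates the hiding principle: a party can test whether an attribute occurs in a policy without the attributes ever being listed in the clear. The paper's attribute bloom filter additionally binds attributes to positions in the access structure, which this sketch omits; the attribute strings are invented examples.

```python
import hashlib

class BloomFilter:
    """Set-membership structure with no stored elements: only k hashed
    bit positions per item, so the attribute list itself stays hidden."""
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions per item from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # May rarely report a false positive, but never a false negative.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

policy = BloomFilter()
for attribute in ["cardiologist", "hospital-A"]:
    policy.add(attribute)
print("cardiologist" in policy)  # True
```

A query for an attribute that was never added ("pharmacist", say) returns False except for a small false-positive probability, tunable via m and k.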

Time Series Crime Prediction Using a Federated Machine Learning Model

  • Salam, Mustafa Abdul;Taha, Sanaa;Ramadan, Mohamed
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.119-130 / 2022
  • Crime is a common social problem that affects the quality of life. As the number of crimes increases, it is necessary to build a model that predicts the number of crimes that may occur in a given period, identifies the characteristics of a person who may commit a particular crime, and identifies places where a particular crime may occur. Data privacy is the main challenge that organizations face when building this type of predictive model. Federated learning (FL) is a promising approach that overcomes data security and privacy challenges, as it enables organizations to build a machine learning model on distributed datasets without sharing raw data or violating data privacy. In this paper, a federated long short-term memory (LSTM) model is proposed and compared with a traditional LSTM model. The proposed model is developed using TensorFlow Federated (TFF) and the Keras API to predict the number of crimes, and is applied to the Boston crime dataset. The model's parameters are fine-tuned to obtain minimum loss and maximum accuracy. Compared with the traditional LSTM model, the federated LSTM model achieved lower loss and better accuracy, at the cost of higher training time.
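
The federated training loop can be sketched without TFF. The toy below substitutes a linear model for the paper's LSTM and uses invented synthetic data, purely to show the FL pattern the abstract relies on: each client trains locally and only model parameters, never raw records, reach the server, which combines them by federated averaging (FedAvg).

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two "organizations" whose raw data never leaves their premises.
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=5):
    # A few local gradient-descent steps on a linear least-squares
    # model, standing in for each client's local LSTM training.
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(2)
sizes = [len(y) for _, y in clients]
for _ in range(20):  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    # FedAvg: the server averages parameters, weighted by dataset size.
    w = sum(u * (n / sum(sizes)) for u, n in zip(updates, sizes))
print(w)  # close to true_w = [2, -1]
```

The extra communication rounds are also why the paper reports higher training time for the federated model than for its centralized counterpart.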