• Title/Summary/Keyword: Sensitive Data


A Strategy Study on Sensitive Information Filtering for Personal Information Protection in Big Data Analysis

  • Koo, Gun-Seo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.101-108
    • /
    • 2017
  • The study proposes a system that filters sensitive data entered when analyzing big data sources such as SNS posts and blogs. Personal information includes non-identifying items, but it also includes items that must be distinguished from ordinary personal information, such as religious affiliation, personal feelings, thoughts, and beliefs; the study defines these as sensitive information. To prevent their misuse, Article 23 of the Privacy Act contains clauses on the collection and use of such information. The proposed system is structured in two stages, a Big Data Processing Process and a Sensitive Information Filtering Process, and big data processing is analyzed and applied to big data collection in four stages. The Big Data Processing Process covers data collection and storage, lexical analysis, parsing, and semantic analysis. The Sensitive Information Filtering Process covers sensitive-information questionnaires, building a sensitive-information DB, qualifying information, filtering sensitive information, and reliability analysis. In the experiment, ontology generation succeeded for 7,553 of 8,978 items (84.13%), and it is of considerable significance that the sensitive-information filtering phase achieved 98%.
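The filtering stage described in this abstract can be sketched minimally. The term DB and the sample posts below are hypothetical stand-ins for the paper's questionnaire-built sensitive-information DB and SNS/blog corpus; the actual system would rely on the lexical and semantic analysis stages rather than simple token matching.

```python
SENSITIVE_DB = {"religion", "belief", "politics"}  # hypothetical DB entries

def filter_sensitive(posts, sensitive_db):
    """Drop posts whose terms hit the sensitive-information DB."""
    kept, removed = [], []
    for post in posts:
        terms = set(post.lower().split())
        (removed if terms & sensitive_db else kept).append(post)
    return kept, removed

posts = ["my religion is private", "nice weather in Seoul today"]
kept, removed = filter_sensitive(posts, SENSITIVE_DB)
```

A reliability analysis such as the paper's would then compare the filtered share against labeled ground truth.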

Locality-Sensitive Hashing Techniques for Nearest Neighbor Search

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.4
    • /
    • pp.300-307
    • /
    • 2012
  • When the volume of data grows large, even simple tasks can become a significant concern. Nearest neighbor search is such a task: it finds, from a data set, the k data points nearest to a query. Locality-sensitive hashing techniques have been developed for approximate but fast nearest neighbor search. This paper introduces the notion of locality-sensitive hashing and surveys locality-sensitive hashing techniques. It categorizes them based on several criteria, presents their characteristics, and compares their performance.
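One classic family surveyed in such work is sign-of-projection (random hyperplane) hashing for cosine similarity: near points tend to land in the same bucket, so only the query's bucket is examined exactly. A minimal sketch, with a toy data set:

```python
import random

def make_lsh(dim, n_bits, rng):
    """Random-hyperplane LSH: each bit is the sign of a random projection."""
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        return tuple(int(sum(p * v for p, v in zip(plane, x)) >= 0)
                     for plane in planes)
    return h

rng = random.Random(42)
h = make_lsh(dim=4, n_bits=8, rng=rng)

points = [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (1.0, 0.1, 0.0, 0.0)]
buckets = {}
for i, p in enumerate(points):
    buckets.setdefault(h(p), []).append(i)

def candidates(q):
    # only points sharing the query's bucket are checked exactly
    return buckets.get(h(q), [])
```

In practice several independent hash tables are combined to raise recall; this sketch uses one table for brevity.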

Efficient Data Publishing Method for Protecting Sensitive Information by Data Inference

  • Ko, Hye-Kyeong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.5 no.9
    • /
    • pp.217-222
    • /
    • 2016
  • Recent research on integrated and peer-to-peer databases has produced new methods for handling various types of shared-group and process data. This paper deals with data publishing, where the publisher needs to specify certain sensitive information that should be protected. The proposed method prevents the user's sensitive information from being inferred through XML constraints. In addition, the proposed secure framework uses encryption to prevent the leakage of sensitive information to unauthorized users. In this framework, each sensitive node in an eXtensible Markup Language (XML) document is encrypted separately. All of the encrypted data are moved out of the original document and bundled with an encrypted structure index. Our experiments show that the proposed framework prevents information from being leaked via data inference.
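The per-node encryption step can be sketched as follows. The XOR "cipher", tag names, and structure-index scheme are hypothetical simplifications for illustration; the paper's framework would use a real cipher and also encrypt the structure index itself.

```python
import base64
import xml.etree.ElementTree as ET

def toy_xor(data: bytes, key: bytes) -> bytes:
    # stand-in for a real cipher (XOR is NOT secure); symmetric, so it also decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def publish(xml_text, sensitive_tags, key):
    """Encrypt each sensitive node separately and move it out of the document."""
    root = ET.fromstring(xml_text)
    bundle = {}  # structure index (node position) -> encrypted payload
    for idx, node in enumerate(root.iter()):
        if node.tag in sensitive_tags:
            bundle[idx] = base64.b64encode(toy_xor(node.text.encode(), key)).decode()
            node.text = None  # the published document no longer carries the value
    return ET.tostring(root, encoding="unicode"), bundle

doc, bundle = publish("<patient><name>Kim</name><diag>flu</diag></patient>",
                      {"diag"}, b"secret")
```

The published `doc` keeps the non-sensitive structure, while authorized users decrypt the bundle by index.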

Privacy-Preserving IoT Data Collection in Fog-Cloud Computing Environment

  • Lim, Jong-Hyun;Kim, Jong Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.9
    • /
    • pp.43-49
    • /
    • 2019
  • Today, with the development of the Internet of Things, wearable devices related to personal health care have become widespread. Various global information and communication technology companies are developing wearable health devices that can collect personal health information, such as heart rate, step count, and calories, using sensors built into the device. However, since individual health data include sensitive information, collecting such health data can lead to personal privacy issues. Therefore, there is a growing need for technology that collects sensitive health data from wearable health devices while preserving privacy. In recent years, local differential privacy (LDP), which enables sensitive data collection while preserving privacy, has attracted much attention. In this paper, we develop a technique for collecting vast amounts of health data from a smartwatch, one of the most popular wearable health devices, using local differential privacy. Experimental results with real data show that the proposed method can effectively collect sensitive health data from smartwatch users while preserving privacy.
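The core LDP idea can be illustrated with randomized response, the standard mechanism behind many LDP collectors: each user perturbs their own bit before sending it, yet the aggregator can still recover the population rate. The health attribute and the 30% rate below are hypothetical, and the paper's actual mechanism may differ.

```python
import random

def perturb(bit, p, rng):
    """Randomized response: report the true bit with probability p, else flip it."""
    return bit if rng.random() < p else 1 - bit

def estimate_rate(reports, p):
    """Unbiased estimate of the true 1-rate from the perturbed reports."""
    q = sum(reports) / len(reports)
    return (q - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1] * 300 + [0] * 700   # hypothetical: 30% of users exceed a threshold
reports = [perturb(b, 0.75, rng) for b in true_bits]
```

No individual report reveals the user's true value with certainty, but the aggregate estimate is close to the true 30%.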

Enhanced Locality Sensitive Clustering in High Dimensional Space

  • Chen, Gang;Gao, Hao-Lin;Li, Bi-Cheng;Hu, Guo-En
    • Transactions on Electrical and Electronic Materials
    • /
    • v.15 no.3
    • /
    • pp.125-129
    • /
    • 2014
  • A dataset can be clustered by merging the bucket indices that come from the random projections of locality-sensitive hashing functions; for this to work, the merging interval must be calculated first. To improve the feasibility of large-scale data clustering in high-dimensional space, we propose an enhanced locality-sensitive hashing clustering method. First, multiple hashing functions are generated. Second, data points are projected to bucket indices. Third, bucket indices are clustered to obtain class labels. Experimental results on synthetic datasets show that this method achieves high accuracy at much improved clustering speed. These attributes make it well suited to clustering data in high-dimensional space.
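The three steps above can be sketched as follows, using quantized random projections as the bucket-index functions. The bucket width and merging step are simplified: here points sharing the full signature form a cluster, whereas the paper additionally merges nearby bucket indices using a computed merging interval.

```python
import random

rng = random.Random(1)
dim, n_funcs, w = 3, 4, 2.0
# step 1: generate multiple hash functions (projection direction a, offset b)
hashes = [([rng.gauss(0, 1) for _ in range(dim)], rng.uniform(0, w))
          for _ in range(n_funcs)]

def signature(x):
    # step 2: project the point to one bucket index per hash function
    return tuple(int((sum(ai * xi for ai, xi in zip(a, x)) + b) // w)
                 for a, b in hashes)

# identical points always share a signature; near points do with high probability
points = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (9.0, 9.0, 9.0)]
clusters = {}
for i, p in enumerate(points):
    # step 3: group points by signature to obtain class labels
    clusters.setdefault(signature(p), []).append(i)
```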

Secure Healthcare Management: Protecting Sensitive Information from Unauthorized Users

  • Ko, Hye-Kyeong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.1
    • /
    • pp.82-89
    • /
    • 2021
  • Recently, the importance of security for published documents has been increasing across applications. This paper deals with data publishing, where publishers must specify the sensitive information that they need to protect. If a document containing such sensitive information is posted carelessly, users can apply common-sense reasoning to infer unauthorized information. Recent studies of peer-to-peer databases have examined the security of the data of various distinct groups. In this paper, we propose a security framework that fundamentally blocks user inference about sensitive information that may be leaked through XML constraints, and that prevents sensitive information from leaking to general users. The proposed framework protects sensitive information through encryption technology. Moreover, the proposed framework preserves query view security with respect to three types of XML constraints. Experimental results show mathematically that the proposed framework prevents leakage of user information through data inference better than the existing method.

A Study on the Improvement of Image Classification Performance in the Defense Field through Cost-Sensitive Learning of Imbalanced Data

  • Jeong, Miae;Ma, Jungmok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.3
    • /
    • pp.281-292
    • /
    • 2021
  • With the development of deep learning technology, researchers and engineers keep attempting to apply deep learning in various industrial and academic fields, including defense. Most of these attempts assume that the data are balanced. In reality, much of the data are imbalanced, so the classifier is not built properly and the model's performance can be low. Therefore, this study proposes cost-sensitive learning as a solution to the imbalanced-data problem in image classification for the defense field. In the proposed model, cost-sensitive learning assigns a high weight to the minority class in the cost function. Across 160 experiments on a submarine/non-submarine dataset and a warship/non-warship dataset, the cost-sensitive model achieves a higher test F1-score than a model trained with ordinary learning. Furthermore, statistical tests show the improvements to be significant.
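The weighting idea generalizes beyond deep networks; a minimal sketch with logistic regression shows the mechanism, where the minority class's gradient is scaled up by its cost weight. The toy data, weights, and hyperparameters are hypothetical, not the paper's setup.

```python
import math

def train_weighted(X, y, class_weight, lr=0.5, epochs=300):
    """Logistic regression whose per-example gradient is scaled by its class cost."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = class_weight[yi] * (p - yi)   # minority class gets a bigger push
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# hypothetical imbalanced data: four majority-class examples vs one minority
X = [[-2.0], [-1.5], [-1.0], [-0.5], [1.0]]
y = [0, 0, 0, 0, 1]
w, b = train_weighted(X, y, class_weight={0: 1.0, 1: 4.0})
```

In a deep model the same effect is obtained by passing class weights to the loss function.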

Locality-Sensitive Hashing for Data with Categorical and Numerical Attributes Using Dual Hashing

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.98-104
    • /
    • 2014
  • Locality-sensitive hashing techniques have been developed to efficiently handle nearest neighbor search and similar-pair identification for large volumes of high-dimensional data. This study proposes a locality-sensitive hashing method that can be applied to nearest neighbor search problems for data sets containing both numerical and categorical attributes. The proposed method makes use of dual hashing functions, where one function is dedicated to numerical attributes and the other to categorical attributes. The method consists of creating indexing structures for each of the dual hashing functions, gathering and combining the candidate sets, and thoroughly examining them to determine the nearest ones. The proposed method is evaluated on several synthetic data sets, and the results show that it improves performance on large amounts of data with both numerical and categorical attributes.
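The dual-hashing structure can be sketched as below: one index keyed by a numeric-attribute hash, one by a categorical-attribute hash, with candidate sets combined before exact examination. The specific hash choices (one random hyperplane, Python's built-in `hash` over a category set) are illustrative assumptions, not the paper's functions.

```python
import random

rng = random.Random(7)
a = [rng.gauss(0, 1) for _ in range(2)]   # hyperplane for the numeric attributes

def num_hash(nums):
    return int(sum(ai * x for ai, x in zip(a, nums)) >= 0)

def cat_hash(cats, n_buckets=8):
    return hash(frozenset(cats)) % n_buckets   # bucket for categorical attributes

records = [((0.1, 0.2), {"red", "small"}),
           ((0.2, 0.1), {"red", "small"}),
           ((9.0, 8.0), {"blue", "large"})]
num_index, cat_index = {}, {}
for i, (nums, cats) in enumerate(records):
    num_index.setdefault(num_hash(nums), set()).add(i)
    cat_index.setdefault(cat_hash(cats), set()).add(i)

def candidates(nums, cats):
    """Combine candidate sets from both indexes; exact distance checks follow."""
    return num_index.get(num_hash(nums), set()) | cat_index.get(cat_hash(cats), set())
```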

Hiding Sensitive Frequent Itemsets by a Border-Based Approach

  • Sun, Xingzhi;Yu, Philip S.
    • Journal of Computing Science and Engineering
    • /
    • v.1 no.1
    • /
    • pp.74-94
    • /
    • 2007
  • Nowadays, sharing data among organizations is often required during business collaboration. Data mining technology has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effects on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study this problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. Unlike previous work, we consider the quality of the sanitized database, especially the preservation of non-sensitive frequent itemsets. To preserve them, we propose a border-based approach to efficiently evaluate the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effects. Experimental results are reported to show the effectiveness of the proposed approach.
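The greedy selection step can be sketched as follows. This toy version scores each candidate deletion by how many to-be-preserved itemsets it would break in that transaction; the paper's border-based method evaluates impact via the border of the frequent itemsets, which this sketch does not implement.

```python
def support(db, itemset):
    s = set(itemset)
    return sum(1 for t in db if s <= t)

def hide(db, sensitive, min_sup, preserve):
    """Greedily delete one item of `sensitive` from a supporting transaction,
    choosing the deletion that breaks the fewest `preserve` itemsets."""
    db = [set(t) for t in db]
    while support(db, sensitive) >= min_sup:
        best = None
        for t in db:
            if not set(sensitive) <= t:
                continue
            for item in sensitive:
                # side effect: preserved itemsets in t that lose this item
                cost = sum(1 for p in preserve if item in p and set(p) <= t)
                if best is None or cost < best[0]:
                    best = (cost, t, item)
        best[1].discard(best[2])
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]
result = hide(db, ["a", "b"], min_sup=2, preserve=[{"a", "c"}])
```

Here the sensitive itemset {a, b} drops below the support threshold while the support of the preserved itemset {a, c} is untouched.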

Learning fair prediction models with an imputed sensitive variable: Empirical studies

  • Kim, Yongdai;Jeong, Hwichang
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.251-261
    • /
    • 2022
  • As AI has a wide-ranging influence on human social life, issues of AI transparency and ethics are emerging. In particular, it is widely known that, because data can carry historical bias that conflicts with ethics or regulatory frameworks for fairness, AI models trained on such biased data can also impose bias or unfairness on certain sensitive groups (e.g., non-white people, women). Demographic disparities due to AI, which refer to socially unacceptable bias in which an AI model favors certain groups (e.g., white people, men) over others (e.g., black people, women), have been observed frequently in many applications of AI, and many recent studies have developed algorithms that remove or alleviate such disparities in trained models. In this paper, we consider the problem of using the information in the sensitive variable for fair prediction when using the sensitive variable as an input is prohibited by laws or regulations to avoid unfairness. As a way of reflecting the information in the sensitive variable in prediction, we consider a two-stage procedure: first, the sensitive variable is fully included in the learning phase to obtain a prediction model that depends on it; then, an imputed sensitive variable is used in the prediction phase. The aim of this paper is to evaluate this procedure on several benchmark datasets. We illustrate that using an imputed sensitive variable helps improve prediction accuracy without much hampering the degree of fairness.
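The two-stage procedure can be sketched as below: the stage-1 model is trained with the sensitive variable available, while at prediction time only an imputed value is appended to the features. The 1-NN imputer, the hand-written threshold model, and the toy data are hypothetical stand-ins for the paper's learners and benchmark datasets.

```python
def fit_imputer(X, s):
    """Toy 1-NN imputer: predict the sensitive variable s from the features x."""
    def impute(x):
        i = min(range(len(X)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(X[j], x)))
        return s[i]
    return impute

def predict(stage1_model, impute, x):
    # stage 2: the true s is never seen at prediction; an imputed s_hat is used
    return stage1_model(x + [impute(x)])

X = [[0.0], [0.2], [1.0], [1.2]]
s = [0, 0, 1, 1]              # sensitive variable, available only during training
y = [0, 0, 1, 1]
stage1 = lambda feats: int(sum(feats) > 0.9)  # stand-in for the stage-1 model f(x, s)
impute = fit_imputer(X, s)
```

This keeps the sensitive variable out of the deployed input pipeline while still letting the model exploit its information indirectly.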